It’s totally understandable why many companies prefer to use prerecorded video. Live streaming your video content might feel risky. It opens you up to a whole world of unknowns. What if I say the wrong thing? What if my kids barge in on me while I’m streaming? Broadcasting your events and content live can feel intimidating, and for some organizations, these unknowns prevent them from using live-streamed video entirely.
But by passing up live streaming, you’re also missing out on some massive benefits. Live content can build anticipation and buzz like no other form of media. Viewers experiencing your event together can be a part of what’s happening right now, as soon as it happens. And with additional features like live chat, your viewers can react alongside others in the moments your live story unfolds.
So, how can you take advantage of the momentum a live stream provides without exposing your organization to the risk of things going wrong?
The answer: Live stream your prerecorded content as if you were broadcasting live.
You may have heard this approach called simulated live, prerecorded live, or VOD-to-live. The idea goes by many names, but they’re all trying to achieve the same goal: removing the unknowns by streaming your prerecorded videos as if they were live.
It’s easy enough to stream one video as if it were live. If you only need to broadcast a single video asset, you can simply stream it directly to Mux using a tool like OBS or FFmpeg.
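A minimal sketch of that FFmpeg command might look like the following; the RTMP URL is Mux’s standard ingest endpoint, and ${MUX_STREAM_KEY} is a placeholder for your own stream key:

```bash
# Stream a single prerecorded file to Mux as if it were live.
# -re reads the input at its native frame rate, so the broadcast
# plays out in real time rather than as fast as FFmpeg can encode.
ffmpeg -re -i myfile.mp4 \
  -c:v libx264 -preset fast -b:v 5M \
  -c:a aac -b:a 192K \
  -f flv "rtmp://global-live.mux.com:5222/app/${MUX_STREAM_KEY}"
```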
Streaming multiple videos back to back as if they were live is a bit trickier. There are a few platforms out there that offer this as a solution. But if you want to build it into your own application as a native feature, you’re going to have to get your hands dirty. For this post, we’ll offer up one option for implementing this functionality by leaning on our trusty friend, FFmpeg. Fair warning: While this approach is effective, it’s not a perfect solution; more on that below.
Step 1: Prepare your source files
If you want to stream multiple files one after another, all of your source files must have the same streams (same codecs, same time base, same resolution). If you have assets with different specifications, you’ll need to transcode them first.
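Here’s a sketch of what that normalization step might look like with FFmpeg; the input and output file names are placeholders:

```bash
# Re-encode a source file to a common format (H.264 video + AAC audio)
# so every file in the playlist shares the same stream layout.
ffmpeg -i myfile.mov \
  -c:v libx264 -crf 17 \
  -c:a aac -b:a 192K \
  myfile_1.mp4
```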
What do these FFmpeg parameters do?
Flag | Description |
---|---|
-i <myfile.mov> | Defines your input file. |
-c:v libx264 | Selects the H.264 video codec (the libx264 encoder). |
-crf 17 | Sets the output quality using crf (constant rate factor); 0 is lossless and 51 is the worst quality possible. |
-c:a aac | Selects the aac audio codec. |
-b:a 192K | Sets the bitrate of the audio. |
Once you’ve completed this initial processing, all of your required video files should be prepped for simulated live streaming.
Step 2: Create a text file
To play back a series of videos, FFmpeg requires a .txt file that contains a list of source videos and their full locations. The videos will be streamed in order from top to bottom. The .txt file will be used as the input source for FFmpeg. Make sure you use the same files that you prepared earlier!
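For instance, a playlist.txt for FFmpeg’s concat demuxer might look like this (the file name and paths here are placeholders):

```
file '/path/to/myfile_1.mp4'
file '/path/to/myfile_2.mp4'
file '/path/to/myfile_3.mp4'
```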
In this example, myfile_1.mp4 will be streamed first, followed seamlessly by myfile_2.mp4 with no interruption, and so on.
Step 3: Live stream to Mux
Use this command to start live streaming your video files to Mux. Make sure to specify the location of the text file as the input.
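Here’s one way that command might look, assuming the playlist.txt from the previous step, Mux’s standard RTMP ingest endpoint, and a ${MUX_STREAM_KEY} placeholder for your stream key (the audio flags mirror the ones from Step 1):

```bash
# Stream the playlist of prepared files to Mux as one continuous live broadcast.
ffmpeg -re -f concat -safe 0 -i playlist.txt \
  -c:v libx264 -x264-params "keyint=60:scenecut=0" \
  -preset fast -b:v 5M -maxrate 6M -bufsize 3M \
  -c:a aac -b:a 192K \
  -f flv "rtmp://global-live.mux.com:5222/app/${MUX_STREAM_KEY}"
```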
What do these FFmpeg parameters do?
Flag | Description |
---|---|
-re | Reads each source at its native frame rate, so the stream plays out in real time rather than as fast as FFmpeg can process it. |
-f concat | Selects the concat demuxer, which lets you specify a list of files as the input. |
-safe 0 | Accepts all input file names and locations, including absolute paths. |
-x264-params | Manually sets x264 encoder parameters. |
keyint=60 | Sets the keyframe interval to every 60 frames. |
scenecut=0 | Tells the encoder to not add a keyframe when there’s a scene change. |
-preset fast | Trades some output quality for faster encoding speed. |
-b:v 5M | Specifies the target (average) bit rate for the encoder to use. |
-maxrate 6M | Specifies the maximum bitrate tolerance; requires -bufsize to be set. |
-bufsize 3M | Specifies the decoder buffer size, which determines the variability of the output bitrate. |
Where this falls short
If it isn’t clear by now, managing FFmpeg commands can quickly get hairy. If you want any chance of turning this into a scalable solution that many users can take advantage of, you will need to build and manage your own computing infrastructure. Suddenly, you’re required to become a video expert instead of focusing on what makes your product unique.
FFmpeg also doesn’t make error handling very easy. If you experience any network disconnects, FFmpeg will fall over. You will have to implement your own retry logic to take care of restarting the stream, playing the correct video, resuming the video from where it left off, and more. Whew. I told you it gets hairy.
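To give a flavor of what’s involved, here’s a naive retry wrapper sketched in bash. Note that it simply restarts the whole playlist from the top whenever FFmpeg exits with an error, which is exactly the shortcoming described above; resuming the right video at the right offset is a much harder problem:

```bash
# Wrap the streaming command from Step 3 in a function so we can retry it.
stream() {
  ffmpeg -re -f concat -safe 0 -i playlist.txt \
    -c:v libx264 -x264-params "keyint=60:scenecut=0" \
    -preset fast -b:v 5M -maxrate 6M -bufsize 3M \
    -c:a aac -b:a 192K \
    -f flv "rtmp://global-live.mux.com:5222/app/${MUX_STREAM_KEY}"
}

# Naive retry loop (sketch): keep restarting until FFmpeg exits cleanly.
# Caveat: each restart begins the playlist from the beginning; it does
# NOT resume the broadcast from where it left off.
until stream; do
  echo "FFmpeg exited; restarting in 5 seconds..." >&2
  sleep 5
done
```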
Can we do better than this?
We’re experimenting with some ways to make simulated live streaming easier here at Mux. If this is something you’re interested in, reach out and let’s chat.