Creates a new live stream. Once created, an encoder can connect to Mux via the specified stream key and begin streaming to an audience.
curl https://api.mux.com/video/v1/live-streams \
-X POST \
-d '{ "playback_policy": "public", "new_asset_settings": { "playback_policy": "public" } }' \
-H "Content-Type: application/json" \
-u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
- public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
- signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
- drm playback IDs are protected with DRM technologies. See the DRM documentation for more details.
An array of playback policy objects that you want applied to this asset and available through playback_ids. advanced_playback_policies must be used instead of playback_policy when creating a DRM playback ID.
- public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
- signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
- drm playback IDs are protected with DRM technologies. See the DRM documentation for more details.
The DRM configuration used by this playback ID. Must only be set when policy is set to drm.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.
The URL of the file that Mux should download and use.
- For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
- For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting subtitles and closed captions.
- For clipping, use the mux://assets/{asset_id} template, where asset_id is the identifier of the asset to create the clip from.

An object that describes how the image file referenced in the URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
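Putting the overlay fields above together, a watermarked asset request might look like the following sketch against the standard create-asset call (the video and logo URLs are illustrative placeholders):

```shell
curl https://api.mux.com/video/v1/assets \
  -X POST \
  -d '{
    "input": [
      { "url": "https://example.com/video.mp4" },
      {
        "url": "https://example.com/logo.png",
        "overlay_settings": {
          "vertical_align": "top",
          "vertical_margin": "5%",
          "horizontal_align": "right",
          "horizontal_margin": "5%",
          "width": "10%",
          "opacity": "80%"
        }
      }
    ],
    "playback_policy": ["public"]
  }' \
  -H "Content-Type: application/json" \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
```

Here height is omitted, so it scales proportionally to the 10% width as described above.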
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
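A minimal sketch of requesting auto-generated subtitles on the main input when creating an asset (the input URL and track name are placeholders):

```shell
curl https://api.mux.com/video/v1/assets \
  -X POST \
  -d '{
    "input": [
      {
        "url": "https://example.com/video.mp4",
        "generated_subtitles": [
          { "name": "English (generated)", "language_code": "en" }
        ]
      }
    ],
    "playback_policy": ["public"]
  }' \
  -H "Content-Type: application/json" \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
```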
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has the mux://assets/{asset_id} format.
The time offset in seconds from the beginning of the video indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has the mux://assets/{asset_id} format.
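For example, a clipping request against an existing asset might look like this sketch, using the start_time and end_time input fields for the clip markers (the asset ID is a placeholder):

```shell
curl https://api.mux.com/video/v1/assets \
  -X POST \
  -d '{
    "input": [
      {
        "url": "mux://assets/{EXISTING_ASSET_ID}",
        "start_time": 10,
        "end_time": 45
      }
    ],
    "playback_policy": ["public"]
  }' \
  -H "Content-Type: application/json" \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
```

This would produce a 35-second clip starting 10 seconds into the source asset.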
This parameter is required for text type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.
The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate it based on the input[].language_code value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.
This optional parameter should be used for tracks with type of text and text_type set to subtitles.
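The track parameters above combine like this when ingesting an SRT subtitle file alongside a video (a sketch; both URLs are placeholders):

```shell
curl https://api.mux.com/video/v1/assets \
  -X POST \
  -d '{
    "input": [
      { "url": "https://example.com/video.mp4" },
      {
        "url": "https://example.com/captions-en.srt",
        "type": "text",
        "text_type": "subtitles",
        "language_code": "en",
        "name": "English",
        "closed_captions": false
      }
    ],
    "playback_policy": ["public"]
  }' \
  -H "Content-Type: application/json" \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
```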
An array of playback policy names that you want applied to this asset and available through playback_ids. Options include:
- "public" (anyone with the playback URL can stream the asset).
- "signed" (an additional access token is required to play the asset).
If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when there is only one playback policy.
- public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
- signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
- drm playback IDs are protected with DRM technologies. See the DRM documentation for more details.
An array of playback policy objects that you want applied to this asset and available through playback_ids. advanced_playback_policies must be used instead of playback_policy when creating a DRM playback ID.
- public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
- signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
- drm playback IDs are protected with DRM technologies. See the DRM documentation for more details.
The DRM configuration used by this playback ID. Must only be set when policy is set to drm.
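As a sketch of how an advanced playback policy object might be shaped, assuming a drm_configuration_id field holds the DRM configuration reference (this field name is an assumption here; check the DRM documentation for the exact request shape):

```shell
curl https://api.mux.com/video/v1/live-streams \
  -X POST \
  -d '{
    "new_asset_settings": {
      "advanced_playback_policies": [
        { "policy": "drm", "drm_configuration_id": "{DRM_CONFIGURATION_ID}" }
      ]
    }
  }' \
  -H "Content-Type: application/json" \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
```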
You can set this field to anything you want. It will be included in the asset details and related webhooks. If you're looking for more structured metadata, such as title or external_id, you can use the meta object instead. Max: 255 characters.
Deprecated. See the Static Renditions API for the updated API.
Specify the level of support for mp4 playback. You may not enable both mp4_support and static_renditions.
- The capped-1080p option produces a single MP4 file, called capped-1080p.mp4, with the video resolution capped at 1080p. This option produces an audio.m4a file for an audio-only asset.
- The audio-only option produces a single M4A file, called audio.m4a, for a video or an audio-only asset. MP4 generation will error when this option is specified for a video-only asset.
- The audio-only,capped-1080p option produces both the audio.m4a and capped-1080p.mp4 files. Only the capped-1080p.mp4 file is produced for a video-only asset, while only the audio.m4a file is produced for an audio-only asset.
- The standard (deprecated) option produces up to three MP4 files with different levels of resolution (high.mp4, medium.mp4, low.mp4, or audio.m4a for an audio-only asset).
- MP4 files are not produced for none (default).
In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an MP4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
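As a sketch, enabling a capped-1080p MP4 rendition on the asset recorded from a live stream might look like:

```shell
curl https://api.mux.com/video/v1/live-streams \
  -X POST \
  -d '{
    "playback_policy": ["public"],
    "new_asset_settings": {
      "playback_policy": ["public"],
      "mp4_support": "capped-1080p"
    }
  }' \
  -H "Content-Type: application/json" \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
```

Note that mp4_support is deprecated in favor of static_renditions, as described below.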
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify the level (if any) of support for master access. Master access can be enabled temporarily so that your asset can be downloaded. See the Download your videos guide for more information.
Marks the asset as a test asset when the value is set to true. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
This field is deprecated. Please use video_quality instead. The encoding tier informs the cost, quality, and available platform features for the asset. The default encoding tier for an account can be set in the Mux Dashboard. See the video quality guide for more details.
The video quality controls the cost, quality, and available platform features for the asset. The default video quality for an account can be set in the Mux Dashboard. This field replaces the deprecated encoding_tier value. See the video quality guide for more details.
An array of static renditions to create for this asset. You may not enable both static_renditions and mp4_support (the latter is deprecated).
Arbitrary user-supplied metadata set for the static rendition. Max 255 characters.
The video title. Max 512 code points.
This is an identifier you provide to keep track of the creator of the video. Max 128 code points.
This is an identifier you provide to link the video to your own data. Max 128 code points.
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Defaults to 60 seconds on the API if not specified.
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
Force the live stream to only process the audio track when the value is set to true. Mux drops the video track if one is broadcast.
Describe the embedded closed caption contents of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
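A sketch of configuring generated subtitles on a live stream, assuming the vocabulary IDs are passed in a transcription_vocabulary_ids array (that field name is an assumption here; the vocabulary ID is a placeholder):

```shell
curl https://api.mux.com/video/v1/live-streams \
  -X POST \
  -d '{
    "playback_policy": ["public"],
    "new_asset_settings": { "playback_policy": ["public"] },
    "generated_subtitles": [
      {
        "name": "English (auto)",
        "language_code": "en",
        "transcription_vocabulary_ids": ["{VOCABULARY_ID}"]
      }
    ]
  }' \
  -H "Content-Type: application/json" \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
```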
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.
Marks the live stream as a test live stream when the value is set to true. A test live stream can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test live streams created. Test live streams are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and the recorded asset is deleted after 24 hours.
Arbitrary user-supplied metadata set by you when creating a simulcast target.
Stream Key represents a stream identifier on the third party live streaming service to send the parent live stream to. Only used for RTMP(s) simulcast destinations.
The RTMP(s) or SRT endpoint for a simulcast destination.
For example:
- RTMP: rtmp://live.example.com/app
- SRT: srt://srt-live.example.com:1234?streamid={stream_key}&passphrase={srt_passphrase}
Note: SRT simulcast targets can only be used when the source is connected over SRT.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
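Combining the stream-level options above, a sketch of creating a low-latency stream with a 30-second reconnect window and a 6-hour continuous-duration cap (values are illustrative):

```shell
curl https://api.mux.com/video/v1/live-streams \
  -X POST \
  -d '{
    "playback_policy": ["public"],
    "new_asset_settings": { "playback_policy": ["public"] },
    "latency_mode": "low",
    "reconnect_window": 30,
    "max_continuous_duration": 21600
  }' \
  -H "Content-Type: application/json" \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
```

Setting a non-zero reconnect_window matters here because, as noted above, Low Latency streams default to a Reconnect Window of 0 seconds.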
{
"playback_policy": [
"public"
],
"new_asset_settings": {
"playback_policy": [
"public"
]
}
}
{
"data": {
"stream_key": "abcdefgh",
"status": "idle",
"reconnect_window": 60,
"playback_ids": [
{
"policy": "public",
"id": "HNRDuwff3K2VjTZZAPuvd2Kx6D01XUQFv02GFBHPUka018"
}
],
"new_asset_settings": {
"playback_policies": [
"public"
]
},
"id": "ZEBrNTpHC02iUah025KM3te6ylM7W4S4silsrFtUkn3Ag",
"created_at": "1609937654",
"latency_mode": "standard",
"max_continuous_duration": 43200
}
}