The Mux MCP (Model Context Protocol) Server brings Mux's Video and Data platform capabilities directly to your AI tools. Once set up, you can upload videos, manage live streams, analyze video performance, and access practically all of Mux's video infrastructure through natural language prompts in supported AI clients.
This guide walks you through the functionality of Mux's MCP server and how to connect it to various AI clients.
The following tools and API routes are supported in the local Mux MCP Server:
The following tools and routes are not currently supported in the local Mux MCP Server. Generally speaking, these are endpoints that can execute deletions, and they are disabled for safety:
Video Management
ASSET_ID
Mux Data Analytics and Performance
Before using the Mux MCP Server, make sure you meet the following prerequisites:
Mux's MCP server is hosted at https://mcp.mux.com. When you use this remote MCP server, authentication is handled automatically, with no need to grab Access Token information from the Dashboard. To configure the Mux MCP server in your client, add an MCP server, which is sometimes called a "connector" (Claude/Claude Code/ChatGPT), an "extension" (Goose), or simply an MCP Server (VSCode), and enter the URL https://mcp.mux.com as the location.
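For clients that read MCP servers from a JSON config file rather than a settings UI, the entry is just the remote URL. Here's a minimal sketch assuming VSCode's `.vscode/mcp.json` format; the exact file location and field names vary by client, so check your client's MCP documentation:

```json
{
  "servers": {
    "mux": {
      "type": "http",
      "url": "https://mcp.mux.com"
    }
  }
}
```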
Once configured, the LLM client and our MCP server should negotiate authentication and authorization, prompting you automatically to:
When you're already logged in, your experience will look something like this:
And that's it, you're good to go!
By default, https://mcp.mux.com is configured in the simplest manner (though this may change in the future), exposing the full set of tools Mux makes available. That said, depending on your workflow, you may want to limit that set, so Mux supports query parameters for configuring the MCP server. A more complete set of configuration options can be seen here, and most of those work simply as query params. However, a few bear mentioning directly:

- `tools`: options are `all` (default) and `dynamic`. Use `dynamic` if you want to expose tools meant to let the LLM dynamically discover endpoints and tools, which can aid in controlling context windows and speed up processing if a lot of tools are available.
- `resource`: array of resources (sets of APIs) to expose, such as `video.*`. These act as an inclusion set rather than an exclusion set, so you can chain multiple values to expand the list of tools. Some options include:
  - `video.*`: all Mux Video APIs
  - `data.*`: all Mux Data APIs
  - `system.*`: all System APIs, such as managing Signing Keys
  - `video.asset.*`: the APIs used to manage Mux Video assets
- `client`: options are `claude` (default), `claude-code`, `cursor`, and `openai-agents`.
You can also chain these together. For instance, to configure an MCP server that exposes only the Video APIs, does so dynamically, and is tuned for Cursor, you'd use https://mcp.mux.com?client=cursor&resource=video.*&tools=dynamic as your remote URL.
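In a file-based setup, that parameterized URL drops in wherever the plain URL would go. A sketch assuming Cursor's `mcp.json` format (again, field names vary by client):

```json
{
  "mcpServers": {
    "mux": {
      "url": "https://mcp.mux.com?client=cursor&resource=video.*&tools=dynamic"
    }
  }
}
```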
These days, most LLM clients directly support remote MCP servers (rather than only locally installed ones), so you shouldn't have much trouble getting set up. That said, there are still some clients (particularly older versions) that don't have built-in remote MCP support, such as Goose at the time of writing. For those situations, you have two options:
If you run into issues or have questions: