
# Use Mux in AI Workflows
Learn how to use AI to automatically generate chapters, translate audio, and create summaries for your Mux videos
<Callout type="info">
  The workflows below are all powered by [@mux/ai](https://github.com/muxinc/ai), our open-source library that provides prebuilt workflows for common video AI tasks. It works with your favorite LLM provider (OpenAI, Anthropic, or Google). Check out the [GitHub repository](https://github.com/muxinc/ai) for more details!
</Callout>

<GuideCard
  imageSrc="/docs/images/ai-chapters@2x.png"
  imageWidth={536}
  imageHeight={300}
  title="AI Chapters"
  description="Automatically generate chapters for your video using AI."
  links={[
    {
      title: "View the Guide →",
      href: "/docs/examples/ai-generated-chapters",
    },
  ]}
/>

<GuideCard
  imageSrc="/docs/images/ai-translation@2x.png"
  imageWidth={536}
  imageHeight={300}
  title="AI Dubbing"
  description="Automatically dub your video into different languages."
  links={[
    {
      title: "View the Guide →",
      href: "/docs/examples/ai-translation-dubbing",
    },
  ]}
/>

<GuideCard
  imageSrc="/docs/images/ai-summarizing@2x.png"
  imageWidth={536}
  imageHeight={300}
  title="AI Summarization"
  description="Automatically summarize your video using AI."
  links={[
    {
      title: "View the Guide →",
      href: "/docs/examples/ai-summarizing-and-tagging",
    },
  ]}
/>

<GuideCard
  imageSrc="/docs/images/ai-subtitle-translations@2x.png"
  imageWidth={536}
  imageHeight={300}
  title="AI Subtitle Translation"
  description="Automatically convert Mux's auto-generated captions into another language by leveraging the power of an LLM."
  links={[
    {
      title: "View the Guide →",
      href: "/docs/examples/ai-translation-subtitles",
    },
  ]}
/>

<GuideCard
  imageSrc="/docs/images/ai-recommendation-engine@2x.png"
  imageWidth={536}
  imageHeight={300}
  title="AI Recommendation Engine"
  description="Nearest neighbor search for similar videos"
  links={[
    {
      title: "View the Guide →",
      href: "/docs/examples/ai-recommendation-engine",
    },
  ]}
/>

<GuideCard
  imageSrc="/docs/images/ai-moderation@2x.png"
  imageWidth={536}
  imageHeight={300}
  title="AI Moderation"
  description="Automatically moderate video content using AI to detect violence or nudity."
  links={[
    {
      title: "View the Guide →",
      href: "/docs/examples/ai-moderation",
    },
  ]}
/>


# Generating video chapters with AI
Use the @mux/ai library to automatically generate chapters for your videos via LLMs
<Callout type="info">This guide uses [@mux/ai](https://github.com/muxinc/ai), our open-source library that provides prebuilt workflows for common video AI tasks. It works with your favorite LLM provider (OpenAI, Anthropic, or Google). Check out the [GitHub repository](https://github.com/muxinc/ai) for more details!</Callout>

If you're using a player that supports visualizing chapters during playback, like [Mux Player](https://www.mux.com/player) does, you can automatically generate chapter markers using AI. The `@mux/ai` library makes this straightforward by handling all the complexity of fetching transcripts, formatting prompts, and parsing AI responses.

## Prerequisites

Before starting, make sure you have:

* A Mux account with API credentials (token ID and token secret)
* An API key for your preferred AI provider (OpenAI, Anthropic, or Google)
* Node.js installed
* Videos with captions enabled (human-generated captions are best, but [auto-generated captions](/docs/guides/add-autogenerated-captions-and-use-transcripts) work great too)

## Installation

```bash
npm install @mux/ai
```

## Configuration

Set your environment variables:

```bash
# Required
MUX_TOKEN_ID=your_mux_token_id
MUX_TOKEN_SECRET=your_mux_token_secret
# You only need the API key for the provider you're using
OPENAI_API_KEY=your_openai_api_key # OR
ANTHROPIC_API_KEY=your_anthropic_api_key # OR
GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key
```

## Basic usage

### Backend: Generate chapters

```javascript
import { generateChapters } from "@mux/ai/workflows";

// Generate chapters for a video
const result = await generateChapters("your-mux-asset-id", "en", {
  provider: "openai" // or "anthropic" or "google"
});
```

The function returns chapters in the exact format Mux Player expects:

```json
{
  "chapters": [
    { "startTime": 0, "title": "Introduction" },
    { "startTime": 15, "title": "Setting Up the Live Stream" },
    { "startTime": 29, "title": "Adding Functionality with HTML and JavaScript" },
    { "startTime": 41, "title": "Identifying Favorite Scene for Clipping" }
  ]
}
```

### Frontend: Add chapters to player

Once you have the chapters from your backend, you can add them to Mux Player:

```javascript
const player = document.querySelector('mux-player');
player.addChapters(result.chapters);
```

## Provider options

`@mux/ai` supports three AI providers:

* **OpenAI** (default): Uses `gpt-5.1` model - Fast and cost-effective
* **Anthropic**: Uses `claude-sonnet-4-5` model - Great for nuanced understanding
* **Google**: Uses `gemini-3-flash-preview` model - Balance of speed and quality

You can override the default model:

```javascript
const result = await generateChapters("your-mux-asset-id", "en", {
  provider: "openai",
  model: "gpt-4o" // Use a different model
});
```

## Custom prompts

You can override specific parts of the prompt to tune the output:

```javascript
const result = await generateChapters("your-mux-asset-id", "en", {
  provider: "anthropic",
  promptOverrides: {
    system: "You are a professional video editor. Create concise, engaging chapter titles.",
    instructions: "Generate 5-8 chapters with titles under 50 characters each."
  }
});
```

## Webhook integration

For automated chapter generation when videos are uploaded, you should trigger the call to generate chapters from the [`video.asset.track.ready` webhook](/docs/core/listen-for-webhooks):

```javascript
import { generateChapters } from "@mux/ai/workflows";

export async function handleWebhook(req, res) {
  const event = req.body;

  if (event.type === 'video.asset.track.ready' &&
      event.data.type === 'text' &&
      event.data.language_code === 'en') {
    const result = await generateChapters(event.data.asset_id, "en");
    // db is your own storage layer
    await db.saveChapters(event.data.asset_id, result.chapters);
  }

  // Acknowledge the webhook so Mux doesn't retry delivery
  res.status(200).send('ok');
}
```

## Visualizing in Mux Player

Once you have chapters, you can display them in Mux Player by calling `addChapters`, as shown above. Here's a complete example with a hardcoded set of generated chapters:

```html
<mux-player playback-id="LOMMdhiOET521ZEsVVyM01blbZXEgfgxj"></mux-player>
```

```javascript
import "@mux/mux-player";

const generatedChapters = [
  { start: "00:00:00", title: "Instant Clipping Introduction" },
  { start: "00:00:15", title: "Setting Up the Live Stream" },
  { start: "00:00:29", title: "Adding Functionality with HTML and JavaScript" },
  { start: "00:00:41", title: "Identifying Favorite Scene for Clipping" },
  { start: "00:00:52", title: "Selecting Start and End Time for Clip" },
  { start: "00:01:10", title: "Generating Clip URL" },
  { start: "00:01:16", title: "Playing the Clipped Video" },
  { start: "00:01:24", title: "Encouragement to Start Clipping" },
];

const playerEl = document.querySelector("mux-player");

const parsedChapters = generatedChapters.map(({ start, title }) => {
  // turn the "HH:MM:SS" timestamps into seconds
  const [hours, minutes, seconds] = start.split(":").map((n) => parseInt(n, 10));
  return { startTime: hours * 3600 + minutes * 60 + seconds, value: title };
});

playerEl.addChapters(parsedChapters);
```

## How it works

Under the hood, `@mux/ai` handles:

1. Fetching the video transcript from Mux using the asset ID
2. Formatting the transcript for the AI provider
3. Sending optimized prompts to generate chapter markers
4. Parsing and validating the AI response
5. Converting timestamps to the format Mux Player expects

## Mux features used

* [Auto-generated captions](/docs/guides/add-autogenerated-captions-and-use-transcripts) - `@mux/ai` fetches these automatically
* [Mux Player](/docs/guides/mux-player-web) - For displaying the generated chapters

## Best practices

* **Enable captions**: Human-generated captions provide the best results, but auto-generated captions work great too
* **Choose the right provider**: OpenAI's `gpt-5.1` is cost-effective for most use cases
* **Validate output**: While `@mux/ai` validates JSON structure, review chapter quality for your use case
* **Cache results**: Store generated chapters in your database to avoid regenerating them (see the sketch below)
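
For example, a minimal sketch of that caching check (assuming `db` is your own storage layer):

```javascript
import { generateChapters } from "@mux/ai/workflows";

// Hypothetical helper: return cached chapters if present, otherwise generate and store them
async function getOrGenerateChapters(assetId) {
  const cached = await db.getChapters(assetId); // db is your own storage layer
  if (cached) return cached;

  const result = await generateChapters(assetId, "en");
  await db.saveChapters(assetId, result.chapters);
  return result.chapters;
}
```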

## Resources

* [@mux/ai GitHub Repository](https://github.com/muxinc/ai)
* [@mux/ai Workflows Documentation](https://github.com/muxinc/ai/blob/main/docs/WORKFLOWS.md)
* [Mux Auto-generated Captions](/docs/guides/add-autogenerated-captions-and-use-transcripts)
* [Mux Player Web Component](/docs/guides/mux-player-web)


# Building a video recommendation engine with AI
Use the @mux/ai library to generate video embeddings and build a recommendation engine
<Callout type="info">This guide uses [@mux/ai](https://github.com/muxinc/ai), our open-source library that provides prebuilt workflows for common video AI tasks. It works with your favorite LLM provider (OpenAI, Anthropic, or Google). Check out the [GitHub repository](https://github.com/muxinc/ai) for more details!</Callout>

You can build a content-based recommendation system that suggests similar videos by converting video transcripts into AI embeddings and performing vector similarity search. The `@mux/ai` library makes this straightforward by handling transcript fetching, chunking, and embedding generation.

## Overview

The core concept is to convert text (your video transcripts) into high-dimensional vectors (embeddings) that capture semantic meaning. Videos with similar content will have embeddings that are close together in vector space, allowing you to find and recommend similar content.
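
To make "close together in vector space" concrete: similarity between two embeddings is typically measured with cosine similarity. Your vector database computes this for you, but the math is simple. A minimal sketch:

```javascript
// Cosine similarity between two embedding vectors:
// 1 = pointing the same direction (very similar), 0 = unrelated
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```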

## Prerequisites

Before starting, make sure you have:

* A Mux account with API credentials (token ID and token secret)
* An API key for OpenAI or Google (the providers that support embeddings)
* A vector database (Pinecone, Supabase with pgvector, etc.)
* Node.js installed
* Videos with captions enabled (human-generated captions are best, but [auto-generated captions](/docs/guides/add-autogenerated-captions-and-use-transcripts) work great too)

## Installation

```bash
npm install @mux/ai
```

## Configuration

Set your environment variables:

```bash
# Required
MUX_TOKEN_ID=your_mux_token_id
MUX_TOKEN_SECRET=your_mux_token_secret
# You only need the API key for the provider you're using
OPENAI_API_KEY=your_openai_api_key # OR
GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key
```

## Basic usage

```javascript
import { generateVideoEmbeddings } from "@mux/ai/workflows";

const result = await generateVideoEmbeddings("your-mux-asset-id", {
  provider: "openai",  // or "google"
  languageCode: "en"
});

// Use the averaged embedding for video-level search
console.log(result.averagedEmbedding);
// Array of 1536 numbers (for OpenAI's text-embedding-3-small)

// Or use individual chunks for timestamp-accurate search
console.log(result.chunks.length);
console.log(result.chunks[0].embedding);
```

The function returns:

```javascript
{
  "assetId": "your-asset-id",
  "averagedEmbedding": [0.123, -0.456, ...],  // Single vector representing the whole video
  "chunks": [
    {
      "chunkId": "chunk_0",
      "embedding": [0.234, -0.567, ...],
      "metadata": {
        "tokenCount": 450,
        "startTime": 0,
        "endTime": 30.5,
        "text": "Welcome to our tutorial..."
      }
    }
    // ... more chunks
  ],
  "metadata": {
    "totalChunks": 12,
    "totalTokens": 5432,
    "embeddingDimensions": 1536,
    "languageCode": "en"
  }
}
```

## Provider options

`@mux/ai` supports two embedding providers:

* **OpenAI** (default): Uses `text-embedding-3-small` model (default 1536 dimensions) - Fast and cost-effective
* **Google**: Uses `text-embedding-004` model (768 dimensions) - Alternative option

```javascript
// Using OpenAI (default)
const result = await generateVideoEmbeddings("your-mux-asset-id", {
  provider: "openai"
});

// Using Google
const result = await generateVideoEmbeddings("your-mux-asset-id", {
  provider: "google"
});

// Override the default model
const result = await generateVideoEmbeddings("your-mux-asset-id", {
  provider: "openai",
  model: "text-embedding-3-large"  // 3072 dimensions, higher quality
});
```

## Chunking strategies

For long videos, transcripts are split into chunks to fit within embedding model token limits. Chunking strategy affects the granularity of your search results:

* **Token-based chunking**: Splits text by token count, maximizing information density. Best for general video-level recommendations.
* **VTT-based chunking**: Preserves caption boundaries and timing metadata. Best when you need timestamp-accurate search or want to recommend specific video segments.

```javascript
// Token-based chunking (default)
const result = await generateVideoEmbeddings("your-mux-asset-id", {
  chunkingStrategy: {
    type: "token",
    maxTokens: 500,
    overlap: 100  // Tokens of overlap between chunks
  }
});

// VTT-based chunking (preserves caption boundaries)
const result = await generateVideoEmbeddings("your-mux-asset-id", {
  chunkingStrategy: {
    type: "vtt",
    maxTokens: 500,
    overlapCues: 2  // Number of caption cues to overlap
  }
});
```

## Storing embeddings in a vector database

### Using Pinecone

```javascript
import { Pinecone } from '@pinecone-database/pinecone';
import { generateVideoEmbeddings } from "@mux/ai/workflows";

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });
const index = pinecone.index('video-recommendations');

// Generate and store embeddings
const result = await generateVideoEmbeddings("your-mux-asset-id");

// Store the averaged embedding for video-level search
await index.upsert([{
  id: "your-mux-asset-id",
  values: result.averagedEmbedding,
  metadata: {
    title: "Video Title",
    duration: 300,
    totalChunks: result.metadata.totalChunks
  }
}]);
```

### Using Supabase with pgvector

```javascript
import { createClient } from '@supabase/supabase-js';
import { generateVideoEmbeddings } from "@mux/ai/workflows";

const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_KEY
);

// Generate embeddings
const result = await generateVideoEmbeddings("your-mux-asset-id");

// Store in Supabase
await supabase.from('video_embeddings').insert({
  asset_id: "your-mux-asset-id",
  embedding: result.averagedEmbedding,
  metadata: result.metadata
});
```

## Finding similar videos

Once embeddings are stored, use vector similarity search:

```javascript
// Generate embedding for the query video
const queryResult = await generateVideoEmbeddings(queryAssetId);

// Search for similar videos in Pinecone
const searchResults = await index.query({
  vector: queryResult.averagedEmbedding,
  topK: 5,  // Return 5 most similar videos
  includeMetadata: true
});

// Display recommendations
searchResults.matches.forEach(match => {
  console.log(`Similar video: ${match.id}`);
  console.log(`Similarity score: ${match.score}`);
});
```
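
If you're using Supabase with pgvector instead, similarity search is usually exposed through a SQL function called via RPC. Here's a rough sketch; the `match_videos` function name and its return shape are assumptions you'd define yourself (see the Supabase pgvector guide for the pattern):

```javascript
// Assumes you've defined a `match_videos` SQL function in Postgres that
// orders rows by embedding distance and returns asset_id + similarity
const { data: matches, error } = await supabase.rpc('match_videos', {
  query_embedding: queryResult.averagedEmbedding,
  match_count: 5
});

if (!error) {
  matches.forEach((match) => {
    console.log(`Similar video: ${match.asset_id} (similarity: ${match.similarity})`);
  });
}
```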

## Webhook integration

For automated embedding generation when videos are uploaded, you should trigger the call to generate video embeddings from the [`video.asset.track.ready` webhook](/docs/core/listen-for-webhooks):

```javascript
import { generateVideoEmbeddings } from "@mux/ai/workflows";

export async function handleWebhook(req, res) {
  const event = req.body;

  if (event.type === 'video.asset.track.ready' &&
      event.data.type === 'text' &&
      event.data.language_code === 'en') {
    const result = await generateVideoEmbeddings(event.data.asset_id);
    // vectorDB is your own vector database client (Pinecone, pgvector, etc.)
    await vectorDB.upsert({ id: event.data.asset_id, embedding: result.averagedEmbedding });
  }

  // Acknowledge the webhook so Mux doesn't retry delivery
  res.status(200).send('ok');
}
```

## How it works

Under the hood, `@mux/ai` handles:

1. **Fetching transcript**: Downloads the VTT file from Mux
2. **Chunking**: Splits long transcripts into manageable pieces
3. **Token counting**: Ensures chunks fit within model limits
4. **Batch processing**: Sends chunks to the embedding API efficiently
5. **Averaging**: Computes a single vector for the whole video
6. **Metadata tracking**: Preserves timing and token information

## Mux features used

* [Auto-generated captions](/docs/guides/add-autogenerated-captions-and-use-transcripts) - Source transcripts for embeddings

## Best practices

* **Enable captions**: Human-generated captions provide the best results, but auto-generated captions work great too
* **Choose appropriate chunking**: Token-based for most cases, VTT-based for caption-aligned chunks
* **Use averaged embeddings for video-level search**: Faster and simpler for basic recommendations
* **Use chunk embeddings for precise matching**: Better for finding specific segments within videos
* **Consistent embedding model**: Always use the same model for queries and storage
* **Set quality thresholds**: Only recommend videos with similarity scores above a minimum threshold (see the sketch below)
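
For example, a simple threshold filter over the Pinecone results from earlier (the 0.8 cutoff is an arbitrary starting point to tune against your own library):

```javascript
const MIN_SIMILARITY = 0.8; // arbitrary starting point; tune for your content

const recommendations = searchResults.matches
  .filter((match) => match.score >= MIN_SIMILARITY)
  .map((match) => ({ assetId: match.id, score: match.score }));
```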

## Vector database options

Here are a few popular options for storing embeddings:

* **[Pinecone](https://www.pinecone.io/)**: Managed vector database, easy to use
* **[Supabase with pgvector](https://supabase.com/docs/guides/database/extensions/pgvector)**: PostgreSQL extension, good for existing Postgres users
* **[Weaviate](https://weaviate.io/)**: Open-source vector database
* **[Milvus](https://milvus.io/)**: Scalable vector database
* **[Qdrant](https://qdrant.tech/)**: High-performance vector search

## Resources

* [@mux/ai GitHub Repository](https://github.com/muxinc/ai)
* [@mux/ai Workflows Documentation](https://github.com/muxinc/ai/blob/main/docs/WORKFLOWS.md)
* [Mux Auto-generated Captions](/docs/guides/add-autogenerated-captions-and-use-transcripts)
* [OpenAI Embeddings](https://platform.openai.com/docs/guides/embeddings)
* [Pinecone Documentation](https://docs.pinecone.io/)
* [Supabase pgvector Guide](https://supabase.com/docs/guides/database/extensions/pgvector)


# Automatic translation and dubbing with AI
Use the @mux/ai library to automatically translate and dub video audio into different languages
<Callout type="info">This guide uses [@mux/ai](https://github.com/muxinc/ai), our open-source library that provides prebuilt workflows for common video AI tasks. It works with your favorite LLM provider (OpenAI, Anthropic, or Google). Check out the [GitHub repository](https://github.com/muxinc/ai) for more details!</Callout>

While Mux's [auto-generated captions](/docs/guides/add-autogenerated-captions-and-use-transcripts) can produce transcripts in the original language, you may want to translate and dub the audio itself into different languages. The `@mux/ai` library integrates with [ElevenLabs' dubbing API](https://elevenlabs.io/docs/overview/capabilities/dubbing) to automatically create translated audio tracks and add them to your videos as [multi-track audio](/docs/guides/add-alternate-audio-tracks-to-your-videos).

## Prerequisites

Before starting, make sure you have:

* A Mux account with API credentials (token ID and token secret)
* An ElevenLabs API key with dubbing access
* An S3-compatible storage bucket (required for audio file hosting during upload)
* Node.js installed

## Installation

```bash
npm install @mux/ai
```

## Configuration

Set your environment variables:

```bash
# Required for Mux
MUX_TOKEN_ID=your_mux_token_id
MUX_TOKEN_SECRET=your_mux_token_secret
# Required for ElevenLabs dubbing
ELEVENLABS_API_KEY=your_elevenlabs_api_key
# Required for uploading dubbed audio back to Mux
S3_ENDPOINT=https://your-s3-endpoint.com
S3_REGION=auto
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY_ID=your-access-key
S3_SECRET_ACCESS_KEY=your-secret-key
```

## Enabling audio static renditions

The `@mux/ai` library automatically requests audio-only static renditions if they don't already exist on your asset. However, you can pre-emptively enable them when creating videos to avoid waiting for rendition generation during the dubbing workflow.

To enable when creating a video:

```javascript
import Mux from '@mux/mux-node';

const mux = new Mux();

const asset = await mux.video.assets.create({
  input: "https://example.com/video.mp4",
  playback_policy: ['public'],
  static_renditions: [
    { resolution: 'audio-only' }  // Enable audio.m4a rendition
  ]
});
```

Or add to an existing asset:

```javascript
await mux.video.assets.createStaticRendition("your-mux-asset-id", {
  resolution: 'audio-only'
});
```

## Basic usage

```javascript
import { translateAudio } from "@mux/ai/workflows";

// Dub video audio to Spanish
const result = await translateAudio(
  "your-mux-asset-id",
  "es"  // target language (source language is auto-detected)
);

console.log(result.uploadedTrackId);
// The new Mux audio track ID for the dubbed audio

console.log(result.dubbingId);
// ElevenLabs dubbing ID for tracking

console.log(result.targetLanguageCode);  // "es"
```

The function automatically:

1. Fetches the audio.m4a static rendition from Mux
2. Sends it to ElevenLabs for dubbing (source language auto-detected)
3. Waits for the dubbing to complete
4. Uploads the dubbed audio to your S3 bucket
5. Creates a new audio track on your Mux asset
6. Returns the new track ID

## Language support

The library uses [ISO 639-1 language codes](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes). Common target languages include:

```javascript
await translateAudio("your-mux-asset-id", "es");  // Spanish
await translateAudio("your-mux-asset-id", "fr");  // French
await translateAudio("your-mux-asset-id", "de");  // German
await translateAudio("your-mux-asset-id", "ja");  // Japanese
await translateAudio("your-mux-asset-id", "zh");  // Chinese
await translateAudio("your-mux-asset-id", "pt");  // Portuguese
await translateAudio("your-mux-asset-id", "it");  // Italian
// etc.
```

<Callout type="info">The source language is automatically detected by ElevenLabs. You only need to specify the target language.</Callout>

## Speaker detection

You can specify the number of speakers for better dubbing quality:

```javascript
// Auto-detect number of speakers (default)
const result = await translateAudio("your-mux-asset-id", "es", {
  numSpeakers: 0
});

// Specify exact number of speakers
const result = await translateAudio("your-mux-asset-id", "es", {
  numSpeakers: 2  // For videos with 2 distinct speakers
});
```

## Download without uploading

If you want to handle the upload yourself or just get the dubbed audio file:

```javascript
const result = await translateAudio("your-mux-asset-id", "es", {
  uploadToMux: false
});

console.log(result.presignedUrl);
// URL to download the dubbed audio file for manual review before uploading to Mux
```

## Webhook integration

For automated dubbing when videos are uploaded, you should trigger the call to translate audio from the [`video.asset.static_rendition.ready` webhook](/docs/core/listen-for-webhooks):

```javascript
import { translateAudio } from "@mux/ai/workflows";

export async function handleWebhook(req, res) {
  const event = req.body;

  if (event.type === 'video.asset.static_rendition.ready') {
    const result = await translateAudio(event.data.id, "es");
    // db is your own storage layer
    await db.saveDubbedTrack(event.data.id, result.uploadedTrackId);
  }

  // Acknowledge the webhook so Mux doesn't retry delivery
  res.status(200).send('ok');
}
```

## Playing multi-language content

Mux Player (and most other common video players) automatically detects multiple audio tracks and shows an audio selector. Users can switch between audio languages using the audio menu in the player controls.

## How it works

Under the hood, `@mux/ai` handles:

1. **Fetching source audio**: Downloads the audio.m4a static rendition from Mux
2. **ElevenLabs dubbing**: Submits the audio to ElevenLabs with language parameters
3. **Polling**: Waits for the dubbing job to complete (can take several minutes)
4. **Download**: Retrieves the dubbed audio file
5. **S3 upload**: Uploads the dubbed file to your S3 bucket with a presigned URL
6. **Mux track creation**: Creates a new audio track on your asset

## Mux features used

* [Audio-only static renditions](/docs/guides/enable-static-mp4-renditions) - Source audio for dubbing
* [Multi-track audio](/docs/guides/add-alternate-audio-tracks-to-your-videos) - Adding dubbed tracks
* [Webhooks](/docs/core/listen-for-webhooks) - Trigger dubbing automatically
* [Mux Player](/docs/guides/mux-player-web) - Play videos with language-selectable audio

## Best practices

* **Enable audio-only renditions**: Required for the dubbing workflow
* **Sequential processing**: Process one language at a time to avoid rate limits (see the sketch after this list)
* **Error handling**: Dubbing can fail or take time; implement retries and timeouts
* **Cost management**: Dubbing is more expensive than caption translation and takes several minutes per video
* **Quality review**: AI dubbing quality varies - voices may not match the original tone, lip sync can be off, and nuances like humor or cultural references may be lost. Consider human review for important or high-visibility content
* **Set user expectations**: Add labels like "Auto-dubbed" in your UI to indicate the content is AI-generated
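
Here's a minimal sketch of that sequential approach; awaiting each dub before starting the next (rather than using `Promise.all`) helps you stay within ElevenLabs rate limits:

```javascript
import { translateAudio } from "@mux/ai/workflows";

const targetLanguages = ["es", "fr", "de"];

for (const lang of targetLanguages) {
  // Each call waits for the previous dub to finish before starting the next
  const result = await translateAudio("your-mux-asset-id", lang);
  console.log(`Dubbed ${lang}: track ${result.uploadedTrackId}`);
}
```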

## Video demo

Here's an example of AI-dubbed audio in action:

<Player playbackId="aapRSmSRhrwPUV1vj8NnwTA2bH2SBMsCrHjucA02QSQA" thumbnailTime="0" title="AI dubbing demo" />

## Resources

* [@mux/ai GitHub Repository](https://github.com/muxinc/ai)
* [@mux/ai Workflows Documentation](https://github.com/muxinc/ai/blob/main/docs/WORKFLOWS.md)
* [ElevenLabs Dubbing API](https://elevenlabs.io/dubbing)
* [Mux Multi-track Audio](/docs/guides/add-alternate-audio-tracks-to-your-videos)
* [Mux Static Renditions](/docs/guides/enable-static-mp4-renditions)


# Summarizing and tagging videos with AI
Use the @mux/ai library to automatically generate titles, descriptions, and tags for your videos via LLMs
<Callout type="info">This guide uses [@mux/ai](https://github.com/muxinc/ai), our open-source library that provides prebuilt workflows for common video AI tasks. It works with your favorite LLM provider (OpenAI, Anthropic, or Google). Check out the [GitHub repository](https://github.com/muxinc/ai) for more details!</Callout>

Automatically generating video metadata like titles, descriptions, and tags helps you build better search experiences, improve content discovery, and save time on manual content curation. The `@mux/ai` library makes this straightforward by analyzing video transcripts and storyboard images to generate metadata.

## Prerequisites

Before starting, make sure you have:

* A Mux account with API credentials (token ID and token secret)
* An API key for your preferred AI provider (OpenAI, Anthropic, or Google)
* Node.js installed
* Videos with captions enabled (human-generated captions are best, but [auto-generated captions](/docs/guides/add-autogenerated-captions-and-use-transcripts) work great too)

## Installation

```bash
npm install @mux/ai
```

## Configuration

Set your environment variables:

```bash
# Required
MUX_TOKEN_ID=your_mux_token_id
MUX_TOKEN_SECRET=your_mux_token_secret
# You only need the API key for the provider you're using
OPENAI_API_KEY=your_openai_api_key # OR
ANTHROPIC_API_KEY=your_anthropic_api_key # OR
GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key
```

## Basic usage

```javascript
import { getSummaryAndTags } from "@mux/ai/workflows";

const result = await getSummaryAndTags("your-mux-asset-id", {
  tone: "professional" // or "neutral" or "playful"
});

console.log(result.title);
// "How to Build a Video Platform in 2025"

console.log(result.description);
// "Learn the fundamentals of building a modern video platform..."

console.log(result.tags);
// ["video streaming", "web development", "tutorial", "javascript"]
```

## Tone options

You can control the style of generated content with the `tone` option:

```javascript
// Professional tone - formal and business-appropriate
const professional = await getSummaryAndTags("your-mux-asset-id", {
  tone: "professional"
});

// Neutral tone - balanced and conversational (default)
const neutral = await getSummaryAndTags("your-mux-asset-id", {
  tone: "neutral"
});

// Playful tone - lighthearted and engaging
const playful = await getSummaryAndTags("your-mux-asset-id", {
  tone: "playful"
});
```

Here are some example titles for each tone, based on the [same demo video of Mux's thumbnail API](https://player.mux.com/7EEx1HzJyg02Az7dSQ9CxyKmL9mnNwXNj4nJhdAExN7E):

* **Neutral**: Effortless Thumbnails & GIFs with Mux API
* **Playful**: Developer Snags Thumbnails and GIFs with Mux API
* **Professional**: Mux API Simplifies Video Thumbnail and GIF Creation

## Provider options

`@mux/ai` supports three AI providers:

* **OpenAI** (default): Uses `gpt-5.1` model - Fast and cost-effective
* **Anthropic**: Uses `claude-sonnet-4-5` model - Great for nuanced understanding
* **Google**: Uses `gemini-3-flash-preview` model - Balance of speed and quality

```javascript
const result = await getSummaryAndTags("your-mux-asset-id", {
  provider: "anthropic", // or "openai" or "google"
  model: "claude-opus-4-5" // Optional: override default model
});
```

## Including transcript

By default, `@mux/ai` analyzes both the storyboard images and transcript. Storyboard images are always included, but you can optionally exclude the transcript:

```javascript
// Exclude transcript (faster, uses only visual analysis)
const result = await getSummaryAndTags("your-mux-asset-id", {
  includeTranscript: false
});
```

## Custom prompts

You can override specific parts of the prompt to tune the output:

```javascript
const result = await getSummaryAndTags("your-mux-asset-id", {
  promptOverrides: {
    system: "You are a video content specialist focused on technical tutorials.",
    instructions: "Create a title under 60 characters and exactly 5 tags focused on technical concepts."
  }
});
```

## Webhook integration

For automated metadata generation when videos are uploaded, you should trigger the call to get the summary and tags from the [`video.asset.track.ready` webhook](/docs/core/listen-for-webhooks):

```javascript
import { getSummaryAndTags } from "@mux/ai/workflows";

export async function handleWebhook(req, res) {
  const event = req.body;

  if (event.type === 'video.asset.track.ready' &&
      event.data.type === 'text' &&
      event.data.language_code === 'en') {
    const result = await getSummaryAndTags(event.data.asset_id, { tone: "professional" });
    // db is your own storage layer
    await db.updateVideo(event.data.asset_id, { title: result.title, description: result.description, tags: result.tags });
  }

  // Acknowledge the webhook so Mux doesn't retry delivery
  res.status(200).send('ok');
}
```

## Use cases

Once you have automatically generated metadata, you can:

* **Improve search and discovery**: Use titles, descriptions, and tags to build better search experiences with tools like Algolia or Elasticsearch
* **Content filtering**: Allow users to filter videos by auto-generated tags (see the sketch after this list)
* **Analytics and insights**: Track content trends across your video library by analyzing tag distributions
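
As a simple illustration of tag-based filtering, assuming you've saved the generated metadata to your own database:

```javascript
// videos is your own collection of records saved from getSummaryAndTags results
const videos = await db.listVideos(); // db is your own storage layer

function filterByTag(videos, tag) {
  return videos.filter((video) => video.tags.includes(tag));
}

const tutorials = filterByTag(videos, "tutorial");
```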

## How it works

Under the hood, `@mux/ai` handles:

1. Fetching storyboard images for visual analysis
2. Optionally fetching the video transcript from Mux
3. Sending optimized multimodal prompts to the AI provider
4. Parsing and validating the structured response
5. Returning clean, ready-to-use metadata

## Mux features used

* [Storyboard images](/docs/guides/get-images-from-a-video#storyboards) - Always used for visual analysis
* [Auto-generated captions](/docs/guides/add-autogenerated-captions-and-use-transcripts) - Optionally included for additional context

## Best practices

* **Enable captions**: Human-generated captions provide the best results, but auto-generated captions work great too
* **Choose appropriate tone**: Match the tone to your brand voice
* **Validate critical metadata**: Review auto-generated titles for high-visibility content
* **Cache results**: Store generated metadata to avoid regenerating it
* **Consider cost vs. quality**: `gpt-5.1` is cost-effective for most use cases

## Resources

* [@mux/ai GitHub Repository](https://github.com/muxinc/ai)
* [@mux/ai Workflows Documentation](https://github.com/muxinc/ai/blob/main/docs/WORKFLOWS.md)
* [Mux Auto-generated Captions](/docs/guides/add-autogenerated-captions-and-use-transcripts)
* [Mux Storyboard Images](/docs/guides/get-images-from-a-video#storyboards)


# AI Moderation
Use the @mux/ai library to automatically moderate video content and detect inappropriate material
<Callout type="info">This guide uses [@mux/ai](https://github.com/muxinc/ai), our open-source library that provides prebuilt workflows for common video AI tasks. It works with your favorite LLM provider (OpenAI, Anthropic, or Google). Check out the [GitHub repository](https://github.com/muxinc/ai) for more details!</Callout>

## Overview

This guide demonstrates how to automatically screen video content for inappropriate material using AI. The `@mux/ai` library handles all the complexity of extracting thumbnails, analyzing them with moderation APIs, and returning actionable results. If content exceeds your defined thresholds for sexual or violent content, you can automatically remove access to protect your platform.

This approach provides an automated first line of defense against inappropriate content, helping you maintain content standards at scale without manual review of every upload.

### Prerequisites

Before starting this guide, make sure you have:

* A Mux account with API credentials (token ID and token secret)
* An API key for OpenAI or Hive (depending on your chosen provider)
* Node.js installed
* Basic familiarity with webhooks and async JavaScript

## Installation

```bash
npm install @mux/ai
```

## Configuration

Set your environment variables:

```bash
# Required
MUX_TOKEN_ID=your_mux_token_id
MUX_TOKEN_SECRET=your_mux_token_secret
# You only need the API key for the provider you're using
OPENAI_API_KEY=your_openai_api_key # OR
HIVE_API_KEY=your_hive_api_key
```

## Basic usage

```javascript
import { getModerationScores } from "@mux/ai/workflows";

const result = await getModerationScores("your-mux-asset-id", {
  provider: "openai", // or "hive"
  thresholds: {
    sexual: 0.7,   // Flag content with 70%+ confidence
    violence: 0.8  // Flag content with 80%+ confidence
  }
});

console.log(result.exceedsThreshold); // true if content flagged
console.log(result.maxScores.sexual);  // Highest sexual content score
console.log(result.maxScores.violence); // Highest violence score
```

The function analyzes multiple thumbnails from your video and returns:

```javascript
{
  "assetId": "your-asset-id",
  "exceedsThreshold": false,
  "maxScores": {
    "sexual": 0.12,
    "violence": 0.05
  },
  "thresholds": {
    "sexual": 0.7,
    "violence": 0.8
  },
  "thumbnailScores": [
    { "sexual": 0.12, "violence": 0.05, "error": false },
    { "sexual": 0.08, "violence": 0.03, "error": false }
    // ... more thumbnails
  ]
}
```

## Provider options

`@mux/ai` supports two moderation providers:

* **OpenAI** (default): Uses `omni-moderation-latest` model - Multi-modal moderation with vision support
* **Hive**: Visual moderation using Hive's specialized content safety models

```javascript
// Using OpenAI (default)
const result = await getModerationScores("your-mux-asset-id", {
  provider: "openai"
});

// Using Hive
const result = await getModerationScores("your-mux-asset-id", {
  provider: "hive"
});
```

## Configuring thresholds

Thresholds use a 0-1 scale: content is flagged when a score meets or exceeds the threshold, so lower values mean stricter moderation:

```javascript
const result = await getModerationScores("your-mux-asset-id", {
  thresholds: {
    sexual: 0.7,   // Flag content with 70%+ confidence of sexual content
    violence: 0.8  // Flag content with 80%+ confidence of violence
  }
});
```

Adjust these based on your content policies and user base. Lower thresholds catch more content but may increase false positives.

## Webhook integration

For automated moderation when videos are uploaded, you should trigger the call to get moderation scores from the [`video.asset.ready` webhook](/docs/core/listen-for-webhooks):

```javascript
import Mux from '@mux/mux-node';
import { getModerationScores } from "@mux/ai/workflows";

const mux = new Mux();

export async function handleWebhook(req, res) {
  const event = req.body;

  if (event.type === 'video.asset.ready') {
    const result = await getModerationScores(event.data.id, { thresholds: { sexual: 0.7, violence: 0.8 } });
    if (result.exceedsThreshold) {
      // Remove public access by deleting the asset's playback ID
      await mux.video.assets.deletePlaybackId(event.data.id, event.data.playback_ids[0].id);
    }
  }

  // Acknowledge the webhook so Mux doesn't retry delivery
  res.status(200).send('ok');
}
```

## How it works

Under the hood, `@mux/ai` handles:

1. **Thumbnail extraction**: Selects representative frames based on video duration
   * Videos under 50 seconds: 5 evenly-spaced thumbnails
   * Longer videos: One thumbnail every 10 seconds
2. **Concurrent analysis**: Sends all thumbnails to the moderation API in parallel
3. **Score aggregation**: Tracks the highest scores across all thumbnails
4. **Threshold evaluation**: Compares max scores against your configured thresholds
5. **Error handling**: Gracefully handles API failures and returns partial results

## Mux features used

* [Mux Thumbnail API](/docs/guides/get-images-from-a-video) - Extracts frames for moderation analysis
* [Webhooks](/docs/core/listen-for-webhooks) - Trigger moderation automatically

## Best practices

* Maintain a database of automated moderation actions to fine-tune thresholds
* Add notifications to users or moderators when content is flagged
* Implement manual review queues for borderline content (see the sketch after this list)
* Use transcriptions or captions for additional moderation
* Be mindful of AI API rate limits and implement moderation queueing if needed
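
Expanding on the review-queue idea above, here's a minimal sketch that routes borderline content to humans instead of taking automated action (the `REVIEW_BAND` value, `removePlaybackAccess`, and `reviewQueue` are illustrative assumptions):

```javascript
import { getModerationScores } from "@mux/ai/workflows";

const REVIEW_BAND = 0.15; // scores within this distance below a threshold get human review

async function moderateAsset(assetId) {
  const result = await getModerationScores(assetId, {
    thresholds: { sexual: 0.7, violence: 0.8 }
  });

  if (result.exceedsThreshold) {
    await removePlaybackAccess(assetId); // your own takedown logic
  } else if (
    result.maxScores.sexual >= result.thresholds.sexual - REVIEW_BAND ||
    result.maxScores.violence >= result.thresholds.violence - REVIEW_BAND
  ) {
    await reviewQueue.enqueue(assetId, result.maxScores); // your own review tooling
  }
}
```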

## Resources

* [@mux/ai GitHub Repository](https://github.com/muxinc/ai)
* [@mux/ai Workflows Documentation](https://github.com/muxinc/ai/blob/main/docs/WORKFLOWS.md)
* [Mux Webhooks](https://docs.mux.com/guides/video/listen-for-webhooks)
* [OpenAI Moderation API](https://platform.openai.com/docs/guides/moderation)
* [Mux Thumbnail API](https://docs.mux.com/guides/video/get-images-from-a-video)


# Automatic subtitle translations with AI
Use the @mux/ai library to automatically translate video captions into different languages via LLMs
<Callout type="info">This guide uses [@mux/ai](https://github.com/muxinc/ai), our open-source library that provides prebuilt workflows for common video AI tasks. It works with your favorite LLM provider (OpenAI, Anthropic, or Google). Check out the [GitHub repository](https://github.com/muxinc/ai) for more details!</Callout>

Mux uses [OpenAI's Whisper model](https://openai.com/index/whisper) to create auto-generated captions, which must be generated in the same language as your source audio. To make your content accessible globally, you can use AI to translate captions into other languages. The `@mux/ai` library makes this straightforward by handling caption fetching, translation, and re-uploading to Mux.

## Prerequisites

Before starting, make sure you have:

* A Mux account with API credentials (token ID and token secret)
* An API key for your preferred AI provider (OpenAI, Anthropic, or Google)
* An S3-compatible storage bucket (required for caption file hosting during upload)
* Node.js installed
* Videos with captions enabled (human-generated captions are best, but [auto-generated captions](/docs/guides/add-autogenerated-captions-and-use-transcripts) work great too)

## Installation

```bash
npm install @mux/ai
```

## Configuration

Set your environment variables:

```bash
# Required for Mux
MUX_TOKEN_ID=your_mux_token_id
MUX_TOKEN_SECRET=your_mux_token_secret
# You only need the API key for the provider you're using
OPENAI_API_KEY=your_openai_api_key # OR
ANTHROPIC_API_KEY=your_anthropic_api_key # OR
GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key
# Required for uploading translated captions back to Mux
S3_ENDPOINT=https://your-s3-endpoint.com
S3_REGION=auto
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY_ID=your-access-key
S3_SECRET_ACCESS_KEY=your-secret-key
```

## Basic usage

```javascript
import { translateCaptions } from "@mux/ai/workflows";

// Translate English captions to Spanish
const result = await translateCaptions(
  "your-mux-asset-id",
  "en",  // source language
  "es",  // target language
  {
    provider: "anthropic" // or "openai" or "google"
  }
);

console.log(result.uploadedTrackId);
// The new Mux track ID for the translated captions
```

The function automatically:

1. Fetches the source captions from Mux
2. Translates them using your chosen AI provider
3. Uploads the translated VTT file to your S3 bucket
4. Creates a new caption track on your Mux asset
5. Returns the new track ID

## Language support

`@mux/ai` uses [ISO 639-1 language codes](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes) and automatically converts them to full language names. It supports all standard language codes, but the translation capability of your chosen AI provider may vary.

```javascript
// Common translations
await translateCaptions("your-mux-asset-id", "en", "es");  // English → Spanish
await translateCaptions("your-mux-asset-id", "en", "fr");  // English → French
await translateCaptions("your-mux-asset-id", "en", "de");  // English → German
await translateCaptions("your-mux-asset-id", "en", "ja");  // English → Japanese
await translateCaptions("your-mux-asset-id", "en", "zh");  // English → Chinese
await translateCaptions("your-mux-asset-id", "en", "ar");  // English → Arabic
// etc.
```

## Provider options

`@mux/ai` supports three AI providers:

* **OpenAI** (default): Uses `gpt-5.1` model - Fast and cost-effective
* **Anthropic**: Uses `claude-sonnet-4-5` model - Excellent for nuanced translations
* **Google**: Uses `gemini-3-flash-preview` model - Balance of speed and quality

```javascript
const result = await translateCaptions("your-mux-asset-id", "en", "es", {
  provider: "anthropic",
  model: "claude-opus-4-5" // Optional: override default model
});
```

## Translate without uploading

If you want to handle the upload yourself or just get the translated file:

```javascript
const result = await translateCaptions("your-mux-asset-id", "en", "es", {
  uploadToMux: false
});

console.log(result.presignedUrl);
// URL to download the translated VTT file for review before uploading to Mux
```

## Webhook integration

For automated translation when videos are uploaded, you should trigger the call to translate captions from the [`video.asset.track.ready` webhook](/docs/core/listen-for-webhooks) for your source language:

```javascript
import { translateCaptions } from "@mux/ai/workflows";

export async function handleWebhook(req, res) {
  const event = req.body;

  if (event.type === 'video.asset.track.ready' &&
      event.data.type === 'text' &&
      event.data.language_code === 'en') {
    const result = await translateCaptions(event.data.asset_id, "en", "es");
    // db is your own storage layer
    await db.saveTranslationTrack(event.data.asset_id, result.uploadedTrackId);
  }

  // Acknowledge the webhook so Mux doesn't retry delivery
  res.status(200).send('ok');
}
```

## Using with Mux Player

Mux Player automatically detects multiple caption tracks and shows a language selector:

```html
<mux-player
  playback-id="your-playback-id"
  metadata-video-title="My Video"
></mux-player>
```

Users can switch between languages using the captions menu in the player controls.

## Complete example

Here's a complete webhook handler that translates captions:

```javascript
import express from 'express';
import { translateCaptions } from "@mux/ai/workflows";

const app = express();
app.use(express.json());

app.post('/webhook', async (req, res) => {
  const event = req.body;

  if (event.type === 'video.asset.track.ready' &&
      event.data.type === 'text' &&
      event.data.language_code === 'en') {

    const assetId = event.data.asset_id;

    try {
      // Translate to Spanish
      const result = await translateCaptions(assetId, "en", "es");

      console.log(`Spanish captions created: ${result.uploadedTrackId}`);

      res.status(200).json({ success: true });
    } catch (error) {
      console.error('Translation error:', error);
      res.status(500).json({ error: error.message });
    }
  } else {
    res.status(200).json({ message: 'Event ignored' });
  }
});

app.listen(3000, () => {
  console.log('Webhook server running on port 3000');
});
```

## How it works

Under the hood, `@mux/ai` handles:

1. **Fetching source captions**: Downloads the VTT file from Mux
2. **Translation**: Sends the captions to your chosen AI provider with optimized prompts
3. **VTT preservation**: Maintains timing information and formatting
4. **S3 upload**: Uploads the translated file to your S3 bucket with a presigned URL
5. **Mux track creation**: Creates a new caption track on your asset
6. **Cleanup**: Optionally cleans up temporary files

## Mux features used

* [Auto-generated captions](/docs/guides/add-autogenerated-captions-and-use-transcripts) - Source captions for translation
* [Webhooks](/docs/core/listen-for-webhooks) - Trigger translations automatically
* [Mux Player](/docs/guides/mux-player-web) - Display translated captions with language switching

## Best practices

* **Validate translations and review quality**: AI translations are generally accurate but may miss context-specific nuances - for critical content, consider human review
* **Handle errors gracefully**: Translation may fail for very long videos or due to intermittent LLM instability
* **Consider costs**: Translating to many languages increases LLM costs

## Resources

* [@mux/ai GitHub Repository](https://github.com/muxinc/ai)
* [@mux/ai Workflows Documentation](https://github.com/muxinc/ai/blob/main/docs/WORKFLOWS.md)
* [Mux Auto-generated Captions](/docs/guides/add-autogenerated-captions-and-use-transcripts)
* [Mux Player Language Switching](/docs/guides/mux-player-web)
* [ISO 639-1 Language Codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)


# Moderate video content
Effectively moderate user-generated content on your platform by developing content moderation tools and policies tailored to your needs.
If your platform accepts **user-generated content** in any form, you know that people will upload everything and anything. For video this can be particularly high stakes, with the potential for users to upload anything from popular media content to inappropriate footage.

While large platforms may staff big teams of Trust & Safety specialists, **you don't need an army to implement content moderation strategies of your own**. Below we've rounded up a number of technical and operational strategies that Mux customers can use to keep their content libraries healthy.

## Technical strategies

### Secure Video Playback

Mux's secure video playback tools can help make it more difficult for bad actors to use your videos for their own purposes.

When first testing out Mux, it's common to set a video's playback policy to `public` so you can easily view the video via its public URL. Once testing is done, we recommend that UGC platforms switch to using [signed playback policies](/docs/guides/secure-video-playback#1-create-an-asset-or-live-stream-with-a-signed-playback-policy) to help curb abuse. These allow you to use a [JWT](/docs/guides/signing-jwts) to time-limit requests for your content and to set [playback restrictions](/docs/guides/secure-video-playback#3-create-an-optional-playback-restriction-for-your-mux-account-environment) specifying which referring domains can serve your content.
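
As a rough sketch of that flow with the `@mux/mux-node` SDK (the signing-key environment variable names here are assumptions; see the signed playback guide for the full setup):

```javascript
import Mux from '@mux/mux-node';

const mux = new Mux(); // reads MUX_TOKEN_ID / MUX_TOKEN_SECRET from the environment

// Create the asset with a signed (not public) playback policy
const asset = await mux.video.assets.create({
  input: 'https://example.com/video.mp4',
  playback_policy: ['signed'],
});

// Mint a short-lived playback token for an authorized viewer
const token = await mux.jwt.signPlaybackId(asset.playback_ids[0].id, {
  keyId: process.env.MUX_SIGNING_KEY_ID,       // assumed env var name
  keySecret: process.env.MUX_SIGNING_KEY_SECRET, // assumed env var name
  expiration: '1h',
});

const url = `https://stream.mux.com/${asset.playback_ids[0].id}.m3u8?token=${token}`;
```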

### High Delivery Webhook

For certain platforms, we currently offer an internal feature that sends notifications via webhook when we detect high delivery traffic on an asset. This can be helpful to catch unauthorized content quickly, before it results in increased spend or risk to your platform. To get this feature enabled for your account, [contact Support](/support).

### Alert Forwarding

Our Trust & Safety team contacts all administrators on your account in the event of account usage or content that violates our [Terms of Service](https://www.mux.com/terms). Our team may take actions that include the deletion of assets, disabling of live streams, and in rare cases disabling of environments. Because bad actors will often repeatedly upload the same unauthorized content, we recommend making sure these messages reach you right away so you can take appropriate actions to address the source (e.g., closing the user's account).

To ensure emails from our team get escalated, add an email group or paging service email as an admin on your Mux account. (For example, see PagerDuty's docs on [email routing](https://support.pagerduty.com/docs/email-integration-guide).)

### Video Content Moderation

As our own engineers have blogged about, [you either die an MVP or live long enough to build content moderation](https://www.mux.com/blog/you-either-die-an-mvp-or-live-long-enough-to-build-content-moderation). A basic content moderation flow should take some information about the video asset (a sample of still frames, the transcript of its audio track, a copy of its metadata) and evaluate it based on algorithmic rules to escalate potentially troublesome content. For a peek at how Mux has iterated on our own approach, check out [this talk](https://www.youtube.com/watch?v=eydIWjJodeY\&list=PLkyaYNWEKcOesxC4VpHJtbjnzuN6r1NGg) that one of our experts gave at Demuxed 2023.
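
For instance, sampling still frames is just a matter of constructing Mux thumbnail URLs at intervals and handing them to whatever classifier you use (the 10-second interval and the `classifyImage` helper are illustrative assumptions):

```javascript
// Build thumbnail URLs at a fixed interval using Mux's image API
function thumbnailUrls(playbackId, durationSeconds, intervalSeconds = 10) {
  const urls = [];
  for (let t = 0; t < durationSeconds; t += intervalSeconds) {
    urls.push(`https://image.mux.com/${playbackId}/thumbnail.png?time=${t}`);
  }
  return urls;
}

// classifyImage is your own call into a third-party moderation service
const scores = await Promise.all(
  thumbnailUrls('your-playback-id', 120).map((url) => classifyImage(url))
);
```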

For info on the tools Mux offers to help you retrieve relevant data, check out these docs & blogs:

* [Get images from a video](/docs/guides/get-images-from-a-video)
* [Add auto-generated captions to your videos and use transcripts](/docs/guides/add-autogenerated-captions-and-use-transcripts)
* [Create timeline hover previews](/docs/guides/create-timeline-hover-previews) (helpful for human reviewers)
* [No-code partner integration blog](https://www.mux.com/blog/video-content-moderation-in-five-minutes-without-code-using-avflow-hive-and-mux)

Many customers grab images from their content via Mux APIs to feed into third-party services that can provide object detection and specialized content classification. While we recommend relying primarily on thumbnails, we also support MP4 downloads for those services that prefer a video. The results coming out of these services can be used as the trigger for automated workflows that end up in your own Slack channel or on platforms like PagerDuty or Opsgenie. Through those flows, you can action simple cases automatically and escalate edge cases to a human reviewer. You can use a tool like [n8n](https://n8n.io/) to build these workflows with no-code blocks.

### Mux Data

If high risk content is ending up in a page where you control the player, you can integrate [Mux Data](/docs/guides/data) to get a lot of visibility into viewing sessions and track engagement (including the unwanted kind).

Aggregating these Views will help you gain insights into the types of platforms & devices being used to stream your content.

If you're using Mux Data's Media tier, you can also take advantage of two additional features:

* Once Views are appearing in your Dashboard, you can set up [alerting](/docs/guides/setup-alerts) based on concurrent viewership. These alerts can be tuned and filtered, so you only get notifications for platforms or users you're interested in. Even more, you can create your own [custom dimensions](/docs/guides/extend-data-with-custom-metadata) to supplement the built-in metrics.
* And if you need to analyze your data outside of the Mux Dashboard, you can export it via [CSV files](/docs/guides/export-raw-video-view-data#call-the-export-api-to-get-daily-aggregated-data) or [Streaming Exports](/docs/guides/export-raw-video-view-data#stream-views-as-they-complete).

One thing worth noting: the High Delivery Webhook's "delivery rate" is different than the "Views" tracked by Mux Data. Both can be used for telemetry, but they are looking at [different parts of the video pipeline](/docs/guides/mux-data-faqs#is-mux-video-delivery-usage-api-similar-to-watch-time-in-mux-data).

## Operational strategies

### Content Policy

One simple way to address issues with user-provided content is to make sure your content policies are clear. These rules can be in your Terms of Service, Acceptable Use Policy, Community Guidelines, or a separate content policy.

Consider covering the following common topics (you're even welcome to copy this and make it your own):

* You represent and warrant that:
  * You will provide and maintain accurate account information.
  * You will ensure that you have the necessary licenses, rights, and permissions to upload your content and host it on our service (including any rights to third-party music, images, or footage included in your content).
  * You won't use our service for unlawful purposes or in any way that would violate applicable laws, rules, and regulations.
  * You won't upload any content that infringes on anyone's copyright, trademark, or other intellectual property rights.
* You won't upload any content that is:
  * false or misleading, including content that constitutes impersonation or defamation
  * violent, harmful, illegal, or dangerous, including content harmful to children
  * hateful, abusive, offensive, racist, sexist, or otherwise inappropriate
  * graphic, sexually explicit, or mature in nature
* You agree not to use our service in a way that could create an undue burden or impact service to other users.
* You agree not to circumvent any security or moderation features of our service.
* If we find any violations of applicable law, our legal terms, or our policies, you acknowledge that we may take action at our discretion, including removing content and restricting or terminating your account.

You can also share details of how you'll enforce the policy, such as a strikes-based system.

For some examples of artful content policies, check out [Patreon](https://www.patreon.com/policy/guidelines), [Strava](https://www.strava.com/community-standards), and [Crowdcast](https://www.crowdcast.io/community-guidelines). If you have a legal advisor, make sure to discuss any obligations that may apply to your company (e.g., under [DMCA](https://en.wikipedia.org/wiki/Digital_Millennium_Copyright_Act)) and include coverage of those.

### Contact Mechanisms

While your platform will need its own active measures for content moderation, you can also incorporate third-party reporting into your approach. At minimum, you should have an email address specifically for complaints, such as `copyright@yourdomain.com` or `abuse@yourdomain.com`, but you can also build a simple intake form that will create a support ticket in your system. Make sure incoming messages will be routed to someone with training on how to handle them appropriately. Evaluate whether your contact info should be listed in the US [DMCA Agent Directory](https://www.copyright.gov/dmca-directory/).

You can also implement in-product reporting capabilities that allow other users to report a video that may violate your content policies.

This is another good place to consult your legal advisor, as some copyright safe harbor laws include specific requirements around contact details and response turnaround times to keep yourself free from liability.

### User Signup Flow

When users sign up for an account on your platform, you likely collect a short list of details (e.g., email) while keeping things as simple/frictionless as possible. If your platform is seeing patterns of abuse, consider altering this flow to disincentivize signups/posting by bad actors:

* Collect additional personal information (e.g., full name) to increase the sense of accountability
* Send a verification link to their email to verify its authenticity before allowing users to post videos
* Add a buffer of time before new users can post videos or start a live stream
* Add a viewership limit to videos posted by users who have joined within the last day
* If your service is paid but includes a free trial, require entry of payment info before the free trial begins


# Synchronize video playback
Learn how to synchronize video playback with other components on your webpage.
## Introduction to video playback synchronization

You may have encountered video streaming products and services that enable experiences like these:

* Host a watch party where every viewer sees the same moment of the video at the same time, and when
  one viewer pauses, playback pauses for everyone simultaneously.
* Synchronize video playback with other components like chats, activity feeds, fitness stats collection, etc.

You can build these and many other interactive experiences by aligning your web page or application
components against a common source of truth, i.e., [epoch time](https://en.wikipedia.org/wiki/Unix_time).
The underlying assumption is that each viewer's device clock stays synced to an NTP server.

Mux records the epoch time of each frame received for the live stream and includes that timing information in
the HLS manifest as the [`EXT-X-PROGRAM-DATE-TIME` (aka PDT) tag](https://datatracker.ietf.org/doc/html/rfc8216#section-4.3.2.6).
The PDT tag value is represented in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601).
The tag is added every few seconds, with a monotonically increasing epoch time representing the recorded time of the next frame.

Below is an example of an HLS rendition (second-level) manifest with a PDT tag repeated for every 2 seconds of the
recorded live stream:

```text
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:2
#EXT-X-MAP:URI="https://chunk-gce-us-east1-production.cfcdn.mux.com/v1/chunk/3aJUOua6jsMHYybcqXRBpcXH82aCYXTu02TPTKHzIokndAPmz300ZThlCZbeNAy1t73003iytFZNJdjcvjTsOrCVTaGZgQ9J00uU/18446744073709551615.m4s?skid=default&signature=NjBmMjFkODBfYWVhMjIyZTdmMDU0ZmI0YWU2ZWJkZTJiYTY4MzhmYWQzNWQ2YzMyMTVlYjdjNmM0NzZiZjBmZGU0ODU1MTUyNQ=="
#EXT-X-PLAYLIST-TYPE:VOD

#EXT-X-PROGRAM-DATE-TIME:2021-06-28T17:53:25.533+00:00
#EXTINF:2,
https://chunk-gce-us-east1-production.cfcdn.mux.com/v1/chunk/3aJUOua6jsMHYybcqXRBpcXH82aCYXTu02TPTKHzIokndAPmz300ZThlCZbeNAy1t73003iytFZNJdjcvjTsOrCVTaGZgQ9J00uU/0.m4s?skid=default&signature=NjBmMjFkODBfOWJkMzMyMTc5YzgwY2VmMTdlYzIwODgzZGI2NWFiMThiM2U1NDM0NzM0NDZhMmQwOThhZmI0NDQ5OWY5N2VmMA==

#EXT-X-PROGRAM-DATE-TIME:2021-06-28T17:53:27.533+00:00
#EXTINF:2,
https://chunk-gce-us-east1-production.cfcdn.mux.com/v1/chunk/3aJUOua6jsMHYybcqXRBpcXH82aCYXTu02TPTKHzIokndAPmz300ZThlCZbeNAy1t73003iytFZNJdjcvjTsOrCVTaGZgQ9J00uU/1.m4s?skid=default&signature=NjBmMjFkODBfMjA1ZWNmYzgzYWRhMzNjMTY5YmEyYmM2NzE4MDk5N2I1MWE3NzhjODlhNGIzNWI3NGIwNTA5ZTIxOWQyNjI5OQ==

#EXT-X-PROGRAM-DATE-TIME:2021-06-28T17:53:29.533+00:00
#EXTINF:2,
https://chunk-gce-us-east1-production.cfcdn.mux.com/v1/chunk/3aJUOua6jsMHYybcqXRBpcXH82aCYXTu02TPTKHzIokndAPmz300ZThlCZbeNAy1t73003iytFZNJdjcvjTsOrCVTaGZgQ9J00uU/2.m4s?skid=default&signature=NjBmMjFkODBfZTIyOTA5YWFjZjMzYTY4MzQ4YWEzZDBiNDkyODk1NTg2ODE2M2YwZjI3NmY2MTVhOTM5MTA2MzQ4ODIyNTNkOQ==

#EXT-X-PROGRAM-DATE-TIME:2021-06-28T17:53:31.533+00:00
#EXTINF:2,
https://chunk-gce-us-east1-production.cfcdn.mux.com/v1/chunk/3aJUOua6jsMHYybcqXRBpcXH82aCYXTu02TPTKHzIokndAPmz300ZThlCZbeNAy1t73003iytFZNJdjcvjTsOrCVTaGZgQ9J00uU/3.m4s?skid=default&signature=NjBmMjFkODBfNDRkZTNhYTE5M2RhYTA4MTA4MWFkODc0YzgyMDcyMGMwODFmZWIxOGRiNWM4YzJhMTM0YTNiNGRhYmYyMWE1Nw==

#EXT-X-ENDLIST
```

## How to get the epoch time value

Every modern video player exposes an API to read the `EXT-X-PROGRAM-DATE-TIME` tag value.
Your application can use this epoch time to synchronize video playback with other components.
For a walkthrough of how to implement synchronization, see the talk below; a minimal code sketch follows it:

* Watch this Demuxed 2018 presentation by Seth Maddison on
  [How to Synchronize your Watches: Cross-platform stream synchronization of HLS and DASH](https://youtu.be/lSe4hcKRlYk)
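
The sketch below shows one way to do this with hls.js, tracking the most recently played fragment's
`programDateTime` (the parsed PDT value, in milliseconds). This is a minimal sketch, not a complete
implementation; the element id and `{PLAYBACK_ID}` are placeholders:

```js
import Hls from 'hls.js';

const video = document.getElementById('video');
const hls = new Hls();
hls.loadSource('https://stream.mux.com/{PLAYBACK_ID}.m3u8');
hls.attachMedia(video);

// Track the fragment currently being played; frag.programDateTime holds the
// EXT-X-PROGRAM-DATE-TIME value for the fragment start, in milliseconds.
let currentFrag = null;
hls.on(Hls.Events.FRAG_CHANGED, (_event, data) => {
  currentFrag = data.frag;
});

// Current playback position as epoch milliseconds: the fragment's PDT plus
// how far playback has progressed into that fragment.
function playbackEpochMs() {
  if (!currentFrag || currentFrag.programDateTime == null) return null;
  return currentFrag.programDateTime + (video.currentTime - currentFrag.start) * 1000;
}
```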

## Supported Video Players

The video players below provide an API for reading the `EXT-X-PROGRAM-DATE-TIME` tag value.
If your player isn't listed here, [please reach out](/support).

* [Mux Player](/docs/guides/player-advanced-usage#synchronize-video-playback)
* [hls.js](https://hls-js-dev.netlify.app/api-docs/)
* [JW Player](https://docs.jwplayer.com/players/reference/seek-events#ontime)
* [THEOplayer](https://docs.theoplayer.com/knowledge-base/03-playback/04-program-date-time.md)
* [Bitmovin](https://cdn.bitmovin.com/player/web/8/docs/interfaces/core_events.segmentplaybackevent.html#datetime)
* [React Native](https://github.com/react-native-video/react-native-video#currentplaybacktime)
* [Apple AVPlayer](https://developer.apple.com/documentation/avfoundation/avplayeritem/1386188-currentdate)
* [Android ExoPlayer](https://exoplayer.dev/doc/reference/com/google/android/exoplayer2/source/hls/playlist/HlsMediaPlaylist.html#hasProgramDateTime)

If you want to synchronize viewers playing videos on different devices, your application can
subscribe to a communication channel for sending and receiving epoch time values. Many cloud-based and
other commercial products built on WebSockets are available for implementing such a channel.
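
As a hypothetical sketch of the idea, a client could broadcast pause events with the current epoch time
over a shared WebSocket channel, reusing the `playbackEpochMs()` helper from the sketch above. The
`wss://example.com/sync` endpoint and message shape are illustrative, not a Mux service:

```js
// Hypothetical sync channel; replace with your own WebSocket backend.
const socket = new WebSocket('wss://example.com/sync');

// When this viewer pauses, tell everyone else where to pause.
video.addEventListener('pause', () => {
  socket.send(JSON.stringify({ type: 'pause', epochMs: playbackEpochMs() }));
});

// When another viewer pauses, seek to the same epoch time, then pause.
socket.addEventListener('message', (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === 'pause' && msg.epochMs != null) {
    video.currentTime += (msg.epochMs - playbackEpochMs()) / 1000;
    video.pause();
  }
});
```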

## Add `exclude_pdt` parameter

By default, HLS playback using the `stream.mux.com/{PLAYBACK_ID}.m3u8` URL always adds the `EXT-X-PROGRAM-DATE-TIME` tag with
the recorded epoch time value. If you add the `exclude_pdt=true` parameter to the playback URL, Mux excludes this tag
from the HLS rendition manifest.
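
For example:

```text
https://stream.mux.com/{PLAYBACK_ID}.m3u8?exclude_pdt=true
```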

There are a few reasons to exclude this tag:

* Some video players, like React Native Video, update the current play position with the `EXT-X-PROGRAM-DATE-TIME` tag value.
  If your application expects a zero-based play position, viewers could experience playback issues when the
  player starts reporting epoch time instead.
* Your application is using a legacy video player or a player version without support for this HLS tag.

## Using signed URLs

Mux videos have two types of playback policy: `public` or `signed`. If your `playback_id` is `signed`,
then all query parameters, including `exclude_pdt`, need to be added to the claims body of the playback token.

Take a look at the [signed URLs guide](/docs/guides/secure-video-playback) for details.
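
As a minimal sketch, assuming the `jsonwebtoken` package and a Mux signing key created as described in
that guide (the environment variable names here are illustrative):

```js
const jwt = require('jsonwebtoken');

const token = jwt.sign(
  {
    sub: '{PLAYBACK_ID}',   // the signed playback ID
    aud: 'v',               // 'v' indicates video playback
    exp: Math.floor(Date.now() / 1000) + 3600,
    exclude_pdt: 'true',    // query parameters go in the claims body
  },
  // Mux signing key private keys are base64-encoded PEMs
  Buffer.from(process.env.MUX_SIGNING_KEY_PRIVATE, 'base64').toString('utf8'),
  { algorithm: 'RS256', keyid: process.env.MUX_SIGNING_KEY_ID }
);

const url = `https://stream.mux.com/{PLAYBACK_ID}.m3u8?token=${token}`;
```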

## FAQs

### Is the epoch time available with on-demand video?

Yes, for live stream recordings. Mux records the epoch time for all live streams, so the HLS manifest includes the
`EXT-X-PROGRAM-DATE-TIME` tag every few seconds, both while the live stream is active and during on-demand
playback of the live stream recording.

The epoch time is not available in the HLS manifest when the input is a video file.

### Can I retrieve the epoch time through the API?

Yes. The <ApiRefLink href="/docs/api-reference/video/assets">asset resource object</ApiRefLink> includes
`recording_times`, which contains the live stream's start epoch time and the recorded duration.
You can store this timing information from `recording_times` to track the live stream's status.
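
A minimal sketch of reading it, assuming Node 18+ for built-in `fetch` and your Mux API credentials in
`MUX_TOKEN_ID`/`MUX_TOKEN_SECRET` (`assetId` is a placeholder for your asset's ID):

```js
const auth = Buffer.from(
  `${process.env.MUX_TOKEN_ID}:${process.env.MUX_TOKEN_SECRET}`
).toString('base64');

const res = await fetch(`https://api.mux.com/video/v1/assets/${assetId}`, {
  headers: { Authorization: `Basic ${auth}` },
});
const { data: asset } = await res.json();

// Each recording_times entry includes a start epoch time and a recorded duration
for (const { started_at, duration } of asset.recording_times ?? []) {
  console.log(`started at ${started_at}, recorded ${duration}s`);
}
```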


# Use videos in your website design
Learn the easiest way to add background and hero videos to your website.

Video has a wide range of uses, but when you're building a marketing page there are two very common choices: Background videos and Hero videos.

This guide covers the best way to add both types of video to your website.

## Two types of video

### Background Videos

<script type="module" src="https://cdn.jsdelivr.net/npm/@mux/mux-background-video/html/+esm" />

<div style={{ position: 'relative', width: '100%' }}>
  <mux-background-video src="https://stream.mux.com/kF01v9aKFlY63i2GkQKQGDv5Y9PbMGdtQD92j5qJCYWU.m3u8" style={{ width: '100%', height: 'auto', display: 'block' }}>
    <img src="https://image.mux.com/kF01v9aKFlY63i2GkQKQGDv5Y9PbMGdtQD92j5qJCYWU/thumbnail.webp?time=0" alt="Background video example" style={{ width: '100%', height: 'auto', display: 'block' }} />
  </mux-background-video>

  <div
    style={{
      position: 'absolute',
      top: 0,
      left: 0,
      right: 0,
      bottom: 0,
      display: 'flex',
      alignItems: 'center',
      justifyContent: 'center',
      padding: '20px'
    }}
  >
    <div
      style={{
        backgroundColor: 'rgba(0, 0, 0, 0.6)',
        padding: '16px',
        borderRadius: '8px',
        backdropFilter: 'blur(4px)',
        color: 'white',
        maxWidth: '400px',
      }}
    >
      <p style={{ margin: '0 0 12px 0', fontSize: '14px' }}>Short, looping clips that autoplay silently, such as hero banners, product previews, or ambient visuals.</p>

      <ul style={{ margin: 0, paddingLeft: '20px', listStyleType: 'disc', fontSize: '14px' }}>
        <li>Short (often 5-60 seconds long)</li>
        <li>Muted</li>
        <li>Loop continuously</li>
        <li>Autoplay on page load</li>
      </ul>
    </div>
  </div>
</div>

### Hero Videos

Informational videos such as product demos, explainers, customer testimonials, and promotional content. These have sound, playback controls, and can be any length.

* Any length
* User-controlled (play, pause, seek)
* Sound enabled

#### Hero Video example

<iframe src="https://player.mux.com/lSHERjOdEb4T01F5JzfX01qGPrITp6i01hvqYGdhpvHcyM?metadata-video-title=Hero+Video+Example&video-title=Hero+Video+Example" style={{ "width": "100%", "border": "none", "aspect-ratio": "16/9" }} allow="accelerometer; gyroscope; autoplay; encrypted-media; picture-in-picture;" allowFullScreen />

***

## Background video

The recommended way to add background videos to your website is with **Mux Background Video**, a lightweight component that uses HLS streaming to deliver the right quality for each viewer's screen size and network conditions.

* **Adaptive streaming** — Automatically adjusts quality based on network conditions
* **Resolution control** — Set a maximum resolution to optimize for your layout
* **Analytics ready** — Optional Mux Data integration for tracking engagement
* **Lightweight** — Minimal bundle size with no dependencies

### 1. Add the Background Component to Your Page

#### HTML Custom Element

Add the following to your HTML page. This example uses a sample video—you'll replace the playback ID with your own in the next step.

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Remove this script if you don't want Mux Data analytics -->
  <script defer src="https://cdn.jsdelivr.net/npm/mux-embed"></script>
  <script type="module" src="https://cdn.jsdelivr.net/npm/@mux/mux-background-video/html/+esm"></script>
  <style>
    mux-background-video,
    mux-background-video img {
      display: block;
      width: 100%;
      height: 100%;
      object-fit: cover;
    }
  </style>
</head>
<body>
  <mux-background-video src="https://stream.mux.com/kF01v9aKFlY63i2GkQKQGDv5Y9PbMGdtQD92j5qJCYWU.m3u8">
    <img src="https://image.mux.com/kF01v9aKFlY63i2GkQKQGDv5Y9PbMGdtQD92j5qJCYWU/thumbnail.webp?time=0" alt="Background video" />
  </mux-background-video>
</body>
</html>
```

The `<img>` element inside acts as a poster image while the video loads.

#### React Component

For React applications, install the package:

```bash
npm install @mux/mux-background-video
```

```tsx
import { MuxBackgroundVideo } from '@mux/mux-background-video/react';

function BackgroundVideoSection() {
  return (
    <MuxBackgroundVideo src="https://stream.mux.com/kF01v9aKFlY63i2GkQKQGDv5Y9PbMGdtQD92j5qJCYWU.m3u8">
      <img src="https://image.mux.com/kF01v9aKFlY63i2GkQKQGDv5Y9PbMGdtQD92j5qJCYWU/thumbnail.webp?time=0" alt="Background video" />
    </MuxBackgroundVideo>
  );
}
```

### 2. Upload Your Video and Replace the Playback ID

To use your own video:

1. Go to **Assets** in the Mux Dashboard and click **Upload**
2. Select your video file
3. For video quality, select **Basic** (recommended for background videos)
4. Click **Upload**

Once the upload is complete, copy your **Playback ID** from the asset details page and replace the sample playback ID in your code.

<Callout type="info">
  Mux Background Video requires [Basic](https://www.mux.com/docs/guides/use-video-quality-levels#basic) or [Premium](https://www.mux.com/docs/guides/use-video-quality-levels#premium) video quality.
</Callout>

### 3. Set Maximum Resolution

Limit the resolution to match your layout and save bandwidth:

```html
<mux-background-video
  src="https://stream.mux.com/{PLAYBACK_ID}.m3u8"
  max-resolution="720p"
>
  <img src="https://image.mux.com/{PLAYBACK_ID}/thumbnail.webp?time=0" alt="Background video" />
</mux-background-video>
```

For React:

```tsx
<MuxBackgroundVideo
  src="https://stream.mux.com/{PLAYBACK_ID}.m3u8"
  maxResolution="720p"
>
  <img src="https://image.mux.com/{PLAYBACK_ID}/thumbnail.webp?time=0" alt="Background video" />
</MuxBackgroundVideo>
```

### 4. Pause When Not Visible (Optional)

Background videos continue playing even when users switch to another browser tab. To save CPU and battery, you can pause the video when the page is hidden:

```js
document.addEventListener('visibilitychange', () => {
  const video = document.querySelector('mux-background-video video');
  if (!video) return; // guard in case the inner <video> hasn't rendered yet

  if (document.hidden) {
    video.pause();
  } else {
    video.play();
  }
});
```

For complete API documentation and advanced options, see the [Mux Background Video guide](/docs/guides/mux-background-video).

***

## Alternative: Static MP4 Files

You can also serve background videos as static MP4 files instead of using HLS streaming. Each approach has its own strengths.

**Why choose MP4:**

* No JavaScript required—just a `<video>` tag
* Works on legacy devices and browsers
* Better for very short clips (under 10 seconds)
* Downloaded once and cached by the browser

**Why choose HLS (Mux Background Video):**

* Adaptive streaming adjusts quality to network conditions
* Lower bandwidth usage on slow connections
* Videos are ready to play sooner after upload

MP4 files have a small additional cost for storage. See [video pricing](/docs/pricing/video#static-rendition-mp4s-storage) for details.

### Enable Static Renditions

To use MP4 files, enable static renditions when creating your asset. You can do this either in the Mux Dashboard or via the API by including the `static_renditions` property:

```json
{
  "playback_policies": ["public"],
  "video_quality": "basic",
  "static_renditions": [{ "resolution": "highest" }]
}
```

**In the Mux Dashboard:** Click **Advanced** when uploading to reveal the JSON editor, then add the `static_renditions` property shown above.

**Via the API:** Include the `static_renditions` property in your [asset creation request](/api/reference/video#operation/create-asset).
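
As a minimal sketch of that request, assuming Node 18+ for built-in `fetch` and your Mux API credentials
in `MUX_TOKEN_ID`/`MUX_TOKEN_SECRET` (the input URL is a placeholder for your own video file):

```js
const auth = Buffer.from(
  `${process.env.MUX_TOKEN_ID}:${process.env.MUX_TOKEN_SECRET}`
).toString('base64');

const res = await fetch('https://api.mux.com/video/v1/assets', {
  method: 'POST',
  headers: {
    Authorization: `Basic ${auth}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    inputs: [{ url: 'https://example.com/my-video.mp4' }], // placeholder input
    playback_policies: ['public'],
    video_quality: 'basic',
    static_renditions: [{ resolution: 'highest' }],
  }),
});

const { data: asset } = await res.json();
console.log(asset.id);
```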

### Embed the Video

```html
<video
  src="https://stream.mux.com/{PLAYBACK_ID}/highest.mp4"
  poster="https://image.mux.com/{PLAYBACK_ID}/thumbnail.jpg"
  autoplay
  loop
  muted
  playsinline
></video>
```

The key attributes:

* `autoplay` — Start immediately
* `muted` — Required for autoplay to work in browsers
* `loop` — Replay continuously
* `playsinline` — Prevent fullscreen on mobile

For more details on static renditions, see [Use static MP4 and M4A renditions](/docs/guides/enable-static-mp4-renditions).

***

## Hero Videos

For product demos, explainers, testimonials, and promotional content, you need a different approach than background videos. These videos are typically longer, users actively watch them, and they need playback controls (play, pause, seek, volume).

The **Mux Player** is built for this use case. It provides adaptive quality that adjusts to network conditions, full playback controls, and custom branding.

## Mux Player

Mux Player is a customizable video player with built-in adaptive streaming, playback controls, and branding options for delivering high-quality video experiences on your website.

* **Adaptive streaming** — Quality adjusts automatically to network conditions
* **Full controls** — Play, pause, seek, volume, fullscreen
* **Custom branding** — Match your website's colors
* **Captions** — Built-in support for accessibility
* **Works everywhere** — Responsive and mobile-friendly

## Implementation

### 1. Upload Your Video

In the Mux Dashboard:

1. Go to **Assets** and click **Upload**
2. Select your video file
3. For video quality, select **Basic** (recommended for website videos)
4. Optionally enable **Auto-generated captions** to automatically generate captions from your video's audio for improved accessibility
5. Click **Upload**

Once the upload is complete, copy your **Playback ID** from the asset details page.

### 2. Embed the Player

Add an iframe to your page. You can also copy this code directly from the **Playback and Thumbnails** tab on the asset details page in the Dashboard.

```html
<iframe
  src="https://player.mux.com/{PLAYBACK_ID}"
  style="aspect-ratio: 16/9; width: 100%;"
  allow="accelerometer; gyroscope; autoplay; encrypted-media; picture-in-picture;"
  allowfullscreen="true"
></iframe>
```

You now have a fully functional video player that looks like this:

<iframe src="https://player.mux.com/lSHERjOdEb4T01F5JzfX01qGPrITp6i01hvqYGdhpvHcyM?metadata-video-title=Docs+Hero+Video+Example+%5BDon%27t+Delete%5D&video-title=Docs+Hero+Video+Example+%5BDon%27t+Delete%5D" style={{"width": "100%", "border": "none", "aspect-ratio": "16/9"}} allow="accelerometer; gyroscope; autoplay; encrypted-media; picture-in-picture;" allowFullScreen />

## Controlling Playback Resolution

When you upload a video, Mux creates multiple versions of it at different resolutions (like 480p, 720p, 1080p). These versions are called renditions. Lower resolution renditions download faster on slow connections, while higher resolution renditions look sharper on larger screens.

By default, Mux automatically picks the best rendition for each viewer. You can also set minimum and maximum resolutions to control which renditions are available.

### Setting a Minimum Resolution

You can use `min-resolution` to exclude lower resolution renditions:

```html
src="https://player.mux.com/{PLAYBACK_ID}?min-resolution=720p"
```

**Why set a minimum resolution?**

* **Visual quality matters**: If your video contains text, product details, or screen recordings, low resolutions can make content hard to read
* **Brand perception**: You may not want viewers to ever see a pixelated version of your marketing content
* **Known audience**: If your viewers are primarily on desktop with reliable connections, there's no need to serve 360p or 480p

### Setting a Maximum Resolution

You can use `max-resolution` to cap the highest quality served:

```html
src="https://player.mux.com/{PLAYBACK_ID}?max-resolution=1080p"
```

**Why set a maximum resolution?**

* **Embedded player size**: If your video displays at 720p on the page, serving 4K is unnecessary
* **Bandwidth control**: Limit data usage for viewers on metered connections
* **Consistency**: Ensure all viewers get a similar experience regardless of their device capabilities
* **Cost**: Higher resolutions cost more to deliver, so capping resolution can reduce your bill

### Combining Both

You can use both parameters together to define a specific range:

```html
<!-- Only serve 720p and 1080p -->
src="https://player.mux.com/{PLAYBACK_ID}?min-resolution=720p&max-resolution=1080p"
```

This excludes both the lower renditions (480p, 360p, 270p) and higher ones (1440p, 2160p), giving you precise control over the viewing experience.

Available values: 270p, 360p, 480p, 540p, 720p, 1080p, 1440p, 2160p

## Customization

You can customize the player by adding parameters to the URL.

### Brand Colors

You can set `accent-color` to match your brand. This colors the play button, progress bar, and controls:

```html
<iframe
  src="https://player.mux.com/{PLAYBACK_ID}?min-resolution=720p&accent-color=%235D3FD3"
  style="aspect-ratio: 16/9; width: 100%;"
  allow="accelerometer; gyroscope; autoplay; encrypted-media; picture-in-picture;"
  allowfullscreen="true"
></iframe>
```

### Thumbnail

Mux automatically generates a thumbnail from your video. You can also specify a time, use a custom URL, or use a GIF.

```html
<!-- Default thumbnail -->
src="https://player.mux.com/{PLAYBACK_ID}"

<!-- Thumbnail at 10 seconds -->
src="https://player.mux.com/{PLAYBACK_ID}?thumbnail-time=10"
```

See [Get images from a video](/docs/guides/get-images-from-a-video) for more thumbnail options.

### Sizing

Control the player size with CSS:

```html
<iframe
  src="https://player.mux.com/{PLAYBACK_ID}?min-resolution=720p"
  style="aspect-ratio: 16/9; width: 100%; max-width: 800px;"
  allow="accelerometer; gyroscope; autoplay; encrypted-media; picture-in-picture;"
  allowfullscreen="true"
></iframe>
```

## Captions

If you enabled auto-generated captions during upload, the player automatically shows the captions button once they're ready. No additional configuration needed.

Mux supports auto-generated captions in English, French, German, Italian, Portuguese, Spanish, and many other languages.

See [Add subtitles to your videos](/docs/guides/add-subtitles-to-your-videos) for more details.

## FAQ

### How do I make sure my videos are cached properly?

For looping videos, the browser caches the video and doesn't re-request it on each loop. Short videos finish downloading quickly, and once a video is fully downloaded, subsequent loops play from cache.

**How to verify:**
Open DevTools, watch the Network tab, and let a video loop a few times. Cached requests may not appear at all, or they may be labeled "(from cache)" or "(from disk cache)".
