You can add videos to Gemini requests to perform tasks that involve
understanding the contents of the included videos. This page
shows you how to add videos to your requests to Gemini in
Vertex AI by using the Google Cloud console and the Vertex AI API. For the models that support video understanding and the video MIME types they accept, see Supported models on this page.
For a list of languages supported by Gemini models, see the model information in Google models. To learn
more about how to design multimodal prompts, see
Design multimodal prompts.
If you're looking for a way to use Gemini directly from your mobile and
web apps, see the
Firebase AI Logic client SDKs for
Swift, Android, Web, Flutter, and Unity apps.

You can add a single video or multiple videos in your request to Gemini, and the video can include audio. The sample code in each of the following tabs shows a different way to identify what's in a video. This sample works with all Gemini multimodal models.

To send a multimodal prompt by using the Google Cloud console, do the following:
1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.
2. Click Create prompt.
3. Optional: Configure the model and parameters.
4. Optional: To configure advanced parameters, click Advanced and configure as follows:
- Top-K: Use the slider or textbox to enter a value for top-K. For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.
- Temperature: If the model returns a response that's too generic, too short, or the model gives a fallback response, try increasing the temperature.
- Output token limit: Specify a lower value for shorter responses and a higher value for potentially longer responses.
5. Click Insert Media, and select a source for your file:
- Upload: Select the file that you want to upload and click Open.
- By URL: Enter the URL of the file that you want to use and click Insert.
- YouTube: Enter the URL of the YouTube video that you want to use and click Insert. You can use any public video or a video that's owned by the account that you used to sign in to the Google Cloud console.
- Cloud Storage: Select the bucket and then the file from the bucket that you want to import and click Select.
- Google Drive: Select the file that you want to import and click Select.
The file thumbnail displays in the Prompt pane, along with the total number of tokens. If your prompt data exceeds the token limit, the tokens are truncated and aren't included in processing your data.
6. Enter your text prompt in the Prompt pane.
7. Optional: To view the Token ID to text and Token IDs, click the tokens count in the Prompt pane.
8. Click Submit.
9. Optional: To save your prompt to My prompts, click Save.
10. Optional: To get the Python code or a curl command for your prompt, click Get code.
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
Learn how to install or update the Gen AI SDK for Go.
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
After you
set up your environment,
you can use REST to test a text prompt. The following sample sends a request to the publisher
model endpoint.
Before using any of the request data,
make the following replacements:
When specifying a fileURI, you must also specify the media type (mimeType) of the file. If you don't have a video file in Cloud Storage, then you can use the following
publicly available file: gs://cloud-samples-data/video/animals.mp4 with a mime type of video/mp4.
To send your request, choose one of these options:
Save the request body in a file named request.json, then execute the following command to send your REST request:
Save the request body in a file named request.json, then execute the following command to send your REST request:
You should receive a JSON response similar to the following.

The following shows you how to summarize a video file with audio and return
chapters with timestamps. This sample works with Gemini 2.0.
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
After you
set up your environment,
you can use REST to test a text prompt. The following sample sends a request to the publisher
model endpoint.
Before using any of the request data,
make the following replacements:
When specifying a fileURI, you must also specify the media type (mimeType) of the file. If you don't have a video file in Cloud Storage, then you can use the following
publicly available file: gs://cloud-samples-data/generative-ai/video/pixel8.mp4 with a mime type of video/mp4.
To send your request, choose one of these options:
Save the request body in a file named request.json, then execute the following command to send your REST request:
Save the request body in a file named request.json, then execute the following command to send your REST request:
You should receive a JSON response similar to the following.

To send a multimodal prompt by using the Google Cloud console, do the following:
1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.
2. Click Create prompt.
3. Optional: Configure the model and parameters.
4. Optional: To configure advanced parameters, click Advanced and configure as follows:
- Top-K: Use the slider or textbox to enter a value for top-K. For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.
- Temperature: If the model returns a response that's too generic, too short, or the model gives a fallback response, try increasing the temperature.
- Output token limit: Specify a lower value for shorter responses and a higher value for potentially longer responses.
5. Click Insert Media, and select a source for your file:
- Upload: Select the file that you want to upload and click Open.
- By URL: Enter the URL of the file that you want to use and click Insert.
- YouTube: Enter the URL of the YouTube video that you want to use and click Insert. You can use any public video or a video that's owned by the account that you used to sign in to the Google Cloud console.
- Cloud Storage: Select the bucket and then the file from the bucket that you want to import and click Select.
- Google Drive: Select the file that you want to import and click Select.
The file thumbnail displays in the Prompt pane, along with the total number of tokens. If your prompt data exceeds the token limit, the tokens are truncated and aren't included in processing your data.
6. Enter your text prompt in the Prompt pane.
7. Optional: To view the Token ID to text and Token IDs, click the tokens count in the Prompt pane.
8. Click Submit.
9. Optional: To save your prompt to My prompts, click Save.
10. Optional: To get the Python code or a curl command for your prompt, click Get code.
You can customize video processing in the Gemini for Google Cloud API by setting
clipping intervals or providing custom frame rate sampling. You can clip videos by specifying
videoMetadata with start and end offsets, and you can set custom frame rate sampling by
passing an fps argument to videoMetadata. By default, 1 frame per second (FPS) is sampled
from the video. You might want to set a low FPS (< 1) for long videos. This is especially
useful for mostly static videos (for example, lectures). If you want to capture more details
in rapidly changing visuals, consider setting a higher FPS value. You can also adjust
MediaResolution to process your videos with fewer tokens.
Each model has a set of optional parameters that you can set. For more
information, see Content generation parameters.
Supported models
The following Gemini models support video understanding:
- Gemini 2.5 Flash-Lite
- Gemini 2.5 Flash with Live API native audio
- Gemini 2.0 Flash with Live API
- Gemini 2.0 Flash with image generation
- Gemini 2.5 Pro
- Gemini 2.5 Flash
- Gemini 2.0 Flash
- Gemini 2.0 Flash-Lite

All of these models accept the following video MIME types:
- video/x-flv
- video/quicktime
- video/mpeg
- video/mpegs
- video/mpg
- video/mp4
- video/webm
- video/wmv
- video/3gpp
The quota metric is generate_content_video_input_per_base_model_id_and_resolution.
Add videos to a request
Single video
Console
To send a multimodal prompt by using the Google Cloud console, follow the steps earlier on this page. For the advanced configuration, note the following:

A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.
When you click Insert Media, you can add a file from these sources: Upload, By URL, YouTube, Cloud Storage, or Google Drive.
Python
Install
pip install --upgrade google-genai
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
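With the environment configured, a single-video request with the Gen AI SDK can look like the following minimal sketch. The model ID and the sample Cloud Storage video are borrowed from the REST example later on this page; field names follow the google-genai types module.

from google import genai
from google.genai.types import Part

# Reads the GOOGLE_CLOUD_* and GOOGLE_GENAI_USE_VERTEXAI variables set above.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        # Reference a video stored in Cloud Storage; the MIME type is required.
        Part.from_uri(
            file_uri="gs://cloud-samples-data/video/animals.mp4",
            mime_type="video/mp4",
        ),
        "What is in the video?",
    ],
)
print(response.text)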
Go
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
REST
PROJECT_ID: Your project ID.
FILE_URI: The URI or URL of the file to include in the prompt. Acceptable values include Cloud Storage URIs and publicly available HTTP URLs. For gemini-2.0-flash and gemini-2.0-flash-lite, the size limit is 2 GB. When specifying a fileURI, you must also specify the media type (mimeType) of the file. If VPC Service Controls is enabled, specifying a media file URL for fileURI is not supported. If you don't have a video file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/video/animals.mp4 with a mime type of video/mp4. To view this video, open the sample MP4 file.
MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following:
application/pdf
audio/mpeg
audio/mp3
audio/wav
image/png
image/jpeg
image/webp
text/plain
video/mov
video/mpeg
video/mp4
video/mpg
video/avi
video/wmv
video/mpegps
video/flv
TEXT: The text instructions to include in the prompt. For example: What is in the video?
curl
Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
cat > request.json << 'EOF'
{
"contents": {
"role": "USER",
"parts": [
{
"fileData": {
"fileUri": "FILE_URI",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT"
}
]
}
}
EOF
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.5-flash:generateContent"PowerShell
Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
@'
{
"contents": {
"role": "USER",
"parts": [
{
"fileData": {
"fileUri": "FILE_URI",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT"
}
]
}
}
'@ | Out-File -FilePath request.json -Encoding utf8
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.5-flash:generateContent" | Select-Object -Expand Content
This request uses the generateContent method, which returns the response after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the streamGenerateContent method. This sample uses a model that supports video understanding (gemini-2.0-flash). This sample might support other models as well.
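As an illustration only (a sketch, not this page's own sample): with the Python SDK installed earlier, the equivalent of streamGenerateContent is client.models.generate_content_stream, shown here with the same sample video.

from google import genai
from google.genai.types import Part

client = genai.Client()

# generate_content_stream calls the streamGenerateContent method and
# yields partial responses as they're generated.
for chunk in client.models.generate_content_stream(
    model="gemini-2.5-flash",
    contents=[
        Part.from_uri(
            file_uri="gs://cloud-samples-data/video/animals.mp4",
            mime_type="video/mp4",
        ),
        "What is in the video?",
    ],
):
    print(chunk.text, end="")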
Video with audio
Python
Install
pip install --upgrade google-genai
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
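With the environment configured, a sketch of a summarization request that asks for timestamped chapters might look like the following. It assumes the google-genai SDK and the pixel8.mp4 sample file referenced in the REST example below; the prompt wording is illustrative.

from google import genai
from google.genai.types import Part

client = genai.Client()

prompt = (
    "Summarize this video, then provide chapters with timestamps in "
    "MM:SS format. Include anything important that people say."
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        # A video that includes an audio track.
        Part.from_uri(
            file_uri="gs://cloud-samples-data/generative-ai/video/pixel8.mp4",
            mime_type="video/mp4",
        ),
        prompt,
    ],
)
print(response.text)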
REST
PROJECT_ID: Your project ID.
FILE_URI: The URI or URL of the file to include in the prompt. Acceptable values include Cloud Storage URIs and publicly available HTTP URLs. For gemini-2.0-flash and gemini-2.0-flash-lite, the size limit is 2 GB. When specifying a fileURI, you must also specify the media type (mimeType) of the file. If VPC Service Controls is enabled, specifying a media file URL for fileURI is not supported. If you don't have a video file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/generative-ai/video/pixel8.mp4 with a mime type of video/mp4. To view this video, open the sample MP4 file.
MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following:
application/pdf
audio/mpeg
audio/mp3
audio/wav
image/png
image/jpeg
image/webp
text/plain
video/mov
video/mpeg
video/mp4
video/mpg
video/avi
video/wmv
video/mpegps
video/flv
TEXT: The text instructions to include in the prompt. For example: Provide a description of the video. The description should also contain anything important which people say in the video.
curl
Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
cat > request.json << 'EOF'
{
"contents": {
"role": "USER",
"parts": [
{
"fileData": {
"fileUri": "FILE_URI",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT"
}
]
}
}
EOF
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.5-flash:generateContent"PowerShell
Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
@'
{
"contents": {
"role": "USER",
"parts": [
{
"fileData": {
"fileUri": "FILE_URI",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT"
}
]
}
}
'@ | Out-File -FilePath request.json -Encoding utf8
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.5-flash:generateContent" | Select-Object -Expand Content
This request uses the generateContent method, which returns the response after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the streamGenerateContent method. This sample uses a model that supports video understanding (gemini-2.0-flash). This sample might support other models as well.
Console
To send a multimodal prompt by using the Google Cloud console, follow the steps earlier on this page. For the advanced configuration, note the following:

A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.
When you click Insert Media, you can add a file from these sources: Upload, By URL, YouTube, Cloud Storage, or Google Drive.
Customize video processing
Set clipping intervals
You can clip videos by specifying videoMetadata with start and end offsets.
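The following is a minimal sketch of clipping, assuming the google-genai Python SDK from the earlier samples; VideoMetadata accepts the start and end offsets as duration strings, and the sample video and prompt are illustrative.

from google import genai
from google.genai.types import FileData, Part, VideoMetadata

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        Part(
            file_data=FileData(
                file_uri="gs://cloud-samples-data/video/animals.mp4",
                mime_type="video/mp4",
            ),
            # Only the clip between 00:20 and 01:00 is processed.
            video_metadata=VideoMetadata(
                start_offset="20s",
                end_offset="60s",
            ),
        ),
        "What happens in this part of the video?",
    ],
)
print(response.text)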
Set a custom frame rate
You can set custom frame rate sampling by passing an fps argument to videoMetadata.
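For example, here is a sketch, under the same SDK assumptions as above, that samples a long, mostly static video at 0.2 FPS, that is, one frame every five seconds:

from google import genai
from google.genai.types import FileData, Part, VideoMetadata

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        Part(
            file_data=FileData(
                file_uri="gs://cloud-samples-data/video/animals.mp4",
                mime_type="video/mp4",
            ),
            # Sample one frame every five seconds instead of the default 1 FPS.
            video_metadata=VideoMetadata(fps=0.2),
        ),
        "Summarize the key points of this video.",
    ],
)
print(response.text)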
Adjust media resolution
You can adjust MediaResolution to process your videos with fewer tokens.
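A sketch under the same SDK assumptions, where the media resolution is set on the request configuration:

from google import genai
from google.genai.types import GenerateContentConfig, MediaResolution, Part

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        Part.from_uri(
            file_uri="gs://cloud-samples-data/video/animals.mp4",
            mime_type="video/mp4",
        ),
        "What is in the video?",
    ],
    # Low media resolution tokenizes frames at 66 tokens each instead of 258.
    config=GenerateContentConfig(
        media_resolution=MediaResolution.MEDIA_RESOLUTION_LOW,
    ),
)
print(response.text)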
Set optional model parameters
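Each model has a set of optional parameters that you can set; for more information, see Content generation parameters. As a sketch with the google-genai SDK, these parameters go on the request configuration (the values shown are illustrative):

from google import genai
from google.genai.types import GenerateContentConfig, Part

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        Part.from_uri(
            file_uri="gs://cloud-samples-data/video/animals.mp4",
            mime_type="video/mp4",
        ),
        "What is in the video?",
    ],
    # Optional generation parameters; names follow the Gen AI SDK.
    config=GenerateContentConfig(
        temperature=0.2,
        top_p=0.95,
        top_k=40,
        max_output_tokens=1024,
    ),
)
print(response.text)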
Video tokenization
For details about how tokens are calculated for video, see Token calculation under Technical details about videos later on this page.
Best practices
When using video, use the following best practices and information for the best results:
- If your prompt contains a single video, place the video before the text prompt.
- If you require timestamp localization in a video with audio, ask the model to generate timestamps that follow the format described in Timestamp format later on this page.
Limitations
While Gemini multimodal models are powerful in many multimodal use cases, it's important to understand the limitations of the models:
- Content moderation: The models refuse to provide answers on videos that violate our safety policies.
- Non-speech sound recognition: The models that support audio might make mistakes recognizing sound that's not speech.
Technical details about videos
Supported models & context: All Gemini 2.0 and 2.5 models can process video data.
- Models with a 2M context window can process videos up to 2 hours long at default media resolution or 6 hours long at low media resolution, while models with a 1M context window can process videos up to 1 hour long at default media resolution or 3 hours long at low media resolution.
File API processing: When using the File API, videos are sampled at 1 frame per second (FPS) and audio is processed at 1Kbps (single channel). Timestamps are added every second.
- These rates are subject to change in the future for improvements in inference.
Token calculation: Each second of video is tokenized as follows:
- Individual frames (sampled at 1 FPS):
  - If mediaResolution is set to low, frames are tokenized at 66 tokens per frame, plus timestamp tokens.
  - Otherwise, frames are tokenized at 258 tokens per frame, plus timestamp tokens.
- Audio: 25 tokens per second, plus timestamp tokens.
- Metadata is also included.
- Total: Approximately 300 tokens per second of video at default media resolution, or 100 tokens per second of video at low media resolution.
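As a rough illustration of this arithmetic (an unofficial estimate; real counts also include the timestamp and metadata tokens noted above):

def estimate_video_tokens(duration_seconds: float, low_resolution: bool = False) -> int:
    """Roughly estimate video tokens using the per-second rates above."""
    frame_tokens = 66 if low_resolution else 258  # per sampled frame at 1 FPS
    audio_tokens = 25                             # per second of audio
    return int(duration_seconds * (frame_tokens + audio_tokens))

# A 10-minute video: ~170K tokens at default resolution, ~55K at low.
print(estimate_video_tokens(600))
print(estimate_video_tokens(600, low_resolution=True))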
Timestamp format: When referring to specific moments in a video within your prompt, the timestamp format depends on your video's frames per second (FPS) sampling rate:
- For sampling rates at 1 FPS or below: Use the MM:SS format, where the first two digits represent minutes and the last two digits represent seconds. If you have offsets that are greater than 1 hour, use the H:MM:SS format.
- For sampling rates above 1 FPS: Use the MM:SS.sss format or, if you have offsets that are greater than 1 hour, the H:MM:SS.sss format, described as follows:
  - The first digit represents the hour.
  - The next two digits represent minutes.
  - The next two digits represent seconds.
  - The final three digits represent subseconds.
Best practices:
- Use only one video per prompt request for optimal results.
- If combining text and a single video, place the text prompt after the video part in the contents array.
- Be aware that fast action sequences might lose detail due to the 1 FPS sampling rate. Consider slowing down such clips if necessary.
What's next
- Start building with Gemini multimodal models - new customers get $300 in free Google Cloud credits to explore what they can do with Gemini.
- Learn how to send chat prompt requests.
- Learn about responsible AI best practices and Vertex AI's safety filters.