Veo is the name of the model that supports video generation. Veo generates a video from a text prompt or an image prompt that you provide.
To explore this model in the console, see the Video Generation model card in the Model Garden.
Supported models
The Veo API supports the following models:
- veo-2.0-generate-001 (GA)
- veo-3.0-generate-preview (Preview)
- veo-3.0-fast-generate-preview (Preview)
HTTP request
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:predictLongRunning \
-d '{
"instances": [
{
"prompt": string,
"image": {
// Union field can be only one of the following:
"bytesBase64Encoded": string,
"gcsUri": string,
// End of list of possible types for union field.
"mimeType": string
},
"lastFrame": {
// Union field can be only one of the following:
"bytesBase64Encoded": string,
"gcsUri": string,
// End of list of possible types for union field.
"mimeType": string
},
"video": {
// Union field can be only one of the following:
"bytesBase64Encoded": string,
"gcsUri": string,
// End of list of possible types for union field.
"mimeType": string
}
}
],
"parameters": {
"aspectRatio": string,
"durationSeconds": integer,
"enhancePrompt": boolean,
"generateAudio": boolean,
"negativePrompt": string,
"personGeneration": string,
"sampleCount": integer,
"seed": uint32,
"storageUri": string
}
}'
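For reference, the following is a minimal Python sketch of the same predictLongRunning call. It is not an official sample: it assumes the google-auth and requests libraries, Application Default Credentials, and placeholder project, location, model, prompt, and bucket values.

# Minimal sketch: send a predictLongRunning request with google-auth and requests.
# Assumes Application Default Credentials are configured (for example, by running
# gcloud auth application-default login). All values below are placeholders.
import google.auth
import google.auth.transport.requests
import requests

PROJECT_ID = "your-project-id"
LOCATION = "us-central1"
MODEL_ID = "veo-2.0-generate-001"

credentials, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/{LOCATION}/publishers/google/models/{MODEL_ID}:predictLongRunning"
)
body = {
    "instances": [{"prompt": "a close-up of a hummingbird hovering over a flower"}],
    "parameters": {
        "sampleCount": 1,
        "durationSeconds": 8,
        "storageUri": "gs://your-bucket/output/",
    },
}

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}", "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print(response.json()["name"])  # the full operation name of the long-running operation

The name value that this prints is the operation name that you later pass to fetchPredictOperation to poll for the result.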
Instances | |
---|---|
prompt | Required for text-to-video. A text string to guide the first eight seconds of the video. |
image | Optional. An image to guide video generation, which can be either a bytesBase64Encoded string or a gcsUri. |
lastFrame | Optional. An image of the last frame of a video; the model generates the video that fills the space between the first-frame image and this last frame. Can be either a bytesBase64Encoded string or a gcsUri. |
video | Optional. A Veo-generated video to extend in length, which can be either a bytesBase64Encoded string or a gcsUri. |
bytesBase64Encoded | A Base64-encoded bytes string of an image or video file. |
gcsUri | A string URI to a Cloud Storage bucket location. |
mimeType | Required for the image, lastFrame, and video objects. Specifies the MIME type of the image or video. For images, the accepted MIME types are image/jpeg and image/png. For videos, the accepted MIME type is video/mp4. |
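As an illustration of the union fields, the following Python sketch shows a single instances entry that supplies a first-frame image and a lastFrame image by Cloud Storage URI. The bucket paths and prompt are placeholders, and it assumes you use a model version that supports first-to-last-frame generation.

# Sketch of a single "instances" entry for first-to-last-frame generation.
# The gs:// paths are placeholders; mimeType is required whenever an image,
# lastFrame, or video object is provided.
instance = {
    "prompt": "the camera slowly pushes in as the scene transitions",
    "image": {"gcsUri": "gs://your-bucket/first_frame.png", "mimeType": "image/png"},
    "lastFrame": {"gcsUri": "gs://your-bucket/last_frame.png", "mimeType": "image/png"},
}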
Parameters | |
---|---|
aspectRatio | Optional. Specifies the aspect ratio of generated videos. Accepted values are 16:9 (default, landscape) and 9:16 (portrait). |
durationSeconds | Required. The length of the video files that you want to generate. Accepted integer values are 5-8 for veo-2.0-generate-001 and 8 for the Veo 3 models. |
enhancePrompt | Optional. Use Gemini to enhance your prompts. Accepted values are true and false. |
generateAudio | Required for the Veo 3 models. Generate audio for the video. Audio generation isn't supported by veo-2.0-generate-001. |
negativePrompt | Optional. A text string that describes anything you want to discourage the model from generating. |
personGeneration | Optional. The safety setting that controls whether people or face generation is allowed. One of the following: allow_adult (default), which allows generation of adults only, or disallow, which disallows the inclusion of people or faces in videos. |
sampleCount | Optional. The number of output videos requested. Accepted values are 1-4. |
seed | Optional. A number that makes generated videos deterministic. Adding a seed number with your request without changing other parameters causes the model to produce the same videos. The accepted range is 0-4294967295. |
storageUri | Optional. A Cloud Storage bucket URI to store the output videos, in the format gs://BUCKET_NAME/SUBDIRECTORY (for example, gs://video-bucket/output/). If no bucket is provided, Base64-encoded video bytes are returned in the response. |
Sample request
Use the following requests to send a text-to-video request or an image-to-video request:
Text-to-video generation request
REST
To test a text prompt by using the Vertex AI Veo API, send a POST request to the publisher model endpoint.
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your Google Cloud project ID.
- MODEL_ID: The model ID to use. Available values:
  - veo-2.0-generate-001 (GA)
  - veo-3.0-generate-preview (Preview)
  - veo-3.0-fast-generate-preview (Preview)
- TEXT_PROMPT: The text prompt used to guide video generation.
- OUTPUT_STORAGE_URI: Optional: The Cloud Storage bucket to store the output videos. If not provided, video bytes are returned in the response. For example: gs://video-bucket/output/.
- RESPONSE_COUNT: The number of video files you want to generate. Accepted integer values: 1-4.
- DURATION: The length of video files that you want to generate. Accepted integer values are 5-8.
- Additional optional parameters

  Use the following optional variables depending on your use case. Add some or all of the following parameters in the "parameters": {} object.

  "parameters": {
    "aspectRatio": "ASPECT_RATIO",
    "negativePrompt": "NEGATIVE_PROMPT",
    "personGeneration": "PERSON_SAFETY_SETTING",
    "sampleCount": RESPONSE_COUNT,
    "seed": SEED_NUMBER
  }

  - ASPECT_RATIO: string. Optional. Defines the aspect ratio of the generated videos. Values: 16:9 (default, landscape) or 9:16 (portrait).
  - NEGATIVE_PROMPT: string. Optional. A text string that describes what you want to discourage the model from generating.
  - PERSON_SAFETY_SETTING: string. Optional. The safety setting that controls whether people or face generation is allowed. Values: allow_adult (default value), which allows generation of adults only, or disallow, which disallows the inclusion of people or faces in videos.
  - RESPONSE_COUNT: int. Optional. The number of output videos requested. Values: 1-4.
  - SEED_NUMBER: uint32. Optional. A number to make generated videos deterministic. Specifying a seed number with your request without changing other parameters guides the model to produce the same videos. Values: 0-4294967295.
HTTP method and URL:
POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:predictLongRunning
Request JSON body:
{ "instances": [ { "prompt": "TEXT_PROMPT" } ], "parameters": { "storageUri": "OUTPUT_STORAGE_URI", "sampleCount": "RESPONSE_COUNT" } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:predictLongRunning"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:predictLongRunning" | Select-Object -Expand Content
{ "name": "projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID/operations/a1b07c8e-7b5a-4aba-bb34-3e1ccb8afcc8" }
Image-to-video generation request
REST
To test an image prompt by using the Vertex AI Veo API, send a POST request to the publisher model endpoint.
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your Google Cloud project ID.
- MODEL_ID: The model ID to use. Available values:
  - veo-2.0-generate-001 (GA)
  - veo-3.0-generate-preview (Preview)
- TEXT_PROMPT: The text prompt used to guide video generation.
- INPUT_IMAGE: Base64-encoded bytes string representing the input image. To ensure quality, the input image should be 720p or higher (1280 x 720 pixels) and have a 16:9 or 9:16 aspect ratio. Images of other aspect ratios or sizes may be resized or centrally cropped during the upload process.
- MIME_TYPE: The MIME type of the input image. Only images of the following MIME types are supported: image/jpeg or image/png.
- OUTPUT_STORAGE_URI: Optional: The Cloud Storage bucket to store the output videos. If not provided, video bytes are returned in the response. For example: gs://video-bucket/output/.
- RESPONSE_COUNT: The number of video files you want to generate. Accepted integer values: 1-4.
- DURATION: The length of video files that you want to generate. Accepted integer values are 5-8.
- Additional optional parameters

  Use the following optional variables depending on your use case. Add some or all of the following parameters in the "parameters": {} object.

  "parameters": {
    "aspectRatio": "ASPECT_RATIO",
    "negativePrompt": "NEGATIVE_PROMPT",
    "personGeneration": "PERSON_SAFETY_SETTING",
    "sampleCount": RESPONSE_COUNT,
    "seed": SEED_NUMBER
  }

  - ASPECT_RATIO: string. Optional. Defines the aspect ratio of the generated videos. Values: 16:9 (default, landscape) or 9:16 (portrait).
  - NEGATIVE_PROMPT: string. Optional. A text string that describes what you want to discourage the model from generating.
  - PERSON_SAFETY_SETTING: string. Optional. The safety setting that controls whether people or face generation is allowed. Values: allow_adult (default value), which allows generation of adults only, or disallow, which disallows the inclusion of people or faces in videos.
  - RESPONSE_COUNT: int. Optional. The number of output videos requested. Values: 1-4.
  - SEED_NUMBER: uint32. Optional. A number to make generated videos deterministic. Specifying a seed number with your request without changing other parameters guides the model to produce the same videos. Values: 0-4294967295.
HTTP method and URL:
POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:predictLongRunning
Request JSON body:
{ "instances": [ { "prompt": "TEXT_PROMPT", "image": { "bytesBase64Encoded": "INPUT_IMAGE", "mimeType": "MIME_TYPE" } } ], "parameters": { "storageUri": "OUTPUT_STORAGE_URI", "sampleCount": RESPONSE_COUNT } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:predictLongRunning"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:predictLongRunning" | Select-Object -Expand Content
{ "name": "projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID/operations/a1b07c8e-7b5a-4aba-bb34-3e1ccb8afcc8" }
Poll the status of the video generation long-running operation
Check the status of the video generation long-running operation.
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your Google Cloud project ID.
- MODEL_ID: The model ID to use. Available values:
  - veo-2.0-generate-001 (GA)
  - veo-3.0-generate-preview (Preview)
- OPERATION_ID: The unique operation ID returned in the original generate video request.
HTTP method and URL:
POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:fetchPredictOperation
Request JSON body:
{ "operationName": "projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID/operations/OPERATION_ID" }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:fetchPredictOperation"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:fetchPredictOperation" | Select-Object -Expand Content
Response body (generate video request)
Sending a text-to-video or image-to-video request returns the following response:
{
"name": string
}
Response element | Description |
---|---|
name | The full operation name of the long-running operation that begins after a video generation request is sent. |
Sample response (generate video request)
{
"name": "projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID/operations/OPERATION_ID"
}
Response body (poll long-running operation)
Polling the status of the original video generation long-running operation returns a response similar to the following:
{
"name": string,
"done": boolean,
"response":{
"@type":"type.googleapis.com/cloud.ai.large_models.vision.GenerateVideoResponse",
"raiMediaFilteredCount": integer,
"videos":[
{
"gcsUri": string,
"mimeType": string
},
{
"gcsUri": string,
"mimeType": string
},
{
"gcsUri": string,
"mimeType": string
},
{
"gcsUri": string,
"mimeType": string
}
]
}
}
Response element | Description |
---|---|
bytesBase64Encoded | A Base64-encoded bytes string that represents the video object. |
done | A boolean value that indicates whether the operation is complete. |
encoding | The video encoding type. |
gcsUri | The Cloud Storage URI of the generated video. |
name | The full operation name of the long-running operation that begins after a video generation request is sent. |
raiMediaFilteredCount | Returns a count of videos that Veo filtered due to responsible AI policies. If no videos are filtered, the returned count is 0. |
raiMediaFilteredReasons | Lists the reasons for any Veo-filtered videos due to responsible AI policies. For more information, see Safety filter code categories. |
response | The response body of the long-running operation. |
video | The generated video. |
Sample response (poll long-running operation)
{
"name": "projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID/operations/OPERATION_ID",
"done":true,
"response":{
"@type":"type.googleapis.com/cloud.ai.large_models.vision.GenerateVideoResponse",
"raiMediaFilteredCount": 0,
"videos":[
{
"gcsUri":"gs://STORAGE_BUCKET/TIMESTAMPED_SUBDIRECTORY/sample_0.mp4",
"mimeType":"video/mp4"
},
{
"gcsUri":"gs://STORAGE_BUCKET/TIMESTAMPED_SUBDIRECTORY/sample_1.mp4",
"mimeType":"video/mp4"
},
{
"gcsUri":"gs://STORAGE_BUCKET/TIMESTAMPED_SUBDIRECTORY/sample_2.mp4",
"mimeType":"video/mp4"
},
{
"gcsUri":"gs://STORAGE_BUCKET/TIMESTAMPED_SUBDIRECTORY/sample_3.mp4",
"mimeType":"video/mp4"
}
]
}
}
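After the operation completes with gcsUri values like the ones above, you can download the generated videos with the Cloud Storage client library. The following is a minimal Python sketch, not an official sample; it assumes google-cloud-storage is installed and that operation holds the decoded poll response shown above (see the polling sketch earlier in this document).

# Sketch: download each generated video referenced by a gcsUri in the poll response.
# Assumes google-cloud-storage is installed and "operation" is the decoded poll
# response from the earlier polling sketch.
from google.cloud import storage

client = storage.Client()
generate_response = operation["response"]
filtered = generate_response.get("raiMediaFilteredCount", 0)
videos = generate_response.get("videos", [])
print(f"{len(videos)} videos returned, {filtered} filtered by responsible AI policies")

for i, video in enumerate(videos):
    gcs_uri = video["gcsUri"]  # for example, gs://bucket/subdirectory/sample_0.mp4
    bucket_name, blob_name = gcs_uri.removeprefix("gs://").split("/", 1)
    client.bucket(bucket_name).blob(blob_name).download_to_filename(f"sample_{i}.mp4")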
More information
- For more information about using Veo on Vertex AI, see Generate videos using text and image prompts using Veo.
What's next
- Read Google DeepMind's information on the Veo model.
- Read the blog post "Veo and Imagen 3: Announcing new video and image generation models on Vertex AI".
- Read the blog post "New generative media models and tools, built with and for creators".