You can add images to Gemini requests to perform tasks that involve understanding the contents of the included images. This page shows you how to add images to your requests to Gemini in Vertex AI by using the Google Cloud console and the Vertex AI API.

For a list of languages supported by Gemini models, see Google models. To learn more about how to design multimodal prompts, see Design multimodal prompts. If you're looking for a way to use Gemini directly from your mobile and web apps, see the Firebase AI Logic client SDKs for Swift, Android, Web, Flutter, and Unity apps.

You can add a single image or multiple images in your request to Gemini. The sample code in each of the following tabs shows a different way to identify what's in an image. These samples work with all Gemini multimodal models.

To send a multimodal prompt by using the Google Cloud console, do the following:

1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.
2. Click Open freeform.
3. Optional: Configure the model and parameters:
   - Temperature: Use the slider or textbox to enter a value for temperature. The temperature is used for sampling during response generation, which occurs when topP and topK are applied. If the model returns a response that's too generic, too short, or a fallback response, try increasing the temperature.
   - Output token limit: Use the slider or textbox to enter a value for the maximum output limit, that is, the maximum number of tokens that can be generated in the response. A token is approximately four characters; 100 tokens correspond to roughly 60-80 words. Specify a lower value for shorter responses and a higher value for potentially longer responses.
   - Add stop sequence: Optional. Enter a stop sequence, which is a series of characters that can include spaces. If the model encounters a stop sequence, response generation stops. The stop sequence isn't included in the response, and you can add up to five stop sequences.
4. Optional: To configure advanced parameters, click Advanced and configure the following:
   - Top-K (not supported for Gemini 1.5): Use the slider or textbox to enter a value for top-K. For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.
5. Click Insert Media, and select a source for your file:
   - Upload: Select the file that you want to upload and click Open.
   - By URL: Enter the URL of the file that you want to use and click Insert.
   - Cloud Storage: Select the bucket, select the file from the bucket that you want to import, and click Select.
   - Google Drive: Select the file that you want to import and click Select.
   The file thumbnail displays in the Prompt pane, along with the total number of tokens. If your prompt data exceeds the token limit, the tokens are truncated and aren't included in processing your data.
6. Enter your text prompt in the Prompt pane.
7. Optional: To view the Token ID to text and Token IDs values, click the tokens count in the Prompt pane.
8. Click Submit.
9. Optional: To save your prompt to My prompts, click Save.
10. Optional: To get the Python code or a curl command for your prompt, click Get code.
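The parameters that the console exposes map to fields on the API's generationConfig. As a hedged sketch, the matching request fragment looks roughly like the following (field names follow the Vertex AI generateContent REST API; the values are illustrative):

```json
{
  "generationConfig": {
    "temperature": 0.4,
    "maxOutputTokens": 1024,
    "stopSequences": ["END"],
    "topP": 0.95,
    "topK": 3
  }
}
```

You can merge a fragment like this into the request bodies shown later on this page.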
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
Learn how to install or update the Go SDK.
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
Learn how to install or update the Java SDK.
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint. Before using any of the request data, make the following replacements.

When specifying a fileURI, you must also specify the media type (mimeType) of the file. If you don't have an image file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/generative-ai/image/scones.jpg with a mime type of image/jpeg.

To send your request, choose one of these options:

curl: Save the request body in a file named request.json. Then execute the following command to send your REST request:

PowerShell: Save the request body in a file named request.json. Then execute the following command to send your REST request:
You should receive a JSON response similar to the following.
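The sample response body is missing from this extract. As an illustrative sketch only, a generateContent response has roughly the following shape (field names come from the Vertex AI API; the text and token counts are placeholders, not real output):

```json
{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          { "text": "The image shows a table with blueberry scones..." }
        ]
      },
      "finishReason": "STOP"
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 265,
    "candidatesTokenCount": 35,
    "totalTokenCount": 300
  }
}
```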
Before using any of the request data, make the following replacements.

To send your request, choose one of these options:

curl: Save the request body in a file named request.json. Then execute the following command to send your REST request:

PowerShell: Save the request body in a file named request.json. Then execute the following command to send your REST request:
You should receive a JSON response similar to the following.

Each of the following tabs shows you a different way to include multiple images in a prompt request. Each sample takes in two sets of the following inputs: an image (with its media type) and text identifying the city and landmark in that image. The sample also takes in a third image and media type, but no text. The sample returns a text response indicating the city and landmark in the third image. These image samples work with all Gemini multimodal models.

To send a multimodal prompt by using the Google Cloud console, do the following:

1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.
2. Click Open freeform.
3. Optional: Configure the model and parameters:
   - Temperature: Use the slider or textbox to enter a value for temperature. The temperature is used for sampling during response generation, which occurs when topP and topK are applied. If the model returns a response that's too generic, too short, or a fallback response, try increasing the temperature.
   - Output token limit: Use the slider or textbox to enter a value for the maximum output limit, that is, the maximum number of tokens that can be generated in the response. A token is approximately four characters; 100 tokens correspond to roughly 60-80 words. Specify a lower value for shorter responses and a higher value for potentially longer responses.
   - Add stop sequence: Optional. Enter a stop sequence, which is a series of characters that can include spaces. If the model encounters a stop sequence, response generation stops. The stop sequence isn't included in the response, and you can add up to five stop sequences.
4. Optional: To configure advanced parameters, click Advanced and configure the following:
   - Top-K (not supported for Gemini 1.5): Use the slider or textbox to enter a value for top-K. For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.
5. Click Insert Media, and select a source for your file:
   - Upload: Select the file that you want to upload and click Open.
   - By URL: Enter the URL of the file that you want to use and click Insert.
   - Cloud Storage: Select the bucket, select the file from the bucket that you want to import, and click Select.
   - Google Drive: Select the file that you want to import and click Select.
   The file thumbnail displays in the Prompt pane, along with the total number of tokens. If your prompt data exceeds the token limit, the tokens are truncated and aren't included in processing your data.
6. Enter your text prompt in the Prompt pane.
7. Optional: To view the Token ID to text and Token IDs values, click the tokens count in the Prompt pane.
8. Click Submit.
9. Optional: To save your prompt to My prompts, click Save.
10. Optional: To get the Python code or a curl command for your prompt, click Get code.
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
Learn how to install or update the Go SDK.
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
Learn how to install or update the Java SDK.
To learn more, see the
SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint. Before using any of the request data, make the following replacements.

When specifying a fileURI, you must also specify the media type (mimeType) of the file. If you don't have image files in Cloud Storage, then you can use the following publicly available files: gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png, gs://cloud-samples-data/vertex-ai/llm/prompts/landmark2.png, and gs://cloud-samples-data/vertex-ai/llm/prompts/landmark3.png, each with a mime type of image/png.

To send your request, choose one of these options:

curl: Save the request body in a file named request.json. Then execute the following command to send your REST request:

PowerShell: Save the request body in a file named request.json. Then execute the following command to send your REST request:
You should receive a JSON response similar to the following.

Each model has a set of optional parameters that you can set. For more information, see Content generation parameters.

For details on how tokens are calculated for images, see the Image tokenization section.

Supported models

The following models support image understanding. Each supports the image/png, image/jpeg, and image/webp MIME types:

- Gemini 2.5 Flash-Lite
- Gemini 2.0 Flash with image generation
- Gemini 2.5 Pro
- Gemini 2.5 Flash
- Gemini 2.0 Flash
- Gemini 2.0 Flash-Lite
The quota metric is generate_content_video_input_per_base_model_id_and_resolution.

Add images to a request
Single image
Console
To send a multimodal prompt by using the Google Cloud console, do the
following:
Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.
A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

Upload
By URL
Cloud Storage
Google Drive
Python
Install
pip install --upgrade google-genai
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Go
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Node.js
Install
npm install @google/genai
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Java
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
REST
Image in Cloud Storage

- PROJECT_ID: Your project ID.
- FILE_URI: The URI or URL of the file to include in the prompt. For gemini-2.0-flash and gemini-2.0-flash-lite, the size limit is 2 GB. When specifying a fileURI, you must also specify the media type (mimeType) of the file. If VPC Service Controls is enabled, specifying a media file URL for fileURI is not supported. If you don't have an image file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/generative-ai/image/scones.jpg with a mime type of image/jpeg. To view this image, open the sample image file.
- MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following: application/pdf, audio/mpeg, audio/mp3, audio/wav, image/png, image/jpeg, image/webp, text/plain, video/mov, video/mpeg, video/mp4, video/mpg, video/avi, video/wmv, video/mpegps, video/flv.
- TEXT: The text instructions to include in the prompt. For example, What is shown in this image?
curl

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
cat > request.json << 'EOF'
{
"contents": {
"role": "USER",
"parts": [
{
"fileData": {
"fileUri": "FILE_URI",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT"
}
]
}
}
EOF
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.0-flash:generateContent"

PowerShell
Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
@'
{
"contents": {
"role": "USER",
"parts": [
{
"fileData": {
"fileUri": "FILE_URI",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT"
}
]
}
}
'@ | Out-File -FilePath request.json -Encoding utf8
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.0-flash:generateContent" | Select-Object -Expand Content

Base64 image data
- LOCATION: The region to process the request. Enter a supported region. For the full list of supported regions, see Available locations. A partial list of available regions: us-central1, us-west4, northamerica-northeast1, us-east4, us-west1, asia-northeast3, asia-southeast1, asia-northeast1.
- PROJECT_ID: Your project ID.
- B64_BASE_IMAGE: The base64 encoding of the image to include inline in the prompt. When including media inline, you must also specify the media type (mimeType) of the data.
- MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following: application/pdf, audio/mpeg, audio/mp3, audio/wav, image/png, image/jpeg, image/webp, text/plain, video/mov, video/mpeg, video/mp4, video/mpg, video/avi, video/wmv, video/mpegps, video/flv.
- TEXT: The text instructions to include in the prompt. For example, What is shown in this image?

curl

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
cat > request.json << 'EOF'
{
"contents": {
"role": "USER",
"parts": [
{
"inlineData": {
"data": "B64_BASE_IMAGE",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT"
}
]
}
}
EOF
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.0-flash:generateContent"

PowerShell
Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
@'
{
"contents": {
"role": "USER",
"parts": [
{
"inlineData": {
"data": "B64_BASE_IMAGE",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT"
}
]
}
}
'@ | Out-File -FilePath request.json -Encoding utf8
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.0-flash:generateContent" | Select-Object -Expand Content
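The request.json body above can also be generated programmatically. A short stdlib-only sketch (the image path is a placeholder; the JSON field names match the request body shown above):

```python
import base64
import json


def build_inline_image_request(image_path: str, mime_type: str, text: str) -> dict:
    """Build a generateContent request body with base64 inline image data."""
    with open(image_path, "rb") as f:
        b64_data = base64.b64encode(f.read()).decode("utf-8")
    return {
        "contents": {
            "role": "USER",
            "parts": [
                {"inlineData": {"data": b64_data, "mimeType": mime_type}},
                {"text": text},
            ],
        }
    }


# Example: write the body to request.json for use with the curl command above.
#   body = build_inline_image_request("scones.jpg", "image/jpeg",
#                                     "What is shown in this image?")
#   with open("request.json", "w") as f:
#       json.dump(body, f)
```

Because the image travels inside the JSON body, inline data is best suited to small files; for larger files, reference a Cloud Storage URI with fileData instead.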
This sample uses the generateContent method to request that the response is returned after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the streamGenerateContent method.

This sample uses a specific model (gemini-2.0-flash), but it might support other models as well.
Multiple images
Console
To send a multimodal prompt by using the Google Cloud console, do the
following:
Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.
A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

Upload
By URL
Cloud Storage
Google Drive
Python
Install
pip install --upgrade google-genai
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Go
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Node.js
Install
npm install @google/genai
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Java
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
REST
- PROJECT_ID: Your project ID.
- FILE_URI1: The URI or URL of the first file to include in the prompt. For gemini-2.0-flash and gemini-2.0-flash-lite, the size limit is 2 GB. When specifying a fileURI, you must also specify the media type (mimeType) of the file. If VPC Service Controls is enabled, specifying a media file URL for fileURI is not supported. If you don't have an image file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png with a mime type of image/png. To view this image, open the sample image file.
- MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following: application/pdf, audio/mpeg, audio/mp3, audio/wav, image/png, image/jpeg, image/webp, text/plain, video/mov, video/mpeg, video/mp4, video/mpg, video/avi, video/wmv, video/mpegps, video/flv.
- TEXT1: The text instructions to include in the prompt. For example, city: Rome, Landmark: the Colosseum
- FILE_URI2: The URI or URL of the second file to include in the prompt. The same requirements as FILE_URI1 apply. If you don't have an image file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/vertex-ai/llm/prompts/landmark2.png with a mime type of image/png. To view this image, open the sample image file.
- TEXT2: The text instructions to include in the prompt. For example, city: Beijing, Landmark: Forbidden City
- FILE_URI3: The URI or URL of the third file to include in the prompt. The same requirements as FILE_URI1 apply. If you don't have an image file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/vertex-ai/llm/prompts/landmark3.png with a mime type of image/png. To view this image, open the sample image file.
curl

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
cat > request.json << 'EOF'
{
"contents": {
"role": "USER",
"parts": [
{
"fileData": {
"fileUri": "FILE_URI1",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT1"
},
{
"fileData": {
"fileUri": "FILE_URI2",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT2"
},
{
"fileData": {
"fileUri": "FILE_URI3",
"mimeType": "MIME_TYPE"
}
}
]
}
}
EOF
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.5-flash:generateContent"

PowerShell
Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
@'
{
"contents": {
"role": "USER",
"parts": [
{
"fileData": {
"fileUri": "FILE_URI1",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT1"
},
{
"fileData": {
"fileUri": "FILE_URI2",
"mimeType": "MIME_TYPE"
}
},
{
"text": "TEXT2"
},
{
"fileData": {
"fileUri": "FILE_URI3",
"mimeType": "MIME_TYPE"
}
}
]
}
}
'@ | Out-File -FilePath request.json -Encoding utf8
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.5-flash:generateContent" | Select-Object -Expand Content
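The interleaved structure of the request body above (image, caption, image, caption, image) can be built programmatically. A stdlib-only sketch that mirrors the JSON shown above (the helper name is ours, not from any SDK):

```python
def build_multi_image_request(labeled_images, final_image):
    """Build a generateContent body that interleaves images with captions.

    labeled_images: list of (file_uri, mime_type, caption) tuples.
    final_image: (file_uri, mime_type) for the uncaptioned final image.
    """
    parts = []
    for file_uri, mime_type, caption in labeled_images:
        parts.append({"fileData": {"fileUri": file_uri, "mimeType": mime_type}})
        parts.append({"text": caption})
    uri, mime = final_image
    parts.append({"fileData": {"fileUri": uri, "mimeType": mime}})
    return {"contents": {"role": "USER", "parts": parts}}


# Reproduce the request body from this page's sample:
body = build_multi_image_request(
    [
        ("gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png",
         "image/png", "city: Rome, Landmark: the Colosseum"),
        ("gs://cloud-samples-data/vertex-ai/llm/prompts/landmark2.png",
         "image/png", "city: Beijing, Landmark: Forbidden City"),
    ],
    ("gs://cloud-samples-data/vertex-ai/llm/prompts/landmark3.png", "image/png"),
)
```

The final, uncaptioned image prompts the model to continue the pattern and supply the city and landmark itself.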
This sample uses the generateContent method to request that the response is returned after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the streamGenerateContent method.

This sample uses a specific model (gemini-2.5-flash), but it might support other models as well.
Set optional model parameters
Image tokenization
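The tokenization details are missing from this extract. As a hedged, simplified sketch of the rule documented for Gemini 2.0 and later models (images with both dimensions at or below 384 pixels cost a flat 258 tokens; larger images are processed as 768x768-pixel tiles at 258 tokens each; the exact cropping and scaling that the service applies may differ, so verify against the current docs):

```python
import math

TOKENS_PER_TILE = 258  # flat cost for a small image or one tile
SMALL_EDGE = 384       # both dimensions <= 384 px: single flat charge
TILE_EDGE = 768        # larger images are split into 768x768 tiles


def estimate_image_tokens(width: int, height: int) -> int:
    """Rough token estimate for one image input (simplified Gemini 2.0 rule)."""
    if width <= SMALL_EDGE and height <= SMALL_EDGE:
        return TOKENS_PER_TILE
    tiles = math.ceil(width / TILE_EDGE) * math.ceil(height / TILE_EDGE)
    return tiles * TOKENS_PER_TILE
```

For example, a 960x540 image would be charged as two tiles under this estimate, while a 300x200 thumbnail costs the flat 258 tokens.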
Best practices
When using images, use the following best practices and information for the best results:
- If you want to detect text in an image, use prompts with a single image to produce better results than prompts with multiple images.
- If your prompt contains a single image, place the image before the text prompt in your request.
- If your prompt contains multiple images, and you want to refer to them later in your prompt or have the model refer to them in the model response, it can help to give each image an index before the image. Use image 1, image 2, and image 3, or a, b, and c for your index. The following is an example of using indexed images in a prompt:

  image 1 [first image]
  image 2 [second image]
  image 3 [third image]

  Write a blogpost about my day using image 1 and image 2. Then, give me ideas for tomorrow based on image 3.

- Use images with higher resolution; they yield better results.
- Include a few examples in the prompt.
- Rotate images to their proper orientation before adding them to the prompt.
- Avoid blurry images.
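The indexing best practice above can be applied mechanically when building a request. A stdlib-only sketch (the helper name and bucket URIs are placeholders of ours) that prefixes each image part with its index label:

```python
def build_indexed_parts(images, instruction):
    """Interleave "image N" labels with fileData parts, then the instruction.

    images: list of (file_uri, mime_type) tuples.
    """
    parts = []
    for i, (file_uri, mime_type) in enumerate(images, start=1):
        parts.append({"text": f"image {i}"})
        parts.append({"fileData": {"fileUri": file_uri, "mimeType": mime_type}})
    parts.append({"text": instruction})
    return parts


parts = build_indexed_parts(
    [("gs://my-bucket/breakfast.png", "image/png"),  # placeholder URIs
     ("gs://my-bucket/hike.png", "image/png"),
     ("gs://my-bucket/pantry.png", "image/png")],
    "Write a blogpost about my day using image 1 and image 2. "
    "Then, give me ideas for tomorrow based on image 3.",
)
```

The labels let both your instruction and the model's response refer to each image unambiguously.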
Limitations
While Gemini multimodal models are powerful in many multimodal use cases, it's important to understand the limitations of the models:
- Content moderation: The models refuse to provide answers on images that violate our safety policies.
- Spatial reasoning: The models aren't precise at locating text or objects in images. They might only return the approximated counts of objects.
- Medical uses: The models aren't suitable for interpreting medical images (for example, x-rays and CT scans) or providing medical advice.
- People recognition: The models aren't meant to be used to identify people who aren't celebrities in images.
- Accuracy: The models might hallucinate or make mistakes when interpreting low-quality, rotated, or extremely low-resolution images. The models might also hallucinate when interpreting handwritten text in images.
What's next
- Start building with Gemini multimodal models - new customers get $300 in free Google Cloud credits to explore what they can do with Gemini.
- Learn how to send chat prompt requests.
- Learn about responsible AI best practices and Vertex AI's safety filters.