This guide shows you how to send chat prompts to a Gemini model.

To learn how to add images and other media to your request, see Image understanding. For a list of languages supported by Gemini, see Language support. To explore the generative AI models and APIs that are available on Vertex AI, go to Model Garden in the Google Cloud console. If you're looking for a way to use Gemini directly from your mobile and web apps, see the Firebase AI Logic client SDKs for Swift, Android, Web, Flutter, and Unity apps.

Choose a method to generate text

You can interact with the Gemini model in three ways. The following table helps you choose the best method for your use case. To test and iterate on chat prompts, use the Google Cloud console. To send prompts programmatically, use the REST API, Google Gen AI SDK, Vertex AI SDK for Python, or another supported library or SDK.

| Method | Description | Use case |
| --- | --- | --- |
| Vertex AI Studio | A web-based UI in the Google Cloud console that lets you prototype and experiment with prompts. | Best for exploring model capabilities, testing different parameters, and iterating on prompt design without writing code. |
| REST API | A standard web API that lets you send requests to the model endpoint by using HTTP methods. | Integrate text generation into any application that can make HTTP requests. |
| SDKs (Python, Go, Node.js, Java) | Language-specific libraries that simplify interaction with the API by handling details like authentication and request formatting. | Recommended for building applications in a supported language. SDKs provide a more idiomatic and robust integration than raw API calls. |

You can use system instructions to steer the behavior of the model based on a specific need or use case. For example, you can define a persona or role for a chatbot that responds to customer service requests. For more information, see the system instructions code samples.
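As a minimal sketch of what a system instruction can look like with the Google Gen AI SDK for Python (introduced later on this page) — the persona text, prompt, and model ID here are illustrative examples, not values from this guide:

from google import genai
from google.genai import types

# Assumes the Vertex AI environment variables described in the SDK
# section below have been set.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.0-flash",  # example model ID
    contents="My package hasn't arrived yet. What should I do?",
    config=types.GenerateContentConfig(
        # The system instruction defines a persona that applies to
        # every turn of the conversation.
        system_instruction=(
            "You are a friendly customer service agent for an online "
            "store. Be concise and always offer a concrete next step."
        ),
    ),
)
print(response.text)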
Generate text

Console

To use Vertex AI Studio to send a chat prompt in the Google Cloud console, do the following:

Optional: Configure the model and parameters:

Temperature: Use the slider or textbox to enter a value for temperature. The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected; in this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible. If the model returns a response that's too generic, too short, or gives a fallback response, try increasing the temperature.

Output token limit: Use the slider or textbox to enter a value for the max output limit. Specify a lower value for shorter responses and a higher value for potentially longer responses.

Advanced configurations:

Top-K: Use the slider or textbox to enter a value for top-K. For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature. Specify a lower value for less random responses and a higher value for more random responses.

To close the tokenizer tool pane, click X, or click outside of the pane.
REST

Before using any of the request data, make the following replacements:

GENERATE_RESPONSE_METHOD: The type of response that you want the model to generate. Choose the method that matches how you want the model's response to be returned:

streamGenerateContent: The response is streamed as it's being generated to reduce the perception of latency to a human audience.
generateContent: The response is returned after it's fully generated.

LOCATION: The region to process the request. Available options include the following partial list:

us-central1
us-west4
northamerica-northeast1
us-east4
us-west1
asia-northeast3
asia-southeast1
asia-northeast1

PROJECT_ID: Your project ID.

MODEL_ID: The model ID of the multimodal model that you want to use.

TEXT1: The text instructions to include in the prompt. For example, What are all the colors in a rainbow?

TEXT2: The follow-up text instructions to include in the prompt. For example, Why does it appear when it rains?

TEMPERATURE: The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected; in this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible. If the model returns a response that's too generic, too short, or gives a fallback response, try increasing the temperature.

To send your request, choose one of these options:
curl

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
cat > request.json << 'EOF'
{
"contents": [
{
"role": "user",
"parts": { "text": "TEXT1" }
},
{
"role": "model",
"parts": { "text": "What a great question!" }
},
{
"role": "user",
"parts": { "text": "TEXT2" }
}
],
"generation_config": {
"temperature": TEMPERATURE
}
}
EOF
Then execute the following command to send your REST request:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD"

PowerShell

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:
@'
{
"contents": [
{
"role": "user",
"parts": { "text": "TEXT1" }
},
{
"role": "model",
"parts": { "text": "What a great question!" }
},
{
"role": "user",
"parts": { "text": "TEXT2" }
}
],
"generation_config": {
"temperature": TEMPERATURE
}
}
'@ | Out-File -FilePath request.json -Encoding utf8
Then execute the following command to send your REST request:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD" | Select-Object -Expand Content

You should receive a JSON response similar to the following.
SDKs

You can use the Google Gen AI SDK to send requests if you're using Gemini 2.0 Flash. The following samples show a non-streaming text generation request.

Python

Install:

pip install --upgrade google-genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
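With those variables set, a minimal non-streaming request looks something like the following sketch (the model ID and prompt are examples):

from google import genai

# The client picks up GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION, and
# GOOGLE_GENAI_USE_VERTEXAI from the environment variables set above.
client = genai.Client()

# Non-streaming call: the full response is returned once generation finishes.
response = client.models.generate_content(
    model="gemini-2.0-flash",  # example model ID
    contents="What are all the colors in a rainbow?",
)
print(response.text)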
Go

Learn how to install or update the Go SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Node.js

Install:

npm install @google/genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Java

Learn how to install or update the Java SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated. The following sample shows a streaming text generation request.

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
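As a sketch of what the streaming call can look like with the Vertex AI SDK for Python — the project ID, location, model ID, and prompt are placeholders you'd replace with your own values:

import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholder project and location; replace with your own values.
vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-2.0-flash")  # example model ID

# stream=True returns an iterator that yields chunks as tokens are generated.
responses = model.generate_content(
    "Why does a rainbow appear when it rains?",
    stream=True,
)
for chunk in responses:
    print(chunk.text, end="")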
What's next
Learn how to send multimodal prompt requests:
Learn about responsible AI best practices and Vertex AI's safety filters.