Text generation

This page shows you how to send chat prompts to a Gemini model by using the Google Cloud console, REST API, and supported SDKs.

To learn how to add images and other media to your request, see Image understanding.

For a list of languages supported by Gemini, see Language support.


To explore the generative AI models and APIs that are available on Vertex AI, go to Model Garden in the Google Cloud console.

Go to Model Garden


If you're looking for a way to use Gemini directly from your mobile and web apps, see the Vertex AI in Firebase SDKs for Android, Swift, web, and Flutter apps.

For testing and iterating on chat prompts, we recommend using the Google Cloud console. To send prompts programmatically to the model, you can use the REST API, Vertex AI SDK for Python, or one of the other supported libraries and SDKs shown in the following tabs.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the stream parameter in generate_content.

  response = model.generate_content(contents=[...], stream=True)
  

For a non-streaming response, remove the parameter or set it to False.
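
For example, a non-streaming call returns a single response object, and you can read its aggregated text directly. A minimal sketch (the prompt is illustrative):

  response = model.generate_content(contents=["Why is the sky blue?"], stream=False)
  print(response.text)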

Sample code

import vertexai

from vertexai.generative_models import GenerativeModel, ChatSession

# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

model = GenerativeModel("gemini-1.5-flash-002")

chat_session = model.start_chat()

def get_chat_response(chat: ChatSession, prompt: str) -> str:
    text_response = []
    responses = chat.send_message(prompt, stream=True)
    for chunk in responses:
        text_response.append(chunk.text)
    return "".join(text_response)

prompt = "Hello."
print(get_chat_response(chat_session, prompt))
# Example response:
# Hello there! How can I help you today?

prompt = "What are all the colors in a rainbow?"
print(get_chat_response(chat_session, prompt))
# Example response:
# The colors in a rainbow are often remembered using the acronym ROY G. BIV:
# * **Red**
# * **Orange** ...

prompt = "Why does it appear when it rains?"
print(get_chat_response(chat_session, prompt))
# Example response:
# It's important to note that these colors blend smoothly into each other, ...
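
The ChatSession object keeps the conversation history and sends it with each new message. To inspect the accumulated turns after running the calls above, you can iterate over the history; a minimal sketch using the chat_session object from this sample:

  # Each history entry is a Content object with a role ("user" or "model") and parts.
  for content in chat_session.history:
      print(f"{content.role}: {content.parts[0].text}")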

C#

Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI C# reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the StreamGenerateContent method.

  public virtual PredictionServiceClient.StreamGenerateContentStream StreamGenerateContent(GenerateContentRequest request)
  

For a non-streaming response, use the GenerateContentAsync method.

  public virtual Task<GenerateContentResponse> GenerateContentAsync(GenerateContentRequest request)
  

For more information on how the server can stream responses, see Streaming RPCs.

Sample code


using Google.Cloud.AIPlatform.V1;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class MultiTurnChatSample
{
    public async Task<string> GenerateContent(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-1.5-flash-001"
    )
    {
        // Create a chat session to keep track of the context
        ChatSession chatSession = new ChatSession($"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}", location);

        string prompt = "Hello.";
        Console.WriteLine($"\nUser: {prompt}");

        string response = await chatSession.SendMessageAsync(prompt);
        Console.WriteLine($"Response: {response}");

        prompt = "What are all the colors in a rainbow?";
        Console.WriteLine($"\nUser: {prompt}");

        response = await chatSession.SendMessageAsync(prompt);
        Console.WriteLine($"Response: {response}");

        prompt = "Why does it appear when it rains?";
        Console.WriteLine($"\nUser: {prompt}");

        response = await chatSession.SendMessageAsync(prompt);
        Console.WriteLine($"Response: {response}");

        return response;
    }

    private class ChatSession
    {
        private readonly string _modelPath;
        private readonly PredictionServiceClient _predictionServiceClient;

        private readonly List<Content> _contents;

        public ChatSession(string modelPath, string location)
        {
            _modelPath = modelPath;

            _predictionServiceClient = new PredictionServiceClientBuilder
            {
                Endpoint = $"{location}-aiplatform.googleapis.com"
            }.Build();

            // Initialize contents to send over in every request.
            _contents = new List<Content>();
        }

        public async Task<string> SendMessageAsync(string prompt)
        {
            var content = new Content
            {
                Role = "USER",
                Parts =
                {
                    new Part { Text = prompt }
                }
            };
            _contents.Add(content);

            var generateContentRequest = new GenerateContentRequest
            {
                Model = _modelPath,
                GenerationConfig = new GenerationConfig
                {
                    Temperature = 0.9f,
                    TopP = 1,
                    TopK = 32,
                    CandidateCount = 1,
                    MaxOutputTokens = 2048
                }
            };
            generateContentRequest.Contents.AddRange(_contents);

            GenerateContentResponse response = await _predictionServiceClient.GenerateContentAsync(generateContentRequest);

            _contents.Add(response.Candidates[0].Content);

            return response.Candidates[0].Content.Parts[0].Text;
        }
    }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Generative AI quickstart using the Node.js SDK. For more information, see the Node.js SDK for Gemini reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the generateContentStream method.

  const streamingResp = await generativeModel.generateContentStream(request);
  

For a non-streaming response, use the generateContent method.

  const response = await generativeModel.generateContent(request);
  

Sample code

const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function createStreamChat(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeModel = vertexAI.getGenerativeModel({
    model: model,
  });

  const chat = generativeModel.startChat({});
  const chatInput1 = 'How can I learn more about that?';

  console.log(`User: ${chatInput1}`);

  const result1 = await chat.sendMessageStream(chatInput1);
  for await (const item of result1.stream) {
    console.log(item.candidates[0].content.parts[0].text);
  }
}

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI Java SDK for Gemini reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the generateContentStream method.

  public ResponseStream<GenerateContentResponse> generateContentStream(Content content)
  

For a non-streaming response, use the generateContent method.

  public GenerateContentResponse generateContent(Content content)
  

Sample code

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.ChatSession;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.io.IOException;

public class ChatDiscussion {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";

    chatDiscussion(projectId, location, modelName);
  }

  // Ask interrelated questions in a row using a ChatSession object.
  public static void chatDiscussion(String projectId, String location, String modelName)
      throws IOException {
    // Initialize client that will be used to send requests. This client only needs
    // to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      GenerateContentResponse response;

      GenerativeModel model = new GenerativeModel(modelName, vertexAI);
      // Create a chat session to be used for interactive conversation.
      ChatSession chatSession = new ChatSession(model);

      response = chatSession.sendMessage("Hello.");
      System.out.println(ResponseHandler.getText(response));

      response = chatSession.sendMessage("What are all the colors in a rainbow?");
      System.out.println(ResponseHandler.getText(response));

      response = chatSession.sendMessage("Why does it appear when it rains?");
      System.out.println(ResponseHandler.getText(response));
      System.out.println("Chat Ended.");
    }
  }
}

Go

Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI Go SDK for Gemini reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

Streaming and non-streaming responses

You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.

For a streaming response, use the GenerateContentStream method.

  iter := model.GenerateContentStream(ctx, genai.Text("Tell me a story about a lumberjack and his giant ox. Keep it very short."))
  

For a non-streaming response, use the GenerateContent method.

  resp, err := model.GenerateContent(ctx, genai.Text("What is the average size of a swallow?"))
  

Sample code

import (
	"context"
	"encoding/json"
	"fmt"
	"io"

	"cloud.google.com/go/vertexai/genai"
)

func makeChatRequests(ctx context.Context, w io.Writer, projectID, region, modelName string) error {
	client, err := genai.NewClient(ctx, projectID, region)
	if err != nil {
		return fmt.Errorf("error creating client: %w", err)
	}
	defer client.Close()

	gemini := client.GenerativeModel(modelName)
	chat := gemini.StartChat()

	send := func(message string) error {
		r, err := chat.SendMessage(ctx, genai.Text(message))
		if err != nil {
			return err
		}
		rb, err := json.MarshalIndent(r, "", "  ")
		if err != nil {
			return err
		}
		fmt.Fprintln(w, string(rb))
		return nil
	}

	if err := send("Hello"); err != nil {
		return err
	}
	if err := send("What are all the colors in a rainbow?"); err != nil {
		return err
	}
	return send("Why does it appear when it rains?")
}

REST

After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint.

Before using any of the request data, make the following replacements:

  • GENERATE_RESPONSE_METHOD: The type of response that you want the model to generate. Choose the method that matches how you want the model's response returned:
    • streamGenerateContent: The response is streamed as it's being generated to reduce the perception of latency to a human audience.
    • generateContent: The response is returned after it's fully generated.
  • LOCATION: The region to process the request. Available options include the following:

    • us-central1
    • us-west4
    • northamerica-northeast1
    • us-east4
    • us-west1
    • asia-northeast3
    • asia-southeast1
    • asia-northeast1
  • PROJECT_ID: Your project ID.
  • MODEL_ID: The model ID of the multimodal model that you want to use. Some options include the following:
    • gemini-1.0-pro-002
    • gemini-1.0-pro-vision-001
    • gemini-1.5-pro-002
    • gemini-1.5-flash
  • TEXT1: The text instructions to include in the first prompt of the multi-turn conversation. For example, What are all the colors in a rainbow?
  • TEXT2: The text instructions to include in the second prompt. For example, Why does it appear when it rains?
  • TEMPERATURE: The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.

    If the model returns a response that's too generic, too short, or the model gives a fallback response, try increasing the temperature.

To send your request, choose one of these options:

curl

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

cat > request.json << 'EOF'
{
  "contents": [
    {
      "role": "user",
      "parts": { "text": "TEXT1" }
    },
    {
      "role": "model",
      "parts": { "text": "What a great question!" }
    },
    {
      "role": "user",
      "parts": { "text": "TEXT2" }
    }
  ],
  "generation_config": {
    "temperature": TEMPERATURE
  }
}
EOF

Then execute the following command to send your REST request:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD"

PowerShell

Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

@'
{
  "contents": [
    {
      "role": "user",
      "parts": { "text": "TEXT1" }
    },
    {
      "role": "model",
      "parts": { "text": "What a great question!" }
    },
    {
      "role": "user",
      "parts": { "text": "TEXT2" }
    }
  ],
  "generation_config": {
    "temperature": TEMPERATURE
  }
}
'@  | Out-File -FilePath request.json -Encoding utf8

Then execute the following command to send your REST request:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD" | Select-Object -Expand Content

If the request succeeds, you receive a JSON response that contains the generated text.

Note the following in the URL for this sample:
  • Use the generateContent method to request that the response is returned after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the streamGenerateContent method.
  • The multimodal model ID is located at the end of the URL before the method (for example, gemini-1.5-flash or gemini-1.0-pro-vision). This sample may support other models as well.

Console

To use the Vertex AI Studio to send a chat prompt in the Google Cloud console, do the following:

  1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.

    Go to Vertex AI Studio

  2. In Start a conversation, click Text chat.
  3. Optional: Configure the model and parameters (for the equivalent settings in the Vertex AI SDK for Python, see the sketch after this procedure):

    • Model: Select Gemini Pro.
    • Region: Select the region that you want to use.
    • Temperature: Use the slider or textbox to enter a value for temperature.

      The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.

      If the model returns a response that's too generic, too short, or the model gives a fallback response, try increasing the temperature.

    • Output token limit: Use the slider or textbox to enter a value for the max output limit.

      Maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words.

      Specify a lower value for shorter responses and a higher value for potentially longer responses.

    • Add stop sequence: Optional. Enter a stop sequence, which is a series of characters (including spaces). If the model encounters a stop sequence, response generation stops. The stop sequence isn't included in the response, and you can add up to five stop sequences.
  4. Optional: To configure advanced parameters, click Advanced and configure as follows:

    • Top-K: Use the slider or textbox to enter a value for top-K.

      Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

      For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P with the final token selected using temperature sampling.

      Specify a lower value for less random responses and a higher value for more random responses.

    • Top-P: Use the slider or textbox to enter a value for top-P. Tokens are selected from most probable to the least until the sum of their probabilities equals the value of top-P. For the least variable results, set top-P to 0.
    • Enable Grounding: Add a grounding source and path to customize this feature.
  5. Enter your text prompt in the Prompt pane. The model uses previous messages as context for new responses.
  6. Optional: To display the number of text tokens, click View tokens. You can view the tokens or token IDs of your text prompt.
    • To view the tokens in the text prompt, highlighted with different colors that mark the boundary of each token, click Token ID to text. Media tokens aren't supported.
    • To view the token IDs, click Token ID.

      To close the tokenizer tool pane, click X, or click outside of the pane.

  7. Click Submit.
  8. Optional: To save your prompt to My prompts, click Save.
  9. Optional: To get the Python code or a curl command for your prompt, click Get code.
  10. Optional: To clear all previous messages, click Clear conversation.
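
The model and parameter settings in the preceding steps correspond to the generation configuration fields in the API. The following is a rough sketch of the equivalent settings in the Vertex AI SDK for Python; the values shown are placeholders, not recommendations:

  import vertexai
  from vertexai.generative_models import GenerationConfig, GenerativeModel

  # TODO(developer): Replace with your project ID.
  vertexai.init(project="your-project-id", location="us-central1")

  model = GenerativeModel("gemini-1.5-flash-002")

  config = GenerationConfig(
      temperature=0.9,          # Temperature
      top_p=1.0,                # Top-P
      top_k=32,                 # Top-K
      max_output_tokens=2048,   # Output token limit
      stop_sequences=["STOP"],  # Stop sequences (up to five)
  )

  # Similar to the View tokens pane, count_tokens reports how many tokens a prompt uses.
  print(model.count_tokens("What are all the colors in a rainbow?").total_tokens)

  response = model.generate_content(
      "What are all the colors in a rainbow?", generation_config=config
  )
  print(response.text)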

You can use system instructions to steer the behavior of the model based on a specific need or use case. For example, you can define a persona or role for a chatbot that responds to customer service requests. For more information, see the system instructions code samples.
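
For example, with the Vertex AI SDK for Python you can pass system instructions when you create the model. A minimal sketch; the persona text is illustrative:

  import vertexai
  from vertexai.generative_models import GenerativeModel

  # TODO(developer): Replace with your project ID.
  vertexai.init(project="your-project-id", location="us-central1")

  model = GenerativeModel(
      "gemini-1.5-flash-002",
      system_instruction=[
          "You are a friendly customer service agent for an online retailer.",
          "Keep responses brief and polite.",
      ],
  )

  chat_session = model.start_chat()
  print(chat_session.send_message("Where is my order?").text)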

What's next