Create a multi-turn non-streaming conversation with Vertex AI

Sample demonstrating how to set up the Vertex AI SDK and create a non-streaming chat conversation that sends three consecutive messages to the model.

Code sample

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
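The SDK picks up Application Default Credentials automatically when the client is created, so the sample below needs no explicit credential code. If you instead need to point the client at explicit credentials (for example, a service account key file), a minimal sketch follows; it assumes the VertexAI constructor accepts a googleAuthOptions field and uses a hypothetical key file path, so verify both against the SDK reference before relying on them.

// Hedged sketch: passing explicit credentials instead of relying on
// Application Default Credentials. The googleAuthOptions field and the key
// file path are assumptions; check the SDK reference for the exact option.
const vertexAIWithKey = new VertexAI({
  project: 'PROJECT_ID',
  location: 'us-central1',
  googleAuthOptions: {keyFilename: '/path/to/service-account-key.json'},
});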

const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function createNonStreamingChat(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeModel = vertexAI.getGenerativeModel({
    model: model,
  });

  // Start a chat session. The session keeps the conversation history, so each
  // message below is sent together with the previous turns as context.
  const chat = generativeModel.startChat({});

  // Send the first message; sendMessage resolves once the full (non-streaming)
  // response is available.
  const result1 = await chat.sendMessage('Hello');
  const response1 = await result1.response;
  console.log('Chat response 1: ', JSON.stringify(response1));

  const result2 = await chat.sendMessage(
    'Can you tell me a scientific fun fact?'
  );
  const response2 = await result2.response;
  console.log('Chat response 2: ', JSON.stringify(response2));

  const result3 = await chat.sendMessage('How can I learn more about that?');
  const response3 = await result3.response;
  console.log('Chat response 3: ', JSON.stringify(response3));
}
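The sample only defines the function; a minimal invocation sketch is shown below. The argument values are placeholders, and the commented line that reads candidates[0].content.parts[0].text assumes the SDK's current response shape, so check the API reference if you want to print only the generated text instead of the full JSON.

// Hypothetical invocation; replace the arguments with your own values.
createNonStreamingChat('PROJECT_ID', 'us-central1', 'gemini-1.5-flash-001').catch(
  err => {
    console.error('Chat failed:', err);
    process.exitCode = 1;
  }
);

// Inside the function, printing only the text of a reply could look like:
// console.log(response1.candidates?.[0]?.content?.parts?.[0]?.text);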

What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.