Grounding

In generative AI, grounding is the ability to connect model output to verifiable sources of information. If you provide models with access to specific data sources, grounding tethers their output to that data and reduces the chances that the model invents content.

With Vertex AI, you can ground model outputs in the following ways:

  • Ground with Google Search - ground a model with publicly available web data.
  • Ground with your own data - ground a model with your own data from a Vertex AI Search data store (Preview).

For more information about grounding, see Grounding overview.

Supported models

  • Gemini 1.5 Pro with text input only: gemini-1.5-pro-002, gemini-1.5-pro-001
  • Gemini 1.5 Flash with text input only: gemini-1.5-flash-002, gemini-1.5-flash-001
  • Gemini 1.0 Pro with text input only: gemini-1.0-pro-001, gemini-1.0-pro-002

Limitations

Grounding is available only for text requests.

Example syntax

This example shows the syntax for grounding a model and specifying the language of the response. For supported languages, see Languages.


Specify values for the following variables:
  • PROMPT_TEXT: The text prompt that you submit to the model to receive a response.
  • SYSTEM_INSTRUCTION: A set of instructions that tells the model how to behave and respond to prompts.
  • FACT_TEXT_1: Verified information used by the model to generate a response.
  • TITLE_1: The name of your verified source of information.
  • URI_1: A string that uniquely identifies a location of your verified source of information.
  • AUTHOR_1: The provider of the verified information used by the model to generate a response.
  • FACT_TEXT_2: Verified information used by the model to generate a response.
  • TITLE_2: The name of your verified source of information.
  • URI_2: A string that uniquely identifies a location of your verified source of information.
  • FACT_TEXT_3: Verified information used by the model to generate a response.
  • TITLE_3: The name of your verified source of information.
  • URI_3: A string that uniquely identifies a location of your verified source of information.
  • PROJECT_NUMBER: The project number of the Google Cloud project that contains your resources.
  • APP_ID_1: The ID of the first Vertex AI Search app to query.
  • APP_ID_2: The ID of the second Vertex AI Search app to query.
  • MODEL_ID: Identifies the model that your application interacts with.
  • TEMPERATURE: The temperature used for sampling during response generation.
  • TOP_P: The top-P value used for sampling during response generation.
  • TOP_K: The top-K value used for sampling during response generation.
  • LANGUAGE_CODE: The language code used when generating the grounded response.

curl

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://discoveryengine.googleapis.com/v1/projects/PROJECT_NUMBER/locations/global:generateGroundedContent" \
-d '
{
"contents": [
 {
   "role": "user",
   "parts": [
     {
       "text": "PROMPT_TEXT"
     }
   ]
 }
],
"systemInstruction": {
   "parts": {
       "text": "SYSTEM_INSTRUCTION"
   }
},
"groundingSpec": {
 "groundingSources": [
   {
     "inlineSource": {
       "groundingFacts": [
         {
           "factText": "FACT_TEXT_1",
           "attributes": {
             "title": "TITLE_1",
             "uri": "URI_1",
             "author": "AUTHOR_1"
           }
         }
       ]
     }
   },
   {
     "inlineSource": {
       "groundingFacts": [
         {
           "factText": "FACT_TEXT_2",
           "attributes": {
             "title": "TITLE_2",
             "uri": "URI_2"
           }
         },
         {
           "factText": "FACT_TEXT_3",
           "attributes": {
             "title": "TITLE_3",
             "uri": "URI_3"
           }
         }
       ]
     }
   },
   {
     "searchSource": {
       "servingConfig": "projects/PROJECT_NUMBER/locations/global/collections/default_collection/engines/APP_ID_1/servingConfigs/default_search"
     }
   },
   {
     "searchSource": {
       "servingConfig": "projects/PROJECT_NUMBER/locations/global/collections/default_collection/engines/APP_ID_2/servingConfigs/default_search"
     }
   }
  ]
},
"generationSpec": {
  "modelId": "MODEL_ID",
  "temperature": TEMPERATURE,
  "topP": TOP_P,
  "topK": TOP_K,
},
"user_context": {
  "languageCode: "LANGUAGE_CODE"
  "latLng": {
    "latitude": 46.7,
    "longitude": 8.9
  },
}'
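
If you prefer to call this endpoint from code rather than curl, the following is a minimal Python sketch of the same request using Application Default Credentials and the requests library. It is not an official client sample; the placeholder values mirror the variables above and must be replaced.

import google.auth
import google.auth.transport.requests
import requests

# Obtain an access token with Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

url = (
    "https://discoveryengine.googleapis.com/v1/projects/"
    "PROJECT_NUMBER/locations/global:generateGroundedContent"
)

# A trimmed-down request body with a single inline grounding fact.
body = {
    "contents": [{"role": "user", "parts": [{"text": "PROMPT_TEXT"}]}],
    "systemInstruction": {"parts": {"text": "SYSTEM_INSTRUCTION"}},
    "groundingSpec": {
        "groundingSources": [
            {
                "inlineSource": {
                    "groundingFacts": [
                        {
                            "factText": "FACT_TEXT_1",
                            "attributes": {"title": "TITLE_1", "uri": "URI_1"},
                        }
                    ]
                }
            }
        ]
    },
    "generationSpec": {"modelId": "MODEL_ID"},
}

response = requests.post(
    url,
    headers={
        "Authorization": f"Bearer {credentials.token}",
        "Content-Type": "application/json",
    },
    json=body,
)
print(response.json())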

Parameter list

See examples for implementation details.

GoogleSearchRetrieval

Ground the response with public data.

Parameters

google_search_retrieval

Required: Object

Ground with publicly available web data.

Retrieval

Ground the response with private data from a Vertex AI Search data store. Defines a retrieval tool that the model can call to access external knowledge.

Parameters

source

Required: VertexAISearch

Ground with Vertex AI Search data sources.

VertexAISearch

Parameters

datastore

Required: string

Fully-qualified data store resource ID from Vertex AI Search, in the following format: projects/{project}/locations/{location}/collections/default_collection/dataStores/{datastore}
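
As a minimal sketch of how these parameters map onto the Vertex AI SDK, the following Python snippet builds one Tool for each grounding source. It mirrors the full examples later on this page and uses the preview namespace shown in the data store example; the PROJECT_ID and DATA_STORE_ID values are placeholders.

from vertexai.preview.generative_models import Tool, grounding

# google_search_retrieval: ground the response with publicly available web data.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

# retrieval -> VertexAISearch: ground the response with a Vertex AI Search
# data store. The project, location, and data store ID are placeholders.
datastore_tool = Tool.from_retrieval(
    grounding.Retrieval(
        grounding.VertexAISearch(
            datastore="DATA_STORE_ID",
            project="PROJECT_ID",
            location="global",
        )
    )
)

Pass one of these tools to generate_content through the tools argument; the examples below show the complete setup, including vertexai.init.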

Examples

Ground response on public web data using Google Search

Ground the response with Google Search public data. Include the google_search_retrieval tool in the request. No additional parameters are required.

REST

Before using any of the request data, make the following replacements:

  • LOCATION: The region to process the request.
  • PROJECT_ID: Your project ID.
  • MODEL_ID: The model ID of the multimodal model. Only the Gemini 1.0 and Gemini 1.5 models support dynamic retrieval. The Gemini 2.0 models don't support dynamic retrieval.
  • TEXT: The text instructions to include in the prompt.
  • DYNAMIC_THRESHOLD: An optional field to set the threshold to invoke the dynamic retrieval configuration. It is a floating point value in the range [0,1]. If you don't set the dynamicThreshold field, the threshold value defaults to 0.7.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:generateContent

Request JSON body:

{
  "contents": [{
    "role": "user",
    "parts": [{
      "text": "TEXT"
    }]
  }],
  "tools": [{
    "googleSearchRetrieval": {
      "dynamicRetrievalConfig": {
        "mode": "MODE_DYNAMIC",
        "dynamicThreshold": DYNAMIC_THRESHOLD
      }
    }
  }],
  "model": "projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID"
}

You should receive a JSON response similar to the following:

{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          {
            "text": "Chicago weather changes rapidly, so layers let you adjust easily. Consider a base layer, a warm mid-layer (sweater-fleece), and a weatherproof outer layer."
          }
        ]
      },
      "finishReason": "STOP",
      "safetyRatings": [
        "..."
      ],
      "groundingMetadata": {
        "webSearchQueries": [
          "What's the weather in Chicago this weekend?"
        ],
        "searchEntryPoint": {
          "renderedContent": "....................."
        },
        "groundingSupports": [
          {
            "segment": {
              "startIndex": 0,
              "endIndex": 65,
              "text": "Chicago weather changes rapidly, so layers let you adjust easily."
            },
            "groundingChunkIndices": [
              0
            ],
            "confidenceScores": [
              0.99
            ]
          }
        ],
        "retrievalMetadata": {
          "webDynamicRetrievalScore": 0.96879
        }
      }
    }
  ],
  "usageMetadata": {
    "..."
  }
}
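
As a quick sketch of how the groundingMetadata fields relate to the answer text, the following Python snippet walks the response above and prints each grounded segment with its supporting chunk indices. It assumes the response body has already been parsed into a dict named response (for example with json.loads, after replacing the "..." placeholders); the variable names are illustrative.

# Assumes `response` holds the parsed JSON response shown above.
candidate = response["candidates"][0]
metadata = candidate["groundingMetadata"]

# The search queries that the model issued to ground its answer.
print("Search queries used:", metadata["webSearchQueries"])

# Each grounding support links a span of the answer to grounding chunks.
for support in metadata.get("groundingSupports", []):
    segment = support["segment"]
    print(
        f"\"{segment['text']}\" (chars {segment['startIndex']}-{segment['endIndex']}) "
        f"is supported by chunks {support['groundingChunkIndices']} "
        f"with confidence {support['confidenceScores']}"
    )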

Python

import vertexai

from vertexai.generative_models import (
    GenerationConfig,
    GenerativeModel,
    Tool,
    grounding,
)

# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

model = GenerativeModel("gemini-1.5-flash-001")

# Use Google Search for grounding
tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

prompt = "When is the next total solar eclipse in US?"
response = model.generate_content(
    prompt,
    tools=[tool],
    generation_config=GenerationConfig(
        temperature=0.0,
    ),
)

print(response.text)
# Example response:
# The next total solar eclipse visible from the contiguous United States will be on **August 23, 2044**.

NodeJS

const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function generateContentWithGoogleSearchGrounding(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  const generativeModelPreview = vertexAI.preview.getGenerativeModel({
    model: model,
    generationConfig: {maxOutputTokens: 256},
  });

  const googleSearchRetrievalTool = {
    googleSearchRetrieval: {},
  };

  const request = {
    contents: [{role: 'user', parts: [{text: 'Why is the sky blue?'}]}],
    tools: [googleSearchRetrievalTool],
  };

  const result = await generativeModelPreview.generateContent(request);
  const response = await result.response;
  const groundingMetadata = response.candidates[0].groundingMetadata;
  console.log(
    'Response: ',
    JSON.stringify(response.candidates[0].content.parts[0].text)
  );
  console.log('GroundingMetadata is: ', JSON.stringify(groundingMetadata));
}

Java

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.api.GoogleSearchRetrieval;
import com.google.cloud.vertexai.api.GroundingMetadata;
import com.google.cloud.vertexai.api.Tool;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.io.IOException;
import java.util.Collections;

public class GroundingWithPublicData {
  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";

    groundWithPublicData(projectId, location, modelName);
  }

  // A request whose response will be "grounded" with information found in Google Search.
  public static String groundWithPublicData(String projectId, String location, String modelName)
      throws IOException {
    // Initialize client that will be used to send requests.
    // This client only needs to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      Tool googleSearchTool =
          Tool.newBuilder()
              .setGoogleSearchRetrieval(
                  // Enable using the result from this tool in detecting grounding
                  GoogleSearchRetrieval.newBuilder())
              .build();

      GenerativeModel model =
          new GenerativeModel(modelName, vertexAI)
              .withTools(Collections.singletonList(googleSearchTool));

      GenerateContentResponse response = model.generateContent("Why is the sky blue?");

      GroundingMetadata groundingMetadata = response.getCandidates(0).getGroundingMetadata();
      String answer = ResponseHandler.getText(response);

      System.out.println("Answer: " + answer);
      System.out.println("Grounding metadata: " + groundingMetadata);

      return answer;
    }
  }
}

Ground response on private data using Vertex AI Search

Ground the response with data from a Vertex AI Search data store. For more information, see Vertex AI Agent Builder.

Before you ground a response with private data, create a data store and a search app.

Warning: Grounding doesn't currently support Vertex AI Search chunk mode.

REST

Before using any of the request data, make the following replacements:

  • LOCATION: The region to process the request.
  • PROJECT_ID: Your project ID.
  • MODEL_ID: The model ID of the multimodal model.
  • TEXT: The text instructions to include in the prompt.
  • DATA_STORE_ID: The ID of the Vertex AI Search data store.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:generateContent

Request JSON body:

{
  "contents": [{
    "role": "user",
    "parts": [{
      "text": "TEXT"
    }]
  }],
  "tools": [{
    "retrieval": {
      "vertexAiSearch": {
        "datastore": projects/PROJECT_ID/locations/global/collections/default_collection/dataStores/DATA_STORE_ID
      }
    }
  }],
  "model": "projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID"
}

You should receive a JSON response similar to the following:

{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          {
            "text": "You can make an appointment on the website https://dmv.gov/"
          }
        ]
      },
      "finishReason": "STOP",
      "safetyRatings": [
        "..."
      ],
      "groundingMetadata": {
        "retrievalQueries": [
          "How to make appointment to renew driving license?"
        ],
        "groundingChunks": [
          {
            "retrievedContext": {
              "uri": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AXiHM.....QTN92V5ePQ==",
              "title": "dmv"
            }
          }
        ],
        "groundingSupport": [
          {
            "segment": {
              "startIndex": 25,
              "endIndex": 147
            },
            "segment_text": "ipsum lorem ...",
            "supportChunkIndices": [1, 2],
            "confidenceScore": [0.9541752, 0.97726375]
          },
          {
            "segment": {
              "startIndex": 294,
              "endIndex": 439
            },
            "segment_text": "ipsum lorem ...",
            "supportChunkIndices": [1],
            "confidenceScore": [0.9541752, 0.9325467]
          }
        ]
      }
    }
  ],
  "usageMetadata": {
    "..."
  }
}

Python

import vertexai

from vertexai.preview.generative_models import (
    GenerationConfig,
    GenerativeModel,
    Tool,
    grounding,
)

# TODO(developer): Update and un-comment below lines
# PROJECT_ID = "your-project-id"
# data_store_id = "your-data-store-id"

vertexai.init(project=PROJECT_ID, location="us-central1")

model = GenerativeModel("gemini-1.5-flash-001")

tool = Tool.from_retrieval(
    grounding.Retrieval(
        grounding.VertexAISearch(
            datastore=data_store_id,
            project=PROJECT_ID,
            location="global",
        )
    )
)

prompt = "How do I make an appointment to renew my driver's license?"
response = model.generate_content(
    prompt,
    tools=[tool],
    generation_config=GenerationConfig(
        temperature=0.0,
    ),
)

print(response.text)

NodeJS

const {
  VertexAI,
  HarmCategory,
  HarmBlockThreshold,
} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function generateContentWithVertexAISearchGrounding(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001',
  dataStoreId = 'DATASTORE_ID'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  const generativeModelPreview = vertexAI.preview.getGenerativeModel({
    model: model,
    // The following parameters are optional
    // They can also be passed to individual content generation requests
    safetySettings: [
      {
        category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
      },
    ],
    generationConfig: {maxOutputTokens: 256},
  });

  const vertexAIRetrievalTool = {
    retrieval: {
      vertexAiSearch: {
        datastore: `projects/${projectId}/locations/global/collections/default_collection/dataStores/${dataStoreId}`,
      },
      disableAttribution: false,
    },
  };

  const request = {
    contents: [{role: 'user', parts: [{text: 'Why is the sky blue?'}]}],
    tools: [vertexAIRetrievalTool],
  };

  const result = await generativeModelPreview.generateContent(request);
  const response = result.response;
  const groundingMetadata = response.candidates[0].groundingMetadata;
  console.log('Response: ', JSON.stringify(response.candidates[0]));
  console.log('GroundingMetadata is: ', JSON.stringify(groundingMetadata));
}

Java

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.api.GroundingMetadata;
import com.google.cloud.vertexai.api.Retrieval;
import com.google.cloud.vertexai.api.Tool;
import com.google.cloud.vertexai.api.VertexAISearch;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.io.IOException;
import java.util.Collections;

public class GroundingWithPrivateData {
  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";
    String datastore = String.format(
        "projects/%s/locations/global/collections/default_collection/dataStores/%s",
        projectId, "datastore_id");

    groundWithPrivateData(projectId, location, modelName, datastore);
  }

  // A request whose response will be "grounded"
  // with information found in Vertex AI Search datastores.
  public static String groundWithPrivateData(String projectId, String location, String modelName,
                                             String datastoreId)
      throws IOException {
    // Initialize client that will be used to send requests.
    // This client only needs to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      Tool datastoreTool = Tool.newBuilder()
          .setRetrieval(
              Retrieval.newBuilder()
                  .setVertexAiSearch(VertexAISearch.newBuilder().setDatastore(datastoreId))
                  .setDisableAttribution(false))
          .build();

      GenerativeModel model = new GenerativeModel(modelName, vertexAI).withTools(
          Collections.singletonList(datastoreTool)
      );

      GenerateContentResponse response = model.generateContent(
          "How do I make an appointment to renew my driver's license?");

      GroundingMetadata groundingMetadata = response.getCandidates(0).getGroundingMetadata();
      String answer = ResponseHandler.getText(response);

      System.out.println("Answer: " + answer);
      System.out.println("Grounding metadata: " + groundingMetadata);

      return answer;
    }
  }
}

What's next

For detailed documentation, see the following: