This page introduces Vertex AI Search integration with Vertex AI RAG Engine. Vertex AI Search provides a solution for retrieving and managing data within your Vertex AI RAG applications. By using Vertex AI Search as your retrieval backend, you can improve performance, scalability, and ease of integration.

- Enhanced performance and scalability: Vertex AI Search is designed to handle large volumes of data with exceptionally low latency. This translates to faster response times and improved performance for your RAG applications, especially when dealing with complex or extensive knowledge bases.
- Simplified data management: Import your data from various sources, such as websites, BigQuery datasets, and Cloud Storage buckets, to streamline your data ingestion process.
- Seamless integration: Vertex AI provides built-in integration with Vertex AI Search, which lets you select Vertex AI Search as the corpus backend for your RAG application. This simplifies the integration process and helps to ensure optimal compatibility between components.
- Improved LLM output quality: By using the retrieval capabilities of Vertex AI Search, you can help to ensure that your RAG application retrieves the most relevant information from your corpus, which leads to more accurate and informative LLM-generated outputs.

Vertex AI Search

Vertex AI Search brings together deep information retrieval, natural-language processing, and the latest features in large language model (LLM) processing, which helps to understand user intent and to return the most relevant results for the user. With Vertex AI Search, you can build a Google-quality search application using data that you control.

Configure Vertex AI Search

To set up Vertex AI Search, do the following:

1. Create a search data store.
2. Create a search app.

Use Vertex AI Search as a retrieval backend for Vertex AI RAG Engine

After Vertex AI Search is set up, follow these steps to set it as the retrieval backend for the RAG application.

Set Vertex AI Search as the retrieval backend to create a RAG corpus

These code samples show you how to configure Vertex AI Search as the retrieval backend for a RAG corpus.

REST

To use the command line to create a RAG corpus, do the following:

Create a RAG corpus

Replace the following variables used in the code sample:
- PROJECT_ID: The ID of your Google Cloud project.
- LOCATION: The region to process the request.
- DISPLAY_NAME: The display name of the RAG corpus that you want to create.
- ENGINE_NAME: The name of your Vertex AI Search engine. The serving_config value uses the following format:
projects/PROJECT_NUMBER/locations/LOCATION/collections/default_collection/engines/ENGINE_NAME/servingConfigs/default_search
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/ragCorpora" \
-d '{
"display_name" : "DISPLAY_NAME",
"vertex_ai_search_config" : {
"serving_config": "ENGINE_NAME/servingConfigs/default_search"
}
}'
Monitor progress

Replace the following variables used in the code sample:
- LOCATION: The region to process the request.
- OPERATION_ID: The ID of the RAG corpus creation operation, which is returned in the response to the create request.

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/operations/OPERATION_ID"
Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
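The following Vertex AI SDK for Python sketch shows one way to create the same RAG corpus. It assumes the preview rag module exposes VertexAiSearchConfig and create_corpus as shown; the placeholder values mirror the REST variables above, and the exact API surface can vary by SDK version.

import vertexai
from vertexai.preview import rag

# Placeholder values -- replace with your own project, region, and engine.
PROJECT_ID = "PROJECT_ID"
LOCATION = "LOCATION"
SERVING_CONFIG = (
    "projects/PROJECT_NUMBER/locations/LOCATION/collections/"
    "default_collection/engines/ENGINE_NAME/servingConfigs/default_search"
)

vertexai.init(project=PROJECT_ID, location=LOCATION)

# Point the new RAG corpus at the Vertex AI Search serving config.
vertex_ai_search_config = rag.VertexAiSearchConfig(
    serving_config=SERVING_CONFIG,
)

rag_corpus = rag.create_corpus(
    display_name="DISPLAY_NAME",
    vertex_ai_search_config=vertex_ai_search_config,
)
print(rag_corpus)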
Retrieve contexts using the RAG API
After the RAG corpus is created, you can retrieve relevant contexts from Vertex AI Search through the RetrieveContexts API.
REST
This code sample demonstrates how to retrieve contexts using REST.
Replace the following variables used in the code sample:
- PROJECT_ID: The ID of your Google Cloud project.
- LOCATION: The region to process the request.
- RAG_CORPUS_RESOURCE: The name of the RAG corpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}.
- TEXT: The query text to get relevant contexts.
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION:retrieveContexts" \
-d '{
"vertex_rag_store": {
"rag_resources": {
"rag_corpus": "RAG_CORPUS_RESOURCE"
}
},
"query": {
"text": "TEXT"
}
}'
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
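As a rough equivalent of the REST call above, the following sketch uses the preview rag module's retrieval_query helper. It assumes that helper accepts rag_resources and text as shown, which can vary by SDK version; the placeholders mirror the REST variables.

import vertexai
from vertexai.preview import rag

vertexai.init(project="PROJECT_ID", location="LOCATION")

# Query the corpus for contexts relevant to the given text.
response = rag.retrieval_query(
    rag_resources=[
        rag.RagResource(
            rag_corpus="RAG_CORPUS_RESOURCE",  # projects/.../ragCorpora/...
        )
    ],
    text="TEXT",
)
print(response)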
Generate content using Vertex AI Gemini API
REST
To generate content using Gemini models, make a call to the Vertex AI GenerateContent API. By specifying RAG_CORPUS_RESOURCE in the request, the API automatically retrieves data from Vertex AI Search.
Replace the following variables used in the sample code:
- PROJECT_ID: The ID of your Google Cloud project.
- LOCATION: The region to process the request.
- MODEL_ID: The LLM model for content generation. For example, gemini-2.0-flash.
- GENERATION_METHOD: The LLM method for content generation. For example, generateContent or streamGenerateContent.
- INPUT_PROMPT: The text that is sent to the LLM for content generation. Try to use a prompt relevant to the documents in Vertex AI Search.
- RAG_CORPUS_RESOURCE: The name of the RAG corpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}.
- SIMILARITY_TOP_K: Optional: The number of top contexts to retrieve.
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATION_METHOD" \
-d '{
"contents": {
"role": "user",
"parts": {
"text": "INPUT_PROMPT"
}
},
"tools": {
"retrieval": {
"disable_attribution": false,
"vertex_rag_store": {
"rag_resources": {
"rag_corpus": "RAG_CORPUS_RESOURCE"
},
"similarity_top_k": SIMILARITY_TOP_K
}
}
}
}'
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
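The following sketch shows one way to wire the RAG corpus into a Gemini call with the SDK. It assumes the preview rag and generative_models modules expose Retrieval, VertexRagStore, and Tool.from_retrieval as shown; names and parameters (for example, where similarity_top_k is set) can vary by SDK version.

import vertexai
from vertexai.preview import rag
from vertexai.preview.generative_models import GenerativeModel, Tool

vertexai.init(project="PROJECT_ID", location="LOCATION")

# Wrap the RAG corpus in a retrieval tool so the model grounds on it.
rag_retrieval_tool = Tool.from_retrieval(
    retrieval=rag.Retrieval(
        source=rag.VertexRagStore(
            rag_resources=[
                rag.RagResource(rag_corpus="RAG_CORPUS_RESOURCE"),
            ],
            similarity_top_k=10,  # Optional: number of top contexts to retrieve.
        ),
    )
)

rag_model = GenerativeModel(
    model_name="gemini-2.0-flash",
    tools=[rag_retrieval_tool],
)
response = rag_model.generate_content("INPUT_PROMPT")
print(response.text)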