This page shows you how to connect your Vertex AI Knowledge Engine corpus to your Weaviate database.
You can use your Weaviate database instance, which is an open source database, with Vertex AI Knowledge Engine to index and conduct a vector-based similarity search. A similarity search is a way to find pieces of text that are similar to the text that you're looking for, which requires the use of an embedding model. The embedding model produces vector data for each piece of text being compared. The similarity search is used to retrieve semantic contexts for grounding to return the most accurate content from your LLM.
With Vertex AI Knowledge Engine, you can continue to use your self-managed vector database instance, which you are responsible for provisioning. Vertex AI Knowledge Engine uses the vector database for storage, index management, and search.
Considerations
Consider the following before using the Weaviate database:
- You must create, configure, and deploy your Weaviate database instance and collection. Follow the instructions in Create your Weaviate collection to set up a collection based on your schema.
- You must provide a Weaviate API key, which allows Vertex AI Knowledge Engine to interact with the Weaviate database. Vertex AI Knowledge Engine supports API key-based `AuthN` and `AuthZ`, which connect to your Weaviate database and support an HTTPS connection.
- Vertex AI Knowledge Engine doesn't store or manage your Weaviate API key. Instead, you must do the following:
  - Store your key in Secret Manager.
  - Grant your project's service account permissions to access your secret.
  - Provide Vertex AI Knowledge Engine access to your secret's resource name.
  - When you interact with your Weaviate database, Vertex AI Knowledge Engine accesses your secret resource using your service account.
- The Vertex AI Knowledge Engine corpus and the Weaviate collection have a one-to-one mapping. RAG files are stored in a Weaviate database collection. When a call is made to the `CreateRagCorpus` API or the `UpdateRagCorpus` API, the RAG corpus is associated with the database collection.
- In addition to dense embedding-based semantic search, hybrid search is also supported with Vertex AI Knowledge Engine through a Weaviate database. You can adjust the weight between dense and sparse vector similarity in a hybrid search.
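The dense/sparse weighting in a hybrid search can be pictured as a convex combination of the two similarity scores. The sketch below is purely illustrative of that idea, not the Weaviate implementation; the function name, score scales, and `alpha` parameter are assumptions:

```python
def hybrid_score(dense_score: float, sparse_score: float, alpha: float = 0.5) -> float:
    """Blend a dense (vector) similarity score with a sparse (keyword) score.

    alpha=1.0 uses only the dense score; alpha=0.0 uses only the sparse score.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be between 0 and 1")
    return alpha * dense_score + (1.0 - alpha) * sparse_score

# Ranking two hypothetical documents by the blended score:
candidates = {"doc-a": (0.9, 0.2), "doc-b": (0.4, 0.8)}
ranked = sorted(candidates, key=lambda d: hybrid_score(*candidates[d], alpha=0.7), reverse=True)
print(ranked)  # doc-a ranks first when dense similarity is weighted more heavily
```

Raising `alpha` favors semantic (embedding) matches; lowering it favors keyword matches.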
Provision the Weaviate database
Before using the Weaviate database with Vertex AI Knowledge Engine, you must do the following:
- Configure and deploy your Weaviate database instance.
- Prepare the HTTPS endpoint.
- Create your Weaviate collection.
- Use your API key to provision Weaviate using `AuthN` and `AuthZ`.
- Provision your Vertex AI Knowledge Engine service account.
Configure and deploy your Weaviate database instance
Follow the official Weaviate quickstart guide. Optionally, you can use the Google Cloud Marketplace guide instead.
You can set up your Weaviate instance anywhere, as long as the Weaviate endpoint is accessible from your project. After it's configured and deployed, you fully manage your Weaviate database instance.
Because Vertex AI Knowledge Engine isn't involved in any stage of your Weaviate database instance lifecycle, it is your responsibility to grant permissions to Vertex AI Knowledge Engine so it can store and search for data in your Weaviate database. It is also your responsibility to ensure that the data in your database can be used by Vertex AI Knowledge Engine. For example, if you change your data, Vertex AI Knowledge Engine isn't responsible for any unexpected behaviors because of those changes.
Prepare the HTTPS endpoint
During Weaviate provisioning, ensure that you create an HTTPS endpoint. Although HTTP connections are supported, we recommend that Vertex AI Knowledge Engine and Weaviate database traffic use an HTTPS connection.
Create your Weaviate collection
Because the Vertex AI Knowledge Engine corpus and the Weaviate collection have a one-to-one mapping, you must create a collection in your Weaviate database before associating your collection with the Vertex AI Knowledge Engine corpus. This one-time association is made when you call the `CreateRagCorpus` API or the `UpdateRagCorpus` API.
When creating a collection in Weaviate, you must use the following schema:
| Property name | Data type |
|---|---|
| `fileId` | `text` |
| `corpusId` | `text` |
| `chunkId` | `text` |
| `chunkDataType` | `text` |
| `chunkData` | `text` |
| `fileOriginalUri` | `text` |
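As a sanity check, the required schema can be expressed as a small Python structure and validated before you create the collection. This is an illustrative sketch with an assumed helper name; the collection itself is created through the Weaviate API, as shown in the Examples section:

```python
# Property names Vertex AI Knowledge Engine expects in the Weaviate collection.
REQUIRED_PROPERTIES = [
    "fileId", "corpusId", "chunkId", "chunkDataType", "chunkData", "fileOriginalUri",
]

def build_collection_schema(collection_name: str) -> dict:
    """Build a Weaviate class definition with the required properties."""
    if not collection_name[:1].isupper():
        raise ValueError("Weaviate collection names must start with a capital letter")
    return {
        "class": collection_name,
        "properties": [{"name": name, "dataType": ["text"]} for name in REQUIRED_PROPERTIES],
    }

schema = build_collection_schema("MyCollectionName")
print([p["name"] for p in schema["properties"]])
```

The capitalization check mirrors the collection-name requirement described later on this page.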
Use your API key to provision Weaviate using `AuthN` and `AuthZ`
Provisioning the Weaviate API key involves the following steps:
- Create the Weaviate API key.
- Configure Weaviate using your Weaviate API key.
- Store your Weaviate API key in Secret Manager.
Create the API key
Vertex AI Knowledge Engine can only connect to your Weaviate database instances by using your API key for authentication and authorization. You must follow the Weaviate official guide to authentication to configure the API key-based authentication in your Weaviate database instance.
If creating the Weaviate API key requires identity information that comes from Vertex AI Knowledge Engine, you must create your first corpus, and use your Vertex AI Knowledge Engine service account as the identity.
Store your API key in Secret Manager
An API key holds sensitive personally identifiable information (SPII), which is subject to legal requirements. If the SPII data is compromised or misused, an individual might experience significant risk or harm. To minimize risks while using Vertex AI Knowledge Engine, don't store or manage your API key outside of Secret Manager, and avoid sharing the unencrypted API key.
To protect SPII, do the following:
- Store your API key in Secret Manager.
- Grant your Vertex AI Knowledge Engine service account permissions to your secret(s), and manage access control at the secret resource level:
  - Navigate to your project's permissions.
  - Enable the option Include Google-provided role grants.
  - Find the service account, which has the format `service-{project number}@gcp-sa-vertex-rag.iam.gserviceaccount.com`.
  - Edit the service account's principals.
  - Add the Secret Manager Secret Accessor role to the service account.
- During the creation or update of the RAG corpus, pass the secret resource name to Vertex AI Knowledge Engine, and store the secret resource name.
When you make API requests to your Weaviate database instance(s), Vertex AI Knowledge Engine uses each service account to read the API key that corresponds to your secret resources in Secret Manager from your project(s).
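The secret resource name that Vertex AI Knowledge Engine reads follows the `projects/{project}/secrets/{secret}/versions/{version}` pattern. A small sketch (the helper name is an assumption) for assembling and checking it before passing it to the API:

```python
def secret_version_name(project: str, secret: str, version: str = "latest") -> str:
    """Return the Secret Manager resource name for a secret version."""
    for part, label in ((project, "project"), (secret, "secret"), (version, "version")):
        if not part or "/" in part:
            raise ValueError(f"invalid {label}: {part!r}")
    return f"projects/{project}/secrets/{secret}/versions/{version}"

name = secret_version_name("my-project", "MyWeaviateApiKeySecret")
print(name)  # projects/my-project/secrets/MyWeaviateApiKeySecret/versions/latest
```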
Provision your Vertex AI Knowledge Engine service account
When you create the first resource in your project, Vertex AI Knowledge Engine creates a dedicated service account. You can find your service account on your project's IAM page. The service account follows this format:
`service-{project number}@gcp-sa-vertex-rag.iam.gserviceaccount.com`
For example, `service-123456789@gcp-sa-vertex-rag.iam.gserviceaccount.com`.
When integrating with the Weaviate database, your service account is used in the following scenarios:
- You can use your service account to generate your Weaviate API key for authentication. In some cases, generating the API key doesn't require any user information, which means that a service account isn't required when generating the API key.
- You can bind your service account with the API key in your Weaviate database to configure authentication (`AuthN`) and authorization (`AuthZ`). However, your service account isn't required.
- You can store the API key in Secret Manager in your project, and grant your service account permissions to these secret resources.
- Vertex AI Knowledge Engine uses service accounts to access the API key from the Secret Manager in your projects.
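The service-account address is derived from your project number. A sketch (the function name is an assumption) that builds and recognizes it:

```python
import re

def rag_service_account(project_number: int) -> str:
    """Return the Vertex AI Knowledge Engine service-account email for a project number."""
    return f"service-{project_number}@gcp-sa-vertex-rag.iam.gserviceaccount.com"

email = rag_service_account(123456789)
# The address always matches this pattern:
assert re.fullmatch(r"service-\d+@gcp-sa-vertex-rag\.iam\.gserviceaccount\.com", email)
print(email)  # service-123456789@gcp-sa-vertex-rag.iam.gserviceaccount.com
```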
Set up your Google Cloud console environment
Learn how to set up your environment by selecting one of the following tabs:
Python
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Enable the Vertex AI API.
- In the Google Cloud console, activate Cloud Shell.
  At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
- If you're using a local shell, then create local authentication credentials for your user account:
  gcloud auth application-default login
  You don't need to do this if you're using Cloud Shell.
Install or update the Vertex AI SDK for Python by running the following command:
pip3 install --upgrade "google-cloud-aiplatform>=1.38"
Node.js
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Enable the Vertex AI API.
- In the Google Cloud console, activate Cloud Shell.
  At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
- If you're using a local shell, then create local authentication credentials for your user account:
  gcloud auth application-default login
  You don't need to do this if you're using Cloud Shell.
Install or update the Vertex AI SDK for Node.js by running the following command:
npm install @google-cloud/vertexai
Java
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Enable the Vertex AI API.
- In the Google Cloud console, activate Cloud Shell.
  At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
- If you're using a local shell, then create local authentication credentials for your user account:
  gcloud auth application-default login
  You don't need to do this if you're using Cloud Shell.
- To add `google-cloud-vertexai` as a dependency, add the appropriate code for your environment:

  Maven with BOM

  Add the following to your pom.xml:

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>libraries-bom</artifactId>
        <version>26.32.0</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-vertexai</artifactId>
    </dependency>
  </dependencies>

  Maven without BOM

  Add the following to your pom.xml:

  <dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-vertexai</artifactId>
    <version>0.4.0</version>
  </dependency>

  Gradle without BOM

  Add the following to your build.gradle:

  implementation 'com.google.cloud:google-cloud-vertexai:0.4.0'
Go
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Enable the Vertex AI API.
- In the Google Cloud console, activate Cloud Shell.
  At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
- If you're using a local shell, then create local authentication credentials for your user account:
  gcloud auth application-default login
  You don't need to do this if you're using Cloud Shell.
Review the available Vertex AI API Go packages to determine which package best meets your project's needs:
Package cloud.google.com/go/vertexai (recommended)

`vertexai` is a human-authored package that provides access to common capabilities and features. This package is recommended as the starting point for most developers building with the Vertex AI API. To access capabilities and features not yet covered by this package, use the auto-generated `aiplatform` package instead.

Package cloud.google.com/go/aiplatform

`aiplatform` is an auto-generated package. This package is intended for projects that require access to Vertex AI API capabilities and features not yet provided by the human-authored `vertexai` package.

Install the desired Go package based on your project's needs by running one of the following commands:

# Human-authored package. Recommended for most developers.
go get cloud.google.com/go/vertexai

# Auto-generated package.
go get cloud.google.com/go/aiplatform
C#
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Enable the Vertex AI API.
- In the Google Cloud console, activate Cloud Shell.
  At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
- If you're using a local shell, then create local authentication credentials for your user account:
  gcloud auth application-default login
  You don't need to do this if you're using Cloud Shell.
REST
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Enable the Vertex AI API.
- In the Google Cloud console, activate Cloud Shell.
  At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
- Configure environment variables by entering the following. Replace PROJECT_ID with the ID of your Google Cloud project.
  MODEL_ID="gemini-1.5-flash-002"
  PROJECT_ID="PROJECT_ID"
- Provision the endpoint:
  gcloud beta services identity create --service=aiplatform.googleapis.com --project=${PROJECT_ID}
-
Optional: If you are using Cloud Shell and you are asked to authorize Cloud Shell, click Authorize.
Prepare your RAG corpus
To access data from your Weaviate database, Vertex AI Knowledge Engine must have access to a RAG corpus. This section provides the steps for creating a single RAG corpus and additional RAG corpora.
Use the CreateRagCorpus and UpdateRagCorpus APIs
You must specify the following fields when calling the `CreateRagCorpus` and `UpdateRagCorpus` APIs:
- `rag_vector_db_config.weaviate`: The vector database configuration is chosen when you call the `CreateRagCorpus` API. The vector database configuration contains all of the configuration fields. If the `rag_vector_db_config.weaviate` field isn't set, then `rag_vector_db_config.rag_managed_db` is set by default.
- `weaviate.http_endpoint`: The HTTPS or HTTP Weaviate endpoint that is created during provisioning of the Weaviate database instance.
- `weaviate.collection_name`: The name of the collection that is created during Weaviate instance provisioning. The name must start with a capital letter.
- `api_auth.api_key_config`: The configuration that specifies using an API key to authorize your access to the vector database.
- `api_key_config.api_key_secret_version`: The resource name of the secret stored in Secret Manager that contains your Weaviate API key.
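Putting those fields together, the body of a `CreateRagCorpus` request with a Weaviate configuration looks roughly like the following. All values are placeholders; the REST sample later on this page shows the full call:

```python
import json

corpus_body = {
    "display_name": "SpecialCorpus",
    "rag_vector_db_config": {
        # Selecting Weaviate as the vector database.
        "weaviate": {
            "http_endpoint": "https://your.weaviate.endpoint.com",
            "collection_name": "MyCollectionName",
        },
        # The Secret Manager resource that holds the Weaviate API key.
        "api_auth": {
            "api_key_config": {
                "api_key_secret_version": "projects/my-project/secrets/MyWeaviateApiKeySecret/versions/latest"
            }
        },
    },
}
print(json.dumps(corpus_body, indent=2))
```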
You can create and associate your RAG corpus with the Weaviate collection in your database instance. However, you might need the service account to generate your API key and to configure your Weaviate database instance. The service account is generated when you create your first RAG corpus, so the association between the Weaviate database and the API key might not be ready until after that first corpus exists.
If your database and key aren't ready to be associated with your RAG corpus, do the following for your RAG corpus:
- Set the `weaviate` field in `rag_vector_db_config`. You can't change the associated vector database.
- Leave both the `http_endpoint` and the `collection_name` fields empty. Both fields can be updated at a later time.
- If you don't have your API key stored in Secret Manager, then you can leave the `api_auth` field empty. When you call the `UpdateRagCorpus` API, you can update the `api_auth` field. Weaviate requires that you do the following:
  - Set the `api_key_config` in the `api_auth` field.
  - Set the `api_key_secret_version` of your Weaviate API key in Secret Manager. The `api_key_secret_version` field uses the following format:
    projects/{project}/secrets/{secret}/versions/{version}
- If you specify fields that can only be set one time, like `http_endpoint` or `collection_name`, you can't change them unless you delete your RAG corpus and create it again. Other fields, like the API key field `api_key_secret_version`, can be updated.
- When you call `UpdateRagCorpus`, you can set the `vector_db` field. The `vector_db` field should be set to `weaviate` by your `CreateRagCorpus` API call. Otherwise, the system chooses the RAG Managed Database option, which is the default. This option can't be changed when you call the `UpdateRagCorpus` API. When you call `UpdateRagCorpus` and the `vector_db` field is partially set, you can update the fields that are marked as mutable (changeable).
This table lists the `WeaviateConfig` mutable and immutable fields that are used in your code.

| Field name | Mutable or immutable |
|---|---|
| `http_endpoint` | Immutable once set |
| `collection_name` | Immutable once set |
| `api_key_authentication` | Mutable |
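The mutability rules in the table can be enforced client-side before calling `UpdateRagCorpus`, so a request that would fail server-side is caught early. An illustrative sketch with assumed helper names:

```python
# Fields that can't be changed after they have a value.
IMMUTABLE_ONCE_SET = {"http_endpoint", "collection_name"}

def check_update(current: dict, update: dict) -> None:
    """Reject an update that changes a field which is immutable once set."""
    for field in IMMUTABLE_ONCE_SET:
        if current.get(field) and field in update and update[field] != current[field]:
            raise ValueError(f"{field} is immutable once set")

current = {"http_endpoint": "https://w.example.com", "collection_name": "MyCollectionName"}
check_update(current, {"api_key_authentication": "projects/p/secrets/s/versions/2"})  # allowed
try:
    check_update(current, {"http_endpoint": "https://other.example.com"})
except ValueError as e:
    print(e)  # http_endpoint is immutable once set
```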
Create the first RAG corpus
When the Vertex AI Knowledge Engine service account doesn't exist, do the following:
- Create a RAG corpus in Vertex AI Knowledge Engine with an empty Weaviate configuration, which initiates Vertex AI Knowledge Engine provisioning to create a service account.
- Find your Vertex AI Knowledge Engine service account, which follows this format:
  service-{project number}@gcp-sa-vertex-rag.iam.gserviceaccount.com
  For example, service-123456789@gcp-sa-vertex-rag.iam.gserviceaccount.com.
- Using your service account, access your secret that is stored in your project's Secret Manager, which contains your Weaviate API key.
- Get the following information after Weaviate provisioning completes:
  - Your Weaviate HTTPS or HTTP endpoint.
  - The name of your Weaviate collection.
- Call the `CreateRagCorpus` API to create a RAG corpus with an empty Weaviate configuration, and then call the `UpdateRagCorpus` API to update the RAG corpus with the following information:
  - Your Weaviate HTTPS or HTTP endpoint.
  - The name of your Weaviate collection.
  - The API key resource name.
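The two-call flow above can be sketched as building an empty configuration for `CreateRagCorpus` and a populated one for `UpdateRagCorpus`. The helper names below are assumptions; the actual calls are made with curl or the SDK as shown in the Examples section:

```python
def create_request(display_name: str) -> dict:
    """Step 1: create the corpus with an empty Weaviate config to trigger provisioning."""
    return {"display_name": display_name, "rag_vector_db_config": {"weaviate": {}}}

def update_request(endpoint: str, collection: str, secret_version: str) -> dict:
    """Step 2: fill in the Weaviate endpoint, collection, and API-key secret."""
    return {
        "rag_vector_db_config": {
            "weaviate": {"http_endpoint": endpoint, "collection_name": collection},
            "api_auth": {"api_key_config": {"api_key_secret_version": secret_version}},
        }
    }

print(create_request("SpecialCorpus"))
print(update_request("https://your.weaviate.endpoint.com", "MyCollectionName",
                     "projects/my-project/secrets/MyWeaviateApiKeySecret/versions/latest"))
```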
Create another RAG corpus
When the Vertex AI Knowledge Engine service account exists, do the following:
- Get your Vertex AI Knowledge Engine service account from your project's permissions:
  - Enable the option Include Google-provided role grants.
  - Find your Vertex AI Knowledge Engine service account, which follows this format:
    service-{project number}@gcp-sa-vertex-rag.iam.gserviceaccount.com
- Using your service account, access your secret that is stored in your project's Secret Manager, which contains your Weaviate API key.
- During Weaviate provisioning, get the following information:
  - The Weaviate HTTPS or HTTP endpoint.
  - The name of your Weaviate collection.
- Create a RAG corpus in Vertex AI Knowledge Engine, and connect it with your Weaviate collection by doing one of the following:
  - Make a `CreateRagCorpus` API call to create a RAG corpus with a populated Weaviate configuration, which is the preferred option.
  - Make a `CreateRagCorpus` API call to create a RAG corpus with an empty Weaviate configuration, and then make an `UpdateRagCorpus` API call to update the RAG corpus with the following information:
    - Weaviate database HTTP endpoint
    - Weaviate collection name
    - API key resource name
Examples
This section presents sample code that demonstrates how to set up your Weaviate database, Secret Manager, the RAG corpus, and the RAG file. Sample code is also provided to demonstrate how to import files, to retrieve context, to generate content, and to delete the RAG corpus and RAG files.
To use the Model Garden RAG API notebook, see Use Weaviate with Llama 3.
Set up your Weaviate database
This code sample demonstrates how to set up your Weaviate data and the Secret Manager.
# TODO(developer): Update the variables.
# The HTTPS/HTTP Weaviate endpoint you created during provisioning.
HTTP_ENDPOINT_NAME="https://your.weaviate.endpoint.com"
# Your Weaviate API Key.
WEAVIATE_API_KEY="example-api-key"
# Select your Weaviate collection name, which roughly corresponds to a Vertex AI Knowledge Engine Corpus.
# For example, "MyCollectionName"
# Note that the first letter needs to be capitalized.
# Otherwise, Weaviate will capitalize it for you.
WEAVIATE_COLLECTION_NAME="MyCollectionName"
# Create a collection in Weaviate which includes the required schema fields shown below.
echo '{
"class": "'${WEAVIATE_COLLECTION_NAME}'",
"properties": [
{ "name": "fileId", "dataType": [ "string" ] },
{ "name": "corpusId", "dataType": [ "string" ] },
{ "name": "chunkId", "dataType": [ "string" ] },
{ "name": "chunkDataType", "dataType": [ "string" ] },
{ "name": "chunkData", "dataType": [ "string" ] },
{ "name": "fileOriginalUri", "dataType": [ "string" ] }
]
}' | curl \
-X POST \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer ${WEAVIATE_API_KEY}" \
-d @- \
${HTTP_ENDPOINT_NAME}/v1/schema
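As the comments above note, Weaviate capitalizes the first letter of a class name if you don't. To avoid a silent mismatch between the name you store and the name Weaviate uses, you can normalize the name up front. This is an illustrative helper with an assumed name:

```python
def normalize_collection_name(name: str) -> str:
    """Capitalize the first letter, mirroring what Weaviate does to class names."""
    if not name:
        raise ValueError("collection name must not be empty")
    return name[0].upper() + name[1:]

print(normalize_collection_name("myCollectionName"))  # MyCollectionName
print(normalize_collection_name("MyCollectionName"))  # MyCollectionName
```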
Set up your Secret Manager
To set up your Secret Manager, you must enable Secret Manager, and set permissions.
Enable your Secret Manager
To enable your Secret Manager, do the following:
- Go to the Secret Manager page.
- Click + Create Secret.
- Enter the Name of your secret. Secret names can only contain English letters (A-Z), numbers (0-9), dashes (-), and underscores (_).
- Specifying the following fields is optional:
- To upload the file with your secret, click Browse.
- Read the Replication policy.
- If you want to manually manage the locations for your secret, then check Manually manage locations for this secret. At least one region must be selected.
- Select your encryption option.
- If you want to manually set your rotation period, then check Set rotation period.
- If you want to specify Pub/Sub topics to receive event notifications, click Add topics.
- By default, the secret never expires. If you want to set an expiration date, then check Set expiration date.
- By default, secret versions are destroyed upon request. To delay the destruction of secret versions, check Set duration for delayed destruction.
- If you want to use labels to organize and categorize your secrets, then click + Add label.
- If you want to use annotations to attach non-identifying metadata to your secrets, then click + Add annotation.
- Click Create secret.
Set permissions
You must grant Secret Manager permissions to your service account.
In the IAM & Admin section of your Google Cloud console, find your service account, and click the pencil icon to edit.
In the Role field, select Secret Manager Secret Accessor.
This code sample demonstrates how to set up your Secret Manager.
# TODO(developer): Update the variables.
# Select a resource name for your Secret, which contains your API Key.
SECRET_NAME="MyWeaviateApiKeySecret"
# Create a secret in SecretManager.
curl "https://secretmanager.googleapis.com/v1/projects/${PROJECT_ID}/secrets?secretId=${SECRET_NAME}" \
--request "POST" \
--header "authorization: Bearer $(gcloud auth print-access-token)" \
--header "content-type: application/json" \
--data "{\"replication\": {\"automatic\": {}}}"
# Your Weaviate API Key.
WEAVIATE_API_KEY="example-api-key"
# Encode your WEAVIATE_API_KEY using base 64.
SECRET_DATA=$(echo -n ${WEAVIATE_API_KEY} | base64)
# Create a new version of your secret which uses SECRET_DATA as the payload.
curl \
"https://secretmanager.googleapis.com/v1/projects/${PROJECT_ID}/secrets/${SECRET_NAME}:addVersion" \
--request "POST" \
--header "authorization: Bearer $(gcloud auth print-access-token)" \
--header "content-type: application/json" \
--data "{\"payload\": {\"data\": \"${SECRET_DATA}\"}}"
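Secret Manager expects the payload as base64 without a trailing newline, which is why the shell sample must avoid `echo` without `-n`. For comparison, the same encoding in Python:

```python
import base64

api_key = "example-api-key"
# Encode the key bytes; no newline is added, unlike plain `echo | base64`.
secret_data = base64.b64encode(api_key.encode("utf-8")).decode("ascii")
print(secret_data)
# Decoding restores the original key exactly:
assert base64.b64decode(secret_data).decode("utf-8") == api_key
```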
Use Weaviate with Llama 3
The Model Garden RAG API notebook demonstrates how to use the Vertex AI SDK for Python with a Weaviate corpus and a Llama 3 model.
For more examples, see Examples.
Create a RAG corpus
This code sample demonstrates how to create a RAG corpus and set the Weaviate instance as its vector database.
REST
# TODO(developer): Update the variables.
PROJECT_ID="YOUR_PROJECT_ID"
# The HTTPS/HTTP Weaviate endpoint you created during provisioning.
HTTP_ENDPOINT_NAME="https://your.weaviate.endpoint.com"
# Your Weaviate collection name, which roughly corresponds to a Vertex AI Knowledge Engine Corpus.
# For example, "MyCollectionName"
# Note that the first letter needs to be capitalized.
# Otherwise, Weaviate will capitalize it for you.
WEAVIATE_COLLECTION_NAME="MyCollectionName"
# The name of the secret in Secret Manager that contains your Weaviate API Key.
SECRET_NAME="MyWeaviateApiKeySecret"
# The Secret Manager resource name containing the API Key for your Weaviate endpoint.
# For example, projects/{project}/secrets/{secret}/versions/latest
APIKEY_SECRET_VERSION="projects/${PROJECT_ID}/secrets/${SECRET_NAME}/versions/latest"
# Select a Corpus display name.
CORPUS_DISPLAY_NAME="SpecialCorpus"
# Call CreateRagCorpus API and set all Vector DB Config parameters for Weaviate to create a new corpus associated to your selected Weaviate collection.
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/us-central1/ragCorpora \
-d '{
"display_name" : '\""${CORPUS_DISPLAY_NAME}"\"',
"rag_vector_db_config" : {
"weaviate": {
"http_endpoint": '\""${HTTP_ENDPOINT_NAME}"\"',
"collection_name": '\""${WEAVIATE_COLLECTION_NAME}"\"'
},
"api_auth" : {
"api_key_config": {
"api_key_secret_version": '\""${APIKEY_SECRET_VERSION}"\"'
}
}
}
}'
# TODO(developer): Update the variables.
# Get operation_id returned in CreateRagCorpus.
OPERATION_ID="your-operation-id"
# Poll Operation status until done = true in the response.
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/us-central1/operations/${OPERATION_ID}
# Call ListRagCorpora API to verify the RAG corpus is created successfully.
curl -sS -X GET \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://us-central1-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/us-central1/ragCorpora"
Python
import vertexai
from vertexai.preview import rag
from vertexai.preview.generative_models import GenerativeModel, Tool
# Set Project
PROJECT_ID = "YOUR_PROJECT_ID" # @param {type:"string"}
vertexai.init(project=PROJECT_ID, location="us-central1")
# Configure a Google first-party embedding model
embedding_model_config = rag.EmbeddingModelConfig(
publisher_model="publishers/google/models/text-embedding-004"
)
# Alternatively, configure a third-party model or a Google fine-tuned first-party model
# deployed as a Vertex AI Endpoint resource. This overrides the configuration above.
# See https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_e5.ipynb for
# deploying 3P embedding models to endpoints
ENDPOINT_ID = "your-model-endpoint-id"  # @param {type:"string"}
MODEL_ENDPOINT = f"projects/{PROJECT_ID}/locations/us-central1/endpoints/{ENDPOINT_ID}"
embedding_model_config = rag.EmbeddingModelConfig(
    endpoint=MODEL_ENDPOINT,
)
# Configure a Weaviate Vector Database Instance for the corpus
WEAVIATE_HTTP_ENDPOINT = "weaviate-http-endpoint" # @param {type:"string"}
COLLECTION_NAME = "weaviate-collection-name" # @param {type:"string"}
API_KEY = "your-secret-manager-resource-name" # @param {type:"string"}
vector_db = rag.Weaviate(
weaviate_http_endpoint=WEAVIATE_HTTP_ENDPOINT,
collection_name=COLLECTION_NAME,
api_key=API_KEY,
)
# Name your corpus
DISPLAY_NAME = "your-corpus-name" # @param {type:"string"}
rag_corpus = rag.create_corpus(
display_name=DISPLAY_NAME, embedding_model_config=embedding_model_config, vector_db=vector_db
)
# Check the corpus just created
rag.list_corpora()
Use the RAG file
The RAG API handles the file upload, import, listing, and deletion.
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your project ID.
- LOCATION: The region to process the request.
- RAG_CORPUS_ID: The ID of the `RagCorpus` resource.
- INPUT_FILE: The path of a local file.
- FILE_DISPLAY_NAME: The display name of the `RagFile`.
- RAG_FILE_DESCRIPTION: The description of the `RagFile`.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:upload
Request JSON body:
{
  "rag_file": {
    "display_name": "FILE_DISPLAY_NAME",
    "description": "RAG_FILE_DESCRIPTION"
  }
}
To send your request, choose one of these options:
curl
Save the request body in a file named INPUT_FILE, and execute the following command:
curl -X POST \
-H "Content-Type: application/json; charset=utf-8" \
-d @INPUT_FILE \
"https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:upload"
PowerShell
Save the request body in a file named INPUT_FILE, and execute the following command:
$headers = @{ }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile INPUT_FILE `
-Uri "https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:upload" | Select-Object -Expand Content
A successful response returns the `RagFile` resource. The last component of the `RagFile.name` field is the server-generated `rag_file_id`.
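As a sanity check, the substitutions above can be sketched locally. This is an illustrative Python snippet (not the SDK; every value is a placeholder) that assembles the upload URL and request body:

```python
import json

# All values below are placeholders -- substitute your own.
PROJECT_ID = "my-project"
LOCATION = "us-central1"
RAG_CORPUS_ID = "1234567890"

# The upload endpoint from the REST section above.
url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/upload/v1beta1/"
    f"projects/{PROJECT_ID}/locations/{LOCATION}/"
    f"ragCorpora/{RAG_CORPUS_ID}/ragFiles:upload"
)

# The request JSON body from the REST section above.
body = {
    "rag_file": {
        "display_name": "my-file.txt",
        "description": "An example RAG file",
    }
}

print(url)
print(json.dumps(body, indent=2))
```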
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
Import RAG files
Files and folders can be imported from Drive or Cloud Storage.
REST
Use `response.metadata` to view partial failures, request time, and response time in the SDK's `response` object.
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your project ID.
- LOCATION: The region to process the request.
- RAG_CORPUS_ID: The ID of the `RagCorpus` resource.
- GCS_URIS: A list of Cloud Storage locations. Example: `gs://my-bucket1, gs://my-bucket2`.
- DRIVE_RESOURCE_ID: The ID of the Drive resource. Examples:
  - `https://drive.google.com/file/d/ABCDE`
  - `https://drive.google.com/corp/drive/u/0/folders/ABCDEFG`
- DRIVE_RESOURCE_TYPE: Type of the Drive resource. Options:
  - `RESOURCE_TYPE_FILE` - File
  - `RESOURCE_TYPE_FOLDER` - Folder
- CHUNK_SIZE: Optional: Number of tokens each chunk should have.
- CHUNK_OVERLAP: Optional: Number of tokens of overlap between chunks.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:import
Request JSON body:
{
  "import_rag_files_config": {
    "gcs_source": {
      "uris": GCS_URIS
    },
    "google_drive_source": {
      "resource_ids": {
        "resource_id": DRIVE_RESOURCE_ID,
        "resource_type": DRIVE_RESOURCE_TYPE
      }
    }
  }
}
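The same body can be built programmatically for inspection. This illustrative Python sketch (placeholders throughout) mirrors the Cloud Storage fields above plus the optional chunking and rate-limit fields used in the curl samples later in this section; the `chunk_overlap` field name is assumed by analogy with `chunk_size`:

```python
import json

# Placeholders -- substitute your own values.
GCS_URIS = ["gs://my-bucket1", "gs://my-bucket2"]
CHUNK_SIZE = 512        # optional: tokens per chunk
CHUNK_OVERLAP = 100     # optional: token overlap between chunks
QPM = 1000              # optional: max embedding calls per minute

body = {
    "import_rag_files_config": {
        "gcs_source": {"uris": GCS_URIS},
        "rag_file_chunking_config": {
            "chunk_size": CHUNK_SIZE,
            "chunk_overlap": CHUNK_OVERLAP,  # field name assumed by analogy
        },
        "max_embedding_requests_per_min": QPM,
    }
}

print(json.dumps(body, indent=2))
```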
To send your request, choose one of these options:
curl
Save the request body in a file named `request.json`, and execute the following command:
curl -X POST \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:import"
PowerShell
Save the request body in a file named `request.json`, and execute the following command:
$headers = @{ }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/upload/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:import" | Select-Object -Expand Content
A successful response returns the `ImportRagFilesOperationMetadata` resource.
The following sample demonstrates how to import a file from Cloud Storage. Use the `max_embedding_requests_per_min` control field to limit the rate at which Vertex AI Knowledge Engine calls the embedding model during the `ImportRagFiles` indexing process. The field has a default value of 1000 calls per minute.
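As a rough back-of-the-envelope, assuming one embedding call per chunk, the QPM cap gives a lower bound on how long an import takes:

```python
import math

def import_minutes(num_chunks: int, qpm: int = 1000) -> int:
    """Lower-bound import time in minutes, assuming one embedding
    call per chunk, capped at `qpm` calls per minute."""
    return math.ceil(num_chunks / qpm)

# At the default 1000 calls per minute:
print(import_minutes(10_000))   # 10,000 chunks -> at least 10 minutes
print(import_minutes(1_500))    # 1,500 chunks  -> at least 2 minutes
```

This ignores network and indexing overhead, so treat it as a floor, not an estimate.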
# Cloud Storage bucket/file location.
# Such as "gs://rag-e2e-test/"
GCS_URIS=YOUR_GCS_LOCATION

# Enter the QPM rate to limit RAG's access to your embedding model
# Example: 1000
EMBEDDING_MODEL_QPM_RATE=MAX_EMBEDDING_REQUESTS_PER_MIN_LIMIT

# ImportRagFiles
# Import a single Cloud Storage file or all files in a Cloud Storage bucket.
# Input: ENDPOINT, PROJECT_ID, RAG_CORPUS_ID, GCS_URIS
# Output: ImportRagFilesOperationMetadata
# Use ListRagFiles to find the server-generated rag_file_id.
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/ragCorpora/${RAG_CORPUS_ID}/ragFiles:import \
-d '{
"import_rag_files_config": {
"gcs_source": {
"uris": '\""${GCS_URIS}"\"'
},
"rag_file_chunking_config": {
"chunk_size": 512
},
"max_embedding_requests_per_min": '"${EMBEDDING_MODEL_QPM_RATE}"'
}
}'
# Poll the operation status.
# The response contains the number of files imported.
OPERATION_ID=OPERATION_ID
poll_op_wait ${OPERATION_ID}
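`poll_op_wait` above is a helper from the docs environment. A generic poll loop over a long-running operation looks roughly like this sketch; `get_operation` is a stand-in for an authenticated GET on the operation resource, and the response field name in the demo is illustrative:

```python
import time

def poll_operation(name, get_operation, interval_s=5.0, timeout_s=600.0):
    """Poll a long-running operation until it reports done, then
    return its response (or raise on error/timeout)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        op = get_operation(name)
        if op.get("done"):
            if "error" in op:
                raise RuntimeError(op["error"])
            return op.get("response", {})
        time.sleep(interval_s)
    raise TimeoutError(f"{name} did not finish within {timeout_s}s")

# Demo with a stub that completes on the second poll.
calls = {"n": 0}
def fake_get(name):
    calls["n"] += 1
    return {"done": calls["n"] >= 2, "response": {"importedRagFilesCount": "2"}}

result = poll_operation("operations/123", fake_get, interval_s=0.01)
print(result)
```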
The following sample demonstrates how to import a file from Drive. Use the `max_embedding_requests_per_min` control field to limit the rate at which Vertex AI Knowledge Engine calls the embedding model during the `ImportRagFiles` indexing process. The field has a default value of 1000 calls per minute.
# Google Drive folder location.
FOLDER_RESOURCE_ID=YOUR_GOOGLE_DRIVE_FOLDER_RESOURCE_ID

# Enter the QPM rate to limit RAG's access to your embedding model
# Example: 1000
EMBEDDING_MODEL_QPM_RATE=MAX_EMBEDDING_REQUESTS_PER_MIN_LIMIT

# ImportRagFiles
# Import all files in a Google Drive folder.
# Input: ENDPOINT, PROJECT_ID, RAG_CORPUS_ID, FOLDER_RESOURCE_ID
# Output: ImportRagFilesOperationMetadata
# Use ListRagFiles to find the server-generated rag_file_id.
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/ragCorpora/${RAG_CORPUS_ID}/ragFiles:import \
-d '{
"import_rag_files_config": {
"google_drive_source": {
"resource_ids": {
"resource_id": '\""${FOLDER_RESOURCE_ID}"\"',
"resource_type": "RESOURCE_TYPE_FOLDER"
}
},
"max_embedding_requests_per_min": '"${EMBEDDING_MODEL_QPM_RATE}"'
}
}'
# Poll the operation status.
# The response contains the number of files imported.
OPERATION_ID=OPERATION_ID
poll_op_wait ${OPERATION_ID}
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
Get a RAG file
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your project ID.
- LOCATION: The region to process the request.
- RAG_CORPUS_ID: The ID of the `RagCorpus` resource.
- RAG_FILE_ID: The ID of the `RagFile` resource.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID
To send your request, choose one of these options:
curl
Execute the following command:
curl -X GET \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID"
PowerShell
Execute the following command:
$headers = @{ }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID" | Select-Object -Expand Content
A successful response returns the `RagFile` resource.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
List RAG files
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your project ID.
- LOCATION: The region to process the request.
- RAG_CORPUS_ID: The ID of the `RagCorpus` resource.
- PAGE_SIZE: The standard list page size. You can adjust the number of `RagFiles` returned per page by updating the `page_size` parameter.
- PAGE_TOKEN: The standard list page token, typically obtained from `ListRagFilesResponse.next_page_token` of the previous `VertexRagDataService.ListRagFiles` call.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles?page_size=PAGE_SIZE&page_token=PAGE_TOKEN
To send your request, choose one of these options:
curl
Execute the following command:
curl -X GET \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles?page_size=PAGE_SIZE&page_token=PAGE_TOKEN"
PowerShell
Execute the following command:
$headers = @{ }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles?page_size=PAGE_SIZE&page_token=PAGE_TOKEN" | Select-Object -Expand Content
A successful response returns a list of `RagFiles` under the given `RAG_CORPUS_ID`.
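The `page_token` mechanics above can be sketched as a loop that follows `next_page_token` until it is empty. In this illustrative snippet, `fetch_page` is a stand-in for the authenticated GET request, demonstrated here with stubbed pages:

```python
def list_all_rag_files(fetch_page):
    """Accumulate RagFiles across pages by following next_page_token."""
    files, token = [], None
    while True:
        page = fetch_page(page_token=token)
        files.extend(page.get("ragFiles", []))
        token = page.get("nextPageToken")
        if not token:
            return files

# Demo with two stubbed pages.
pages = {
    None: {"ragFiles": [{"name": "f1"}], "nextPageToken": "t1"},
    "t1": {"ragFiles": [{"name": "f2"}]},
}
all_files = list_all_rag_files(lambda page_token: pages[page_token])
print([f["name"] for f in all_files])
```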
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
Delete a RAG file
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your project ID.
- LOCATION: The region to process the request.
- RAG_CORPUS_ID: The ID of the `RagCorpus` resource.
- RAG_FILE_ID: The ID of the `RagFile` resource. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}/ragFiles/{rag_file_id}`.
HTTP method and URL:
DELETE https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID
To send your request, choose one of these options:
curl
Execute the following command:
curl -X DELETE \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID"
PowerShell
Execute the following command:
$headers = @{ }
Invoke-WebRequest `
-Method DELETE `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles/RAG_FILE_ID" | Select-Object -Expand Content
A successful response returns the `DeleteOperationMetadata` resource.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
Retrieve context
When a user asks a question or provides a prompt, the retrieval component in RAG searches through its knowledge base to find information that is relevant to the query.
REST
Before using any of the request data, make the following replacements:
- LOCATION: The region to process the request.
- PROJECT_ID: Your project ID.
- RAG_CORPUS_RESOURCE: The name of the `RagCorpus` resource. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`.
- VECTOR_DISTANCE_THRESHOLD: Only contexts with a vector distance smaller than the threshold are returned.
- TEXT: The query text to get relevant contexts.
- SIMILARITY_TOP_K: The number of top contexts to retrieve.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION:retrieveContexts
Request JSON body:
{
  "vertex_rag_store": {
    "rag_resources": {
      "rag_corpus": "RAG_CORPUS_RESOURCE"
    },
    "vector_distance_threshold": VECTOR_DISTANCE_THRESHOLD
  },
  "query": {
    "text": "TEXT",
    "similarity_top_k": SIMILARITY_TOP_K
  }
}
To send your request, choose one of these options:
curl
Save the request body in a file named `request.json`, and execute the following command:
curl -X POST \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION:retrieveContexts"
PowerShell
Save the request body in a file named `request.json`, and execute the following command:
$headers = @{ }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION:retrieveContexts" | Select-Object -Expand Content
A successful response returns a list of related contexts from your `RagFiles`.
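Conceptually, `VECTOR_DISTANCE_THRESHOLD` and `SIMILARITY_TOP_K` interact like this toy Python filter. The scores are made up for illustration; the real filtering and ranking happen server-side against your Weaviate instance:

```python
def select_contexts(contexts, top_k, distance_threshold):
    """Keep contexts whose vector distance is below the threshold,
    then return the top_k closest ones."""
    kept = [c for c in contexts if c["distance"] < distance_threshold]
    kept.sort(key=lambda c: c["distance"])
    return kept[:top_k]

# Toy contexts: smaller distance means more similar.
contexts = [
    {"text": "a", "distance": 0.2},
    {"text": "b", "distance": 0.9},
    {"text": "c", "distance": 0.4},
]
print(select_contexts(contexts, top_k=2, distance_threshold=0.8))
```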
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
Generate content
The prediction request specifies the LLM method used to generate content.
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your project ID.
- LOCATION: The region to process the request.
- MODEL_ID: LLM model for content generation. Example: `gemini-1.5-pro-002`
- GENERATION_METHOD: LLM method for content generation. Options: `generateContent`, `streamGenerateContent`
- INPUT_PROMPT: The text sent to the LLM for content generation. Try to use a prompt relevant to the uploaded RAG files.
- RAG_CORPUS_RESOURCE: The name of the `RagCorpus` resource. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`.
- SIMILARITY_TOP_K: Optional: The number of top contexts to retrieve.
- VECTOR_DISTANCE_THRESHOLD: Optional: Contexts with a vector distance smaller than the threshold are returned.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATION_METHOD
Request JSON body:
{
  "contents": {
    "role": "user",
    "parts": {
      "text": "INPUT_PROMPT"
    }
  },
  "tools": {
    "retrieval": {
      "disable_attribution": false,
      "vertex_rag_store": {
        "rag_resources": {
          "rag_corpus": "RAG_CORPUS_RESOURCE"
        },
        "similarity_top_k": SIMILARITY_TOP_K,
        "vector_distance_threshold": VECTOR_DISTANCE_THRESHOLD
      }
    }
  }
}
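As with the other endpoints, the URL and body can be assembled locally for inspection. This sketch uses placeholder values only; it is not the SDK:

```python
import json

# Placeholders -- substitute your own values.
PROJECT_ID = "my-project"
LOCATION = "us-central1"
MODEL_ID = "gemini-1.5-pro-002"
GENERATION_METHOD = "generateContent"  # or "streamGenerateContent"
RAG_CORPUS_RESOURCE = (
    f"projects/{PROJECT_ID}/locations/{LOCATION}/ragCorpora/1234567890"
)

url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1beta1/"
    f"projects/{PROJECT_ID}/locations/{LOCATION}/"
    f"publishers/google/models/{MODEL_ID}:{GENERATION_METHOD}"
)

body = {
    "contents": {"role": "user", "parts": {"text": "What is RAG?"}},
    "tools": {
        "retrieval": {
            "disable_attribution": False,
            "vertex_rag_store": {
                "rag_resources": {"rag_corpus": RAG_CORPUS_RESOURCE},
                "similarity_top_k": 10,
                "vector_distance_threshold": 0.8,
            },
        }
    },
}

print(url)
print(json.dumps(body, indent=2))
```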
To send your request, choose one of these options:
curl
Save the request body in a file named `request.json`, and execute the following command:
curl -X POST \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATION_METHOD"
PowerShell
Save the request body in a file named `request.json`, and execute the following command:
$headers = @{ }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATION_METHOD" | Select-Object -Expand Content
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
Hybrid search
Hybrid search is supported with the Weaviate database. It combines semantic and keyword search to improve the relevance of search results: during retrieval, the similarity scores from semantic search (a dense vector) and keyword matching (a sparse vector) are combined to produce the final ranked results.
Hybrid search using the Vertex AI Knowledge Engine retrieval API
This is an example of how to enable a hybrid search using the Vertex AI Knowledge Engine retrieval API.
REST
The following variables are used in the code sample:
- PROJECT_ID: Your Google Cloud project ID.
- RAG_CORPUS_RESOURCE: The full resource name of your RAG corpus, in the format `projects/*/locations/us-central1/ragCorpora/*`.
- DISTANCE_THRESHOLD: A threshold for the vector search distance, in the range `[0, 1.0]`. The default value is `0.3`.
- ALPHA: Controls the weight between semantic and keyword search results. The range is `[0, 1]`, where `0` is a pure sparse (keyword) vector search and `1` is a pure dense (semantic) vector search. The default value is `0.5`, which balances sparse and dense vector searches.
- RETRIEVAL_QUERY: Your retrieval query.
- TOP_K: The number of top `k` results to retrieve.
HTTP method and URL:
POST https://us-central1-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/us-central1:retrieveContexts
Request JSON body:
{
"vertex_rag_store": {
"rag_resources": {
    "rag_corpus": '\""${RAG_CORPUS_RESOURCE}"\"'
  },
"vector_distance_threshold": ${DISTANCE_THRESHOLD}
},
"query": {
"text": '\""${RETRIEVAL_QUERY}"\"',
"similarity_top_k": ${TOP_K},
"ranking": { "alpha" : ${ALPHA}}
}
}
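The `alpha` value can be read as a linear blend of the two similarity scores. This toy function illustrates the endpoints of the range; Weaviate's actual hybrid fusion algorithm may differ, so treat it as intuition, not the implementation:

```python
def hybrid_score(dense_score: float, sparse_score: float, alpha: float = 0.5) -> float:
    """Blend semantic (dense) and keyword (sparse) scores.
    alpha=1.0 -> pure dense; alpha=0.0 -> pure sparse."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return alpha * dense_score + (1.0 - alpha) * sparse_score

print(hybrid_score(0.8, 0.4, alpha=1.0))  # pure dense
print(hybrid_score(0.8, 0.4, alpha=0.0))  # pure sparse
print(hybrid_score(0.8, 0.4, alpha=0.5))  # balanced
```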
Python
from vertexai.preview import rag
import vertexai
# TODO(developer): Update the variables.
# PROJECT_ID = "your-project-id"
# rag_corpus_id = "your-rag-corpus-id"
# Only one corpus is supported at this time
# Initialize Vertex AI API once per session
vertexai.init(project=PROJECT_ID, location="us-central1")
response = rag.retrieval_query(
rag_resources=[
rag.RagResource(
rag_corpus=rag_corpus_id,
# Optional: supply IDs from `rag.list_files()`.
# rag_file_ids=["rag-file-1", "rag-file-2", ...],
)
],
    text="What is RAG and why is it helpful?",
similarity_top_k=10, # Optional
vector_distance_threshold=0.5, # Optional
ranking=rag.RagQuery.Ranking(
alpha=0.5
), # Optional
)
print(response)
Use hybrid search and Vertex AI Knowledge Engine for grounded generation
This is an example of how to use hybrid search and Vertex AI Knowledge Engine for grounded generation.
REST
The following variables are used in the code sample:
- PROJECT_ID: Your Google Cloud project ID.
- RAG_CORPUS_RESOURCE: The full resource name of your RAG corpus, in the format `projects/*/locations/us-central1/ragCorpora/*`.
- DISTANCE_THRESHOLD: A threshold for the vector search distance, in the range `[0, 1.0]`. The default value is `0.3`.
- ALPHA: Controls the weight between semantic and keyword search results. The range is `[0, 1]`, where `0` is a pure sparse (keyword) vector search and `1` is a pure dense (semantic) vector search. The default value is `0.5`, which balances sparse and dense vector searches.
- INPUT_PROMPT: Your input prompt.
- TOP_K: The number of top `k` results to retrieve.
HTTP method and URL:
POST https://us-central1-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/gemini-pro:generateContent
Request JSON body:
{
"contents": {
"role": "user",
"parts": {
"text": '\""${INPUT_PROMPT}"\"'
}
},
"tools": {
"retrieval": {
"vertex_rag_store": {
"rag_resources": {
          "rag_corpus": '\""${RAG_CORPUS_RESOURCE}"\"'
        },
"similarity_top_k": ${TOP_K},
"vector_distance_threshold": ${DISTANCE_THRESHOLD},
"ranking": { "alpha" : ${ALPHA}}
}
}
}
}
Python
from vertexai.preview import rag
from vertexai.preview.generative_models import GenerativeModel, Tool
import vertexai
# TODO(developer): Update the variables.
# PROJECT_ID = "your-project-id"
# rag_corpus_id = "your-rag-corpus-id" # Only one corpus is supported at this time
# Initialize Vertex AI API once per session
vertexai.init(project=PROJECT_ID, location="us-central1")
rag_retrieval_tool = Tool.from_retrieval(
retrieval=rag.Retrieval(
source=rag.VertexRagStore(
rag_resources=[
rag.RagResource(
rag_corpus=rag_corpus_id, # Currently only 1 corpus is allowed.
# Optional: supply IDs from `rag.list_files()`.
# rag_file_ids=["rag-file-1", "rag-file-2", ...],
)
],
similarity_top_k=3, # Optional
vector_distance_threshold=0.5, # Optional
ranking=rag.RagQuery.Ranking(
alpha=0.5
), # Optional
),
)
)
rag_model = GenerativeModel(
model_name="gemini-1.5-flash-001", tools=[rag_retrieval_tool]
)
response = rag_model.generate_content("Why is the sky blue?")
print(response.text)
What's next
- To learn more about grounding, see Grounding overview.
- To learn more about Vertex AI Knowledge Engine, see Use Vertex AI Knowledge Engine.
- To learn more about grounding and RAG, see Ground responses using RAG.