This page describes how to generate multimodal embeddings using the supported Vertex AI multimodal model, `multimodalembedding@001`.
You can use any of the Vertex AI multimodal embedding models listed in Supported models.
This page assumes that you're familiar with AlloyDB for PostgreSQL and generative AI concepts. For more information about embeddings, see What are embeddings?
Before you begin
Before you use multimodal embeddings, do the following:
- Verify that the `google_ml_integration` extension is installed. A quick way to check this and the next item is shown in the sketch after this list.
- Verify that the `google_ml_integration.enable_model_support` flag is set to `on`.
- Integrate with Vertex AI.
- Access data in Cloud Storage to generate multimodal embeddings.
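You can check the first two prerequisites from a SQL session. The following is a minimal sketch, assuming you're connected to your database as a user with the privileges required to create extensions; on AlloyDB, the flag itself is set through database flags rather than from SQL.

```sql
-- Install the extension if it isn't already present.
CREATE EXTENSION IF NOT EXISTS google_ml_integration;

-- Confirm the model-support flag; this should return 'on'.
-- (The flag is configured as a database flag on the instance,
-- not with ALTER SYSTEM from SQL.)
SHOW google_ml_integration.enable_model_support;
```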
Access data in Cloud Storage to generate multimodal embeddings
- To generate multimodal embeddings, refer to content in Cloud Storage using a `gs://` URI.
- Access Cloud Storage content through your current project's Vertex AI service agent. By default, the Vertex AI service agent already has permission to access buckets in the same project. For more information, see IAM roles and permissions index.
To access data in a Cloud Storage bucket in another Google Cloud project, run the following gcloud CLI command to grant the Storage Object Viewer role (`roles/storage.objectViewer`) to the Vertex AI service agent of your AlloyDB project:

```
gcloud projects add-iam-policy-binding <ANOTHER_PROJECT_ID> \
    --member="serviceAccount:service-<PROJECT_ID>@gcp-sa-aiplatform.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```
For more information, see Set and manage IAM policies on buckets.
To generate multimodal embeddings, select one of the following schemas.
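Before turning to those schemas, the following sketch shows roughly what a request can look like end to end. It is an illustration under stated assumptions, not the page's official schema: it assumes the `multimodalembedding@001` endpoint has been registered with the `google_ml_integration` extension under the model ID `multimodalembedding@001` (for example, through model endpoint management) and calls it directly with `google_ml.predict_row()`; the bucket and object names are hypothetical. The request body follows the Vertex AI `multimodalembedding@001` instances format, whose prediction response carries `imageEmbedding`.

```sql
-- A minimal sketch, assuming multimodalembedding@001 is registered as a
-- model endpoint named 'multimodalembedding@001'. The gs:// URI below is
-- a hypothetical object in a bucket the Vertex AI service agent can read.
SELECT google_ml.predict_row(
         'multimodalembedding@001',
         '{
            "instances": [{
              "image": { "gcsUri": "gs://my-bucket/product-photo.jpg" }
            }]
          }'::jsonb
       ) -> 'predictions' -> 0 -> 'imageEmbedding' AS image_embedding;
```

Because the response mirrors the Vertex AI prediction response, a text instance would surface its vector as `textEmbedding` and a video instance as `videoEmbeddings` in the same position.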