This page describes how to generate multimodal embeddings using the supported Vertex AI multimodal model, multimodalembedding@001. You run queries by setting the model ID to the qualified name of the Vertex AI multimodal model and supplying the other required input information.
To use the instructions on this page, you must have an understanding of AlloyDB for PostgreSQL and be familiar with generative AI concepts.
Before you begin
- Request access to generate multimodal embeddings and wait until you receive the enablement confirmation before you follow the instructions on this page.
Generate multimodal embeddings
To generate text embeddings using the multimodalembedding@001 model endpoint, run the following statement:
SELECT
ai.text_embedding(
model_id => 'multimodalembedding@001',
content => 'TEXT');
Replace TEXT with the text for which you want to generate an embedding.
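As an illustration, here is a minimal sketch of generating an embedding for each row of an existing table. The products table and its id and description columns are hypothetical names, not part of this page's instructions:

```sql
-- Hypothetical sketch (table and column names are assumptions):
-- generate a text embedding for each product description.
SELECT
  id,
  ai.text_embedding(
      model_id => 'multimodalembedding@001',
      content  => description) AS description_embedding
FROM products;
```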
To generate image embeddings using a registered multimodalembedding@001 model endpoint, where the default image mimetype is image/jpeg, run the following statement:
SELECT
ai.image_embedding(
model_id => 'multimodalembedding@001',
image => 'IMAGE_PATH_OR_TEXT',
mimetype => 'MIMETYPE');
Replace the following:
- IMAGE_PATH_OR_TEXT with the Cloud Storage URI of the image in the same project as your AlloyDB cluster, for example, gs://my-bucket/embeddings/flowers.jpeg, or the base64 string of the image.
- MIMETYPE with the mimetype of the image, for example, image/jpeg. The default mimetype is image/jpeg.
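For instance, a hedged sketch of embedding a PNG image, where the default mimetype must be overridden. The bucket and object names are hypothetical:

```sql
-- Hypothetical example: embed a PNG stored in Cloud Storage, overriding
-- the default image/jpeg mimetype.
SELECT
  ai.image_embedding(
      model_id => 'multimodalembedding@001',
      image    => 'gs://my-bucket/embeddings/logo.png',
      mimetype => 'image/png');
```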
To generate video embeddings using a registered multimodalembedding@001 model endpoint, run the following statement:
SELECT
ai.video_embedding(
model_id => 'multimodalembedding@001',
video => 'VIDEO_URI');
Replace VIDEO_URI with the Cloud Storage URI of the target video, for example, gs://my-bucket/embeddings/supermarket-video.mp4, or the base64 string of the video.
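Because the multimodal model places text, image, and video embeddings in a shared embedding space, you can compare a text query against stored visual embeddings. The following is a hedged sketch, assuming a hypothetical videos table whose embedding column was populated from ai.video_embedding results, and assuming the function's result can be cast to the pgvector vector type:

```sql
-- Hypothetical sketch: find the five stored videos whose embeddings are
-- closest (by cosine distance, the <=> pgvector operator) to a text query.
SELECT video_uri
FROM videos
ORDER BY embedding <=> ai.text_embedding(
    model_id => 'multimodalembedding@001',
    content  => 'fresh produce aisle in a supermarket')::vector
LIMIT 5;
```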