Class MultiModalEmbeddingModel (1.31.1)

MultiModalEmbeddingModel(model_id: str, endpoint_name: typing.Optional[str] = None)

Generates embedding vectors from images and, optionally, from contextual text.

Examples::

from vertexai.vision_models import Image, MultiModalEmbeddingModel

model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
image = Image.load_from_file("image.png")

embeddings = model.get_embeddings(
    image=image,
    contextual_text="Hello world",
)
image_embedding = embeddings.image_embedding
text_embedding = embeddings.text_embedding
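
Because the returned image and text vectors live in the same semantic space, their closeness can be measured directly with cosine similarity. A minimal sketch in plain Python (no SDK calls; the short vectors below are stand-ins for real embedding output, which for this model is a much longer list of floats):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-ins for embeddings.image_embedding / embeddings.text_embedding.
image_vec = [0.1, 0.2, 0.3]
text_vec = [0.1, 0.25, 0.28]

score = cosine_similarity(image_vec, text_vec)  # closer to 1.0 = more similar
```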

Methods

MultiModalEmbeddingModel

MultiModalEmbeddingModel(model_id: str, endpoint_name: typing.Optional[str] = None)

Creates a _ModelGardenModel.

This constructor should not be called directly. Use MultiModalEmbeddingModel.from_pretrained(model_name=...) instead.

from_pretrained

from_pretrained(model_name: str) -> vertexai._model_garden._model_garden_models.T

Loads a _ModelGardenModel.

Exceptions
Type Description
ValueError If model_name is unknown.
ValueError If the model does not support this class.

get_embeddings

get_embeddings(
    image: typing.Optional[vertexai.vision_models._vision_models.Image] = None,
    contextual_text: typing.Optional[str] = None,
) -> vertexai.vision_models._vision_models.MultiModalEmbeddingResponse

Gets embedding vectors from the provided image and/or contextual text.

Parameters
Name Description
image Image

Optional. The image to generate embeddings for. One of image or contextual_text is required.

contextual_text str

Optional. Contextual text for your input image. If provided, the model also generates an embedding vector for the text. The returned image and text embedding vectors share the same semantic space and dimensionality, so they can be used interchangeably for use cases such as searching images by text or searching text by image. One of image or contextual_text is required.

Returns
Type Description
MultiModalEmbeddingResponse The image and text embedding vectors.
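
Since the vectors can be used interchangeably, a text query embedding can rank a collection of precomputed image embeddings, which is the basis of text-to-image search. A hypothetical sketch (the catalog and query vectors below are placeholders; in practice each would come from get_embeddings(image=...).image_embedding and get_embeddings(contextual_text=...).text_embedding respectively):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Placeholder image embeddings keyed by filename (illustrative only).
catalog = {
    "cat.png": [0.9, 0.1, 0.0],
    "dog.png": [0.1, 0.9, 0.1],
    "car.png": [0.0, 0.1, 0.9],
}

# Placeholder text embedding for a query such as "a cat".
query_embedding = [0.85, 0.15, 0.05]

# Rank catalog images by similarity to the text query.
best_match = max(catalog, key=lambda name: cosine_similarity(query_embedding, catalog[name]))
```

In a real application the catalog embeddings would be computed once and stored, so each query only needs a single get_embeddings call for the text.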