Class MultiModalEmbeddingResponse (1.95.1)
MultiModalEmbeddingResponse(
    _prediction_response: typing.Any,
    image_embedding: typing.Optional[typing.List[float]] = None,
    video_embeddings: typing.Optional[
        typing.List[vertexai.vision_models.VideoEmbedding]
    ] = None,
    text_embedding: typing.Optional[typing.List[float]] = None,
)
The multimodal embedding response.
Attributes:

    image_embedding (List[float]):
        Optional. The embedding vector generated from your image.
    video_embeddings (List[VideoEmbedding]):
        Optional. The embedding vectors generated from your video.
    text_embedding (List[float]):
        Optional. The embedding vector generated from the contextual text provided for your image or video.
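Each attribute is optional, so client code should check which fields are populated before using them. A minimal, self-contained sketch is below; `FakeResponse` is a stand-in defined here for illustration (in real use these fields come populated on the `MultiModalEmbeddingResponse` returned by the SDK's embedding call), and `available_modalities` is a hypothetical helper, not part of the SDK:

```python
from dataclasses import dataclass
from typing import Any, List, Optional


# Stand-in carrying the same attribute names as MultiModalEmbeddingResponse.
# In real use, these fields arrive populated on the response object itself.
@dataclass
class FakeResponse:
    image_embedding: Optional[List[float]] = None
    video_embeddings: Optional[List[Any]] = None
    text_embedding: Optional[List[float]] = None


def available_modalities(response) -> List[str]:
    """Return which embedding fields are populated on a response."""
    present = []
    if response.image_embedding is not None:
        present.append("image")
    if response.video_embeddings:
        present.append("video")
    if response.text_embedding is not None:
        present.append("text")
    return present


# A response from an image-plus-contextual-text request would carry
# both an image embedding and a text embedding, but no video embeddings.
resp = FakeResponse(image_embedding=[0.1] * 1408, text_embedding=[0.2] * 1408)
print(available_modalities(resp))  # ['image', 'text']
```

Checking for `None` (rather than assuming a field exists) keeps the same code path working whether the request supplied an image, a video, contextual text, or some combination.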
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-07 UTC.