# Package vision_models (1.95.1)
API documentation for the `vision_models` package.
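The examples on this page assume the Vertex AI SDK has already been initialized. A minimal setup sketch; the project ID and location below are placeholders:

```python
import vertexai
from vertexai.vision_models import Image, ImageCaptioningModel

# Placeholder values; substitute your own project and region.
vertexai.init(project="your-project-id", location="us-central1")
```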
## Classes

### GeneratedImage

Generated image.

### Image

Image.

### ImageCaptioningModel

Generates captions from an image.
Examples:

```python
model = ImageCaptioningModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)
```
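As a usage sketch (assuming `get_captions` returns a list of caption strings), you can request several candidates and print each one:

```python
# Assumes get_captions returns a list of strings, one caption per result.
captions = model.get_captions(image=image, number_of_results=3)
for i, caption in enumerate(captions, start=1):
    print(f"{i}. {caption}")
```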
### ImageGenerationModel

Generates images from a text prompt.
Examples:

```python
model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
    # Optional:
    number_of_images=1,
    seed=0,
)
response[0].show()
response[0].save("image1.png")
```
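A follow-on sketch for saving every image from a multi-image request. It assumes the response exposes its `GeneratedImage` objects through an `images` list, which the example above does not show:

```python
# Generate several candidates and save each one to disk.
response = model.generate_images(
    prompt="Astronaut riding a horse",
    number_of_images=4,
)
# Assumes ImageGenerationResponse exposes its images as a list attribute.
for i, generated_image in enumerate(response.images):
    generated_image.save(f"image_{i}.png")
```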
### ImageGenerationResponse

Image generation response.
### ImageQnAModel

Answers questions about an image.
Examples:

```python
model = ImageQnAModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)
```
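A usage sketch, assuming `ask_question` returns a list of answer strings (one per requested result):

```python
# Assumes ask_question returns a list of strings, one answer per result.
answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    number_of_results=2,
)
for answer in answers:
    print(answer)
```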
### ImageTextModel

Generates text from images.
Examples:

```python
model = ImageTextModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")

captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)

answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)
```
### MultiModalEmbeddingModel

Generates embedding vectors from images and videos.
Examples:

```python
model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
image = Image.load_from_file("image.png")
video = Video.load_from_file("video.mp4")

embeddings = model.get_embeddings(
    image=image,
    video=video,
    contextual_text="Hello world",
)
image_embedding = embeddings.image_embedding
video_embeddings = embeddings.video_embeddings
text_embedding = embeddings.text_embedding
```
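Because the image, video, and text embeddings are designed to be comparable, a common next step is scoring their similarity. A minimal sketch using NumPy; the `cosine_similarity` helper is illustrative, not part of the SDK:

```python
import numpy as np

def cosine_similarity(a, b):
    """Illustrative helper: cosine similarity of two embedding vectors."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# How closely does the contextual text match the image?
print(cosine_similarity(embeddings.image_embedding, embeddings.text_embedding))
```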
### MultiModalEmbeddingResponse

The multimodal embedding response.
### Video

Video.

### VideoEmbedding

Embeddings generated from a video, with offset times.
### VideoSegmentConfig

The specific video segments (in seconds) for which embeddings are generated.
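A sketch of segmented video embedding that ties `VideoSegmentConfig` and `VideoEmbedding` together. It assumes `VideoSegmentConfig` accepts `start_offset_sec`, `end_offset_sec`, and `interval_sec`, that `get_embeddings` takes a `video_segment_config` argument, and that each returned `VideoEmbedding` carries its segment offsets:

```python
from vertexai.vision_models import (
    MultiModalEmbeddingModel,
    Video,
    VideoSegmentConfig,
)

model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
video = Video.load_from_file("video.mp4")

# Assumed parameters: embed the first two minutes in 16-second segments.
embeddings = model.get_embeddings(
    video=video,
    video_segment_config=VideoSegmentConfig(
        start_offset_sec=0,
        end_offset_sec=120,
        interval_sec=16,
    ),
)
for segment in embeddings.video_embeddings:
    print(segment.start_offset_sec, segment.end_offset_sec, len(segment.embedding))
```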