Class ImageGenerationModel (1.95.1)
    ImageGenerationModel(model_id: str, endpoint_name: typing.Optional[str] = None)
Generates images from a text prompt.
Examples::
    model = ImageGenerationModel.from_pretrained("imagegeneration@002")
    response = model.generate_images(
        prompt="Astronaut riding a horse",
        # Optional:
        number_of_images=1,
        seed=0,
    )
    response[0].show()
    response[0].save("image1.png")
Methods
ImageGenerationModel
    ImageGenerationModel(model_id: str, endpoint_name: typing.Optional[str] = None)
Creates a _ModelGardenModel.
This constructor should not be called directly.
Use ImageGenerationModel.from_pretrained(model_name=...) instead.
edit_image
    edit_image(
        *,
        prompt: str,
        base_image: typing.Optional[vertexai.vision_models.Image] = None,
        mask: typing.Optional[vertexai.vision_models.Image] = None,
        reference_images: typing.Optional[
            typing.List[vertexai.vision_models.ReferenceImage]
        ] = None,
        negative_prompt: typing.Optional[str] = None,
        number_of_images: int = 1,
        guidance_scale: typing.Optional[float] = None,
        edit_mode: typing.Optional[
            typing.Literal[
                "inpainting-insert", "inpainting-remove", "outpainting", "product-image"
            ]
        ] = None,
        mask_mode: typing.Optional[
            typing.Literal["background", "foreground", "semantic"]
        ] = None,
        segmentation_classes: typing.Optional[typing.List[str]] = None,
        mask_dilation: typing.Optional[float] = None,
        product_position: typing.Optional[typing.Literal["fixed", "reposition"]] = None,
        output_mime_type: typing.Optional[typing.Literal["image/png", "image/jpeg"]] = None,
        compression_quality: typing.Optional[float] = None,
        language: typing.Optional[str] = None,
        seed: typing.Optional[int] = None,
        output_gcs_uri: typing.Optional[str] = None,
        safety_filter_level: typing.Optional[
            typing.Literal["block_most", "block_some", "block_few", "block_fewest"]
        ] = None,
        person_generation: typing.Optional[
            typing.Literal["dont_allow", "allow_adult", "allow_all"]
        ] = None
    ) -> vertexai.preview.vision_models.ImageGenerationResponse
Edits an existing image based on a text prompt.
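A minimal usage sketch, assuming an initialized Vertex AI project and a local file; the file names and prompt are illustrative, and the automatic background mask (mask_mode="background") is one of several masking options shown in the signature above:

    from vertexai.preview.vision_models import ImageGenerationModel
    from vertexai.vision_models import Image

    model = ImageGenerationModel.from_pretrained("imagegeneration@002")
    base_image = Image.load_from_file("my-image.png")  # illustrative path
    response = model.edit_image(
        prompt="A sunny beach in the background",
        base_image=base_image,
        # Optional: insert content only in the automatically detected background
        edit_mode="inpainting-insert",
        mask_mode="background",
        number_of_images=1,
    )
    response[0].save("edited-image.png")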
from_pretrained
    from_pretrained(model_name: str) -> vertexai._model_garden._model_garden_models.T
Loads a _ModelGardenModel.
Exceptions
ValueError: If model_name is unknown.
ValueError: If the model does not support this class.
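A short sketch of handling these exceptions when loading a model (the model ID is illustrative):

    from vertexai.preview.vision_models import ImageGenerationModel

    try:
        model = ImageGenerationModel.from_pretrained("imagegeneration@002")
    except ValueError:
        # Raised if the model name is unknown, or if the named model
        # does not support the ImageGenerationModel class.
        raise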
generate_images
    generate_images(
        prompt: str,
        *,
        negative_prompt: typing.Optional[str] = None,
        number_of_images: int = 1,
        aspect_ratio: typing.Optional[
            typing.Literal["1:1", "9:16", "16:9", "4:3", "3:4"]
        ] = None,
        guidance_scale: typing.Optional[float] = None,
        language: typing.Optional[str] = None,
        seed: typing.Optional[int] = None,
        output_gcs_uri: typing.Optional[str] = None,
        add_watermark: typing.Optional[bool] = True,
        safety_filter_level: typing.Optional[
            typing.Literal["block_most", "block_some", "block_few", "block_fewest"]
        ] = None,
        person_generation: typing.Optional[
            typing.Literal["dont_allow", "allow_adult", "allow_all"]
        ] = None
    ) -> vertexai.preview.vision_models.ImageGenerationResponse
Generates images from a text prompt.
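A sketch using several of the optional parameters, assuming an initialized Vertex AI project; the bucket name is illustrative:

    from vertexai.preview.vision_models import ImageGenerationModel

    model = ImageGenerationModel.from_pretrained("imagegeneration@002")
    response = model.generate_images(
        prompt="Astronaut riding a horse",
        negative_prompt="blurry, low quality",
        number_of_images=2,
        aspect_ratio="16:9",
        seed=0,
        # Optional: write results to Cloud Storage instead of returning bytes
        output_gcs_uri="gs://my-bucket/imagen-output/",  # illustrative bucket
    )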
upscale_image
    upscale_image(
        image: typing.Union[
            vertexai.vision_models.Image, vertexai.preview.vision_models.GeneratedImage
        ],
        new_size: typing.Optional[int] = 2048,
        upscale_factor: typing.Optional[typing.Literal["x2", "x4"]] = None,
        output_mime_type: typing.Optional[
            typing.Literal["image/png", "image/jpeg"]
        ] = "image/png",
        output_compression_quality: typing.Optional[int] = None,
        output_gcs_uri: typing.Optional[str] = None,
    ) -> vertexai.vision_models.Image
Upscales an image.
This supports upscaling images generated through the generate_images()
method, or upscaling a new image.
Examples::
    # Upscale a generated image
    model = ImageGenerationModel.from_pretrained("imagegeneration@002")
    response = model.generate_images(
        prompt="Astronaut riding a horse",
    )
    model.upscale_image(image=response[0])

    # Upscale a new 1024x1024 image
    my_image = Image.load_from_file("my-image.png")
    model.upscale_image(image=my_image)

    # Upscale a new arbitrary sized image using a x2 or x4 upscaling factor
    my_image = Image.load_from_file("my-image.png")
    model.upscale_image(image=my_image, upscale_factor="x2")

    # Upscale an image and get the result in JPEG format
    my_image = Image.load_from_file("my-image.png")
    model.upscale_image(image=my_image, output_mime_type="image/jpeg",
                        output_compression_quality=90)
Parameters
image (Union[GeneratedImage, Image])
Required. The image to upscale.
new_size (int)
The size of the largest dimension of the upscaled image. Only 2048 and 4096 are currently supported, producing a 2048x2048 or 4096x4096 image. Defaults to 2048 if not provided.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License , and code samples are licensed under the Apache 2.0 License . For details, see the Google Developers Site Policies . Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-07 UTC.