Class GenerativeModel (1.95.1)
GenerativeModel(
    model_name: str,
    *,
    generation_config: typing.Optional[GenerationConfigType] = None,
    safety_settings: typing.Optional[SafetySettingsType] = None,
    tools: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Tool]
    ] = None,
    tool_config: typing.Optional[
        vertexai.generative_models._generative_models.ToolConfig
    ] = None,
    system_instruction: typing.Optional[PartsType] = None,
    labels: typing.Optional[typing.Dict[str, str]] = None
)
Initializes GenerativeModel.
Usage:
```
model = GenerativeModel("gemini-pro")
print(model.generate_content("Hello"))
```
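A slightly fuller construction sketch, assuming the SDK has been initialized first; the project ID, location, and system instruction below are placeholders:

```
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

# Placeholder project and location; substitute your own values.
vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel(
    "gemini-pro",
    generation_config=GenerationConfig(temperature=0.2, max_output_tokens=256),
    system_instruction="You are a concise technical assistant.",
)
print(model.generate_content("Summarize what a context window is.").text)
```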
Methods
compute_tokens
compute_tokens(
    contents: ContentsType,
) -> google.cloud.aiplatform_v1beta1.types.llm_utility_service.ComputeTokensResponse

Computes tokens.

Returns
Type: ComputeTokensResponse
Description: A ComputeTokensResponse object with the following attribute:
tokens_info: a list of token information entries computed from the input. The input contents (ContentsType) can contain multiple string instances, and each tokens_info item corresponds to one string instance. Each entry consists of a tokens list, a token_ids list, and a role.
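A minimal sketch of reading the response, reusing the `model` from the usage snippet above; the attribute names follow the description of tokens_info:

```
# Each tokens_info entry corresponds to one string instance in the input.
response = model.compute_tokens("Why is the sky blue?")
for info in response.tokens_info:
    print(info.role, info.tokens, info.token_ids)
```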
compute_tokens_async
compute_tokens_async(
    contents: ContentsType,
) -> google.cloud.aiplatform_v1beta1.types.llm_utility_service.ComputeTokensResponse

Computes tokens asynchronously.

Returns
Type: ComputeTokensResponse
Description: An awaitable for a ComputeTokensResponse object with the following attribute:
tokens_info: a list of token information entries computed from the input. The input contents (ContentsType) can contain multiple string instances, and each tokens_info item corresponds to one string instance. Each entry consists of a tokens list, a token_ids list, and a role.
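The async variant is awaited inside a coroutine; a minimal sketch, again reusing `model`:

```
import asyncio

async def main() -> None:
    # Awaiting resolves to the same ComputeTokensResponse shape as above.
    response = await model.compute_tokens_async("Why is the sky blue?")
    print(len(response.tokens_info))

asyncio.run(main())
```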
count_tokens
count_tokens(
    contents: ContentsType,
    *,
    tools: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Tool]
    ] = None
) -> google.cloud.aiplatform_v1beta1.types.prediction_service.CountTokensResponse

Counts tokens.

Returns
Type: CountTokensResponse
Description: A CountTokensResponse object with the following attributes:
total_tokens: the total number of tokens counted across all instances in the request.
total_billable_characters: the total number of billable characters counted across all instances in the request.
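A sketch of checking prompt size before sending a request, reusing `model`:

```
# total_tokens and total_billable_characters are the response
# attributes described above.
response = model.count_tokens("Why is the sky blue?")
print(response.total_tokens, response.total_billable_characters)
```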
count_tokens_async
count_tokens_async(
    contents: ContentsType,
    *,
    tools: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Tool]
    ] = None
) -> google.cloud.aiplatform_v1beta1.types.prediction_service.CountTokensResponse

Counts tokens asynchronously.

Returns
Type: CountTokensResponse
Description: An awaitable for a CountTokensResponse object with the following attributes:
total_tokens: the total number of tokens counted across all instances in the request.
total_billable_characters: the total number of billable characters counted across all instances in the request.
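Because the async variant returns a coroutine, several counts can run concurrently; a sketch using asyncio.gather:

```
import asyncio

async def main() -> None:
    prompts = ["Hello", "Why is the sky blue?"]
    # Fire off both count requests concurrently and collect the responses.
    responses = await asyncio.gather(
        *(model.count_tokens_async(p) for p in prompts)
    )
    for prompt, resp in zip(prompts, responses):
        print(prompt, resp.total_tokens)

asyncio.run(main())
```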
from_cached_content
from_cached_content(
    cached_content: typing.Union[str, CachedContent],
    *,
    generation_config: typing.Optional[GenerationConfigType] = None,
    safety_settings: typing.Optional[SafetySettingsType] = None
) -> _GenerativeModel

Creates a model from cached content.

Creates a model instance from existing cached content. The cached content becomes the prefix of each request's contents.
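A sketch under the assumption that a cache already exists; the resource name below is a placeholder, and requests made through the returned model are prefixed with the cached content:

```
# Placeholder cached-content resource name; substitute a real one.
cache_name = "projects/my-project/locations/us-central1/cachedContents/123"
model = GenerativeModel.from_cached_content(cached_content=cache_name)
print(model.generate_content("Answer using the cached context.").text)
```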
generate_content
generate_content(
    contents: ContentsType,
    *,
    generation_config: typing.Optional[GenerationConfigType] = None,
    safety_settings: typing.Optional[SafetySettingsType] = None,
    tools: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Tool]
    ] = None,
    tool_config: typing.Optional[
        vertexai.generative_models._generative_models.ToolConfig
    ] = None,
    labels: typing.Optional[typing.Dict[str, str]] = None,
    stream: bool = False
) -> typing.Union[
    vertexai.generative_models._generative_models.GenerationResponse,
    typing.Iterable[vertexai.generative_models._generative_models.GenerationResponse],
]

Generates content.
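The stream flag selects the return shape: False (the default) yields a single GenerationResponse, True an iterable of partial responses. A minimal streaming sketch, reusing `model`:

```
# With stream=True, iterate partial GenerationResponse chunks as they arrive.
for chunk in model.generate_content("Tell me a short story.", stream=True):
    print(chunk.text, end="")
```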
generate_content_async
generate_content_async(
    contents: ContentsType,
    *,
    generation_config: typing.Optional[GenerationConfigType] = None,
    safety_settings: typing.Optional[SafetySettingsType] = None,
    tools: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Tool]
    ] = None,
    tool_config: typing.Optional[
        vertexai.generative_models._generative_models.ToolConfig
    ] = None,
    labels: typing.Optional[typing.Dict[str, str]] = None,
    stream: bool = False
) -> typing.Union[
    vertexai.generative_models._generative_models.GenerationResponse,
    typing.AsyncIterable[
        vertexai.generative_models._generative_models.GenerationResponse
    ],
]
Generates content asynchronously.
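A sketch of both async shapes, reusing `model`; with stream=True the awaited value is an async iterable:

```
import asyncio

async def main() -> None:
    # Non-streaming: a single GenerationResponse.
    response = await model.generate_content_async("Hello")
    print(response.text)

    # Streaming: iterate partial responses asynchronously.
    stream = await model.generate_content_async(
        "Tell me a short story.", stream=True
    )
    async for chunk in stream:
        print(chunk.text, end="")

asyncio.run(main())
```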
start_chat
start_chat(
    *,
    history: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Content]
    ] = None,
    response_validation: bool = True
) -> vertexai.generative_models._generative_models.ChatSession
Creates a stateful chat session.
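A minimal sketch: the session accumulates history, so later turns can refer back to earlier ones:

```
chat = model.start_chat()
print(chat.send_message("What is the capital of France?").text)
# The accumulated history lets this follow-up resolve "its".
print(chat.send_message("And what is its population?").text)
```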