Class CodeGenerationModel (1.95.1)
CodeGenerationModel(model_id: str, endpoint_name: typing.Optional[str] = None)
Creates a LanguageModel.

This constructor should not be called directly. Use `LanguageModel.from_pretrained(model_name=...)` instead.
Methods
batch_predict
batch_predict(
*,
dataset: typing.Union[str, typing.List[str]],
destination_uri_prefix: str,
model_parameters: typing.Optional[typing.Dict] = None
) -> google.cloud.aiplatform.jobs.BatchPredictionJob
Starts a batch prediction job with the model.
Exceptions

| Type | Description |
|---|---|
| `ValueError` | When the source or destination URI is not supported. |
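As a minimal sketch, a batch prediction job might be started through a small helper like the one below. The helper name, the GCS paths, and the parameter values are illustrative, not part of the API; only `batch_predict` and its keyword arguments come from the signature above. It assumes `model` was loaded via `CodeGenerationModel.from_pretrained(...)`.

```python
# Hypothetical helper: start a batch prediction job with the model.
# `input_uri` and `output_prefix` would typically be gs:// locations.
def run_batch_prediction(model, input_uri, output_prefix):
    """Submit a batch prediction job and return the job object."""
    job = model.batch_predict(
        dataset=input_uri,                     # e.g. a JSONL file on GCS
        destination_uri_prefix=output_prefix,  # e.g. a GCS output folder
        model_parameters={"temperature": 0.2, "maxOutputTokens": 256},
    )
    return job
```

The returned `BatchPredictionJob` can then be polled or waited on with the usual `google.cloud.aiplatform` job methods.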
from_pretrained
from_pretrained(model_name: str) -> vertexai._model_garden._model_garden_models.T
Loads a _ModelGardenModel.
Exceptions

| Type | Description |
|---|---|
| `ValueError` | If `model_name` is unknown. |
| `ValueError` | If the model does not support this class. |
get_tuned_model
get_tuned_model(
tuned_model_name: str,
) -> vertexai.language_models._language_models._LanguageModel
Loads the specified tuned language model.
list_tuned_model_names
list_tuned_model_names() -> typing.Sequence[str]
Lists the names of tuned models.
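The two methods above combine naturally: list the tuned-model resource names, then load one with `get_tuned_model`. The helper below is a hypothetical sketch of that flow; it assumes `model` was loaded via `CodeGenerationModel.from_pretrained(...)`.

```python
# Hypothetical helper: load the first tuned model, if any exist.
def load_first_tuned_model(model):
    """Return the first tuned model for this base model, or None."""
    names = model.list_tuned_model_names()  # sequence of resource names
    if not names:
        return None
    return model.get_tuned_model(names[0])
```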
predict
predict(
prefix: str,
suffix: typing.Optional[str] = None,
*,
max_output_tokens: typing.Optional[int] = None,
temperature: typing.Optional[float] = None,
stop_sequences: typing.Optional[typing.List[str]] = None,
candidate_count: typing.Optional[int] = None
) -> vertexai.language_models.TextGenerationResponse
Gets model response for a single prompt.
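A minimal sketch of a single completion call, optionally infilling between `prefix` and `suffix`. The helper name and parameter values are illustrative; in practice `model` would come from something like `CodeGenerationModel.from_pretrained("code-bison@002")` (model ID shown for illustration only).

```python
# Hypothetical helper: complete code after `prefix`, optionally
# infilling before `suffix`. Parameter values are illustrative.
def complete_code(model, prefix, suffix=None):
    """Return the generated completion text."""
    response = model.predict(
        prefix=prefix,
        suffix=suffix,
        max_output_tokens=128,
        temperature=0.2,
        stop_sequences=["\n\n"],
    )
    return response.text
```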
predict_async
predict_async(
prefix: str,
suffix: typing.Optional[str] = None,
*,
max_output_tokens: typing.Optional[int] = None,
temperature: typing.Optional[float] = None,
stop_sequences: typing.Optional[typing.List[str]] = None,
candidate_count: typing.Optional[int] = None
) -> vertexai.language_models.TextGenerationResponse
Asynchronously gets model response for a single prompt.
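The async variant is awaited inside a coroutine. A hedged sketch, assuming `model` was loaded via `CodeGenerationModel.from_pretrained(...)` and driven with `asyncio.run(...)`:

```python
import asyncio

# Hypothetical helper: await a single async prediction.
async def complete_code_async(model, prefix):
    response = await model.predict_async(
        prefix=prefix,
        max_output_tokens=128,
    )
    return response.text

# Typical invocation (commented out; requires an initialized model):
# text = asyncio.run(complete_code_async(model, "def f():\n"))
```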
predict_streaming
predict_streaming(
prefix: str,
suffix: typing.Optional[str] = None,
*,
max_output_tokens: typing.Optional[int] = None,
temperature: typing.Optional[float] = None,
stop_sequences: typing.Optional[typing.List[str]] = None
) -> typing.Iterator[vertexai.language_models.TextGenerationResponse]
Predicts code based on the preceding code.
The result is a stream (generator) of partial responses.
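Because the result is a generator of partial responses, a caller typically iterates and accumulates the `text` of each chunk. A hypothetical sketch, assuming `model` was loaded via `CodeGenerationModel.from_pretrained(...)`:

```python
# Hypothetical helper: consume the stream of partial responses
# and return the concatenated completion text.
def stream_completion(model, prefix):
    chunks = []
    for partial in model.predict_streaming(prefix=prefix, max_output_tokens=256):
        chunks.append(partial.text)  # each item is a partial TextGenerationResponse
    return "".join(chunks)
```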
predict_streaming_async
predict_streaming_async(
prefix: str,
suffix: typing.Optional[str] = None,
*,
max_output_tokens: typing.Optional[int] = None,
temperature: typing.Optional[float] = None,
stop_sequences: typing.Optional[typing.List[str]] = None
) -> typing.AsyncIterator[vertexai.language_models.TextGenerationResponse]
Asynchronously predicts code based on the preceding code.
The result is a stream (generator) of partial responses.
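The async stream is consumed with `async for` inside a coroutine. A hypothetical sketch under the same assumptions as above:

```python
import asyncio

# Hypothetical helper: accumulate partial responses from the async stream.
async def stream_completion_async(model, prefix):
    chunks = []
    async for partial in model.predict_streaming_async(prefix=prefix):
        chunks.append(partial.text)
    return "".join(chunks)

# Typical invocation (commented out; requires an initialized model):
# text = asyncio.run(stream_completion_async(model, "def f():\n"))
```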
tune_model
tune_model(
training_data: typing.Union[str, pandas.core.frame.DataFrame],
*,
train_steps: typing.Optional[int] = None,
learning_rate_multiplier: typing.Optional[float] = None,
tuning_job_location: typing.Optional[str] = None,
tuned_model_location: typing.Optional[str] = None,
model_display_name: typing.Optional[str] = None,
tuning_evaluation_spec: typing.Optional[
vertexai.language_models.TuningEvaluationSpec
] = None,
accelerator_type: typing.Optional[typing.Literal["TPU", "GPU"]] = None,
max_context_length: typing.Optional[str] = None
) -> vertexai.language_models._language_models._LanguageModelTuningJob
Tunes a model based on training data.
This method launches and returns an asynchronous model tuning job.
Usage:

    tuning_job = model.tune_model(...)
    # ... do some other work
    tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete
Exceptions

| Type | Description |
|---|---|
| `ValueError` | If the `tuning_job_location` value is not supported. |
| `ValueError` | If the `tuned_model_location` value is not supported. |
| `RuntimeError` | If the model does not support tuning. |
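A slightly fuller sketch of the usage above. The helper name, the training-data URI, the step count, and the locations are all illustrative choices, not API requirements; only `tune_model` and its keyword arguments come from the signature.

```python
# Hypothetical helper: launch a tuning job, then block for the result.
def tune_and_wait(model, training_data_uri):
    tuning_job = model.tune_model(
        training_data=training_data_uri,   # JSONL on GCS or a pandas DataFrame
        train_steps=100,
        tuning_job_location="europe-west4",
        tuned_model_location="us-central1",
    )
    return tuning_job.get_tuned_model()    # blocks until tuning is complete
```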
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-07 UTC.