Module language_models (1.95.1)
Classes for working with language models.
Classes
ChatMessage(content: str, author: str)
A chat message.
CountTokensResponse(
total_tokens: int,
total_billable_characters: int,
_count_tokens_response: typing.Any,
)
The response from a count_tokens request.
Attributes:
total_tokens (int): The total number of tokens counted across all instances passed to the request.
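A minimal sketch of obtaining a CountTokensResponse via `TextGenerationModel.count_tokens`. The project ID and model version below are placeholders, and `approx_billable_characters` is a hypothetical helper: it only roughly approximates billable-character counting by excluding whitespace.

```python
def approx_billable_characters(prompts):
    """Hypothetical helper: rough local estimate of billable characters,
    counting non-whitespace characters across all prompts."""
    return sum(sum(1 for ch in p if not ch.isspace()) for p in prompts)


def demo():
    """Requires google-cloud-aiplatform and application-default credentials."""
    import vertexai
    from vertexai.preview.language_models import TextGenerationModel

    vertexai.init(project="my-project", location="us-central1")  # placeholder project
    model = TextGenerationModel.from_pretrained("text-bison@002")
    prompts = ["How many tokens is this prompt?"]
    response = model.count_tokens(prompts)
    print("tokens:", response.total_tokens)
    print("billable characters:", response.total_billable_characters)
    print("local estimate:", approx_billable_characters(prompts))
```

The helper is safe to run anywhere; calling `demo()` needs an authenticated Google Cloud environment.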
EvaluationClassificationMetric(
label_name: typing.Optional[str] = None,
auPrc: typing.Optional[float] = None,
auRoc: typing.Optional[float] = None,
logLoss: typing.Optional[float] = None,
confidenceMetrics: typing.Optional[
typing.List[typing.Dict[str, typing.Any]]
] = None,
confusionMatrix: typing.Optional[typing.Dict[str, typing.Any]] = None,
)
The evaluation metric response for classification metrics.
Parameters:
label_name (str): Optional. The name of the label associated with the metrics. This is only returned when only_summary_metrics=False is passed to evaluate().
auPrc (float): Optional. The area under the precision-recall curve.
auRoc (float): Optional. The area under the receiver operating characteristic curve.
logLoss (float): Optional. Logarithmic loss.
confidenceMetrics (List[Dict[str, Any]]): Optional. This is only returned when only_summary_metrics=False is passed to evaluate().
confusionMatrix (Dict[str, Any]): Optional. This is only returned when only_summary_metrics=False is passed to evaluate().
EvaluationMetric(
bleu: typing.Optional[float] = None, rougeLSum: typing.Optional[float] = None
)
The evaluation metric response.
Parameters:
bleu (float): Optional. BLEU (Bilingual Evaluation Understudy). Scores based on the sacrebleu implementation.
rougeLSum (float): Optional. ROUGE-L (Longest Common Subsequence) scoring at summary level.
EvaluationQuestionAnsweringSpec(
ground_truth_data: typing.Union[typing.List[str], str, pandas.core.frame.DataFrame],
task_name: str = "question-answering",
)
Spec for question answering model evaluation tasks.
EvaluationTextClassificationSpec(
ground_truth_data: typing.Union[typing.List[str], str, pandas.core.frame.DataFrame],
target_column_name: str,
class_names: typing.List[str],
)
Spec for text classification model evaluation tasks.
Parameters:
target_column_name (str): Required. The label column in the dataset provided in ground_truth_data. Required when task_name='text-classification'.
class_names (List[str]): Required. A list of all possible label names in your dataset. Required when task_name='text-classification'.
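A sketch of running a text classification evaluation with this spec. The project ID, Cloud Storage path, and label names are placeholders; the `evaluate(task_spec=...)` keyword and the returned metric fields follow this module's preview API and should be treated as assumptions. `to_jsonl` is a hypothetical helper for preparing ground-truth rows.

```python
import json


def to_jsonl(rows):
    """Hypothetical helper: serialize ground-truth records as JSONL,
    one JSON object per line, with stable key order."""
    return "\n".join(json.dumps(row, sort_keys=True) for row in rows)


def demo():
    """Requires google-cloud-aiplatform and application-default credentials."""
    import vertexai
    from vertexai.preview.language_models import (
        EvaluationTextClassificationSpec,
        TextGenerationModel,
    )

    vertexai.init(project="my-project", location="us-central1")  # placeholder project
    model = TextGenerationModel.from_pretrained("text-bison@002")
    spec = EvaluationTextClassificationSpec(
        ground_truth_data="gs://my-bucket/eval/classification.jsonl",  # placeholder path
        target_column_name="label",
        class_names=["positive", "negative"],
    )
    metrics = model.evaluate(task_spec=spec)
    print(metrics.auPrc, metrics.auRoc, metrics.logLoss)
```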
EvaluationTextGenerationSpec(
ground_truth_data: typing.Union[typing.List[str], str, pandas.core.frame.DataFrame],
)
Spec for text generation model evaluation tasks.
EvaluationTextSummarizationSpec(
ground_truth_data: typing.Union[typing.List[str], str, pandas.core.frame.DataFrame],
task_name: str = "summarization",
)
Spec for text summarization model evaluation tasks.
InputOutputTextPair(input_text: str, output_text: str)
InputOutputTextPair represents a pair of input and output texts.
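InputOutputTextPair is typically used as a few-shot example when starting a chat session. The sketch below assumes a placeholder project and model version; `truncate_context` is a hypothetical guard, included only to illustrate that context and examples count toward the model's input limit.

```python
def truncate_context(context, max_chars=1000):
    """Hypothetical guard: cap the chat context at a character budget,
    since context and examples count toward the model's input limit."""
    return context[:max_chars]


def demo():
    """Requires google-cloud-aiplatform and application-default credentials."""
    import vertexai
    from vertexai.preview.language_models import ChatModel, InputOutputTextPair

    vertexai.init(project="my-project", location="us-central1")  # placeholder project
    chat_model = ChatModel.from_pretrained("chat-bison@002")
    chat = chat_model.start_chat(
        context=truncate_context("You are a concise assistant for a gardening shop."),
        examples=[
            InputOutputTextPair(
                input_text="When should I water succulents?",
                output_text="Only when the soil is fully dry.",
            )
        ],
    )
    response = chat.send_message("What soil should I use for cacti?")
    print(response.text)
```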
TextEmbedding(
values: typing.List[float],
statistics: typing.Optional[
vertexai.language_models.TextEmbeddingStatistics
] = None,
_prediction_response: typing.Optional[
google.cloud.aiplatform.models.Prediction
] = None,
)
Text embedding vector and statistics.
TextEmbeddingInput(
text: str,
task_type: typing.Optional[str] = None,
title: typing.Optional[str] = None,
)
Structural text embedding input.
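A sketch of requesting embeddings with TextEmbeddingInput and comparing the resulting TextEmbedding vectors. The project ID and model version are placeholders; the task_type values shown are those documented for the gecko embedding models. `cosine_similarity` is a plain local helper, not part of this module.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def demo():
    """Requires google-cloud-aiplatform and application-default credentials."""
    import vertexai
    from vertexai.preview.language_models import TextEmbeddingInput, TextEmbeddingModel

    vertexai.init(project="my-project", location="us-central1")  # placeholder project
    model = TextEmbeddingModel.from_pretrained("textembedding-gecko@003")
    query, doc = model.get_embeddings(
        [
            TextEmbeddingInput(text="how to repot a fern", task_type="RETRIEVAL_QUERY"),
            TextEmbeddingInput(
                text="Repotting guide for houseplants",
                task_type="RETRIEVAL_DOCUMENT",
                title="Plant care",
            ),
        ]
    )
    # Each TextEmbedding exposes its vector as .values.
    print(cosine_similarity(query.values, doc.values))
```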
TextGenerationResponse(
text: str,
_prediction_response: typing.Any,
is_blocked: bool = False,
errors: typing.Tuple[int] = (),
safety_attributes: typing.Dict[str, float] = <factory>,
grounding_metadata: typing.Optional[
vertexai.language_models._language_models.GroundingMetadata
] = None,
)
TextGenerationResponse represents a response of a language model.
Attributes:
text (str): The generated text.
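A sketch of consuming a TextGenerationResponse, checking its is_blocked and errors fields before using the text. The project ID and model version are placeholders; `text_or_raise` is a hypothetical helper that works on any object exposing the response's attributes.

```python
def text_or_raise(response):
    """Hypothetical helper: return the generated text, raising if the
    response was blocked by safety filters or carries error codes."""
    if getattr(response, "is_blocked", False):
        raise RuntimeError(
            "response blocked by safety filters: %s"
            % getattr(response, "safety_attributes", {})
        )
    errors = getattr(response, "errors", ())
    if errors:
        raise RuntimeError("prediction returned error codes: %s" % (errors,))
    return response.text


def demo():
    """Requires google-cloud-aiplatform and application-default credentials."""
    import vertexai
    from vertexai.preview.language_models import TextGenerationModel

    vertexai.init(project="my-project", location="us-central1")  # placeholder project
    model = TextGenerationModel.from_pretrained("text-bison@002")
    response = model.predict("Write a haiku about tide pools.", temperature=0.2)
    print(text_or_raise(response))
```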
TuningEvaluationSpec(
evaluation_data: typing.Optional[str] = None,
evaluation_interval: typing.Optional[int] = None,
enable_early_stopping: typing.Optional[bool] = None,
enable_checkpoint_selection: typing.Optional[bool] = None,
tensorboard: typing.Optional[
typing.Union[
google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str
]
] = None,
)
Specification for model evaluation to perform during tuning.
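A sketch of attaching a TuningEvaluationSpec to a tuning run. The project, locations, data paths, and the `tune_model` parameter names follow this module's preview tuning API and should be treated as assumptions; `validate_interval` is a hypothetical sanity check, not part of the SDK.

```python
def validate_interval(evaluation_interval):
    """Hypothetical sanity check: the evaluation interval, when given,
    must be a positive number of tuning steps."""
    if evaluation_interval is not None and evaluation_interval < 1:
        raise ValueError("evaluation_interval must be >= 1")
    return evaluation_interval


def demo():
    """Requires google-cloud-aiplatform and application-default credentials."""
    import vertexai
    from vertexai.preview.language_models import (
        TextGenerationModel,
        TuningEvaluationSpec,
    )

    vertexai.init(project="my-project", location="us-central1")  # placeholder project
    model = TextGenerationModel.from_pretrained("text-bison@002")
    eval_spec = TuningEvaluationSpec(
        evaluation_data="gs://my-bucket/tuning/eval.jsonl",  # placeholder path
        evaluation_interval=validate_interval(20),
        enable_early_stopping=True,
    )
    model.tune_model(
        training_data="gs://my-bucket/tuning/train.jsonl",  # placeholder path
        train_steps=100,
        tuning_job_location="europe-west4",
        tuned_model_location="us-central1",
        tuning_evaluation_spec=eval_spec,
    )
```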
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-07 UTC.