Class EvaluationClassificationMetric (1.95.1)
```python
EvaluationClassificationMetric(
    label_name: typing.Optional[str] = None,
    auPrc: typing.Optional[float] = None,
    auRoc: typing.Optional[float] = None,
    logLoss: typing.Optional[float] = None,
    confidenceMetrics: typing.Optional[
        typing.List[typing.Dict[str, typing.Any]]
    ] = None,
    confusionMatrix: typing.Optional[typing.Dict[str, typing.Any]] = None,
)
```
The evaluation metric response for classification metrics.
Parameters

| Name | Type | Description |
| --- | --- | --- |
| label_name | str | Optional. The name of the label associated with the metrics. Only returned when only_summary_metrics=False is passed to evaluate(). |
| auPrc | float | Optional. The area under the precision-recall curve. |
| auRoc | float | Optional. The area under the receiver operating characteristic curve. |
| logLoss | float | Optional. Logarithmic loss. |
| confidenceMetrics | List[Dict[str, Any]] | Optional. Only returned when only_summary_metrics=False is passed to evaluate(). |
| confusionMatrix | Dict[str, Any] | Optional. Only returned when only_summary_metrics=False is passed to evaluate(). |
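The following is a minimal sketch of inspecting these fields. In practice an EvaluationClassificationMetric is returned by a tuned model's evaluate() call rather than constructed by hand; the hand-built instance, the label name, and the dictionary keys inside confidenceMetrics and confusionMatrix below are illustrative assumptions, not values from a real evaluation.

```python
from vertexai.preview.language_models import EvaluationClassificationMetric

# Illustrative only: evaluate() normally produces this object. The field
# values and the dict keys below are hypothetical examples.
metric = EvaluationClassificationMetric(
    label_name="positive",  # hypothetical label
    auPrc=0.92,
    auRoc=0.95,
    logLoss=0.31,
    confidenceMetrics=[
        {"confidenceThreshold": 0.5, "precision": 0.88, "recall": 0.90},
    ],
    confusionMatrix={
        "annotationSpecs": ["positive", "negative"],
        "rows": [[40, 2], [3, 35]],
    },
)

# Summary metrics are plain attributes on the object.
print(f"AUC-PR:   {metric.auPrc}")
print(f"AUC-ROC:  {metric.auRoc}")
print(f"Log loss: {metric.logLoss}")
```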
Properties

input_dataset_paths
The Google Cloud Storage paths to the dataset used for this evaluation.

task_name
The type of evaluation task for the evaluation.
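As a hedged illustration, these two properties would typically be read from a metric object returned by evaluate() with only_summary_metrics=False; the example values in the comments are assumptions, not output from a real run.

```python
# Assuming `metric` came back from a tuned model's evaluate() call with
# only_summary_metrics=False; the commented values are hypothetical.
print(metric.input_dataset_paths)  # e.g. ["gs://my-bucket/eval_dataset.jsonl"]
print(metric.task_name)            # e.g. "classification"
```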
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-07 UTC."],[],[],null,["# Class EvaluationClassificationMetric (1.95.1)\n\nVersion latestkeyboard_arrow_down\n\n- [1.95.1 (latest)](/python/docs/reference/vertexai/latest/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.94.0](/python/docs/reference/vertexai/1.94.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.93.1](/python/docs/reference/vertexai/1.93.1/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.92.0](/python/docs/reference/vertexai/1.92.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.91.0](/python/docs/reference/vertexai/1.91.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.90.0](/python/docs/reference/vertexai/1.90.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.89.0](/python/docs/reference/vertexai/1.89.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.88.0](/python/docs/reference/vertexai/1.88.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.87.0](/python/docs/reference/vertexai/1.87.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.86.0](/python/docs/reference/vertexai/1.86.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.85.0](/python/docs/reference/vertexai/1.85.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.84.0](/python/docs/reference/vertexai/1.84.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.83.0](/python/docs/reference/vertexai/1.83.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.82.0](/python/docs/reference/vertexai/1.82.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.81.0](/python/docs/reference/vertexai/1.81.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.80.0](/python/docs/reference/vertexai/1.80.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.79.0](/python/docs/reference/vertexai/1.79.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.78.0](/python/docs/reference/vertexai/1.78.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.77.0](/python/docs/reference/vertexai/1.77.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.76.0](/python/docs/reference/vertexai/1.76.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.75.0](/python/docs/reference/vertexai/1.75.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.74.0](/python/docs/reference/vertexai/1.74.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.73.0](/python/docs/reference/vertexai/1.73.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.72.0](/python/docs/reference/vertexai/1.72.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.71.1](/python/docs/reference/vertexai/1.71.1/vertexai.preview.language_models.EvaluationClassificationMetric)\n- 
[1.70.0](/python/docs/reference/vertexai/1.70.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.69.0](/python/docs/reference/vertexai/1.69.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.68.0](/python/docs/reference/vertexai/1.68.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.67.1](/python/docs/reference/vertexai/1.67.1/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.66.0](/python/docs/reference/vertexai/1.66.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.65.0](/python/docs/reference/vertexai/1.65.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.63.0](/python/docs/reference/vertexai/1.63.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.62.0](/python/docs/reference/vertexai/1.62.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.60.0](/python/docs/reference/vertexai/1.60.0/vertexai.preview.language_models.EvaluationClassificationMetric)\n- [1.59.0](/python/docs/reference/vertexai/1.59.0/vertexai.preview.language_models.EvaluationClassificationMetric) \n\n EvaluationClassificationMetric(\n label_name: typing.Optional[str] = None,\n auPrc: typing.Optional[float] = None,\n auRoc: typing.Optional[float] = None,\n logLoss: typing.Optional[float] = None,\n confidenceMetrics: typing.Optional[\n typing.List[typing.Dict[str, typing.Any]]\n ] = None,\n confusionMatrix: typing.Optional[typing.Dict[str, typing.Any]] = None,\n )\n\nThe evaluation metric response for classification metrics.\n\nProperties\n----------\n\n### input_dataset_paths\n\nThe Google Cloud Storage paths to the dataset used for this evaluation.\n\n### task_name\n\nThe type of evaluation task for the evaluation.."]]