# Class Evaluation (1.13.2)

    Evaluation(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Describes an evaluation between a machine learning model's
predictions and ground truth labels. Created when an
EvaluationJob runs successfully.
## Attributes

| Name | Description |
| --- | --- |
| `name` | `str` Output only. Resource name of an evaluation, in the following format: `projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}` |
| `annotation_type` | `google.cloud.datalabeling_v1beta1.types.AnnotationType` Output only. Type of task that the model version being evaluated performs, as defined in the `evaluationJobConfig.inputConfig.annotationType` field of the evaluation job that created this evaluation. |
| `evaluated_item_count` | `int` Output only. The number of items in the ground truth dataset that were used for this evaluation. Only populated when the evaluation is for certain AnnotationTypes. |
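
A minimal sketch of fetching an `Evaluation` with `DataLabelingServiceClient.get_evaluation` and reading the attributes above. The project, dataset, and evaluation IDs are placeholders; substitute the IDs of an evaluation that an EvaluationJob has already produced.

```python
from google.cloud import datalabeling_v1beta1

client = datalabeling_v1beta1.DataLabelingServiceClient()

# Resource name format (placeholder IDs):
# projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}
name = "projects/my-project/datasets/my-dataset/evaluations/my-evaluation"

evaluation = client.get_evaluation(name=name)

print(evaluation.name)                  # full resource name
print(evaluation.annotation_type)       # AnnotationType enum value
print(evaluation.evaluated_item_count)  # populated only for some annotation types
```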
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-28 UTC."],[],[],null,["# Class Evaluation (1.13.2)\n\nVersion latestkeyboard_arrow_down\n\n- [1.13.2 (latest)](/python/docs/reference/datalabeling/latest/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.13.1](/python/docs/reference/datalabeling/1.13.1/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.12.0](/python/docs/reference/datalabeling/1.12.0/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.11.1](/python/docs/reference/datalabeling/1.11.1/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.10.5](/python/docs/reference/datalabeling/1.10.5/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.9.0](/python/docs/reference/datalabeling/1.9.0/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.8.3](/python/docs/reference/datalabeling/1.8.3/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.7.0](/python/docs/reference/datalabeling/1.7.0/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.6.3](/python/docs/reference/datalabeling/1.6.3/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.5.2](/python/docs/reference/datalabeling/1.5.2/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.4.0](/python/docs/reference/datalabeling/1.4.0/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.3.0](/python/docs/reference/datalabeling/1.3.0/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.2.4](/python/docs/reference/datalabeling/1.2.4/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.1.0](/python/docs/reference/datalabeling/1.1.0/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [1.0.0](/python/docs/reference/datalabeling/1.0.0/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [0.4.2](/python/docs/reference/datalabeling/0.4.2/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [0.3.0](/python/docs/reference/datalabeling/0.3.0/google.cloud.datalabeling_v1beta1.types.Evaluation)\n- [0.2.1](/python/docs/reference/datalabeling/0.2.1/google.cloud.datalabeling_v1beta1.types.Evaluation) \n\n Evaluation(mapping=None, *, ignore_unknown_fields=False, **kwargs)\n\nDescribes an evaluation between a machine learning model's\npredictions and ground truth labels. Created when an\nEvaluationJob\nruns successfully."]]