Class ModelEvaluation (2.13.4)

ModelEvaluation(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Evaluation results of a model.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
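Which member of the metrics oneof is populated depends on the model's type. A minimal sketch of inspecting the oneof through the proto-plus runtime, assuming you already have a ModelEvaluation instance (the BLEU score below is a made-up placeholder):

    from google.cloud import automl_v1beta1

    evaluation = automl_v1beta1.ModelEvaluation()

    # Setting one member of the `metrics` oneof clears all other members.
    evaluation.translation_evaluation_metrics = (
        automl_v1beta1.TranslationEvaluationMetrics(bleu_score=0.85)
    )

    # The wrapped protobuf message reports which oneof member is currently set.
    pb = automl_v1beta1.ModelEvaluation.pb(evaluation)
    print(pb.WhichOneof("metrics"))  # -> "translation_evaluation_metrics"

Here pb() is the proto-plus accessor for the underlying protobuf message, and "metrics" is the oneof name given for the fields listed below.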

Attributes

classification_evaluation_metrics (google.cloud.automl_v1beta1.types.ClassificationEvaluationMetrics)
    Model evaluation metrics for image, text, video, and Tables classification. A Tables problem is considered a classification problem when the target column has CATEGORY DataType. This field is a member of oneof_ metrics.

regression_evaluation_metrics (google.cloud.automl_v1beta1.types.RegressionEvaluationMetrics)
    Model evaluation metrics for Tables regression. A Tables problem is considered a regression problem when the target column has FLOAT64 DataType. This field is a member of oneof_ metrics.

translation_evaluation_metrics (google.cloud.automl_v1beta1.types.TranslationEvaluationMetrics)
    Model evaluation metrics for translation. This field is a member of oneof_ metrics.

image_object_detection_evaluation_metrics (google.cloud.automl_v1beta1.types.ImageObjectDetectionEvaluationMetrics)
    Model evaluation metrics for image object detection. This field is a member of oneof_ metrics.

video_object_tracking_evaluation_metrics (google.cloud.automl_v1beta1.types.VideoObjectTrackingEvaluationMetrics)
    Model evaluation metrics for video object tracking. This field is a member of oneof_ metrics.

text_sentiment_evaluation_metrics (google.cloud.automl_v1beta1.types.TextSentimentEvaluationMetrics)
    Evaluation metrics for text sentiment models. This field is a member of oneof_ metrics.

text_extraction_evaluation_metrics (google.cloud.automl_v1beta1.types.TextExtractionEvaluationMetrics)
    Evaluation metrics for text extraction models. This field is a member of oneof_ metrics.

name (str)
    Output only. Resource name of the model evaluation. Format: projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}

annotation_spec_id (str)
    Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation. For Tables, annotation specs do not exist in the dataset, so this ID is never set; for CLASSIFICATION prediction_type-s the display_name field is used instead.

display_name (str)
    Output only. The value of display_name at the moment the model was trained. Because this field captures the value at training time, it may differ between models trained from the same dataset if display names were changed between the two trainings. For Tables CLASSIFICATION prediction_type-s, the distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation.

create_time (google.protobuf.timestamp_pb2.Timestamp)
    Output only. Timestamp when this model evaluation was created.

evaluated_example_count (int)
    Output only. The number of examples used for model evaluation, i.e. examples for which ground truth from the time of model creation is compared against the predicted annotations created by the model. For the overall ModelEvaluation (i.e. with annotation_spec_id not set), this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the annotation_spec_id.
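A minimal sketch of fetching evaluations and reading these attributes, assuming a trained model already exists (the project, location, and model IDs are placeholders):

    from google.cloud import automl_v1beta1

    client = automl_v1beta1.AutoMlClient()

    # Placeholder IDs; substitute your own project, location, and model.
    parent = client.model_path("my-project", "us-central1", "my-model-id")

    for evaluation in client.list_model_evaluations(parent=parent):
        # An empty annotation_spec_id marks the overall model evaluation;
        # otherwise the entry applies to a single annotation spec.
        print(evaluation.name)
        print(evaluation.annotation_spec_id or "<overall>")
        print(evaluation.evaluated_example_count)

To fetch a single evaluation instead, AutoMlClient.get_model_evaluation(name=...) accepts a resource name in the format shown above.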