Model evaluation metrics for classification problems. Note: For Video Classification, these metrics describe only the quality of the Video Classification predictions of the "segment_classification" type.
Output only. The Area Under Precision-Recall Curve metric based on priors. Micro-averaged for the overall evaluation. Deprecated.
Output only. The Log Loss metric.
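As a minimal sketch of the metric this field reports (the function name and argument shapes here are assumptions, not part of the API), multiclass log loss is the mean negative log-probability that the model assigned to each example's true label:

```python
import math

def log_loss(true_labels, predicted_probs, eps=1e-15):
    """Mean negative log-probability assigned to the true label.

    true_labels: list of class indices, one per example.
    predicted_probs: list of per-example probability lists.
    """
    total = 0.0
    for label, probs in zip(true_labels, predicted_probs):
        p = min(max(probs[label], eps), 1 - eps)  # clip to avoid log(0)
        total -= math.log(p)
    return total / len(true_labels)

# Confident, correct predictions drive the loss toward 0.
log_loss([0, 1], [[0.9, 0.1], [0.2, 0.8]])  # ≈ 0.164
```

Lower values indicate better-calibrated predictions; a model that always assigns probability 1 to the correct class has a log loss of 0.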
Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for model-level evaluation, not for evaluation per label.
Classes
ConfidenceMetricsEntry
Metrics for a single confidence threshold.
Output only. Metrics are computed with the assumption that the model always returns at most this many predictions (ordered by score, descending), all of which still need to meet the confidence_threshold.
Output only. Precision for the given confidence threshold.
Output only. The harmonic mean of recall and precision.
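The harmonic mean above is the standard F1 computation; a minimal sketch (the function name is illustrative, not an API member):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

Because it is a harmonic mean, F1 is pulled toward the smaller of the two inputs: a model with perfect precision but zero recall still scores 0.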
Output only. The precision when considering, for each example, only the label with the highest prediction score that is not below the confidence threshold.
Output only. The harmonic mean of [recall_at1][google.cloud.automl.v1beta1.ClassificationEvaluationMetrics.ConfidenceMetricsEntry.recall_at1] and [precision_at1][google.cloud.automl.v1beta1.ClassificationEvaluationMetrics.ConfidenceMetricsEntry.precision_at1].
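The "at1" metrics consider only each example's top-scoring label, and only when that score meets the confidence threshold. A sketch of precision_at1 under those rules (the function name and the `(true_label, scores)` example shape are assumptions for illustration):

```python
def precision_at1(examples, threshold):
    """Precision over top-1 predictions that meet the confidence threshold.

    examples: list of (true_label, {label: score}) pairs.
    """
    true_positives = predicted = 0
    for true_label, scores in examples:
        # Consider only the single highest-scoring label per example.
        label, score = max(scores.items(), key=lambda kv: kv[1])
        if score >= threshold:
            predicted += 1
            if label == true_label:
                true_positives += 1
    return true_positives / predicted if predicted else 0.0
```

Raising the threshold can only shrink the set of counted predictions, typically trading recall_at1 for higher precision_at1.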
Output only. The number of model-created labels that do not match a ground truth label.
Output only. The number of labels that were not created by the model but that, had they been created, would not have matched a ground truth label.
ConfusionMatrix
Confusion matrix of the model running the classification.
Output only. Display name of the annotation specs used in the confusion matrix, as they were at the moment of the evaluation. For Tables CLASSIFICATION [prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type], distinct values of the target column at the moment of the model evaluation are populated here.
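As a sketch of how a confusion matrix of this shape can be consumed (the row-major layout is an assumption for illustration: entry `[i][j]` counts examples whose ground truth is label `i` and whose prediction is label `j`, with labels ordered as in the display names above):

```python
def per_label_recall(matrix):
    """Recall per ground-truth label: the diagonal entry over its row sum."""
    recalls = []
    for i, row in enumerate(matrix):
        total = sum(row)  # all examples whose ground truth is label i
        recalls.append(row[i] / total if total else 0.0)
    return recalls

# Two labels: 8 of 10 "label 0" examples and 9 of 10 "label 1"
# examples were predicted correctly.
per_label_recall([[8, 2], [1, 9]])  # → [0.8, 0.9]
```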