REST Resource: projects.locations.models.evaluations

Resource: ModelEvaluation
A collection of metrics calculated by comparing the Model's predictions on all of the test data against annotations from the test data.
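As a rough sketch of working with this resource, the snippet below parses an illustrative `projects.locations.models.evaluations.get` response body (the field values and IDs are made up for the example) and splits the resource `name` into its ID components:

```python
import json

# Illustrative ModelEvaluation resource as returned by the v1 REST API;
# all IDs, URIs, and metric values here are invented for the example.
sample = json.loads("""
{
  "name": "projects/12345/locations/us-central1/models/678/evaluations/90",
  "displayName": "eval-example",
  "metricsSchemaUri": "gs://example-bucket/schema/classification_metrics.yaml",
  "metrics": {"auPrc": 0.95, "logLoss": 0.21},
  "createTime": "2014-10-02T15:01:23Z"
}
""")

# The resource name alternates collection IDs and resource IDs, so pairing
# even- and odd-indexed segments yields a {collection: id} mapping.
parts = sample["name"].split("/")
ids = dict(zip(parts[::2], parts[1::2]))
print(ids["models"], ids["evaluations"])
print(sample["metrics"]["auPrc"])
```

The same name-splitting works for any Vertex AI resource name that follows the `collection/id/collection/id/...` pattern.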
Fields
name
string
Output only. The resource name of the ModelEvaluation.
displayName
string
The display name of the ModelEvaluation.
metricsSchemaUri
string
Points to a YAML file stored on Google Cloud Storage describing the metrics of this ModelEvaluation. The schema is defined as an OpenAPI 3.0.2 Schema Object.
metrics
value (Value format)
Evaluation metrics of the Model. The schema of the metrics is stored in metricsSchemaUri.
createTime
string (Timestamp format)
Output only. Timestamp when this ModelEvaluation was created.
Uses RFC 3339, where generated output will always be Z-normalized and uses 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".
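A minimal sketch of handling these timestamp forms in Python. `datetime.fromisoformat` accepts the `+05:30` offset form directly, but the trailing `Z` and nine-digit fractional seconds need normalizing on older Python versions, so the helper rewrites both before parsing:

```python
import re
from datetime import datetime, timezone

def parse_create_time(ts: str) -> datetime:
    """Parse an RFC 3339 timestamp such as createTime into an aware datetime."""
    # Rewrite a trailing "Z" as an explicit UTC offset for fromisoformat.
    if ts.endswith("Z"):
        ts = ts[:-1] + "+00:00"
    # fromisoformat (pre-3.11) accepts at most 6 fractional digits, so
    # truncate 9-digit nanosecond precision to microseconds.
    ts = re.sub(r"\.(\d{6})\d+", r".\1", ts)
    return datetime.fromisoformat(ts)

for example in ("2014-10-02T15:01:23Z",
                "2014-10-02T15:01:23.045123456Z",
                "2014-10-02T15:01:23+05:30"):
    # Z-normalize: convert whatever offset was supplied back to UTC.
    print(parse_create_time(example).astimezone(timezone.utc).isoformat())
```

Note the truncation loses sub-microsecond precision, which Python's `datetime` cannot represent anyway.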
sliceDimensions[]
string
All possible dimensions of ModelEvaluationSlices. The dimensions can be used as the filter of the ModelService.ListModelEvaluationSlices request, in the form of slice.dimension = <dimension>.
dataItemSchemaUri
string
Points to a YAML file stored on Google Cloud Storage describing [EvaluatedDataItemView.data_item_payload][] and EvaluatedAnnotation.data_item_payload. The schema is defined as an OpenAPI 3.0.2 Schema Object.
This field is not populated if there are neither EvaluatedDataItemViews nor EvaluatedAnnotations under this ModelEvaluation.
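A ModelEvaluation also carries a sliceDimensions[] list that can drive ListModelEvaluationSlices filters, while fields like dataItemSchemaUri may be absent. The sketch below (the resource dict and dimension values are illustrative, and the quoting of the filter value follows AIP-160 string-literal syntax, which is an assumption) builds one filter per dimension and reads the optional field defensively:

```python
# Illustrative slice of a ModelEvaluation resource; dataItemSchemaUri is
# deliberately absent, as it is when there are no EvaluatedDataItemViews
# or EvaluatedAnnotations under the evaluation.
evaluation = {
    "sliceDimensions": ["annotationSpec", "slice"],
}

# One filter string per dimension, in the documented
# `slice.dimension = <dimension>` form (quoting is an AIP-160 assumption).
filters = [f'slice.dimension = "{d}"'
           for d in evaluation.get("sliceDimensions", [])]
print(filters)

# Optional fields should be read with a default rather than indexed directly,
# since they may simply not be present in the response.
data_item_schema = evaluation.get("dataItemSchemaUri")
print(data_item_schema)
```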
annotationSchemaUri
string
Points to a YAML file stored on Google Cloud Storage describing [EvaluatedDataItemView.predictions][], [EvaluatedDataItemView.ground_truths][], EvaluatedAnnotation.predictions, and EvaluatedAnnotation.ground_truths. The schema is defined as an OpenAPI 3.0.2 Schema Object.
This field is not populated if there are neither EvaluatedDataItemViews nor EvaluatedAnnotations under this ModelEvaluation.
modelExplanation
object (ModelExplanation)
Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for AutoML tabular Models.
explanationSpecs[]
object (ModelEvaluationExplanationSpec)
Describes the values of ExplanationSpec that are used for explaining the predicted values on the evaluated data.
metadata
value (Value format)
The metadata of the ModelEvaluation. For a ModelEvaluation uploaded from a Managed Pipeline, metadata contains a structured value with the keys "pipelineJobId", "evaluation_dataset_type", "evaluation_dataset_path", and "row_based_metrics_path".
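Since only the four documented keys are guaranteed meanings, a consumer of the metadata struct should check for each key before reading it. A minimal sketch (the metadata values here are invented for the example):

```python
# Illustrative metadata struct for a ModelEvaluation uploaded from a
# Managed Pipeline; all values are made up for the example.
metadata = {
    "pipelineJobId": "projects/12345/locations/us-central1/pipelineJobs/run-1",
    "evaluation_dataset_type": "gcs",
    "evaluation_dataset_path": ["gs://example-bucket/eval/dataset.jsonl"],
    "row_based_metrics_path": "gs://example-bucket/eval/row_metrics.jsonl",
}

DOCUMENTED_KEYS = ("pipelineJobId", "evaluation_dataset_type",
                   "evaluation_dataset_path", "row_based_metrics_path")

# Not every pipeline necessarily populates every key, so collect only
# the documented keys that are actually present.
present = {k: metadata[k] for k in DOCUMENTED_KEYS if k in metadata}
print(sorted(present))
```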
ModelEvaluationExplanationSpec

Fields
explanationType
string
Explanation type.
For AutoML Image Classification models, possible values are:
- image-integrated-gradients
- image-xrai
explanationSpec
object (ExplanationSpec)
Explanation spec details.

Last updated 2025-06-27 UTC.