`Attribution(mapping=None, *, ignore_unknown_fields=False, **kwargs)`

Attribution that explains a particular prediction output.
Attributes
`baseline_output_value` (`float`)
Output only. The model's predicted output if the input instance is constructed from the baselines of all the features defined in `ExplanationMetadata.inputs`. The field name of the output is determined by the key in `ExplanationMetadata.outputs`. If the model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by `output_index`. If there are multiple baselines, their output values are averaged.
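The averaging over multiple baselines described above can be sketched in plain Python; `average_baseline_output` is a hypothetical helper for illustration, not part of the library:

```python
# Sketch (not library code): when multiple baselines are configured,
# baseline_output_value is the mean of the model's predicted outputs
# over all baseline instances.

def average_baseline_output(baseline_outputs):
    """Average the predicted outputs of all baseline instances."""
    return sum(baseline_outputs) / len(baseline_outputs)

# e.g. three baselines whose predicted outputs were 0.2, 0.4 and 0.6
avg = average_baseline_output([0.2, 0.4, 0.6])
assert abs(avg - 0.4) < 1e-9
```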
`instance_output_value` (`float`)
Output only. The model's predicted output on the corresponding [explanation instance][ExplainRequest.instances]. The field name of the output is determined by the key in `ExplanationMetadata.outputs`. If the model's predicted output has multiple dimensions, this is the value in the output located by `output_index`.
`feature_attributions` (`google.protobuf.struct_pb2.Value`)
Output only. Attributions of each explained feature. Features are extracted from the [prediction instances][google.cloud.aiplatform.v1beta1.ExplainRequest.instances] according to the [explanation metadata for inputs][google.cloud.aiplatform.v1beta1.ExplanationMetadata.inputs]. The value is a struct whose keys are the names of the features. The values are how much each feature in the instance contributed to the predicted result. The format of each value is determined by the feature's input format:

- If the feature is a scalar value, the attribution value is a [floating-point number][google.protobuf.Value.number_value].
- If the feature is an array of scalar values, the attribution value is an [array][google.protobuf.Value.list_value].
- If the feature is a struct, the attribution value is a [struct][google.protobuf.Value.struct_value]. The keys in the attribution value struct are the same as the keys in the feature struct. The formats of the values in the attribution struct are determined by the formats of the values in the feature struct.

The `ExplanationMetadata.feature_attributions_schema_uri` field, pointed to by the `ExplanationSpec` field of the `Endpoint.deployed_models` object, points to the schema file that describes the features and their attribution values (if it is populated).
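As a sketch of the shape rule above, with plain Python dicts and lists standing in for the protobuf `Value` kinds (these names and values are illustrative assumptions, not library data):

```python
# Sketch (not library code): feature attributions mirror the structure
# of the feature they explain, as described above.

features = {
    "age": 42,                             # scalar feature
    "pixels": [0.1, 0.9],                  # array feature
    "address": {"lat": 1.0, "lng": 2.0},   # struct feature
}

attributions = {
    "age": 0.13,                           # scalar -> single number
    "pixels": [0.02, -0.05],               # array -> array, same length
    "address": {"lat": 0.01, "lng": -0.03},  # struct -> same keys
}

def shapes_match(feature, attribution):
    """Check that an attribution has the same structure as its feature."""
    if isinstance(feature, dict):
        return isinstance(attribution, dict) and feature.keys() == attribution.keys()
    if isinstance(feature, list):
        return isinstance(attribution, list) and len(feature) == len(attribution)
    return isinstance(attribution, (int, float))

assert all(shapes_match(features[k], attributions[k]) for k in features)
```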
`output_index` (`MutableSequence[int]`)
Output only. The index that locates the explained prediction output. If the prediction output is a scalar value, `output_index` is not populated. If the prediction output has multiple dimensions, the length of the `output_index` list is the same as the number of dimensions of the output. The i-th element in `output_index` is the element index of the i-th dimension of the output vector. Indices start from 0.
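The indexing rule above can be sketched with nested Python lists standing in for a multi-dimensional prediction output; `locate_output` is a hypothetical helper, not part of the library:

```python
def locate_output(output, output_index):
    """Walk output_index dimension by dimension to find the explained value."""
    value = output
    for i in output_index:
        value = value[i]
    return value

# A rank-2 output: two rows of class scores; output_index has 2 elements.
scores = [[0.1, 0.9], [0.7, 0.3]]
assert locate_output(scores, [1, 0]) == 0.7

# A scalar output: output_index is not populated (empty).
assert locate_output(0.5, []) == 0.5
```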
`output_display_name` (`str`)
Output only. The display name of the output identified by `output_index`; for example, the predicted class name of a multi-class classification model. This field is populated only if the model predicts display names as a separate field along with the explained output. The predicted display name must have the same shape as the explained output and can be located using `output_index`.
`approximation_error` (`float`)
Output only. The error of `feature_attributions` caused by the approximation used in the explanation method. A lower value means more precise attributions.

- For Sampled Shapley attribution, increasing `path_count` might reduce the error.
- For Integrated Gradients attribution, increasing `step_count` might reduce the error.
- For [XRAI attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.xrai_attribution], increasing `step_count` might reduce the error.
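A minimal sketch of acting on this field, mapping each explanation method to the `ExplanationParameters` knob listed above; the method strings and threshold are assumptions for illustration, not library values:

```python
def suggest_retune(method, approximation_error, threshold=0.05):
    """Suggest which ExplanationParameters knob to raise if error is high.

    `method` and `threshold` are illustrative assumptions, not library values.
    """
    if approximation_error <= threshold:
        return None  # attributions are already precise enough
    if method == "sampled_shapley":
        return "path_count"
    if method in ("integrated_gradients", "xrai"):
        return "step_count"
    raise ValueError(f"unknown method: {method}")

assert suggest_retune("sampled_shapley", 0.2) == "path_count"
assert suggest_retune("xrai", 0.01) is None
```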
`output_name` (`str`)
Output only. The name of the explained output. Specified as the key in `ExplanationMetadata.outputs`.
Methods

Attribution

`Attribution(mapping=None, *, ignore_unknown_fields=False, **kwargs)`

Attribution that explains a particular prediction output.