Class ExplainRequest (0.3.1)

ExplainRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Request message for PredictionService.Explain.

Attributes

endpoint (str)
    Required. The name of the Endpoint requested to serve the explanation.
    Format:
    ``projects/{project}/locations/{location}/endpoints/{endpoint}``

instances (Sequence[google.protobuf.struct_pb2.Value])
    Required. The instances that are the input to the explanation call. A
    DeployedModel may have an upper limit on the number of instances it
    supports per request; when this limit is exceeded, the explanation call
    errors for AutoML Models, while for customer-created Models the behaviour
    is as documented by that Model. The schema of any single instance may be
    specified via the Endpoint's DeployedModels'
    [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model]
    [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata]
    ``instance_schema_uri``.

parameters (google.protobuf.struct_pb2.Value)
    The parameters that govern the prediction. The schema of the parameters
    may be specified via the Endpoint's DeployedModels'
    [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model]
    [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata]
    ``parameters_schema_uri``.

deployed_model_id (str)
    If specified, this ExplainRequest will be served by the chosen
    DeployedModel, overriding ``Endpoint.traffic_split``.
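Example

An ExplainRequest can be constructed directly and passed to
``PredictionServiceClient.explain``. The following is a minimal sketch,
assuming a deployed tabular model; the project, location, endpoint ID,
deployed model ID, and feature names are placeholders and must be replaced
with values matching your own Endpoint and the Model's instance schema.

    from google.cloud.aiplatform_v1beta1.services import prediction_service
    from google.cloud.aiplatform_v1beta1.types import ExplainRequest
    from google.protobuf import json_format, struct_pb2

    client = prediction_service.PredictionServiceClient()

    # Each instance is a google.protobuf.Value that must conform to the
    # deployed Model's instance_schema_uri; these feature names are placeholders.
    instance = json_format.ParseDict(
        {"feature_a": 1.0, "feature_b": "category_x"},
        struct_pb2.Value(),
    )

    request = ExplainRequest(
        # Placeholder project, location, and endpoint ID.
        endpoint="projects/my-project/locations/us-central1/endpoints/1234567890",
        instances=[instance],
        # Optional: serve from a specific DeployedModel instead of following
        # Endpoint.traffic_split (placeholder ID).
        deployed_model_id="0987654321",
    )

    response = client.explain(request=request)
    for explanation in response.explanations:
        print(explanation)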

Inheritance

builtins.object > proto.message.Message > ExplainRequest