`Model(mapping=None, *, ignore_unknown_fields=False, **kwargs)`

A trained machine learning Model.
Attributes

- `name` (`str`): The resource name of the Model.
- `version_id` (`str`): Output only. Immutable. The version ID of the model. A new version is committed when a new model version is uploaded or trained under an existing model ID. It is an auto-incrementing decimal number in string representation.
- `version_aliases` (`MutableSequence[str]`): User-provided version aliases so that a model version can be referenced via alias (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_alias}) instead of the auto-generated version ID (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_id}). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9], to distinguish an alias from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.
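As a rough illustration of the alias rules above, the following sketch validates a candidate alias against the documented pattern and builds the @-suffixed resource name. The helper names are hypothetical, not part of the client library:

```python
import re

# Alias format from the docs: [a-z][a-zA-Z0-9-]{0,126}[a-z0-9]
# (distinguishes an alias from the purely numeric version_id).
_ALIAS_RE = re.compile(r"^[a-z][a-zA-Z0-9-]{0,126}[a-z0-9]$")

def is_valid_version_alias(alias: str) -> bool:
    """Check a user-provided alias against the documented pattern."""
    return bool(_ALIAS_RE.match(alias))

def versioned_model_name(project: str, location: str,
                         model_id: str, version: str) -> str:
    """Build the @-suffixed resource name; `version` may be an alias
    such as 'default' or an auto-generated numeric version ID."""
    return (f"projects/{project}/locations/{location}"
            f"/models/{model_id}@{version}")
```

For example, `versioned_model_name("my-proj", "us-central1", "123", "default")` yields a name that resolves to whichever version currently carries the `default` alias.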
- `version_create_time` (`google.protobuf.timestamp_pb2.Timestamp`): Output only. Timestamp when this version was created.
- `version_update_time` (`google.protobuf.timestamp_pb2.Timestamp`): Output only. Timestamp when this version was most recently updated.
- `display_name` (`str`): Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- `description` (`str`): The description of the Model.
- `version_description` (`str`): The description of this version.
- `predict_schemata` (`google.cloud.aiplatform_v1beta1.types.PredictSchemata`): The schemata that describe the formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.
- `metadata_schema_uri` (`str`): Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object.
- `metadata` (`google.protobuf.struct_pb2.Value`): Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema. Unset if the Model does not have any additional information.
- `supported_export_formats` (`MutableSequence[google.cloud.aiplatform_v1beta1.types.Model.ExportFormat]`): Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
- `training_pipeline` (`str`): Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.
- `container_spec` (`google.cloud.aiplatform_v1beta1.types.ModelContainerSpec`): Input only. The specification of the container to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI. Not present for AutoML Models or Large Models.
- `artifact_uri` (`str`): Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models or Large Models.
- `supported_deployment_resources_types` (`MutableSequence[google.cloud.aiplatform_v1beta1.types.Model.DeploymentResourcesType]`): Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can still serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.
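The deployability rules above can be sketched as a small decision helper. The function name and signature are hypothetical, for illustration only; it operates on the three Model fields the rules reference:

```python
from typing import List, Set

def serving_modes(deployment_resources_types: List[str],
                  input_storage_formats: List[str],
                  output_storage_formats: List[str]) -> Set[str]:
    """Decide how a Model can serve predictions, per the documented rules."""
    modes = set()
    # Online prediction requires at least one supported deployment
    # resources configuration type.
    if deployment_resources_types:
        modes.add("online")
    # Batch prediction requires at least one entry each in the
    # supported input AND output storage format lists.
    if input_storage_formats and output_storage_formats:
        modes.add("batch")
    return modes
```

A Model with only `DEDICATED_RESOURCES` listed and no storage formats would report `{"online"}`; one with storage formats but no resource types would report `{"batch"}`.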
- `supported_input_storage_formats` (`MutableSequence[str]`): Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:
  - `jsonl` The JSON Lines format, where each instance is a single line. Uses GcsSource.
  - `csv` The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
  - `tf-record` The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
  - `tf-record-gzip` Similar to `tf-record`, but the file is gzipped. Uses GcsSource.
  - `bigquery` Each instance is a single row in BigQuery. Uses BigQuerySource.
  - `file-list` Each line of the file is the location of an instance to process; uses the gcs_source field of the InputConfig object.

  If this Model doesn't support any of these formats, it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
- `supported_output_storage_formats` (`MutableSequence[str]`): Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:
  - `jsonl` The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
  - `csv` The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
  - `bigquery` Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.

  If this Model doesn't support any of these formats, it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
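When configuring a BatchPredictionJob against an arbitrary Model, a caller typically has to reconcile its own preferred format with whatever the Model reports in these lists. A minimal sketch (the helper is hypothetical, not a library API):

```python
from typing import List, Optional

def pick_batch_format(preferred: List[str],
                      supported: List[str]) -> Optional[str]:
    """Return the first caller-preferred format that the Model also
    lists in supported_input_storage_formats (or the output-format
    analogue), or None if there is no overlap."""
    supported_set = set(supported)
    for fmt in preferred:
        if fmt in supported_set:
            return fmt
    return None
```

For instance, a caller preferring `bigquery` then `jsonl` against a Model supporting only `jsonl` and `csv` would fall back to `jsonl`; an empty result means the Model cannot serve that caller's batch job as requested.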
- `create_time` (`google.protobuf.timestamp_pb2.Timestamp`): Output only. Timestamp when this Model was uploaded into Vertex AI.
- `update_time` (`google.protobuf.timestamp_pb2.Timestamp`): Output only. Timestamp when this Model was most recently updated.
- `deployed_models` (`MutableSequence[google.cloud.aiplatform_v1beta1.types.DeployedModelRef]`): Output only. The pointers to DeployedModels created from this Model. Note that the Model could have been deployed to Endpoints in different Locations.
- `explanation_spec` (`google.cloud.aiplatform_v1beta1.types.ExplanationSpec`): The default explanation specification for this Model. If populated, the Model can be used for requesting explanation (PredictionService.Explain) after being deployed, and for batch explanation (BatchPredictionJob.generate_explanation). All fields of the explanation_spec can be overridden by the explanation_spec of DeployModelRequest.deployed_model, or the explanation_spec of BatchPredictionJob. If the default explanation specification is not set for this Model, the Model can still be used for requesting explanation by setting the explanation_spec of DeployModelRequest.deployed_model, and for batch explanation by setting the explanation_spec of BatchPredictionJob.
- `etag` (`str`): Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
- `labels` (`MutableMapping[str, str]`): The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints) and can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
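The label constraints above can be checked client-side before an upload or update call. This is a rough sketch, not the service's actual validation: it approximates "international characters are allowed" by accepting any non-ASCII character, and it assumes (as is common for cloud labels, though not stated here) that keys must be non-empty:

```python
def _label_part_ok(s: str) -> bool:
    """At most 64 codepoints; ASCII chars restricted to lowercase
    letters, digits, underscores and dashes."""
    if len(s) > 64:
        return False
    for ch in s:
        if ch.isascii() and not (ch.islower() or ch.isdigit() or ch in "_-"):
            return False
    return True

def is_valid_label(key: str, value: str) -> bool:
    # Assumption: keys must be non-empty; values may be empty.
    return bool(key) and _label_part_ok(key) and _label_part_ok(value)
```

So `is_valid_label("env", "prod")` passes, while an uppercase key like `"Env"` or a 65-character value is rejected.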
- `encryption_spec` (`google.cloud.aiplatform_v1beta1.types.EncryptionSpec`): Customer-managed encryption key spec for a Model. If set, this Model and all of its sub-resources will be secured by this key.
- `model_source_info` (`google.cloud.aiplatform_v1beta1.types.ModelSourceInfo`): Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or an existing Vertex AI Model.
- `original_model_info` (`google.cloud.aiplatform_v1beta1.types.Model.OriginalModelInfo`): Output only. If this Model is a copy of another Model, this contains info about the original.
- `metadata_artifact` (`str`): Output only. The resource name of the Artifact that was created in the MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.
Classes

DeploymentResourcesType

`DeploymentResourcesType(value)`

Identifies a type of the Model's prediction resources.

Values:
- `DEPLOYMENT_RESOURCES_TYPE_UNSPECIFIED` (0): Should not be used.
- `DEDICATED_RESOURCES` (1): Resources that are dedicated to the DeployedModel and that need a higher degree of manual configuration.
- `AUTOMATIC_RESOURCES` (2): Resources that to a large degree are decided by Vertex AI and require only a modest additional configuration.
- `SHARED_RESOURCES` (3): Resources that can be shared by multiple DeployedModels. A pre-configured DeploymentResourcePool is required.
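For illustration only, the enum values above can be mirrored with a stdlib `IntEnum`; the real class lives at `google.cloud.aiplatform_v1beta1.types.Model.DeploymentResourcesType`:

```python
from enum import IntEnum

class DeploymentResourcesType(IntEnum):
    """Stdlib mirror of the documented enum, for illustration."""
    DEPLOYMENT_RESOURCES_TYPE_UNSPECIFIED = 0  # should not be used
    DEDICATED_RESOURCES = 1  # dedicated to the DeployedModel; manual config
    AUTOMATIC_RESOURCES = 2  # largely decided by Vertex AI
    SHARED_RESOURCES = 3     # shared across DeployedModels; requires a
                             # pre-configured DeploymentResourcePool
```

This makes it easy to map the integer values returned in `supported_deployment_resources_types` back to readable names, e.g. `DeploymentResourcesType(3).name` gives `"SHARED_RESOURCES"`.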
ExportFormat

`ExportFormat(mapping=None, *, ignore_unknown_fields=False, **kwargs)`

Represents an export format supported by the Model. All formats export to Google Cloud Storage.

LabelsEntry

`LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)`

The abstract base class for a message.

Parameters

- `mapping` (`Union[dict, Message]`): A dictionary or message to be used to determine the values for this message.
- `ignore_unknown_fields` (`Optional[bool]`): If True, do not raise errors for unknown fields. Only applied if `mapping` is a mapping type or there are keyword parameters.
- `kwargs` (`dict`): Keys and values corresponding to the fields of the message.

OriginalModelInfo

`OriginalModelInfo(mapping=None, *, ignore_unknown_fields=False, **kwargs)`

Contains information about the original Model if this Model is a copy.