Types overview

CloudAiLargeModelsVisionFilteredText

Details for filtered input text.
Fields
category

enum

Filtered category

Enum type. Can be one of the following:
RAI_CATEGORY_UNSPECIFIED (No description provided)
OBSCENE (No description provided)
SEXUALLY_EXPLICIT Porn
IDENTITY_ATTACK Hate
VIOLENCE_ABUSE (No description provided)
CSAI (No description provided)
SPII (No description provided)
CELEBRITY (No description provided)
FACE_IMG (No description provided)
WATERMARK_IMG (No description provided)
MEMORIZATION_IMG (No description provided)
CSAI_IMG (No description provided)
PORN_IMG (No description provided)
VIOLENCE_IMG (No description provided)
CHILD_IMG (No description provided)
TOXIC (No description provided)
SENSITIVE_WORD (No description provided)
PERSON_IMG (No description provided)
ICA_IMG (No description provided)
SEXUAL_IMG (No description provided)
IU_IMG (No description provided)
RACY_IMG (No description provided)
PEDO_IMG (No description provided)
DEATH_HARM_TRAGEDY SafetyAttributes returned but not filtered on
HEALTH (No description provided)
FIREARMS_WEAPONS (No description provided)
RELIGIOUS_BELIEF (No description provided)
ILLICIT_DRUGS (No description provided)
WAR_CONFLICT (No description provided)
POLITICS (No description provided)
HATE_SYMBOL_IMG (No description provided)
CHILD_TEXT (No description provided)
DANGEROUS_CONTENT Text category from SafetyCat v3
RECITATION_TEXT (No description provided)
CELEBRITY_IMG (No description provided)
WATERMARK_IMG_REMOVAL Error message when user attempts to remove watermark from editing image
confidence

enum

Confidence level

Enum type. Can be one of the following:
CONFIDENCE_UNSPECIFIED (No description provided)
CONFIDENCE_LOW (No description provided)
CONFIDENCE_MEDIUM (No description provided)
CONFIDENCE_HIGH (No description provided)
prompt

string

Input prompt

score

number (double format)

Score for category
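Taken together, the fields above describe why an input prompt was filtered. A minimal sketch of reading such a payload (the values below are hypothetical, not from a real response):

```python
# Hypothetical CloudAiLargeModelsVisionFilteredText payload; field
# names come from this reference, values are made up for illustration.
filtered_text = {
    "category": "SEXUALLY_EXPLICIT",
    "confidence": "CONFIDENCE_HIGH",
    "prompt": "the original input prompt",
    "score": 0.97,
}

def describe_filtering(ft: dict) -> str:
    """Summarize the filtering decision in one line."""
    return (f"prompt filtered as {ft['category']} "
            f"({ft['confidence']}, score={ft['score']:.2f})")

print(describe_filtering(filtered_text))
```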

CloudAiLargeModelsVisionGenerateVideoResponse

Generate video response.
Fields
generatedSamples[]

object (CloudAiLargeModelsVisionMedia)

The generated samples.

raiErrorMessage

string

The RAI error message for filtered videos.

raiMediaFilteredCount

integer (int32 format)

The number of videos filtered due to RAI policies.

raiMediaFilteredReasons[]

string

The RAI failure reasons, if any.

raiTextFilteredReason

object (CloudAiLargeModelsVisionFilteredText)

RAI info for the filtered input text.
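The RAI fields of this response can be inspected together to report what was filtered. A minimal sketch, using field names from this reference with made-up values:

```python
# Hypothetical GenerateVideoResponse payload (values are illustrative).
response = {
    "generatedSamples": [{"video": {"uri": "gs://bucket/sample0.mp4"}}],
    "raiMediaFilteredCount": 1,
    "raiMediaFilteredReasons": ["Violence detected"],
}

def summarize_rai_filtering(resp: dict) -> str:
    """Report how many videos were filtered and why."""
    count = resp.get("raiMediaFilteredCount", 0)
    if count == 0:
        return "no media filtered"
    reasons = "; ".join(resp.get("raiMediaFilteredReasons", []))
    return f"{count} video(s) filtered: {reasons}"

print(summarize_rai_filtering(response))
```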

CloudAiLargeModelsVisionImage

Image.
Fields
encoding

string

Image encoding, encoded as "image/png" or "image/jpg".

image

string (bytes format)

Raw bytes.

imageRaiScores

object (CloudAiLargeModelsVisionImageRAIScores)

RAI scores for generated image.

raiInfo

object (CloudAiLargeModelsVisionRaiInfo)

RAI info for image.

semanticFilterResponse

object (CloudAiLargeModelsVisionSemanticFilterResponse)

Semantic filter info for image.

text

string

Text or expanded text input for Imagen.

uri

string

Path to another storage (typically Google Cloud Storage).

CloudAiLargeModelsVisionImageRAIScores

RAI scores for generated image returned.
Fields
agileWatermarkDetectionScore

number (double format)

Agile watermark score for image.

CloudAiLargeModelsVisionMedia

Media.
Fields
image

object (CloudAiLargeModelsVisionImage)

Image.

video

object (CloudAiLargeModelsVisionVideo)

Video.

CloudAiLargeModelsVisionNamedBoundingBox

(No description provided)
Fields
classes[]

string

(No description provided)

entities[]

string

(No description provided)

scores[]

number (float format)

(No description provided)

x1

number (float format)

(No description provided)

x2

number (float format)

(No description provided)

y1

number (float format)

(No description provided)

y2

number (float format)

(No description provided)
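The box is described by two corner coordinates plus parallel classes[], entities[], and scores[] arrays. A small sketch of working with such a payload; the reference does not state the coordinate convention, so this assumes (x1, y1) is the top-left corner and (x2, y2) the bottom-right:

```python
# Hypothetical NamedBoundingBox payload (values are illustrative).
box = {
    "classes": ["person"],
    "entities": ["person"],
    "scores": [0.91],
    "x1": 10.0, "y1": 20.0, "x2": 110.0, "y2": 220.0,
}

def box_area(b: dict) -> float:
    """Area under the assumed top-left/bottom-right convention."""
    return max(0.0, b["x2"] - b["x1"]) * max(0.0, b["y2"] - b["y1"])

def top_label(b: dict) -> tuple:
    """Pair each class with its score and return the highest-scoring one."""
    return max(zip(b["classes"], b["scores"]), key=lambda cs: cs[1])
```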

CloudAiLargeModelsVisionRaiInfo

(No description provided)
Fields
raiCategories[]

string

List of RAI categories whose information is returned.

scores[]

number (float format)

List of RAI scores corresponding to the RAI categories, rounded to 1 decimal place.

CloudAiLargeModelsVisionSemanticFilterResponse

(No description provided)
Fields
namedBoundingBoxes[]

object (CloudAiLargeModelsVisionNamedBoundingBox)

Class labels and bounding box coordinates for the regions that failed the semantic filter.

passedSemanticFilter

boolean

This response is added when the semantic filter config is turned on in EditConfig. It reports whether this image passed the semantic filter. If passed_semantic_filter is false, the bounding box information is populated so the user can check what caused the semantic filter to fail.
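In other words, the bounding boxes only need to be examined when the filter fails. A minimal sketch of that check (field names from this reference; the payload values are hypothetical):

```python
# When passedSemanticFilter is False, the named bounding boxes explain
# which regions triggered the filter.
def failed_regions(resp: dict) -> list:
    """Collect class labels of regions that failed semantic filtering."""
    if resp.get("passedSemanticFilter", True):
        return []
    return [cls
            for bbox in resp.get("namedBoundingBoxes", [])
            for cls in bbox.get("classes", [])]

resp = {
    "passedSemanticFilter": False,
    "namedBoundingBoxes": [{"classes": ["face"], "scores": [0.8]}],
}
print(failed_regions(resp))  # ['face']
```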

CloudAiLargeModelsVisionVideo

Video.
Fields
uri

string

Path to another storage (typically Google Cloud Storage).

video

string (bytes format)

Raw bytes.

GoogleApiHttpBody

Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also wants access to the raw HTTP body.

Example:

message GetResourceRequest {
  // A unique request id.
  string request_id = 1;
  // The raw HTTP body is bound to this field.
  google.api.HttpBody http_body = 2;
}

service ResourceService {
  rpc GetResource(GetResourceRequest) returns (google.api.HttpBody);
  rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty);
}

Example with streaming methods:

service CaldavService {
  rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody);
  rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody);
}

Use of this type only changes how the request and response bodies are handled; all other features will continue to work unchanged.
Fields
contentType

string

The HTTP Content-Type header value specifying the content type of the body.

data

string (bytes format)

The HTTP request/response body as raw binary.

extensions[]

object

Application specific response metadata. Must be set in the first response for streaming APIs.
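In the JSON mapping of protobuf, a bytes field such as data is carried as a base64-encoded string. A minimal sketch of constructing an HttpBody-shaped JSON payload under that convention (the binary content is a placeholder):

```python
import base64

# Placeholder binary payload; the field names below come from this
# reference, the content is made up.
raw = b"\x89PNG\r\n\x1a\n..."
http_body = {
    "contentType": "image/png",
    "data": base64.b64encode(raw).decode("ascii"),
    "extensions": [],
}

# Decoding the data field recovers the original bytes.
assert base64.b64decode(http_body["data"]) == raw
```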

GoogleCloudAiplatformV1ActiveLearningConfig

Parameters that configure the active learning pipeline. Active learning labels the data incrementally over several iterations. For every iteration, it selects a batch of data based on the sampling strategy.
Fields
maxDataItemCount

string (int64 format)

Max number of human labeled DataItems.

maxDataItemPercentage

integer (int32 format)

Max percent of total DataItems for human labeling.

sampleConfig

object (GoogleCloudAiplatformV1SampleConfig)

Active learning data sampling config. For every active learning labeling iteration, it will select a batch of data based on the sampling strategy.

trainingConfig

object (GoogleCloudAiplatformV1TrainingConfig)

CMLE training config. For every active learning labeling iteration, system will train a machine learning model on CMLE. The trained model will be used by data sampling algorithm to select DataItems.
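Combining the fields above, a config payload might look like the following sketch. Note that maxDataItemCount is an int64 and therefore a string in JSON, while maxDataItemPercentage is a plain int32; the inner field names of sampleConfig and trainingConfig are not listed in this section and are assumptions here:

```python
# Hypothetical ActiveLearningConfig payload; inner sampleConfig and
# trainingConfig field names are assumed, not taken from this section.
active_learning_config = {
    "maxDataItemCount": "10000",  # int64 -> JSON string
    "sampleConfig": {
        "sampleStrategy": "UNCERTAINTY",          # assumed field/value
        "initialBatchSamplePercentage": 10,       # assumed field
    },
    "trainingConfig": {"timeoutTrainingMilliHours": "24000"},  # assumed
}

# int64 fields must be parsed back from strings on the client side.
budget = int(active_learning_config["maxDataItemCount"])
print(budget)
```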

GoogleCloudAiplatformV1AddContextArtifactsAndExecutionsRequest

Request message for MetadataService.AddContextArtifactsAndExecutions.
Fields
artifacts[]

string

The resource names of the Artifacts to attribute to the Context. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}

executions[]

string

The resource names of the Executions to associate with the Context. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}

GoogleCloudAiplatformV1AddContextChildrenRequest

Request message for MetadataService.AddContextChildren.
Fields
childContexts[]

string

The resource names of the child Contexts.

GoogleCloudAiplatformV1AddExecutionEventsRequest

Request message for MetadataService.AddExecutionEvents.
Fields
events[]

object (GoogleCloudAiplatformV1Event)

The Events to create and add.

GoogleCloudAiplatformV1AddTrialMeasurementRequest

Request message for VizierService.AddTrialMeasurement.
Fields
measurement

object (GoogleCloudAiplatformV1Measurement)

Required. The measurement to be added to a Trial.

GoogleCloudAiplatformV1Annotation

Used to assign a specific AnnotationSpec to a particular area of a DataItem, or to the DataItem as a whole.
Fields
annotationSource

object (GoogleCloudAiplatformV1UserActionReference)

Output only. The source of the Annotation.

createTime

string (Timestamp format)

Output only. Timestamp when this Annotation was created.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

Optional. The labels with user-defined metadata to organize your Annotations. Label keys and values can be no longer than 64 characters (Unicode codepoints), and can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Annotation (system labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. The following system labels exist for each Annotation: * "aiplatform.googleapis.com/annotation_set_name": optional, name of the UI's annotation set this Annotation belongs to. If not set, the Annotation is not visible in the UI. * "aiplatform.googleapis.com/payload_schema": output only, its value is the payload_schema's title.

name

string

Output only. Resource name of the Annotation.

payload

any

Required. The schema of the payload can be found in payload_schema.

payloadSchemaUri

string

Required. Google Cloud Storage URI that points to a YAML file describing the payload. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/; note that the chosen schema must be consistent with the parent Dataset's metadata.

updateTime

string (Timestamp format)

Output only. Timestamp when this Annotation was last updated.
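The label constraints described above (at most 64 user labels, keys and values no longer than 64 characters, reserved "aiplatform.googleapis.com/" keys) can be checked client-side before sending a request. A simplified sketch; the character-set rule is deliberately not enforced here:

```python
RESERVED_PREFIX = "aiplatform.googleapis.com/"

def validate_labels(labels: dict) -> list:
    """Return a list of constraint violations (empty if the labels
    satisfy the simplified checks)."""
    problems = []
    if len(labels) > 64:
        problems.append("more than 64 labels")
    for k, v in labels.items():
        if len(k) > 64 or len(v) > 64:
            problems.append(f"label too long: {k}")
        if k.startswith(RESERVED_PREFIX):
            problems.append(f"reserved key: {k}")
    return problems
```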

GoogleCloudAiplatformV1AnnotationSpec

Identifies a concept with which DataItems may be annotated.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this AnnotationSpec was created.

displayName

string

Required. The user-defined name of the AnnotationSpec. The name can be up to 128 characters long and can consist of any UTF-8 characters.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

name

string

Output only. Resource name of the AnnotationSpec.

updateTime

string (Timestamp format)

Output only. Timestamp when AnnotationSpec was last updated.

GoogleCloudAiplatformV1Artifact

Instance of a general artifact.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this Artifact was created.

description

string

Description of the Artifact

displayName

string

User provided display name of the Artifact. May be up to 128 Unicode characters.

etag

string

An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your Artifacts. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Artifact (System labels are excluded).

metadata

map (key: string, value: any)

Properties of the Artifact. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.

name

string

Output only. The resource name of the Artifact.

schemaTitle

string

The title of the schema describing the metadata. The schema title and version are expected to have been registered in earlier Create Schema calls, and together they are used as a unique identifier to identify schemas within the local metadata store.

schemaVersion

string

The version of the schema in schema_name to use. The schema title and version are expected to have been registered in earlier Create Schema calls, and together they are used as a unique identifier to identify schemas within the local metadata store.

state

enum

The state of this Artifact. This is a property of the Artifact, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines), and the system does not prescribe or check the validity of state transitions.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Unspecified state for the Artifact.
PENDING A state used by systems like Vertex AI Pipelines to indicate that the underlying data item represented by this Artifact is being created.
LIVE A state indicating that the Artifact should exist, unless something external to the system deletes it.
updateTime

string (Timestamp format)

Output only. Timestamp when this Artifact was last updated.

uri

string

The uniform resource identifier of the artifact file. May be empty if there is no actual artifact file.

GoogleCloudAiplatformV1AssignNotebookRuntimeOperationMetadata

Metadata information for NotebookService.AssignNotebookRuntime.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

progressMessage

string

A human-readable message that shows the intermediate progress details of NotebookRuntime.

GoogleCloudAiplatformV1AssignNotebookRuntimeRequest

Request message for NotebookService.AssignNotebookRuntime.
Fields
notebookRuntime

object (GoogleCloudAiplatformV1NotebookRuntime)

Required. Runtime-specific information (e.g. runtime owner, notebook id) used for the NotebookRuntime assignment.

notebookRuntimeId

string

Optional. User specified ID for the notebook runtime.

notebookRuntimeTemplate

string

Required. The resource name of the NotebookRuntimeTemplate based on which a NotebookRuntime will be assigned (reuse or create a new one).

GoogleCloudAiplatformV1Attribution

Attribution that explains a particular prediction output.
Fields
approximationError

number (double format)

Output only. Error of feature_attributions caused by approximation used in the explanation method. Lower value means more precise attributions. * For Sampled Shapley attribution, increasing path_count might reduce the error. * For Integrated Gradients attribution, increasing step_count might reduce the error. * For XRAI attribution, increasing step_count might reduce the error. See this introduction for more information.

baselineOutputValue

number (double format)

Output only. Model predicted output if the input instance is constructed from the baselines of all the features defined in ExplanationMetadata.inputs. The field name of the output is determined by the key in ExplanationMetadata.outputs. If the Model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by output_index. If there are multiple baselines, their output values are averaged.

featureAttributions

any

Output only. Attributions of each explained feature. Features are extracted from the prediction instances according to explanation metadata for inputs. The value is a struct, whose keys are the name of the feature. The values are how much the feature in the instance contributed to the predicted result. The format of the value is determined by the feature's input format: * If the feature is a scalar value, the attribution value is a floating number. * If the feature is an array of scalar values, the attribution value is an array. * If the feature is a struct, the attribution value is a struct. The keys in the attribution value struct are the same as the keys in the feature struct. The formats of the values in the attribution struct are determined by the formats of the values in the feature struct. The ExplanationMetadata.feature_attributions_schema_uri field, pointed to by the ExplanationSpec field of the Endpoint.deployed_models object, points to the schema file that describes the features and their attribution values (if it is populated).

instanceOutputValue

number (double format)

Output only. Model predicted output on the corresponding explanation instance. The field name of the output is determined by the key in ExplanationMetadata.outputs. If the Model predicted output has multiple dimensions, this is the value in the output located by output_index.

outputDisplayName

string

Output only. The display name of the output identified by output_index. For example, the predicted class name by a multi-classification Model. This field is populated only if the Model predicts display names as a separate field along with the explained output. The predicted display name must have the same shape as the explained output, and can be located using output_index.

outputIndex[]

integer (int32 format)

Output only. The index that locates the explained prediction output. If the prediction output is a scalar value, output_index is not populated. If the prediction output has multiple dimensions, the length of the output_index list is the same as the number of dimensions of the output. The i-th element in output_index is the element index of the i-th dimension of the output vector. Indices start from 0.

outputName

string

Output only. Name of the explain output. Specified as the key in ExplanationMetadata.outputs.
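The outputIndex semantics above (one list element per output dimension) amount to indexing into a nested output one dimension at a time. A minimal sketch with a hypothetical rank-2 output:

```python
def locate_output(output, output_index):
    """Follow output_index one dimension at a time, as described for
    Attribution.outputIndex: the i-th entry indexes the i-th dimension."""
    value = output
    for i in output_index:
        value = value[i]
    return value

# Hypothetical rank-2 prediction output; locate element [1][0].
scores = [[0.1, 0.9], [0.7, 0.3]]
print(locate_output(scores, [1, 0]))  # 0.7
```

For a scalar prediction output, output_index is not populated, so the empty index simply returns the output unchanged.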

GoogleCloudAiplatformV1AutomaticResources

A description of resources that are to a large degree decided by Vertex AI, and that require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines.
Fields
maxReplicaCount

integer (int32 format)

Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica number.

minReplicaCount

integer (int32 format)

Immutable. The minimum number of replicas on which this DeployedModel will always be deployed. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.

GoogleCloudAiplatformV1AutoscalingMetricSpec

The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
Fields
metricName

string

Required. The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization

target

integer (int32 format)

The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
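A spec payload combining the two fields might look like the sketch below; per the description above, the target defaults to 60 (%) when omitted:

```python
# Hypothetical AutoscalingMetricSpec payload; the metric name is one
# of the supported names listed in this reference.
spec = {
    "metricName": "aiplatform.googleapis.com/prediction/online/cpu/utilization",
    "target": 70,
}

def effective_target(s: dict) -> int:
    """Apply the documented default of 60% when target is unset."""
    return s.get("target", 60)

assert effective_target(spec) == 70
assert effective_target({"metricName": spec["metricName"]}) == 60
```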

GoogleCloudAiplatformV1AvroSource

The storage details for Avro input content.
Fields
gcsSource

object (GoogleCloudAiplatformV1GcsSource)

Required. Google Cloud Storage location.

GoogleCloudAiplatformV1BatchCancelPipelineJobsRequest

Request message for PipelineService.BatchCancelPipelineJobs.
Fields
names[]

string

Required. The names of the PipelineJobs to cancel. A maximum of 32 PipelineJobs can be cancelled in a batch. Format: projects/{project}/locations/{location}/pipelineJobs/{pipelineJob}

GoogleCloudAiplatformV1BatchCreateFeaturesOperationMetadata

Details of operations that perform batch create Features.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Feature.

GoogleCloudAiplatformV1BatchCreateFeaturesRequest

Request message for FeaturestoreService.BatchCreateFeatures.
Fields
requests[]

object (GoogleCloudAiplatformV1CreateFeatureRequest)

Required. The request message specifying the Features to create. All Features must be created under the same parent EntityType. The parent field in each child request message can be omitted. If parent is set in a child request, then the value must match the parent value in this request message.

GoogleCloudAiplatformV1BatchCreateFeaturesResponse

Response message for FeaturestoreService.BatchCreateFeatures.
Fields
features[]

object (GoogleCloudAiplatformV1Feature)

The Features created.

GoogleCloudAiplatformV1BatchCreateTensorboardRunsRequest

Request message for TensorboardService.BatchCreateTensorboardRuns.
Fields
requests[]

object (GoogleCloudAiplatformV1CreateTensorboardRunRequest)

Required. The request message specifying the TensorboardRuns to create. A maximum of 1000 TensorboardRuns can be created in a batch.

GoogleCloudAiplatformV1BatchCreateTensorboardRunsResponse

Response message for TensorboardService.BatchCreateTensorboardRuns.
Fields
tensorboardRuns[]

object (GoogleCloudAiplatformV1TensorboardRun)

The created TensorboardRuns.

GoogleCloudAiplatformV1BatchCreateTensorboardTimeSeriesRequest

Request message for TensorboardService.BatchCreateTensorboardTimeSeries.
Fields
requests[]

object (GoogleCloudAiplatformV1CreateTensorboardTimeSeriesRequest)

Required. The request message specifying the TensorboardTimeSeries to create. A maximum of 1000 TensorboardTimeSeries can be created in a batch.

GoogleCloudAiplatformV1BatchCreateTensorboardTimeSeriesResponse

Response message for TensorboardService.BatchCreateTensorboardTimeSeries.
Fields
tensorboardTimeSeries[]

object (GoogleCloudAiplatformV1TensorboardTimeSeries)

The created TensorboardTimeSeries.

GoogleCloudAiplatformV1BatchDedicatedResources

A description of resources that are used for performing batch operations, are dedicated to a Model, and need manual configuration.
Fields
machineSpec

object (GoogleCloudAiplatformV1MachineSpec)

Required. Immutable. The specification of a single machine.

maxReplicaCount

integer (int32 format)

Immutable. The maximum number of machine replicas the batch operation may be scaled to. The default value is 10.

startingReplicaCount

integer (int32 format)

Immutable. The number of machine replicas used at the start of the batch operation. If not set, Vertex AI decides the starting number, not greater than max_replica_count.

GoogleCloudAiplatformV1BatchDeletePipelineJobsRequest

Request message for PipelineService.BatchDeletePipelineJobs.
Fields
names[]

string

Required. The names of the PipelineJobs to delete. A maximum of 32 PipelineJobs can be deleted in a batch. Format: projects/{project}/locations/{location}/pipelineJobs/{pipelineJob}

GoogleCloudAiplatformV1BatchImportEvaluatedAnnotationsRequest

Request message for ModelService.BatchImportEvaluatedAnnotations
Fields
evaluatedAnnotations[]

object (GoogleCloudAiplatformV1EvaluatedAnnotation)

Required. Evaluated annotations resource to be imported.

GoogleCloudAiplatformV1BatchImportEvaluatedAnnotationsResponse

Response message for ModelService.BatchImportEvaluatedAnnotations
Fields
importedEvaluatedAnnotationsCount

integer (int32 format)

Output only. Number of EvaluatedAnnotations imported.

GoogleCloudAiplatformV1BatchImportModelEvaluationSlicesRequest

Request message for ModelService.BatchImportModelEvaluationSlices
Fields
modelEvaluationSlices[]

object (GoogleCloudAiplatformV1ModelEvaluationSlice)

Required. Model evaluation slice resource to be imported.

GoogleCloudAiplatformV1BatchImportModelEvaluationSlicesResponse

Response message for ModelService.BatchImportModelEvaluationSlices
Fields
importedModelEvaluationSlices[]

string

Output only. List of imported ModelEvaluationSlice.name.

GoogleCloudAiplatformV1BatchMigrateResourcesOperationMetadata

Runtime operation information for MigrationService.BatchMigrateResources.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The common part of the operation metadata.

partialResults[]

object (GoogleCloudAiplatformV1BatchMigrateResourcesOperationMetadataPartialResult)

Partial results that reflect the latest migration operation progress.

GoogleCloudAiplatformV1BatchMigrateResourcesOperationMetadataPartialResult

Represents a partial result in batch migration operation for one MigrateResourceRequest.
Fields
dataset

string

Migrated dataset resource name.

error

object (GoogleRpcStatus)

The error result of the migration request in case of failure.

model

string

Migrated model resource name.

request

object (GoogleCloudAiplatformV1MigrateResourceRequest)

It's the same as the value in MigrateResourceRequest.migrate_resource_requests.

GoogleCloudAiplatformV1BatchMigrateResourcesRequest

Request message for MigrationService.BatchMigrateResources.
Fields
migrateResourceRequests[]

object (GoogleCloudAiplatformV1MigrateResourceRequest)

Required. The request messages specifying the resources to migrate. They must be in the same location as the destination. Up to 50 resources can be migrated in one batch.

GoogleCloudAiplatformV1BatchMigrateResourcesResponse

Response message for MigrationService.BatchMigrateResources.
Fields
migrateResourceResponses[]

object (GoogleCloudAiplatformV1MigrateResourceResponse)

Successfully migrated resources.

GoogleCloudAiplatformV1BatchPredictionJob

A job that uses a Model to produce predictions on multiple input instances. If predictions for significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.
Fields
completionStats

object (GoogleCloudAiplatformV1CompletionStats)

Output only. Statistics on completed and failed prediction instances.

createTime

string (Timestamp format)

Output only. Time when the BatchPredictionJob was created.

dedicatedResources

object (GoogleCloudAiplatformV1BatchDedicatedResources)

The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES, this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided.

disableContainerLogging

boolean

For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances sends stderr and stdout streams to Cloud Logging by default. Please note that the logs incur cost, which is subject to Cloud Logging pricing. Users can disable container logging by setting this flag to true.

displayName

string

Required. The user-defined name of this BatchPredictionJob.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key.

endTime

string (Timestamp format)

Output only. Time when the BatchPredictionJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

error

object (GoogleRpcStatus)

Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

explanationSpec

object (GoogleCloudAiplatformV1ExplanationSpec)

Explanation configuration for this BatchPredictionJob. Can be specified only if generate_explanation is set to true. This value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of the explanation_spec object is not populated, the corresponding field of the Model.explanation_spec object is inherited.

generateExplanation

boolean

Generate explanation with the batch prediction results. When set to true, the batch prediction output changes based on the predictions_format field of the BatchPredictionJob.output_config object: * bigquery: output includes a column named explanation. The value is a struct that conforms to the Explanation object. * jsonl: The JSON objects on each line include an additional entry keyed explanation. The value of the entry is a JSON object that conforms to the Explanation object. * csv: Generating explanations for CSV format is not supported. If this field is set to true, either the Model.explanation_spec or explanation_spec must be populated.
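For the jsonl case described above, each output line is a JSON object with an extra explanation entry. A minimal sketch of reading one such line (the line content is hypothetical):

```python
import json

# A hypothetical jsonl output line from a BatchPredictionJob with
# generate_explanation set to true; the values are made up.
line = '{"prediction": 0.83, "explanation": {"attributions": []}}'
record = json.loads(line)

# The explanation entry sits alongside the prediction itself.
has_explanation = "explanation" in record
print(has_explanation)
```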

inputConfig

object (GoogleCloudAiplatformV1BatchPredictionJobInputConfig)

Required. Input configuration of the instances on which predictions are performed. The schema of any single instance may be specified via the Model's PredictSchemata's instance_schema_uri.

instanceConfig

object (GoogleCloudAiplatformV1BatchPredictionJobInstanceConfig)

Configuration for how to convert batch prediction input instances to the prediction instances that are sent to the Model.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize BatchPredictionJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

manualBatchTuningParameters

object (GoogleCloudAiplatformV1ManualBatchTuningParameters)

Immutable. Parameters configuring the batch behavior. Currently only applicable when dedicated_resources are used (in other cases Vertex AI does the tuning itself).

model

string

The name of the Model resource that produces the predictions via this job, must share the same ancestor Location. Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set. The model resource name may contain version id or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden if no version is specified, the default version will be deployed. The model resource could also be a publisher model. Example: publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/publishers/{publisher}/models/{model}

modelParameters

any

The parameters that govern the predictions. The schema of the parameters may be specified via the Model's PredictSchemata's parameters_schema_uri.

modelVersionId

string

Output only. The version ID of the Model that produces the predictions via this job.

name

string

Output only. Resource name of the BatchPredictionJob.

outputConfig

object (GoogleCloudAiplatformV1BatchPredictionJobOutputConfig)

Required. The configuration specifying where output predictions should be written. The schema of any single prediction may be specified as a concatenation of the Model's PredictSchemata's instance_schema_uri and prediction_schema_uri.

outputInfo

object (GoogleCloudAiplatformV1BatchPredictionJobOutputInfo)

Output only. Information further describing the output of this job.

partialFailures[]

object (GoogleRpcStatus)

Output only. Partial failures encountered. For example, single files that can't be read. This field never exceeds 20 entries. Status details fields contain standard Google Cloud error details.

resourcesConsumed

object (GoogleCloudAiplatformV1ResourcesConsumed)

Output only. Information about resources that have been consumed by this job. Provided in real time on a best-effort basis, as well as a final value once the job completes. Note: This field currently may not be populated for batch predictions that use AutoML Models.

serviceAccount

string

The service account that the DeployedModel's container runs as. If not specified, a system generated one will be used, which has minimal permissions and the custom container, if used, may not have enough permission to access other Google Cloud resources. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

startTime

string (Timestamp format)

Output only. Time when the BatchPredictionJob for the first time entered the JOB_STATE_RUNNING state.

state

enum

Output only. The detailed state of the job.

Enum type. Can be one of the following:
JOB_STATE_UNSPECIFIED The job state is unspecified.
JOB_STATE_QUEUED The job has been just created or resumed and processing has not yet begun.
JOB_STATE_PENDING The service is preparing to run the job.
JOB_STATE_RUNNING The job is in progress.
JOB_STATE_SUCCEEDED The job completed successfully.
JOB_STATE_FAILED The job failed.
JOB_STATE_CANCELLING The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED, JOB_STATE_FAILED or JOB_STATE_CANCELLED.
JOB_STATE_CANCELLED The job has been cancelled.
JOB_STATE_PAUSED The job has been stopped, and can be resumed.
JOB_STATE_EXPIRED The job has expired.
JOB_STATE_UPDATING The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state.
JOB_STATE_PARTIALLY_SUCCEEDED The job has partially succeeded; some results may be missing due to errors.
unmanagedContainerModel

object (GoogleCloudAiplatformV1UnmanagedContainerModel)

Contains model information necessary to perform batch prediction without requiring uploading to model registry. Exactly one of model and unmanaged_container_model must be set.

updateTime

string (Timestamp format)

Output only. Time when the BatchPredictionJob was most recently updated.

GoogleCloudAiplatformV1BatchPredictionJobInputConfig

Configures the input to BatchPredictionJob. See Model.supported_input_storage_formats for Model's supported input formats, and how instances should be expressed via any of them.
Fields
bigquerySource

object (GoogleCloudAiplatformV1BigQuerySource)

The BigQuery location of the input table. The schema of the table should be in the format described by the given context OpenAPI Schema, if one is provided. The table may contain additional columns that are not described by the schema, and they will be ignored.

gcsSource

object (GoogleCloudAiplatformV1GcsSource)

The Cloud Storage location for the input instances.

instancesFormat

string

Required. The format in which instances are given, must be one of the Model's supported_input_storage_formats.
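Putting the fields above together, an input config for a Cloud Storage JSONL source might look like the following sketch (the bucket and file names are placeholders, and the chosen format must appear in the Model's supported_input_storage_formats):

```python
# Hypothetical BatchPredictionJobInputConfig as a JSON-shaped dict.
# instancesFormat must be one of the Model's supported input formats;
# the gs:// path is a placeholder, not a real bucket.
input_config = {
    "instancesFormat": "jsonl",
    "gcsSource": {
        "uris": ["gs://example-bucket/batch-inputs/instances.jsonl"],
    },
}

# Exactly one source (gcsSource or bigquerySource) should be set.
assert ("gcsSource" in input_config) != ("bigquerySource" in input_config)
```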

GoogleCloudAiplatformV1BatchPredictionJobInstanceConfig

Configuration defining how to transform batch prediction input instances to the instances that the Model accepts.
Fields
excludedFields[]

string

Fields that will be excluded in the prediction instance that is sent to the Model. The excluded fields will be attached to the batch prediction output if key_field is not specified. When excluded_fields is populated, included_fields must be empty. The input must be JSONL with objects at each line, BigQuery or TfRecord.

includedFields[]

string

Fields that will be included in the prediction instance that is sent to the Model. If instance_type is array, the order of field names in included_fields also determines the order of the values in the array. When included_fields is populated, excluded_fields must be empty. The input must be JSONL with objects at each line, BigQuery or TfRecord.

instanceType

string

The format of the instance that the Model accepts. Vertex AI will convert compatible batch prediction input instance formats to the specified format. Supported values are: * object: Each input is converted to JSON object format. * For bigquery, each row is converted to an object. * For jsonl, each line of the JSONL input must be an object. * Does not apply to csv, file-list, tf-record, or tf-record-gzip. * array: Each input is converted to JSON array format. * For bigquery, each row is converted to an array. The order of columns is determined by the BigQuery column order, unless included_fields is populated. included_fields must be populated for specifying field orders. * For jsonl, if each line of the JSONL input is an object, included_fields must be populated for specifying field orders. * Does not apply to csv, file-list, tf-record, or tf-record-gzip. If not specified, Vertex AI converts the batch prediction input as follows: * For bigquery and csv, the behavior is the same as array. The order of columns is the same as defined in the file or table, unless included_fields is populated. * For jsonl, the prediction instance format is determined by each line of the input. * For tf-record/tf-record-gzip, each record will be converted to an object in the format of {"b64": <value>}, where <value> is the Base64-encoded string of the content of the record. * For file-list, each file in the list will be converted to an object in the format of {"b64": <value>}, where <value> is the Base64-encoded string of the content of the file.

keyField

string

The name of the field that is considered as a key. The values identified by the key field are not included in the transformed instances that are sent to the Model. This is similar to specifying the name of the field in excluded_fields. In addition, the batch prediction output will not include the instances. Instead the output will only include the value of the key field, in a field named key in the output: * For jsonl output format, the output will have a key field instead of the instance field. * For csv/bigquery output format, the output will have a key column instead of the instance feature columns. The input must be JSONL with objects at each line, CSV, BigQuery or TfRecord.
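The included_fields/excluded_fields and key_field behavior above can be sketched as a JSON-shaped config; the field names here ("title", "body", "id") are hypothetical instance fields, not part of the API:

```python
# Hypothetical BatchPredictionJobInstanceConfig: send only two fields to
# the model, and key the output rows by "id" instead of echoing the full
# instance back.
instance_config = {
    "instanceType": "object",
    "includedFields": ["title", "body"],  # order matters for instanceType "array"
    "keyField": "id",
}

# included_fields and excluded_fields are mutually exclusive.
assert not ("includedFields" in instance_config and "excludedFields" in instance_config)
```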

GoogleCloudAiplatformV1BatchPredictionJobOutputConfig

Configures the output of BatchPredictionJob. See Model.supported_output_storage_formats for supported output formats, and how predictions are expressed via any of them.
Fields
bigqueryDestination

object (GoogleCloudAiplatformV1BigQueryDestination)

The BigQuery project or dataset location where the output is to be written to. If project is provided, a new dataset is created with name prediction_<model-display-name>_<job-create-time>, where <model-display-name> is made BigQuery-dataset-name compatible (for example, most special characters become underscores), and the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In the dataset two tables will be created, predictions and errors. If the Model has both instance and prediction schemata defined then the tables have columns as follows: The predictions table contains instances for which the prediction succeeded; it has columns as per a concatenation of the Model's instance and prediction schemata. The errors table contains rows for which the prediction has failed; it has instance columns, as per the instance schema, followed by a single "errors" column, which as values has google.rpc.Status represented as a STRUCT, containing only code and message.

gcsDestination

object (GoogleCloudAiplatformV1GcsDestination)

The Cloud Storage location of the directory where the output is to be written to. In the given directory a new directory is created. Its name is prediction-<model-display-name>-<job-create-time>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. Inside of it files predictions_0001.<extension>, predictions_0002.<extension>, ..., predictions_N.<extension> are created, where <extension> depends on chosen predictions_format, and N may equal 0001 and depends on the total number of successfully predicted instances. If the Model has both instance and prediction schemata defined then each such file contains predictions as per the predictions_format. If prediction for any instance failed (partially or completely), then additional errors_0001.<extension>, errors_0002.<extension>, ..., errors_N.<extension> files are created (N depends on total number of failed predictions). These files contain the failed instances, as per their schema, followed by an additional error field which as value has google.rpc.Status containing only code and message fields.

predictionsFormat

string

Required. The format in which Vertex AI gives the predictions, must be one of the Model's supported_output_storage_formats.
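A minimal output config matching the fields above, writing to a BigQuery project (the project ID is a placeholder; per the bigqueryDestination description, the service then creates the dataset and the predictions/errors tables):

```python
# Hypothetical BatchPredictionJobOutputConfig as a JSON-shaped dict.
# predictionsFormat must be one of the Model's supported output formats.
output_config = {
    "predictionsFormat": "bigquery",
    "bigqueryDestination": {"outputUri": "bq://example-project"},
}

# Exactly one destination (bigqueryDestination or gcsDestination) is set.
assert ("bigqueryDestination" in output_config) != ("gcsDestination" in output_config)
```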

GoogleCloudAiplatformV1BatchPredictionJobOutputInfo

Further describes this job's output. Supplements output_config.
Fields
bigqueryOutputDataset

string

Output only. The path of the BigQuery dataset created, in bq://projectId.bqDatasetId format, into which the prediction output is written.

bigqueryOutputTable

string

Output only. The name of the BigQuery table created, in predictions_<timestamp> format, into which the prediction output is written. Can be used by UI to generate the BigQuery output path, for example.

gcsOutputDirectory

string

Output only. The full path of the Cloud Storage directory created, into which the prediction output is written.

GoogleCloudAiplatformV1BatchReadFeatureValuesOperationMetadata

Details of operations that batch reads Feature values.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Featurestore batch read Features values.

GoogleCloudAiplatformV1BatchReadFeatureValuesRequest

Request message for FeaturestoreService.BatchReadFeatureValues.
Fields
bigqueryReadInstances

object (GoogleCloudAiplatformV1BigQuerySource)

Similar to csv_read_instances, but from BigQuery source.

csvReadInstances

object (GoogleCloudAiplatformV1CsvSource)

Each read instance consists of exactly one read timestamp and one or more entity IDs identifying entities of the corresponding EntityTypes whose Features are requested. Each output instance contains Feature values of requested entities concatenated together as of the read time. An example read instance may be foo_entity_id, bar_entity_id, 2020-01-01T10:00:00.123Z. An example output instance may be foo_entity_id, bar_entity_id, 2020-01-01T10:00:00.123Z, foo_entity_feature1_value, bar_entity_feature2_value. Timestamp in each read instance must be millisecond-aligned. csv_read_instances are read instances stored in a plain-text CSV file. The header should be: [ENTITY_TYPE_ID1], [ENTITY_TYPE_ID2], ..., timestamp The columns can be in any order. Values in the timestamp column must use the RFC 3339 format, e.g. 2012-07-30T10:43:17.123Z.
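The read-instance layout above (entity ID columns plus a millisecond-aligned RFC 3339 timestamp column) can be checked with a short stdlib sketch; the entity values are hypothetical:

```python
import csv
import io
from datetime import datetime

# Hypothetical csv_read_instances content: two entity-type ID columns plus
# the required timestamp column. Columns may appear in any order.
raw = io.StringIO(
    "foo_entity_id,bar_entity_id,timestamp\n"
    "user_123,item_456,2020-01-01T10:00:00.123Z\n"
)
rows = list(csv.DictReader(raw))

# Timestamps must be RFC 3339 and millisecond-aligned. fromisoformat only
# accepts the "Z" suffix on Python 3.11+, so normalize it for portability.
ts = datetime.fromisoformat(rows[0]["timestamp"].replace("Z", "+00:00"))
assert ts.microsecond % 1000 == 0  # millisecond-aligned
```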

destination

object (GoogleCloudAiplatformV1FeatureValueDestination)

Required. Specifies output location and format.

entityTypeSpecs[]

object (GoogleCloudAiplatformV1BatchReadFeatureValuesRequestEntityTypeSpec)

Required. Specifies EntityType grouping Features to read values of and settings.

passThroughFields[]

object (GoogleCloudAiplatformV1BatchReadFeatureValuesRequestPassThroughField)

When not empty, the specified fields in the *_read_instances source will be joined as-is in the output, in addition to those fields from the Featurestore Entity. For BigQuery source, the type of the pass-through values will be automatically inferred. For CSV source, the pass-through values will be passed as opaque bytes.

startTime

string (Timestamp format)

Optional. Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision.

GoogleCloudAiplatformV1BatchReadFeatureValuesRequestEntityTypeSpec

Selects Features of an EntityType to read values of and specifies read settings.
Fields
entityTypeId

string

Required. ID of the EntityType to select Features. The EntityType id is the entity_type_id specified during EntityType creation.

featureSelector

object (GoogleCloudAiplatformV1FeatureSelector)

Required. Selectors choosing which Feature values to read from the EntityType.

settings[]

object (GoogleCloudAiplatformV1DestinationFeatureSetting)

Per-Feature settings for the batch read.

GoogleCloudAiplatformV1BatchReadFeatureValuesRequestPassThroughField

Describe pass-through fields in read_instance source.
Fields
fieldName

string

Required. The name of the field in the CSV header or the name of the column in BigQuery table. The naming restriction is the same as Feature.name.

GoogleCloudAiplatformV1BatchReadTensorboardTimeSeriesDataResponse

Response message for TensorboardService.BatchReadTensorboardTimeSeriesData.
Fields
timeSeriesData[]

object (GoogleCloudAiplatformV1TimeSeriesData)

The returned time series data.

GoogleCloudAiplatformV1BigQueryDestination

The BigQuery location for the output content.
Fields
outputUri

string

Required. BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table is created. When the full table reference is specified, the Dataset must exist and table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
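The three accepted URI forms above can be expressed as a simple pattern; this is an illustrative simplification, not the service's actual validator:

```python
import re

# Accepted forms: bq://project, bq://project.dataset, bq://project.dataset.table.
BQ_URI = re.compile(r"^bq://[^.\s]+(\.[^.\s]+){0,2}$")

assert BQ_URI.match("bq://my-project")
assert BQ_URI.match("bq://my-project.my_dataset")
assert BQ_URI.match("bq://my-project.my_dataset.my_table")
assert not BQ_URI.match("gs://my-bucket")  # wrong scheme
```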

GoogleCloudAiplatformV1BigQuerySource

The BigQuery location for the input content.
Fields
inputUri

string

Required. BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.

GoogleCloudAiplatformV1Blob

Content blob. Sending text directly is preferred over sending raw bytes.
Fields
data

string (bytes format)

Required. Raw bytes.

mimeType

string

Required. The IANA standard MIME type of the source data.
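Since the data field is a proto bytes field, its JSON representation carries the raw content base64-encoded. A sketch with placeholder bytes:

```python
import base64

# Hypothetical Blob: raw bytes (e.g. an image read from disk) must be
# base64-encoded when the field is expressed in JSON.
raw_bytes = b"\x89PNG..."  # placeholder image bytes
blob = {
    "mimeType": "image/png",
    "data": base64.b64encode(raw_bytes).decode("ascii"),
}

# Round-trip check: decoding the field recovers the original content.
assert base64.b64decode(blob["data"]) == raw_bytes
```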

GoogleCloudAiplatformV1BlurBaselineConfig

Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
Fields
maxBlurSigma

number (float format)

The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.

GoogleCloudAiplatformV1BoolArray

A list of boolean values.
Fields
values[]

boolean

A list of bool values.

GoogleCloudAiplatformV1Candidate

A response candidate generated from the model.
Fields
citationMetadata

object (GoogleCloudAiplatformV1CitationMetadata)

Output only. Source attribution of the generated content.

content

object (GoogleCloudAiplatformV1Content)

Output only. Content parts of the candidate.

finishMessage

string

Output only. Describes in more detail the reason the model stopped generating tokens. This is only filled when finish_reason is set.

finishReason

enum

Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.

Enum type. Can be one of the following:
FINISH_REASON_UNSPECIFIED The finish reason is unspecified.
STOP Natural stop point of the model or provided stop sequence.
MAX_TOKENS The maximum number of tokens as specified in the request was reached.
SAFETY The token generation was stopped because the response was flagged for safety reasons. NOTE: When streaming, Candidate.content will be empty if content filters blocked the output.
RECITATION The token generation was stopped because the response was flagged for unauthorized citations.
OTHER All other reasons that stopped the token generation.
BLOCKLIST The token generation was stopped because the response included terms from the terminology blocklist.
PROHIBITED_CONTENT The token generation was stopped because the response was flagged for prohibited content.
SPII The token generation was stopped because the response was flagged for Sensitive Personally Identifiable Information (SPII) content.
groundingMetadata

object (GoogleCloudAiplatformV1GroundingMetadata)

Output only. Metadata specifies sources used to ground generated content.

index

integer (int32 format)

Output only. Index of the candidate.

safetyRatings[]

object (GoogleCloudAiplatformV1SafetyRating)

Output only. List of ratings for the safety of a response candidate. There is at most one rating per category.
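A consumer of a Candidate typically checks finishReason before reading the content parts; here is a minimal sketch over an illustrative response dict (values are placeholders, not real model output):

```python
# Hypothetical candidate, shaped as described by the fields above.
candidate = {
    "finishReason": "STOP",
    "content": {"role": "model", "parts": [{"text": "Hello!"}]},
}

# Content may be empty when SAFETY blocked the output mid-stream, so only
# join the text parts for ordinary stop reasons.
text = ""
if candidate.get("finishReason") in ("STOP", "MAX_TOKENS"):
    text = "".join(p.get("text", "") for p in candidate["content"]["parts"])
```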

GoogleCloudAiplatformV1CheckTrialEarlyStoppingStateMetatdata

This message will be placed in the metadata field of a google.longrunning.Operation associated with a CheckTrialEarlyStoppingState request.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for suggesting Trials.

study

string

The name of the Study that the Trial belongs to.

trial

string

The Trial name.

GoogleCloudAiplatformV1CheckTrialEarlyStoppingStateResponse

Response message for VizierService.CheckTrialEarlyStoppingState.
Fields
shouldStop

boolean

True if the Trial should stop.

GoogleCloudAiplatformV1Citation

Source attributions for content.
Fields
endIndex

integer (int32 format)

Output only. End index into the content.

license

string

Output only. License of the attribution.

publicationDate

object (GoogleTypeDate)

Output only. Publication date of the attribution.

startIndex

integer (int32 format)

Output only. Start index into the content.

title

string

Output only. Title of the attribution.

uri

string

Output only. Url reference of the attribution.

GoogleCloudAiplatformV1CitationMetadata

A collection of source attributions for a piece of content.
Fields
citations[]

object (GoogleCloudAiplatformV1Citation)

Output only. List of citations.

GoogleCloudAiplatformV1CompleteTrialRequest

Request message for VizierService.CompleteTrial.
Fields
finalMeasurement

object (GoogleCloudAiplatformV1Measurement)

Optional. If provided, it will be used as the completed Trial's final_measurement; otherwise, the service will auto-select a previously reported measurement as the final_measurement.

infeasibleReason

string

Optional. A human-readable reason why the trial was infeasible. This should only be provided if trial_infeasible is true.

trialInfeasible

boolean

Optional. True if the Trial cannot be run with the given Parameter, and final_measurement will be ignored.

GoogleCloudAiplatformV1CompletionStats

Success and error statistics of processing multiple entities (for example, DataItems or structured data rows) in batch.
Fields
failedCount

string (int64 format)

Output only. The number of entities for which any error was encountered.

incompleteCount

string (int64 format)

Output only. In cases when enough errors are encountered, a job, pipeline, or operation may fail as a whole. This is the number of entities for which processing had not finished (in either a successful or failed state). Set to -1 if the number is unknown (for example, the operation failed before the total entity number could be collected).

successfulCount

string (int64 format)

Output only. The number of entities that had been processed successfully.

successfulForecastPointCount

string (int64 format)

Output only. The number of the successful forecast points that are generated by the forecasting model. This is ONLY used by the forecasting batch prediction.

GoogleCloudAiplatformV1ComputeTokensRequest

Request message for ComputeTokens RPC call.
Fields
instances[]

any

Required. The instances that are the input to token computing API call. Schema is identical to the prediction schema of the text model, even for the non-text models, like chat models, or Codey models.

GoogleCloudAiplatformV1ComputeTokensResponse

Response message for ComputeTokens RPC call.
Fields
tokensInfo[]

object (GoogleCloudAiplatformV1TokensInfo)

Lists of token info from the input. A ComputeTokensRequest can contain multiple instances, each with a prompt, so a list of token info is returned for each instance.

GoogleCloudAiplatformV1ContainerRegistryDestination

The Container Registry location for the container image.
Fields
outputUri

string

Required. Container Registry URI of a container image. Only Google Container Registry and Artifact Registry are supported now. Accepted forms: * Google Container Registry path. For example: gcr.io/projectId/imageName:tag. * Artifact Registry path. For example: us-central1-docker.pkg.dev/projectId/repoName/imageName:tag. If a tag is not specified, "latest" will be used as the default tag.

GoogleCloudAiplatformV1ContainerSpec

The spec of a Container.
Fields
args[]

string

The arguments to be passed when starting the container.

command[]

string

The command to be invoked when the container is started. It overrides the entrypoint instruction in the Dockerfile when provided.

env[]

object (GoogleCloudAiplatformV1EnvVar)

Environment variables to be passed to the container. Maximum limit is 100.

imageUri

string

Required. The URI of a container image in the Container Registry that is to be run on each worker replica.

GoogleCloudAiplatformV1Content

The base structured datatype containing multi-part content of a message. A Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.
Fields
parts[]

object (GoogleCloudAiplatformV1Part)

Required. Ordered Parts that constitute a single message. Parts may have different IANA MIME types.

role

string

Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
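For multi-turn conversations, the Content objects above are typically assembled into a list with alternating roles; the messages here are illustrative:

```python
# Hypothetical multi-turn contents list: role alternates between "user"
# and "model", and each Content carries ordered parts.
contents = [
    {"role": "user", "parts": [{"text": "What is 2 + 2?"}]},
    {"role": "model", "parts": [{"text": "4"}]},
    {"role": "user", "parts": [{"text": "And doubled?"}]},
]

# Every role must be either "user" or "model".
assert all(c["role"] in ("user", "model") for c in contents)
```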

GoogleCloudAiplatformV1Context

Instance of a general context.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this Context was created.

description

string

Description of the Context

displayName

string

User provided display name of the Context. May be up to 128 Unicode characters.

etag

string

An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your Contexts. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Context (System labels are excluded).

metadata

map (key: string, value: any)

Properties of the Context. Top level metadata keys' heading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.

name

string

Immutable. The resource name of the Context.

parentContexts[]

string

Output only. A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.

schemaTitle

string

The title of the schema describing the metadata. The schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.

schemaVersion

string

The version of the schema in schema_name to use. The schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.

updateTime

string (Timestamp format)

Output only. Timestamp when this Context was last updated.

GoogleCloudAiplatformV1CopyModelOperationMetadata

Details of ModelService.CopyModel operation.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The common part of the operation metadata.

GoogleCloudAiplatformV1CopyModelRequest

Request message for ModelService.CopyModel.
Fields
encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key options. If this is set, then the Model copy will be encrypted with the provided encryption key.

modelId

string

Optional. Copy source_model into a new Model with this ID. The ID will become the final component of the model resource name. This value may be up to 63 characters, and valid characters are [a-z0-9_-]. The first character cannot be a number or hyphen.

parentModel

string

Optional. Specify this field to copy source_model into this existing Model as a new version. Format: projects/{project}/locations/{location}/models/{model}

sourceModel

string

Required. The resource name of the Model to copy. That Model must be in the same Project. Format: projects/{project}/locations/{location}/models/{model}

GoogleCloudAiplatformV1CopyModelResponse

Response message of ModelService.CopyModel operation.
Fields
model

string

The name of the copied Model resource. Format: projects/{project}/locations/{location}/models/{model}

modelVersionId

string

Output only. The version ID of the model that is copied.

GoogleCloudAiplatformV1CountTokensRequest

Request message for PredictionService.CountTokens.
Fields
contents[]

object (GoogleCloudAiplatformV1Content)

Required. Input content.

instances[]

any

Required. The instances that are the input to token counting call. Schema is identical to the prediction schema of the underlying model.

model

string

Required. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*
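A CountTokens request body using the contents-style input might look like this sketch (the project, location, and model path are placeholders):

```python
# Hypothetical CountTokensRequest body. This uses the contents field;
# instances is the alternative input style for prediction-schema models.
request = {
    "model": (
        "projects/example-project/locations/us-central1"
        "/publishers/google/models/example-model"
    ),
    "contents": [
        {"role": "user", "parts": [{"text": "Count the tokens in this prompt."}]}
    ],
}
```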

GoogleCloudAiplatformV1CountTokensResponse

Response message for PredictionService.CountTokens.
Fields
totalBillableCharacters

integer (int32 format)

The total number of billable characters counted across all instances from the request.

totalTokens

integer (int32 format)

The total number of tokens counted across all instances from the request.

GoogleCloudAiplatformV1CreateDatasetOperationMetadata

Runtime operation information for DatasetService.CreateDataset.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1CreateDatasetVersionOperationMetadata

Runtime operation information for DatasetService.CreateDatasetVersion.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The common part of the operation metadata.

GoogleCloudAiplatformV1CreateDeploymentResourcePoolOperationMetadata

Runtime operation information for CreateDeploymentResourcePool method.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1CreateDeploymentResourcePoolRequest

Request message for CreateDeploymentResourcePool method.
Fields
deploymentResourcePool

object (GoogleCloudAiplatformV1DeploymentResourcePool)

Required. The DeploymentResourcePool to create.

deploymentResourcePoolId

string

Required. The ID to use for the DeploymentResourcePool, which will become the final component of the DeploymentResourcePool's resource name. The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.
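The documented ID pattern can be checked directly with the regex given above (starts with a lowercase letter, ends with a letter or digit, at most 63 characters):

```python
import re

# The pattern from the field description, verbatim.
POOL_ID = re.compile(r"^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$")

assert POOL_ID.match("my-pool-01")
assert POOL_ID.match("a")                      # single letter is valid
assert not POOL_ID.match("-starts-with-dash")  # must start with a letter
assert not POOL_ID.match("Ends-With-Caps")     # uppercase not allowed
```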

GoogleCloudAiplatformV1CreateEndpointOperationMetadata

Runtime operation information for EndpointService.CreateEndpoint.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1CreateEntityTypeOperationMetadata

Details of operations that perform create EntityType.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for EntityType.

GoogleCloudAiplatformV1CreateFeatureGroupOperationMetadata

Details of operations that perform create FeatureGroup.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for FeatureGroup.

GoogleCloudAiplatformV1CreateFeatureOnlineStoreOperationMetadata

Details of operations that perform create FeatureOnlineStore.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for FeatureOnlineStore.

GoogleCloudAiplatformV1CreateFeatureOperationMetadata

Details of operations that perform create Feature.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Feature.

GoogleCloudAiplatformV1CreateFeatureRequest

Request message for FeaturestoreService.CreateFeature. Request message for FeatureRegistryService.CreateFeature.
Fields
feature

object (GoogleCloudAiplatformV1Feature)

Required. The Feature to create.

featureId

string

Required. The ID to use for the Feature, which will become the final component of the Feature's resource name. This value may be up to 128 characters, and valid characters are [a-z0-9_]. The first character cannot be a number. The value must be unique within an EntityType/FeatureGroup.

parent

string

Required. The resource name of the EntityType or FeatureGroup to create a Feature. Format for entity_type as parent: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type} Format for feature_group as parent: projects/{project}/locations/{location}/featureGroups/{feature_group}

GoogleCloudAiplatformV1CreateFeatureViewOperationMetadata

Details of operations that perform create FeatureView.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for FeatureView Create.

GoogleCloudAiplatformV1CreateFeaturestoreOperationMetadata

Details of operations that perform create Featurestore.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Featurestore.

GoogleCloudAiplatformV1CreateIndexEndpointOperationMetadata

Runtime operation information for IndexEndpointService.CreateIndexEndpoint.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1CreateIndexOperationMetadata

Runtime operation information for IndexService.CreateIndex.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

nearestNeighborSearchOperationMetadata

object (GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadata)

The operation metadata with regard to Matching Engine Index operation.

GoogleCloudAiplatformV1CreateMetadataStoreOperationMetadata

Details of operations that perform MetadataService.CreateMetadataStore.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for creating a MetadataStore.

GoogleCloudAiplatformV1CreateNotebookRuntimeTemplateOperationMetadata

Metadata information for NotebookService.CreateNotebookRuntimeTemplate.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1CreatePersistentResourceOperationMetadata

Details of operations that perform create PersistentResource.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for PersistentResource.

progressMessage

string

Progress Message for Create LRO

GoogleCloudAiplatformV1CreatePipelineJobRequest

Request message for PipelineService.CreatePipelineJob.
Fields
parent

string

Required. The resource name of the Location to create the PipelineJob in. Format: projects/{project}/locations/{location}

pipelineJob

object (GoogleCloudAiplatformV1PipelineJob)

Required. The PipelineJob to create.

pipelineJobId

string

The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be automatically generated. This value should be less than 128 characters, and valid characters are /a-z-/.

GoogleCloudAiplatformV1CreateRegistryFeatureOperationMetadata

Details of operations that perform create FeatureGroup.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Feature.

GoogleCloudAiplatformV1CreateSpecialistPoolOperationMetadata

Runtime operation information for SpecialistPoolService.CreateSpecialistPool.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1CreateTensorboardOperationMetadata

Details of operations that perform create Tensorboard.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Tensorboard.

GoogleCloudAiplatformV1CreateTensorboardRunRequest

Request message for TensorboardService.CreateTensorboardRun.
Fields
parent

string

Required. The resource name of the TensorboardExperiment to create the TensorboardRun in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}

tensorboardRun

object (GoogleCloudAiplatformV1TensorboardRun)

Required. The TensorboardRun to create.

tensorboardRunId

string

Required. The ID to use for the Tensorboard run, which becomes the final component of the Tensorboard run's resource name. This value should be 1-128 characters, and valid characters are /a-z-/.

GoogleCloudAiplatformV1CreateTensorboardTimeSeriesRequest

Request message for TensorboardService.CreateTensorboardTimeSeries.
Fields
parent

string

Required. The resource name of the TensorboardRun to create the TensorboardTimeSeries in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}

tensorboardTimeSeries

object (GoogleCloudAiplatformV1TensorboardTimeSeries)

Required. The TensorboardTimeSeries to create.

tensorboardTimeSeriesId

string

Optional. The user-specified unique ID to use for the TensorboardTimeSeries, which becomes the final component of the TensorboardTimeSeries's resource name. This value must match the pattern /[a-z0-9]{0,127}/.

GoogleCloudAiplatformV1CsvDestination

The storage details for CSV output content.
Fields
gcsDestination

object (GoogleCloudAiplatformV1GcsDestination)

Required. Google Cloud Storage location.

GoogleCloudAiplatformV1CsvSource

The storage details for CSV input content.
Fields
gcsSource

object (GoogleCloudAiplatformV1GcsSource)

Required. Google Cloud Storage location.

GoogleCloudAiplatformV1CustomJob

Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools and each worker pool can have its own machine and input spec. A CustomJob will be cleaned up once the job enters terminal state (failed or succeeded).
Fields
createTime

string (Timestamp format)

Output only. Time when the CustomJob was created.

displayName

string

Required. The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.

endTime

string (Timestamp format)

Output only. Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

error

object (GoogleRpcStatus)

Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

jobSpec

object (GoogleCloudAiplatformV1CustomJobSpec)

Required. Job spec.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

name

string

Output only. Resource name of a CustomJob.

startTime

string (Timestamp format)

Output only. Time when the CustomJob first entered the JOB_STATE_RUNNING state.

state

enum

Output only. The detailed state of the job.

Enum type. Can be one of the following:
JOB_STATE_UNSPECIFIED The job state is unspecified.
JOB_STATE_QUEUED The job has been just created or resumed and processing has not yet begun.
JOB_STATE_PENDING The service is preparing to run the job.
JOB_STATE_RUNNING The job is in progress.
JOB_STATE_SUCCEEDED The job completed successfully.
JOB_STATE_FAILED The job failed.
JOB_STATE_CANCELLING The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED, JOB_STATE_FAILED or JOB_STATE_CANCELLED.
JOB_STATE_CANCELLED The job has been cancelled.
JOB_STATE_PAUSED The job has been stopped, and can be resumed.
JOB_STATE_EXPIRED The job has expired.
JOB_STATE_UPDATING The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state.
JOB_STATE_PARTIALLY_SUCCEEDED The job is partially succeeded, some results may be missing due to errors.
updateTime

string (Timestamp format)

Output only. Time when the CustomJob was most recently updated.

webAccessUris

map (key: string, value: string)

Output only. URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
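
The key format described above (workerpool0-0 for the primary node, workerpool1-0 for the first node of the second pool, and so on) can be parsed mechanically. A small helper sketch; the URIs below are placeholders, not real shell endpoints:

```python
import re

# webAccessUris keys follow the documented "workerpool<pool>-<node>" scheme.
_NODE_NAME_RE = re.compile(r"^workerpool(\d+)-(\d+)$")

def shell_uri(web_access_uris, pool, node):
    """Return the interactive-shell URI for a given pool/node, or None."""
    for name, uri in web_access_uris.items():
        m = _NODE_NAME_RE.match(name)
        if m and (int(m.group(1)), int(m.group(2))) == (pool, node):
            return uri
    return None

# Placeholder URIs standing in for the values the service returns.
uris = {
    "workerpool0-0": "https://example.invalid/shell/primary",
    "workerpool1-1": "https://example.invalid/shell/w1n1",
}
```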

GoogleCloudAiplatformV1CustomJobSpec

Represents the spec of a CustomJob.
Fields
baseOutputDirectory

object (GoogleCloudAiplatformV1GcsDestination)

The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the trial's id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For a CustomJob backing a Trial of a HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
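
Training code inside the container can read the injected AIP_* variables directly. A defensive sketch with local fallbacks so the same code also runs outside Vertex AI; the bucket path is hypothetical:

```python
import os

def resolve_output_dirs():
    """Read the Vertex AI output-directory variables, with local fallbacks."""
    return {
        "model": os.environ.get("AIP_MODEL_DIR", "/tmp/model"),
        "checkpoints": os.environ.get("AIP_CHECKPOINT_DIR", "/tmp/checkpoints"),
        "logs": os.environ.get("AIP_TENSORBOARD_LOG_DIR", "/tmp/logs"),
    }

# Simulate the variable the service would inject (hypothetical bucket).
os.environ["AIP_MODEL_DIR"] = "gs://my-bucket/job-1/model/"
dirs = resolve_output_dirs()
```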

enableDashboardAccess

boolean

Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).

enableWebAccess

boolean

Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).

experiment

string

Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}

experimentRun

string

Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}

models[]

string

Optional. The name of the Model resources for which to generate a mapping to artifact URIs. Applicable only to some of the Google-provided custom jobs. Format: projects/{project}/locations/{location}/models/{model} In order to retrieve a specific version of the model, also provide the version ID or version alias. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden If no version ID or alias is specified, the "default" version will be returned. The "default" version alias is created for the first version of the model, and can be moved to other versions later on. There will be exactly one default version.

network

string

Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.

persistentResourceId

string

Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If specified, the job runs on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job must be consistent with those on the PersistentResource; otherwise, the job will be rejected.

protectedArtifactLocationId

string

The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob's location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations

reservedIpRanges[]

string

Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges. Otherwise, the job is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].

scheduling

object (GoogleCloudAiplatformV1Scheduling)

Scheduling options for a CustomJob.

serviceAccount

string

Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.

tensorboard

string

Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

workerPoolSpecs[]

object (GoogleCloudAiplatformV1WorkerPoolSpec)

Required. The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
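
A CustomJobSpec with a single worker pool can be expressed as a plain dict mirroring the JSON schema above. This is a minimal sketch; the image URI, machine type, and bucket are placeholders, and note that int64 fields such as replicaCount are serialized as JSON strings:

```python
def make_custom_job_spec(image_uri, machine_type="n1-standard-4", replica_count=1):
    """Build a minimal CustomJobSpec dict with one worker pool."""
    return {
        "workerPoolSpecs": [
            {
                "machineSpec": {"machineType": machine_type},
                "replicaCount": str(replica_count),  # int64 -> JSON string
                "containerSpec": {"imageUri": image_uri},
            }
        ],
        # Hypothetical bucket; AIP_* variables are derived from this prefix.
        "baseOutputDirectory": {"outputUriPrefix": "gs://my-bucket/output/"},
    }

spec = make_custom_job_spec("us-docker.pkg.dev/my-project/train/trainer:latest")
```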

GoogleCloudAiplatformV1DataItem

A piece of data in a Dataset. Could be an image, a video, a document or plain text.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this DataItem was created.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

Optional. The labels with user-defined metadata to organize your DataItems. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one DataItem (system labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

name

string

Output only. The resource name of the DataItem.

payload

any

Required. The data that the DataItem represents (for example, an image or a text snippet). The schema of the payload is stored in the parent Dataset's metadata schema's dataItemSchemaUri field.

updateTime

string (Timestamp format)

Output only. Timestamp when this DataItem was last updated.

GoogleCloudAiplatformV1DataItemView

A container for a single DataItem and Annotations on it.
Fields
annotations[]

object (GoogleCloudAiplatformV1Annotation)

The Annotations on the DataItem. If more Annotations would be returned for the DataItem than the request's annotations_limit allows, this field is truncated and has_truncated_annotations is set to true.

dataItem

object (GoogleCloudAiplatformV1DataItem)

The DataItem.

hasTruncatedAnnotations

boolean

True if and only if the Annotations field has been truncated, which happens when more Annotations for this DataItem matched the request's annotation_filter than annotations_limit allows to be returned. Note that if the Annotations field is not returned due to a field mask, this field is not set to true no matter how many Annotations exist.

GoogleCloudAiplatformV1DataLabelingJob

DataLabelingJob is used to trigger a human labeling job on unlabeled data from a Dataset.
Fields
activeLearningConfig

object (GoogleCloudAiplatformV1ActiveLearningConfig)

Parameters that configure the active learning pipeline. Active learning will label the data incrementally via several iterations. For every iteration, it will select a batch of data based on the sampling strategy.

annotationLabels

map (key: string, value: string)

Labels to assign to annotations generated by this DataLabelingJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

createTime

string (Timestamp format)

Output only. Timestamp when this DataLabelingJob was created.

currentSpend

object (GoogleTypeMoney)

Output only. Estimated cost (in US dollars) that the DataLabelingJob has incurred to date.

datasets[]

string

Required. Dataset resource names. Right now we only support labeling from a single Dataset. Format: projects/{project}/locations/{location}/datasets/{dataset}

displayName

string

Required. The user-defined display name of the DataLabelingJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key spec for a DataLabelingJob. If set, this DataLabelingJob will be secured by this key. Note: Annotations created in the DataLabelingJob are associated with the EncryptionSpec of the Dataset they are exported to.

error

object (GoogleRpcStatus)

Output only. DataLabelingJob errors. It is only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

inputs

any

Required. Input config parameters for the DataLabelingJob.

inputsSchemaUri

string

Required. Points to a YAML file stored on Google Cloud Storage describing the config for a specific type of DataLabelingJob. The schema files that can be used here are found in the https://storage.googleapis.com/google-cloud-aiplatform bucket in the /schema/datalabelingjob/inputs/ folder.

instructionUri

string

Required. The Google Cloud Storage location of the instruction PDF. This PDF is shared with labelers and provides a detailed description of how to label DataItems in Datasets.

labelerCount

integer (int32 format)

Required. Number of labelers to work on each DataItem.

labelingProgress

integer (int32 format)

Output only. Current labeling job progress percentage in the interval [0, 100], indicating the percentage of DataItems that have been finished.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your DataLabelingJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for each DataLabelingJob: * "aiplatform.googleapis.com/schema": output only, its value is the inputs_schema's title.

name

string

Output only. Resource name of the DataLabelingJob.

specialistPools[]

string

The SpecialistPools' resource names associated with this job.

state

enum

Output only. The detailed state of the job.

Enum type. Can be one of the following:
JOB_STATE_UNSPECIFIED The job state is unspecified.
JOB_STATE_QUEUED The job has been just created or resumed and processing has not yet begun.
JOB_STATE_PENDING The service is preparing to run the job.
JOB_STATE_RUNNING The job is in progress.
JOB_STATE_SUCCEEDED The job completed successfully.
JOB_STATE_FAILED The job failed.
JOB_STATE_CANCELLING The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED, JOB_STATE_FAILED or JOB_STATE_CANCELLED.
JOB_STATE_CANCELLED The job has been cancelled.
JOB_STATE_PAUSED The job has been stopped, and can be resumed.
JOB_STATE_EXPIRED The job has expired.
JOB_STATE_UPDATING The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state.
JOB_STATE_PARTIALLY_SUCCEEDED The job is partially succeeded, some results may be missing due to errors.
updateTime

string (Timestamp format)

Output only. Timestamp when this DataLabelingJob was updated most recently.

GoogleCloudAiplatformV1Dataset

A collection of DataItems and Annotations on them.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this Dataset was created.

dataItemCount

string (int64 format)

Output only. The number of DataItems in this Dataset. Only applies to non-structured Datasets.

description

string

The description of the Dataset.

displayName

string

Required. The user-defined name of the Dataset. The name can be up to 128 characters long and can consist of any UTF-8 characters.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key spec for a Dataset. If set, this Dataset and all sub-resources of this Dataset will be secured by this key.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your Datasets. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Dataset (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for each Dataset: * "aiplatform.googleapis.com/dataset_metadata_schema": output only, its value is the metadata_schema's title.

metadata

any

Required. Additional information about the Dataset.

metadataArtifact

string

Output only. The resource name of the Artifact that was created in MetadataStore when creating the Dataset. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.

metadataSchemaUri

string

Required. Points to a YAML file stored on Google Cloud Storage describing additional information about the Dataset. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/metadata/.

modelReference

string

Optional. Reference to the public base model last used by the dataset. Only set for prompt datasets.

name

string

Output only. The resource name of the Dataset.

savedQueries[]

object (GoogleCloudAiplatformV1SavedQuery)

All SavedQueries belonging to the Dataset will be returned in the List/Get Dataset response. The annotation_specs field will not be populated except for UI cases, which only use annotation_spec_count. In a CreateDataset request, a SavedQuery is created together with the Dataset if this field is set; up to one SavedQuery can be set in CreateDatasetRequest. The SavedQuery should not contain any AnnotationSpec.

updateTime

string (Timestamp format)

Output only. Timestamp when this Dataset was last updated.

GoogleCloudAiplatformV1DatasetVersion

Describes the dataset version.
Fields
bigQueryDatasetName

string

Output only. Name of the associated BigQuery dataset.

createTime

string (Timestamp format)

Output only. Timestamp when this DatasetVersion was created.

displayName

string

The user-defined name of the DatasetVersion. The name can be up to 128 characters long and can consist of any UTF-8 characters.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

metadata

any

Required. Output only. Additional information about the DatasetVersion.

modelReference

string

Output only. Reference to the public base model last used by the dataset version. Only set for prompt dataset versions.

name

string

Output only. The resource name of the DatasetVersion.

updateTime

string (Timestamp format)

Output only. Timestamp when this DatasetVersion was last updated.

GoogleCloudAiplatformV1DedicatedResources

A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.
Fields
autoscalingMetricSpecs[]

object (GoogleCloudAiplatformV1AutoscalingMetricSpec)

Immutable. The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator duty cycle, and so on; the default target is 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics: it scales up when either metric exceeds its target value and scales down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
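
The CPU-utilization override described above can be written out as the JSON fields of a DedicatedResources object. A sketch only; the machine type and replica counts are placeholder choices:

```python
# DedicatedResources with the target CPU utilization overridden to 80,
# mirroring the example in the field description.
dedicated_resources = {
    "machineSpec": {"machineType": "n1-standard-4"},  # placeholder machine type
    "minReplicaCount": 1,
    "maxReplicaCount": 3,
    "autoscalingMetricSpecs": [
        {
            "metricName": "aiplatform.googleapis.com/prediction/online/cpu/utilization",
            "target": 80,  # overrides the default target of 60
        }
    ],
}
```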

machineSpec

object (GoogleCloudAiplatformV1MachineSpec)

Required. Immutable. The specification of a single machine used by the prediction.

maxReplicaCount

integer (int32 format)

Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).

minReplicaCount

integer (int32 format)

Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

GoogleCloudAiplatformV1DeleteFeatureValuesOperationMetadata

Details of operations that delete Feature values.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Featurestore delete Features values.

GoogleCloudAiplatformV1DeleteFeatureValuesRequest

Request message for FeaturestoreService.DeleteFeatureValues.
Fields
selectEntity

object (GoogleCloudAiplatformV1DeleteFeatureValuesRequestSelectEntity)

Select feature values to be deleted by specifying entities.

selectTimeRangeAndFeature

object (GoogleCloudAiplatformV1DeleteFeatureValuesRequestSelectTimeRangeAndFeature)

Select feature values to be deleted by specifying time range and features.

GoogleCloudAiplatformV1DeleteFeatureValuesRequestSelectEntity

Message to select entity. If an entity id is selected, all the feature values corresponding to the entity id will be deleted, including the entityId.
Fields
entityIdSelector

object (GoogleCloudAiplatformV1EntityIdSelector)

Required. Selectors choosing the entity IDs whose feature values are to be deleted from the EntityType.

GoogleCloudAiplatformV1DeleteFeatureValuesRequestSelectTimeRangeAndFeature

Message to select time range and feature. Values of the selected feature generated within an inclusive time range will be deleted. Using this option permanently deletes the feature values from the specified feature IDs within the specified time range. This might include data from the online storage. If you want to retain any deleted historical data in the online storage, you must re-ingest it.
Fields
featureSelector

object (GoogleCloudAiplatformV1FeatureSelector)

Required. Selectors choosing which feature values to delete from the EntityType.

skipOnlineStorageDelete

boolean

If set, data is not deleted from online storage. When the time range is older than the data in online storage, setting this to true makes the deletion have no impact on online serving.

timeRange

object (GoogleTypeInterval)

Required. Select feature values generated within a half-inclusive time range: the lower bound is inclusive and the upper bound is exclusive.
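
The half-inclusive semantics of timeRange can be stated precisely in code. A small illustrative helper (the dates are arbitrary examples):

```python
from datetime import datetime, timezone

def in_time_range(ts, start, end):
    """Half-inclusive membership: start is included, end is excluded."""
    return start <= ts < end

start = datetime(2024, 1, 1, tzinfo=timezone.utc)
end = datetime(2024, 2, 1, tzinfo=timezone.utc)
```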

GoogleCloudAiplatformV1DeleteFeatureValuesResponse

Response message for FeaturestoreService.DeleteFeatureValues.
Fields
selectEntity

object (GoogleCloudAiplatformV1DeleteFeatureValuesResponseSelectEntity)

Response for a request specifying the entities to delete.

selectTimeRangeAndFeature

object (GoogleCloudAiplatformV1DeleteFeatureValuesResponseSelectTimeRangeAndFeature)

Response for a request specifying a time range and features.

GoogleCloudAiplatformV1DeleteFeatureValuesResponseSelectEntity

Response message if the request uses the SelectEntity option.
Fields
offlineStorageDeletedEntityRowCount

string (int64 format)

The count of deleted entity rows in the offline storage. Each row corresponds to the combination of an entity ID and a timestamp. One entity ID can have multiple rows in the offline storage.

onlineStorageDeletedEntityCount

string (int64 format)

The count of deleted entities in the online storage. Each entity ID corresponds to one entity.

GoogleCloudAiplatformV1DeleteFeatureValuesResponseSelectTimeRangeAndFeature

Response message if the request uses the SelectTimeRangeAndFeature option.
Fields
impactedFeatureCount

string (int64 format)

The count of the features or columns impacted. This is the same as the feature count in the request.

offlineStorageModifiedEntityRowCount

string (int64 format)

The count of modified entity rows in the offline storage. Each row corresponds to the combination of an entity ID and a timestamp. One entity ID can have multiple rows in the offline storage. Within each row, only the features specified in the request are deleted.

onlineStorageModifiedEntityCount

string (int64 format)

The count of modified entities in the online storage. Each entity ID corresponds to one entity. Within each entity, only the features specified in the request are deleted.

GoogleCloudAiplatformV1DeleteMetadataStoreOperationMetadata

Details of operations that perform MetadataService.DeleteMetadataStore.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for deleting a MetadataStore.

GoogleCloudAiplatformV1DeleteOperationMetadata

Details of operations that perform deletes of any entities.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The common part of the operation metadata.

GoogleCloudAiplatformV1DeployIndexOperationMetadata

Runtime operation information for IndexEndpointService.DeployIndex.
Fields
deployedIndexId

string

The unique index ID specified by the user.

genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1DeployIndexRequest

Request message for IndexEndpointService.DeployIndex.
Fields
deployedIndex

object (GoogleCloudAiplatformV1DeployedIndex)

Required. The DeployedIndex to be created within the IndexEndpoint.

GoogleCloudAiplatformV1DeployIndexResponse

Response message for IndexEndpointService.DeployIndex.
Fields
deployedIndex

object (GoogleCloudAiplatformV1DeployedIndex)

The DeployedIndex that has been deployed in the IndexEndpoint.

GoogleCloudAiplatformV1DeployModelOperationMetadata

Runtime operation information for EndpointService.DeployModel.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1DeployModelRequest

Request message for EndpointService.DeployModel.
Fields
deployedModel

object (GoogleCloudAiplatformV1DeployedModel)

Required. The DeployedModel to be created within the Endpoint. Note that Endpoint.traffic_split must be updated for the DeployedModel to start receiving traffic, either as part of this call, or via EndpointService.UpdateEndpoint.

trafficSplit

map (key: string, value: integer (int32 format))

A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If this field is non-empty, then the Endpoint's traffic_split will be overwritten with it. To refer to the ID of the Model being deployed by this request, use "0"; the actual ID of the new DeployedModel will be filled in its place by this method. The traffic percentage values must add up to 100. If this field is empty, then the Endpoint's traffic_split is not updated.
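
The sum-to-100 constraint on trafficSplit is easy to check before sending the request. A client-side sketch; the numeric DeployedModel ID below is hypothetical:

```python
def validate_traffic_split(traffic_split):
    """Raise if the traffic percentages do not add up to 100."""
    total = sum(traffic_split.values())
    if total != 100:
        raise ValueError("traffic percentages sum to %d, expected 100" % total)

# Route 80% of traffic to the model being deployed (key "0") and 20% to
# an existing DeployedModel (hypothetical ID).
split = {"0": 80, "3141592653": 20}
validate_traffic_split(split)
```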

GoogleCloudAiplatformV1DeployModelResponse

Response message for EndpointService.DeployModel.
Fields
deployedModel

object (GoogleCloudAiplatformV1DeployedModel)

The DeployedModel that has been deployed in the Endpoint.

GoogleCloudAiplatformV1DeployedIndex

A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.
Fields
automaticResources

object (GoogleCloudAiplatformV1AutomaticResources)

Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide an SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.

createTime

string (Timestamp format)

Output only. Timestamp when the DeployedIndex was created.

dedicatedResources

object (GoogleCloudAiplatformV1DedicatedResources)

Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.

deployedIndexAuthConfig

object (GoogleCloudAiplatformV1DeployedIndexAuthConfig)

Optional. If set, authentication is enabled for the private endpoint.

deploymentGroup

string

Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges, because it creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').

displayName

string

The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.

enableAccessLogging

boolean

Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.

id

string

Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.
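The documented ID rules (up to 128 characters, starting with a letter, containing only letters, numbers, and underscores) translate directly into a regular expression. A hypothetical pre-flight check:

```python
import re

# Pattern for the documented DeployedIndex ID rules: starts with a letter,
# then up to 127 more letters, digits, or underscores (128 chars total).
_DEPLOYED_INDEX_ID = re.compile(r"^[A-Za-z][A-Za-z0-9_]{0,127}$")

def is_valid_deployed_index_id(deployed_index_id: str) -> bool:
    return bool(_DEPLOYED_INDEX_ID.match(deployed_index_id))
```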

index

string

Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.

indexSyncTime

string (Timestamp format)

Output only. The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes to the original Index are made (e.g. when the Index's contents change), the DeployedIndex may be asynchronously updated in the background to reflect them. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the Operations that are running on the original Index. Only successfully completed Operations with update_time equal to or before this sync time are contained in this DeployedIndex.
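The in-sync condition above (indexSyncTime at least Index.update_time) can be checked by comparing the two RFC 3339 timestamps, assuming both are in the `Z`-suffixed form that Timestamp fields use in JSON:

```python
from datetime import datetime

def is_in_sync(index_sync_time: str, index_update_time: str) -> bool:
    """True when indexSyncTime >= Index.update_time, i.e. the DeployedIndex
    reflects all updates to its original Index."""
    parse = lambda s: datetime.fromisoformat(s.replace("Z", "+00:00"))
    return parse(index_sync_time) >= parse(index_update_time)
```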

privateEndpoints

object (GoogleCloudAiplatformV1IndexPrivateEndpoints)

Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.

reservedIpRanges[]

string

Optional. A list of reserved IP ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided IP ranges. Otherwise, the index might be deployed to any IP ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses). Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.

GoogleCloudAiplatformV1DeployedIndexAuthConfig

Used to set up the auth on the DeployedIndex's private endpoint.
Fields
authProvider

object (GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProvider)

Defines the authentication provider that the DeployedIndex uses.

GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProvider

Configuration for an authentication provider, including support for JSON Web Token (JWT).
Fields
allowedIssuers[]

string

A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com

audiences[]

string

The list of JWT audiences that are allowed access. A JWT containing any of these audiences will be accepted.
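Putting the two fields together, a DeployedIndexAuthConfig looks like the JSON shape below. The service-account email and audience string are placeholders; only the field names and the documented issuer format come from this reference.

```python
import re

# JSON shape of a DeployedIndexAuthConfig, using the field names above.
auth_config = {
    "authProvider": {
        "allowedIssuers": ["my-sa@my-project.iam.gserviceaccount.com"],
        "audiences": ["my-audience"],  # a JWT with any listed audience is accepted
    }
}

def issuer_is_service_account(issuer: str) -> bool:
    # Documented issuer format:
    # service-account-name@project-id.iam.gserviceaccount.com
    return bool(re.fullmatch(r"[^@]+@[^@]+\.iam\.gserviceaccount\.com", issuer))
```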

GoogleCloudAiplatformV1DeployedIndexRef

Points to a DeployedIndex.
Fields
deployedIndexId

string

Immutable. The ID of the DeployedIndex in the above IndexEndpoint.

displayName

string

Output only. The display name of the DeployedIndex.

indexEndpoint

string

Immutable. A resource name of the IndexEndpoint.

GoogleCloudAiplatformV1DeployedModel

A deployment of a Model. Endpoints contain one or more DeployedModels.
Fields
automaticResources

object (GoogleCloudAiplatformV1AutomaticResources)

A description of resources that to a large degree are decided by Vertex AI, and that require only a modest amount of additional configuration.

createTime

string (Timestamp format)

Output only. Timestamp when the DeployedModel was created.

dedicatedResources

object (GoogleCloudAiplatformV1DedicatedResources)

A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.

disableContainerLogging

boolean

For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances sends stderr and stdout streams to Cloud Logging by default. Note that these logs incur costs, which are subject to Cloud Logging pricing. Users can disable container logging by setting this flag to true.

disableExplanations

boolean

If true, the model is deployed without the explanation feature, regardless of whether Model.explanation_spec or explanation_spec is populated.

displayName

string

The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.

enableAccessLogging

boolean

If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.

explanationSpec

object (GoogleCloudAiplatformV1ExplanationSpec)

Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.

id

string

Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are /[0-9]/.
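Note that the DeployedModel ID rules are stricter than the DeployedIndex ID rules: 1-10 characters, digits only. A hypothetical validator for the documented `/[0-9]/` constraint:

```python
import re

# Documented DeployedModel ID rules: 1-10 characters, valid characters /[0-9]/.
_DEPLOYED_MODEL_ID = re.compile(r"^[0-9]{1,10}$")

def is_valid_deployed_model_id(deployed_model_id: str) -> bool:
    return bool(_DEPLOYED_MODEL_ID.match(deployed_model_id))
```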

model

string

Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain a version ID or version alias to specify the version, for example projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version is deployed.
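The `@version` suffix described above can be split off with a small helper (a sketch; the function name is hypothetical):

```python
def split_model_version(model: str):
    """Split a Model resource name into (name, version). The version is the
    version ID or alias after '@', or None when no version is specified,
    in which case the default version is deployed."""
    name, sep, version = model.partition("@")
    return (name, version if sep else None)
```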

modelVersionId

string

Output only. The version ID of the model that is deployed.

privateEndpoints

object (GoogleCloudAiplatformV1PrivateEndpoints)

Output only. Provides paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.

serviceAccount

string

The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

sharedResources

string

The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

GoogleCloudAiplatformV1DeployedModelRef

Points to a DeployedModel.
Fields
deployedModelId

string

Immutable. An ID of a DeployedModel in the above Endpoint.

endpoint

string

Immutable. A resource name of an Endpoint.

GoogleCloudAiplatformV1DeploymentResourcePool

A description of resources that can be shared by multiple DeployedModels, whose underlying specification consists of a DedicatedResources.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this DeploymentResourcePool was created.

dedicatedResources

object (GoogleCloudAiplatformV1DedicatedResources)

Required. The underlying DedicatedResources that the DeploymentResourcePool uses.

name

string

Immutable. The resource name of the DeploymentResourcePool. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

GoogleCloudAiplatformV1DestinationFeatureSetting

(No description provided)
Fields
destinationField

string

Specifies the field name in the export destination. If not specified, the Feature ID is used.

featureId

string

Required. The ID of the Feature to apply the setting to.

GoogleCloudAiplatformV1DirectPredictRequest

Request message for PredictionService.DirectPredict.
Fields
inputs[]

object (GoogleCloudAiplatformV1Tensor)

The prediction input.

parameters

object (GoogleCloudAiplatformV1Tensor)

The parameters that govern the prediction.

GoogleCloudAiplatformV1DirectPredictResponse

Response message for PredictionService.DirectPredict.
Fields
outputs[]

object (GoogleCloudAiplatformV1Tensor)

The prediction output.

parameters

object (GoogleCloudAiplatformV1Tensor)

The parameters that govern the prediction.

GoogleCloudAiplatformV1DirectRawPredictRequest

Request message for PredictionService.DirectRawPredict.
Fields
input

string (bytes format)

The prediction input.

methodName

string

Fully qualified name of the API method being invoked to perform predictions. Format: /namespace.Service/Method/ Example: /tensorflow.serving.PredictionService/Predict

GoogleCloudAiplatformV1DirectRawPredictResponse

Response message for PredictionService.DirectRawPredict.
Fields
output

string (bytes format)

The prediction output.

GoogleCloudAiplatformV1DiskSpec

Represents the spec of disk options.
Fields
bootDiskSizeGb

integer (int32 format)

Size in GB of the boot disk (default is 100GB).

bootDiskType

string

Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
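A DiskSpec with the documented defaults applied, as it would appear in a request body. The helper is illustrative, not part of the API surface:

```python
# Valid boot disk types per the documentation above.
VALID_BOOT_DISK_TYPES = {"pd-ssd", "pd-standard"}

def make_disk_spec(boot_disk_type="pd-ssd", boot_disk_size_gb=100):
    """Build a DiskSpec JSON object, defaulting to pd-ssd and 100 GB."""
    if boot_disk_type not in VALID_BOOT_DISK_TYPES:
        raise ValueError(f"unsupported boot disk type: {boot_disk_type}")
    return {"bootDiskType": boot_disk_type, "bootDiskSizeGb": boot_disk_size_gb}
```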

GoogleCloudAiplatformV1DoubleArray

A list of double values.
Fields
values[]

number (double format)

A list of double values.

GoogleCloudAiplatformV1EncryptionSpec

Represents a customer-managed encryption key spec that can be applied to a top-level resource.
Fields
kmsKeyName

string

Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
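The documented key-name format can be assembled from its parts; remember that the key must be in the same region as the compute resource it protects:

```python
def kms_key_name(project: str, region: str, key_ring: str, key: str) -> str:
    """Build the Cloud KMS resource identifier in the documented form:
    projects/.../locations/.../keyRings/.../cryptoKeys/..."""
    return (f"projects/{project}/locations/{region}"
            f"/keyRings/{key_ring}/cryptoKeys/{key}")
```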

GoogleCloudAiplatformV1Endpoint

Models are deployed into it, and afterwards the Endpoint is called to obtain predictions and explanations.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this Endpoint was created.

deployedModels[]

object (GoogleCloudAiplatformV1DeployedModel)

Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.

description

string

The description of the Endpoint.

displayName

string

Required. The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.

enablePrivateServiceConnect

boolean

Deprecated: If true, expose the Endpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

modelDeploymentMonitoringJob

string

Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}

name

string

Output only. The resource name of the Endpoint.

network

string

Optional. The full name of the Google Compute Engine network to which the Endpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, network or enable_private_service_connect, can be set. Format: projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is network name.

predictRequestResponseLoggingConfig

object (GoogleCloudAiplatformV1PredictRequestResponseLoggingConfig)

Configures the request-response logging for online prediction.

privateServiceConnectConfig

object (GoogleCloudAiplatformV1PrivateServiceConnectConfig)

Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.

trafficSplit

map (key: string, value: integer (int32 format))

A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, it receives no traffic. The traffic percentage values must add up to 100, or the map must be empty if the Endpoint is not to accept any traffic at the moment.
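The trafficSplit invariant above (empty map, or percentages summing to exactly 100) can be checked before sending an update. A hypothetical pre-flight check:

```python
def is_valid_traffic_split(traffic_split: dict) -> bool:
    """True when the map is empty (no traffic accepted) or the percentages
    over all DeployedModel IDs add up to exactly 100."""
    if not traffic_split:
        return True
    return (all(0 <= v <= 100 for v in traffic_split.values())
            and sum(traffic_split.values()) == 100)
```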

updateTime

string (Timestamp format)

Output only. Timestamp when this Endpoint was last updated.

GoogleCloudAiplatformV1EntityIdSelector

Selector for entityId. Gets IDs from the given source.
Fields
csvSource

object (GoogleCloudAiplatformV1CsvSource)

The CSV source.

entityIdField

string

Source column that holds entity IDs. If not provided, entity IDs are extracted from the column named entity_id.

GoogleCloudAiplatformV1EntityType

An entity type is a type of object in a system that needs to be modeled and have information stored about it. For example, driver is an entity type, and driver0 is an instance of the entity type driver.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this EntityType was created.

description

string

Optional. Description of the EntityType.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

Optional. The labels with user-defined metadata to organize your EntityTypes. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one EntityType (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

monitoringConfig

object (GoogleCloudAiplatformV1FeaturestoreMonitoringConfig)

Optional. The default monitoring configuration for all Features with value type (Feature.ValueType) BOOL, STRING, DOUBLE or INT64 under this EntityType. If this is populated with [FeaturestoreMonitoringConfig.monitoring_interval] specified, snapshot analysis monitoring is enabled. Otherwise, snapshot analysis monitoring is disabled.

name

string

Immutable. Name of the EntityType. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type} The last part entity_type is assigned by the client. The entity_type can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z and underscore(_), and ASCII digits 0-9 starting with a letter. The value will be unique given a featurestore.

offlineStorageTtlDays

integer (int32 format)

Optional. Config for data retention policy in offline storage. TTL in days for feature values that will be stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than offline_storage_ttl_days since the feature generation time. If unset (or explicitly set to 0), default to 4000 days TTL.

updateTime

string (Timestamp format)

Output only. Timestamp when this EntityType was most recently updated.

GoogleCloudAiplatformV1EnvVar

Represents an environment variable present in a Container or Python Module.
Fields
name

string

Required. Name of the environment variable. Must be a valid C identifier.

value

string

Required. Variables that reference a $(VAR_NAME) are expanded using previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string is left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references are never expanded, regardless of whether the variable exists.
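The expansion rules above can be sketched as follows. This is an approximation of the documented behavior (it only handles `$` in the `$(VAR)`/`$$(VAR)` forms); the function name and sample variables are hypothetical.

```python
import re

# Matches $(VAR_NAME) and its escaped form $$(VAR_NAME).
_REF = re.compile(r"\$(\$)?\((\w+)\)")

def expand_env_var(value: str, env: dict) -> str:
    """Expand $(VAR_NAME) references from previously defined variables.
    $$(VAR_NAME) is an escape and is emitted literally as $(VAR_NAME);
    unresolved references are left unchanged."""
    def repl(m):
        escaped, name = m.group(1), m.group(2)
        if escaped:                       # $$(VAR) -> literal $(VAR), never expanded
            return "$(" + name + ")"
        return env.get(name, m.group(0))  # unresolved reference stays as-is
    return _REF.sub(repl, value)
```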

GoogleCloudAiplatformV1ErrorAnalysisAnnotation

Model error analysis for each annotation.
Fields
attributedItems[]

object (GoogleCloudAiplatformV1ErrorAnalysisAnnotationAttributedItem)

Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.

outlierScore

number (double format)

The outlier score of this annotated item. Usually defined as the min of all distances from attributed items.

outlierThreshold

number (double format)

The threshold used to determine if this annotation is an outlier or not.

queryType

enum

The query type used for finding the attributed items.

Enum type. Can be one of the following:
QUERY_TYPE_UNSPECIFIED Unspecified query type for model error analysis.
ALL_SIMILAR Query similar samples across all classes in the dataset.
SAME_CLASS_SIMILAR Query similar samples from the same class of the input sample.
SAME_CLASS_DISSIMILAR Query dissimilar samples from the same class of the input sample.

GoogleCloudAiplatformV1ErrorAnalysisAnnotationAttributedItem

Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.
Fields
annotationResourceName

string

The unique ID for each annotation. Used by the frontend to locate the annotation in the database.

distance

number (double format)

The distance of this item to the annotation.

GoogleCloudAiplatformV1EvaluatedAnnotation

True positive, false positive, or false negative. EvaluatedAnnotation is only available under ModelEvaluationSlice with slice of annotationSpec dimension.
Fields
dataItemPayload

any

Output only. The data item payload that the Model predicted this EvaluatedAnnotation on.

errorAnalysisAnnotations[]

object (GoogleCloudAiplatformV1ErrorAnalysisAnnotation)

Annotations of model error analysis results.

evaluatedDataItemViewId

string

Output only. ID of the EvaluatedDataItemView under the same ancestor ModelEvaluation. The EvaluatedDataItemView consists of all ground truths and predictions on data_item_payload.

explanations[]

object (GoogleCloudAiplatformV1EvaluatedAnnotationExplanation)

Explanations of predictions. Each element of the explanations indicates the explanation for one explanation Method. The attributions list in the EvaluatedAnnotationExplanation.explanation object corresponds to the predictions list. For example, the second element in the attributions list explains the second element in the predictions list.

groundTruths[]

any

Output only. The ground truth Annotations, i.e. the Annotations that exist in the test data the Model is evaluated on. For true positive, there is one and only one ground truth annotation, which matches the only prediction in predictions. For false positive, there are zero or more ground truth annotations that are similar to the only prediction in predictions, but not enough for a match. For false negative, there is one and only one ground truth annotation, which doesn't match any predictions created by the model. The schema of the ground truth is stored in ModelEvaluation.annotation_schema_uri

predictions[]

any

Output only. The model predicted annotations. For true positive, there is one and only one prediction, which matches the only one ground truth annotation in ground_truths. For false positive, there is one and only one prediction, which doesn't match any ground truth annotation of the corresponding data_item_view_id. For false negative, there are zero or more predictions which are similar to the only ground truth annotation in ground_truths but not enough for a match. The schema of the prediction is stored in ModelEvaluation.annotation_schema_uri

type

enum

Output only. Type of the EvaluatedAnnotation.

Enum type. Can be one of the following:
EVALUATED_ANNOTATION_TYPE_UNSPECIFIED Invalid value.
TRUE_POSITIVE The EvaluatedAnnotation is a true positive. It has a prediction created by the Model and a ground truth Annotation which the prediction matches.
FALSE_POSITIVE The EvaluatedAnnotation is false positive. It has a prediction created by the Model which does not match any ground truth annotation.
FALSE_NEGATIVE The EvaluatedAnnotation is false negative. It has a ground truth annotation which is not matched by any of the model created predictions.

GoogleCloudAiplatformV1EvaluatedAnnotationExplanation

Explanation result of the prediction produced by the Model.
Fields
explanation

object (GoogleCloudAiplatformV1Explanation)

Explanation attribution response details.

explanationType

string

Explanation type. For AutoML Image Classification models, possible values are: * image-integrated-gradients * image-xrai

GoogleCloudAiplatformV1Event

An edge describing the relationship between an Artifact and an Execution in a lineage graph.
Fields
artifact

string

Required. The relative resource name of the Artifact in the Event.

eventTime

string (Timestamp format)

Output only. Time the Event occurred.

execution

string

Output only. The relative resource name of the Execution in the Event.

labels

map (key: string, value: string)

The labels with user-defined metadata to annotate Events. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Event (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

type

enum

Required. The type of the Event.

Enum type. Can be one of the following:
TYPE_UNSPECIFIED Unspecified whether input or output of the Execution.
INPUT An input of the Execution.
OUTPUT An output of the Execution.

GoogleCloudAiplatformV1Examples

Example-based explainability that returns the nearest neighbors from the provided dataset.
Fields
exampleGcsSource

object (GoogleCloudAiplatformV1ExamplesExampleGcsSource)

The Cloud Storage input instances.

nearestNeighborSearchConfig

any

The full configuration for the generated index; the semantics are the same as metadata and should match NearestNeighborSearchConfig.

neighborCount

integer (int32 format)

The number of neighbors to return when querying for examples.

presets

object (GoogleCloudAiplatformV1Presets)

Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.

GoogleCloudAiplatformV1ExamplesExampleGcsSource

The Cloud Storage input instances.
Fields
dataFormat

enum

The format in which instances are given. If not specified, the JSONL format is assumed. Currently only the JSONL format is supported.

Enum type. Can be one of the following:
DATA_FORMAT_UNSPECIFIED Format unspecified, used when unset.
JSONL Examples are stored in JSONL files.
gcsSource

object (GoogleCloudAiplatformV1GcsSource)

The Cloud Storage location for the input instances.

GoogleCloudAiplatformV1ExamplesOverride

Overrides for example-based explanations.
Fields
crowdingCount

integer (int32 format)

The number of neighbors to return that have the same crowding tag.

dataFormat

enum

The format of the data being provided with each call.

Enum type. Can be one of the following:
DATA_FORMAT_UNSPECIFIED Unspecified format. Must not be used.
INSTANCES Provided data is a set of model inputs.
EMBEDDINGS Provided data is a set of embeddings.
neighborCount

integer (int32 format)

The number of neighbors to return.

restrictions[]

object (GoogleCloudAiplatformV1ExamplesRestrictionsNamespace)

Restrict the resulting nearest neighbors to respect these constraints.

returnEmbeddings

boolean

If true, return the embeddings instead of neighbors.

GoogleCloudAiplatformV1ExamplesRestrictionsNamespace

Restrictions namespace for example-based explanations overrides.
Fields
allow[]

string

The list of allowed tags.

deny[]

string

The list of deny tags.

namespaceName

string

The namespace name.
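A sketch of how such restrictions might filter candidate neighbors, assuming each candidate carries one tag per namespace, that deny takes precedence over allow, and that an empty allow list permits any tag in that namespace (these precedence assumptions are mine, not stated in this reference):

```python
def passes_restrictions(tags: dict, restrictions: list) -> bool:
    """Check a candidate's per-namespace tags against allow/deny lists.
    restrictions: [{"namespaceName": ..., "allow": [...], "deny": [...]}]"""
    for r in restrictions:
        tag = tags.get(r["namespaceName"])
        if tag in r.get("deny", []):        # denied tags are rejected outright
            return False
        allow = r.get("allow", [])
        if allow and tag not in allow:      # non-empty allow list is exclusive
            return False
    return True
```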

GoogleCloudAiplatformV1Execution

Instance of a general execution.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this Execution was created.

description

string

Description of the Execution

displayName

string

User provided display name of the Execution. May be up to 128 Unicode characters.

etag

string

An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your Executions. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Execution (System labels are excluded).

metadata

map (key: string, value: any)

Properties of the Execution. Leading and trailing spaces of top-level metadata keys will be trimmed. The size of this field should not exceed 200KB.
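The key trimming and size limit can be sketched as a client-side normalization step. Measuring the size as serialized JSON is an assumption; the reference does not say how the 200KB limit is computed.

```python
import json

def normalize_execution_metadata(metadata: dict, max_bytes=200 * 1024) -> dict:
    """Trim leading/trailing spaces from top-level keys and enforce the
    documented ~200KB size limit (measured here as serialized JSON)."""
    trimmed = {key.strip(): value for key, value in metadata.items()}
    if len(json.dumps(trimmed).encode("utf-8")) > max_bytes:
        raise ValueError("Execution.metadata should not exceed 200KB")
    return trimmed
```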

name

string

Output only. The resource name of the Execution.

schemaTitle

string

The title of the schema describing the metadata. The schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as a unique identifier to identify schemas within the local metadata store.

schemaVersion

string

The version of the schema in schema_title to use. The schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as a unique identifier to identify schemas within the local metadata store.

state

enum

The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Unspecified Execution state
NEW The Execution is new
RUNNING The Execution is running
COMPLETE The Execution has finished running
FAILED The Execution has failed
CACHED The Execution completed through Cache hit.
CANCELLED The Execution was cancelled.
updateTime

string (Timestamp format)

Output only. Timestamp when this Execution was last updated.

GoogleCloudAiplatformV1ExplainRequest

Request message for PredictionService.Explain.
Fields
deployedModelId

string

If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.

explanationSpecOverride

object (GoogleCloudAiplatformV1ExplanationSpecOverride)

If specified, overrides the explanation_spec of the DeployedModel. Can be used for explaining prediction results with different configurations, such as: - Explaining top-5 predictions results as opposed to top-1; - Increasing path count or step count of the attribution methods to reduce approximate errors; - Using different baselines for explaining the prediction results.

instances[]

any

Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when it is exceeded, the explanation call errors in the case of AutoML Models, or, in the case of customer-created Models, behaves as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.

parameters

any

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.

GoogleCloudAiplatformV1ExplainResponse

Response message for PredictionService.Explain.
Fields
deployedModelId

string

ID of the Endpoint's DeployedModel that served this explanation.

explanations[]

object (GoogleCloudAiplatformV1Explanation)

The explanations of the Model's PredictResponse.predictions. It has the same number of elements as instances to be explained.

predictions[]

any

The predictions that are the output of the predictions call. Same as PredictResponse.predictions.

GoogleCloudAiplatformV1Explanation

Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.
Fields
attributions[]

object (GoogleCloudAiplatformV1Attribution)

Output only. Feature attributions grouped by predicted outputs. For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not the approval, even though the latter might be the positive class. If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.

neighbors[]

object (GoogleCloudAiplatformV1Neighbor)

Output only. List of the nearest neighbors for example-based explanations. For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.

GoogleCloudAiplatformV1ExplanationMetadata

Metadata describing the Model's input and output for explanation.
Fields
featureAttributionsSchemaUri

string

Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.

inputs

map (key: string, value: object (GoogleCloudAiplatformV1ExplanationMetadataInputMetadata))

Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in instance.

latentSpaceSource

string

Name of the source to generate embeddings for example based explanations.

outputs

map (key: string, value: object (GoogleCloudAiplatformV1ExplanationMetadataOutputMetadata))

Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed.

GoogleCloudAiplatformV1ExplanationMetadataInputMetadata

Metadata of the input of a feature. Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.
Fields
denseShapeTensorName

string

Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.

encodedBaselines[]

any

A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.

encodedTensorName

string

Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table.

encoding

enum

Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.

Enum type. Can be one of the following:
ENCODING_UNSPECIFIED Default value. This is the same as IDENTITY.
IDENTITY The tensor represents one feature.
BAG_OF_FEATURES The tensor represents a bag of features where each index maps to a feature. InputMetadata.index_feature_mapping must be provided for this encoding. For example: input = [27, 6.0, 150] index_feature_mapping = ["age", "height", "weight"]
BAG_OF_FEATURES_SPARSE The tensor represents a bag of features where each index maps to a feature. Zero values in the tensor indicate that the corresponding feature is non-existent. InputMetadata.index_feature_mapping must be provided for this encoding. For example: input = [2, 0, 5, 0, 1] index_feature_mapping = ["a", "b", "c", "d", "e"]
INDICATOR The tensor is a list of binaries representing whether a feature exists or not (1 indicates existence). InputMetadata.index_feature_mapping must be provided for this encoding. For example: input = [1, 0, 1, 0, 1] index_feature_mapping = ["a", "b", "c", "d", "e"]
COMBINED_EMBEDDING The tensor is encoded into a 1-dimensional array represented by an encoded tensor. InputMetadata.encoded_tensor_name must be provided for this encoding. For example: input = ["This", "is", "a", "test", "."] encoded = [0.1, 0.2, 0.3, 0.4, 0.5]
CONCAT_EMBEDDING Select this encoding when the input tensor is encoded into a 2-dimensional array represented by an encoded tensor. InputMetadata.encoded_tensor_name must be provided for this encoding. The first dimension of the encoded tensor's shape is the same as the input tensor's shape. For example: input = ["This", "is", "a", "test", "."] encoded = [[0.1, 0.2, 0.3, 0.4, 0.5], [0.2, 0.1, 0.4, 0.3, 0.5], [0.5, 0.1, 0.3, 0.5, 0.4], [0.5, 0.3, 0.1, 0.2, 0.4], [0.4, 0.3, 0.2, 0.5, 0.1]]
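To make the bag-of-features encodings above concrete, here is a minimal sketch of an InputMetadata payload as a Python dict. Field names follow the REST schema; the tensor name and feature values are illustrative assumptions, not values from this reference:

```python
# Hypothetical InputMetadata for a tensor carrying three tabular features,
# using the BAG_OF_FEATURES encoding described above.
input_metadata = {
    "inputTensorName": "dense_input",  # assumed tensor name
    "encoding": "BAG_OF_FEATURES",
    "indexFeatureMapping": ["age", "height", "weight"],
}

# A matching input tensor: index i holds the value of indexFeatureMapping[i].
instance = [27, 6.0, 150]

# Each index of the tensor maps to exactly one named feature.
assert len(instance) == len(input_metadata["indexFeatureMapping"])
```

With this encoding, the returned attributions can be keyed per named feature rather than per opaque tensor index.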
featureValueDomain

object (GoogleCloudAiplatformV1ExplanationMetadataInputMetadataFeatureValueDomain)

The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.

groupName

string

Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name.

indexFeatureMapping[]

string

A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR.

indicesTensorName

string

Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.

inputBaselines[]

any

Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
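As a sketch of the baseline rules just described, the following shows an InputMetadata with two baselines for a single feature. The tensor name and shapes are hypothetical; what matters is that a scalar baseline is broadcast while an explicit baseline must match the input tensor's shape, and attributions are averaged across the baselines:

```python
# Hypothetical InputMetadata with two baselines; per the description above,
# Vertex AI returns the average attribution across them.
input_metadata = {
    "inputTensorName": "pixels",  # assumed tensor name
    "inputBaselines": [
        0.0,                   # scalar: broadcast to the input tensor's shape
        [[0.5] * 4] * 4,       # explicit 4x4 baseline for an assumed 4x4 input
    ],
}

# Two baselines were provided, so attributions would be averaged over both.
assert len(input_metadata["inputBaselines"]) == 2
```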

inputTensorName

string

Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.

modality

string

Modality of the feature. Valid values are: numeric, image. Defaults to numeric.

visualization

object (GoogleCloudAiplatformV1ExplanationMetadataInputMetadataVisualization)

Visualization configurations for image explanation.

GoogleCloudAiplatformV1ExplanationMetadataInputMetadataFeatureValueDomain

Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained.
Fields
maxValue

number (float format)

The maximum permissible value for this feature.

minValue

number (float format)

The minimum permissible value for this feature.

originalMean

number (float format)

If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.

originalStddev

number (float format)

If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.

GoogleCloudAiplatformV1ExplanationMetadataInputMetadataVisualization

Visualization configurations for image explanation.
Fields
clipPercentLowerbound

number (float format)

Excludes attributions below the specified percentile from the highlighted areas. Defaults to 62.

clipPercentUpperbound

number (float format)

Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.

colorMap

enum

The color scheme used for the highlighted areas. Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue.

Enum type. Can be one of the following:
COLOR_MAP_UNSPECIFIED Should not be used.
PINK_GREEN Positive: green. Negative: pink.
VIRIDIS Viridis color map: A perceptually uniform color mapping which is easier to see by those with colorblindness and progresses from yellow to green to blue. Positive: yellow. Negative: blue.
RED Positive: red. Negative: red.
GREEN Positive: green. Negative: green.
RED_GREEN Positive: green. Negative: red.
PINK_WHITE_GREEN PiYG palette.
overlayType

enum

How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.

Enum type. Can be one of the following:
OVERLAY_TYPE_UNSPECIFIED Default value. This is the same as NONE.
NONE No overlay.
ORIGINAL The attributions are shown on top of the original image.
GRAYSCALE The attributions are shown on top of grayscaled version of the original image.
MASK_BLACK The attributions are used as a mask to reveal predictive parts of the image and hide the un-predictive parts.
polarity

enum

Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE.

Enum type. Can be one of the following:
POLARITY_UNSPECIFIED Default value. This is the same as POSITIVE.
POSITIVE Highlights the pixels/outlines that were most influential to the model's prediction.
NEGATIVE Setting polarity to negative highlights areas that do not lead to the model's current prediction.
BOTH Shows both positive and negative attributions.
type

enum

Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.

Enum type. Can be one of the following:
TYPE_UNSPECIFIED Should not be used.
PIXELS Shows which pixel contributed to the image prediction.
OUTLINES Shows which region contributed to the image prediction by outlining the region.
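Tying the Visualization fields together, here is a hedged example configuration as a Python dict. The specific values are illustrative choices, not defaults from this reference (except clipPercentUpperbound):

```python
# Hypothetical visualization config for an Integrated Gradients explanation:
# outline influential regions, positive attributions only, Viridis colors,
# over a grayscaled copy of the original image.
visualization = {
    "type": "OUTLINES",
    "polarity": "POSITIVE",
    "colorMap": "VIRIDIS",
    "clipPercentLowerbound": 70.0,   # illustrative; default is 62
    "clipPercentUpperbound": 99.9,   # the documented default
    "overlayType": "GRAYSCALE",
}

# The clip bounds must form a valid percentile window.
assert 0 <= visualization["clipPercentLowerbound"] < visualization["clipPercentUpperbound"] <= 100
```

Raising the lower bound filters out more low-attribution noise at the cost of hiding weaker signals.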

GoogleCloudAiplatformV1ExplanationMetadataOutputMetadata

Metadata of the prediction output to be explained.
Fields
displayNameMappingKey

string

Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape as the outputs, so that they can be located by Attribution.output_index for a specific output.

indexDisplayNameMapping

any

Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sorts the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index.
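A short sketch of how indexDisplayNameMapping resolves an output index to a display name. The class names and scores are hypothetical:

```python
# Hypothetical OutputMetadata for a 3-class classifier with a fixed class order.
output_metadata = {
    "outputTensorName": "scores",  # assumed tensor name
    "indexDisplayNameMapping": ["cat", "dog", "bird"],
}

# A prediction over the three classes, in the same fixed order.
prediction = [0.1, 0.7, 0.2]

# Attribution.output_index identifies the explained output (here, the argmax);
# the display name is found by indexing into the static mapping.
output_index = max(range(len(prediction)), key=prediction.__getitem__)
display_name = output_metadata["indexDisplayNameMapping"][output_index]
# → "dog"
```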

outputTensorName

string

Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.

GoogleCloudAiplatformV1ExplanationMetadataOverride

The ExplanationMetadata entries that can be overridden at online explanation time.
Fields
inputs

map (key: string, value: object (GoogleCloudAiplatformV1ExplanationMetadataOverrideInputMetadataOverride))

Required. Overrides the input metadata of the features. The key is the name of the feature to be overridden. The keys specified here must exist in the input metadata to be overridden. If a feature is not specified here, the corresponding feature's input metadata is not overridden.

GoogleCloudAiplatformV1ExplanationMetadataOverrideInputMetadataOverride

The input metadata entries to be overridden.
Fields
inputBaselines[]

any

Baseline inputs for this feature. This overrides the input_baseline field of the ExplanationMetadata.InputMetadata object of the corresponding feature's input metadata. If it's not specified, the original baselines are not overridden.

GoogleCloudAiplatformV1ExplanationParameters

Parameters to configure explaining for Model's predictions.
Fields
examples

object (GoogleCloudAiplatformV1Examples)

Example-based explanations that returns the nearest neighbors from the provided dataset.

integratedGradientsAttribution

object (GoogleCloudAiplatformV1IntegratedGradientsAttribution)

An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365

outputIndices[]

any

If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).

sampledShapleyAttribution

object (GoogleCloudAiplatformV1SampledShapleyAttribution)

An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.

topK

integer (int32 format)

If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.

xraiAttribution

object (GoogleCloudAiplatformV1XraiAttribution)

An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
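Combining the fields above, here is a hedged ExplanationParameters payload that requests Sampled Shapley attributions for the top 3 outputs. The pathCount value is illustrative, not a recommended setting:

```python
# Hypothetical ExplanationParameters: Sampled Shapley attribution with
# attributions returned for the top 3 output indices. Exactly one attribution
# method (sampledShapley / integratedGradients / xrai / examples) is chosen.
parameters = {
    "sampledShapleyAttribution": {
        "pathCount": 10,  # illustrative number of feature permutations to sample
    },
    "topK": 3,
}

# topK and outputIndices are alternatives; only topK is set here.
assert "outputIndices" not in parameters
```

Per the field descriptions above, setting outputIndices instead of topK would pin the explained outputs to specific indices rather than the top-scoring ones.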

GoogleCloudAiplatformV1ExplanationSpec

Specification of Model explanation.
Fields
metadata

object (GoogleCloudAiplatformV1ExplanationMetadata)

Optional. Metadata describing the Model's input and output for explanation.

parameters

object (GoogleCloudAiplatformV1ExplanationParameters)

Required. Parameters that configure explaining of the Model's predictions.

GoogleCloudAiplatformV1ExplanationSpecOverride

The ExplanationSpec entries that can be overridden at online explanation time.
Fields
examplesOverride

object (GoogleCloudAiplatformV1ExamplesOverride)

The example-based explanations parameter overrides.

metadata

object (GoogleCloudAiplatformV1ExplanationMetadataOverride)

The metadata to be overridden. If not specified, no metadata is overridden.

parameters

object (GoogleCloudAiplatformV1ExplanationParameters)

The parameters to be overridden. Note that the attribution method cannot be changed. If not specified, no parameter is overridden.

GoogleCloudAiplatformV1ExportDataConfig

Describes what part of the Dataset is to be exported, the destination of the export and how to export.
Fields
annotationSchemaUri

string

The Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/; note that the chosen schema must be consistent with the metadata of the Dataset specified by dataset_id. Only used for custom training data export use cases. Only applicable to Datasets that have DataItems and Annotations. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.

annotationsFilter

string

An expression for filtering what part of the Dataset is to be exported. Only Annotations that match this filter will be exported. The filter syntax is the same as in ListAnnotations.

exportUse

enum

Indicates the usage of the exported files.

Enum type. Can be one of the following:
EXPORT_USE_UNSPECIFIED Regular user export.
CUSTOM_CODE_TRAINING Export for custom code training.
filterSplit

object (GoogleCloudAiplatformV1ExportFilterSplit)

Split based on the provided filters for each set.

fractionSplit

object (GoogleCloudAiplatformV1ExportFractionSplit)

Split based on fractions defining the size of each set.

gcsDestination

object (GoogleCloudAiplatformV1GcsDestination)

The Google Cloud Storage location where the output is to be written to. In the given directory a new directory will be created with name: export-data-- where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All export output will be written into that directory. Inside that directory, annotations with the same schema will be grouped into sub directories which are named with the corresponding annotations' schema title. Inside these sub directories, a schema.yaml will be created to describe the output format.

savedQueryId

string

The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id used for filtering Annotations for training. Only used for custom training data export use cases. Only applicable to Datasets that have SavedQueries. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter. Only one of saved_query_id and annotation_schema_uri should be specified, as both of them represent the same thing: the problem type.
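Putting the ExportDataConfig fields together, a hedged example payload for a custom code training export. The bucket, SavedQuery ID, and split fractions are hypothetical placeholders:

```python
# Hypothetical ExportDataConfig for a custom code training export.
export_config = {
    "gcsDestination": {
        "outputUriPrefix": "gs://my-bucket/exports/",  # assumed bucket
    },
    "exportUse": "CUSTOM_CODE_TRAINING",
    # savedQueryId and annotationSchemaUri are mutually exclusive;
    # only one is set here (the ID is a placeholder).
    "savedQueryId": "1234567890",
    "fractionSplit": {
        "trainingFraction": 0.8,
        "validationFraction": 0.1,
        "testFraction": 0.1,
    },
}

# Exactly one of the mutually exclusive problem-type fields is set.
assert ("savedQueryId" in export_config) != ("annotationSchemaUri" in export_config)
```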

GoogleCloudAiplatformV1ExportDataOperationMetadata

Runtime operation information for DatasetService.ExportData.
Fields
gcsOutputDirectory

string

A Google Cloud Storage directory whose path ends with '/'. The exported data is stored in the directory.

genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The common part of the operation metadata.

GoogleCloudAiplatformV1ExportDataRequest

Request message for DatasetService.ExportData.
Fields
exportConfig

object (GoogleCloudAiplatformV1ExportDataConfig)

Required. The desired output location.

GoogleCloudAiplatformV1ExportDataResponse

Response message for DatasetService.ExportData.
Fields
dataStats

object (GoogleCloudAiplatformV1ModelDataStats)

Only present for custom code training export use case. Records data stats, i.e., train/validation/test item/annotation counts calculated during the export operation.

exportedFiles[]

string

All of the files that are exported in this export operation. For custom code training export, only three (training, validation and test) Cloud Storage paths in wildcard format are populated (for example, gs://.../training-*).

GoogleCloudAiplatformV1ExportFeatureValuesOperationMetadata

Details of operations that export Feature values.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Featurestore export Feature values.

GoogleCloudAiplatformV1ExportFeatureValuesRequest

Request message for FeaturestoreService.ExportFeatureValues.
Fields
destination

object (GoogleCloudAiplatformV1FeatureValueDestination)

Required. Specifies destination location and format.

featureSelector

object (GoogleCloudAiplatformV1FeatureSelector)

Required. Selects Features to export values of.

fullExport

object (GoogleCloudAiplatformV1ExportFeatureValuesRequestFullExport)

Exports all historical values of all entities of the EntityType within a time range.

settings[]

object (GoogleCloudAiplatformV1DestinationFeatureSetting)

Per-Feature export settings.

snapshotExport

object (GoogleCloudAiplatformV1ExportFeatureValuesRequestSnapshotExport)

Exports the latest Feature values of all entities of the EntityType within a time range.

GoogleCloudAiplatformV1ExportFeatureValuesRequestFullExport

Describes exporting all historical Feature values of all entities of the EntityType between [start_time, end_time].
Fields
endTime

string (Timestamp format)

Exports Feature values as of this timestamp. If not set, retrieve values as of now. Timestamp, if present, must not have higher than millisecond precision.

startTime

string (Timestamp format)

Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision.

GoogleCloudAiplatformV1ExportFeatureValuesRequestSnapshotExport

Describes exporting the latest Feature values of all entities of the EntityType between [start_time, snapshot_time].
Fields
snapshotTime

string (Timestamp format)

Exports Feature values as of this timestamp. If not set, retrieve values as of now. Timestamp, if present, must not have higher than millisecond precision.

startTime

string (Timestamp format)

Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision.

GoogleCloudAiplatformV1ExportFilterSplit

Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters in this message is meant to match nothing, it can be set to '-' (the minus sign). Supported only for unstructured Datasets.
Fields
testFilter

string

Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to test the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.

trainingFilter

string

Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to train the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.

validationFilter

string

Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to validate the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.
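As a sketch of an ExportFilterSplit, the following payload uses hypothetical label-based filters (the filter expressions are placeholders in the ListDataItems filter syntax, not values taken from this reference). The '-' value for testFilter matches nothing, as described above:

```python
# Hypothetical ExportFilterSplit: DataItems are routed to the first matching
# set in training -> validation -> test order.
filter_split = {
    "trainingFilter": "labels.split=training",      # placeholder filter
    "validationFilter": "labels.split=validation",  # placeholder filter
    "testFilter": "-",  # '-' matches nothing: no dedicated test set
}

# All three filters are required, even when one intentionally matches nothing.
assert set(filter_split) == {"trainingFilter", "validationFilter", "testFilter"}
```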

GoogleCloudAiplatformV1ExportFractionSplit

Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of data is used for training, 10% for validation, and 10% for test.
Fields
testFraction

number (double format)

The fraction of the input data that is to be used to evaluate the Model.

trainingFraction

number (double format)

The fraction of the input data that is to be used to train the Model.

validationFraction

number (double format)

The fraction of the input data that is to be used to validate the Model.
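An ExportFractionSplit matching the documented 80/10/10 default can be written out explicitly; the sum-to-at-most-1 constraint is easy to check up front:

```python
# ExportFractionSplit mirroring the documented default 80/10/10 split.
fraction_split = {
    "trainingFraction": 0.8,
    "validationFraction": 0.1,
    "testFraction": 0.1,
}

# The fractions must sum to at most 1 (tolerance for float rounding).
assert sum(fraction_split.values()) <= 1.0 + 1e-9
```

If the fractions summed to less than 1, Vertex AI would assign the remainder to sets as it sees fit, per the description above.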

GoogleCloudAiplatformV1ExportModelOperationMetadata

Details of ModelService.ExportModel operation.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The common part of the operation metadata.

outputInfo

object (GoogleCloudAiplatformV1ExportModelOperationMetadataOutputInfo)

Output only. Information further describing the output of this Model export.

GoogleCloudAiplatformV1ExportModelOperationMetadataOutputInfo

Further describes the output of the ExportModel. Supplements ExportModelRequest.OutputConfig.
Fields
artifactOutputUri

string

Output only. If the Model artifact is being exported to Google Cloud Storage, this is the full path of the directory created, into which the Model files are written.

imageOutputUri

string

Output only. If the Model image is being exported to Google Container Registry or Artifact Registry this is the full path of the image created.

GoogleCloudAiplatformV1ExportModelRequest

Request message for ModelService.ExportModel.
Fields
outputConfig

object (GoogleCloudAiplatformV1ExportModelRequestOutputConfig)

Required. The desired output location and configuration.

GoogleCloudAiplatformV1ExportModelRequestOutputConfig

Output configuration for the Model export.
Fields
artifactDestination

object (GoogleCloudAiplatformV1GcsDestination)

The Cloud Storage location where the Model artifact is to be written to. Under the directory given as the destination a new one with name "model-export--", where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format, will be created. Inside, the Model and any of its supporting files will be written. This field should only be set when the exportableContent field of the [Model.supported_export_formats] object contains ARTIFACT.

exportFormatId

string

The ID of the format in which the Model must be exported. Each Model lists the export formats it supports. If no value is provided here, then the first from the list of the Model's supported formats is used by default.

imageDestination

object (GoogleCloudAiplatformV1ContainerRegistryDestination)

The Google Container Registry or Artifact Registry uri where the Model container image will be copied to. This field should only be set when the exportableContent field of the [Model.supported_export_formats] object contains IMAGE.

GoogleCloudAiplatformV1ExportTensorboardTimeSeriesDataRequest

Request message for TensorboardService.ExportTensorboardTimeSeriesData.
Fields
filter

string

Exports the TensorboardTimeSeries' data that match the filter expression.

orderBy

string

Field to use to sort the TensorboardTimeSeries' data. By default, TensorboardTimeSeries' data is returned in a pseudo random order.

pageSize

integer (int32 format)

The maximum number of data points to return per page. The default page_size is 1000. Values must be between 1 and 10000. Values above 10000 are coerced to 10000.

pageToken

string

A page token, received from a previous ExportTensorboardTimeSeriesData call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to ExportTensorboardTimeSeriesData must match the call that provided the page token.
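The page_token / nextPageToken contract above follows the standard list-pagination loop. The sketch below stands in for the real ExportTensorboardTimeSeriesData call with a hypothetical `export_page` stub that returns dicts shaped like the response message; only the loop structure is the point:

```python
# Hypothetical stand-in for the ExportTensorboardTimeSeriesData API call,
# returning dicts shaped like ExportTensorboardTimeSeriesDataResponse.
def export_page(page_token=""):
    pages = {
        "":   {"timeSeriesDataPoints": [1, 2], "nextPageToken": "p2"},
        "p2": {"timeSeriesDataPoints": [3],    "nextPageToken": ""},
    }
    return pages[page_token]

def export_all():
    """Drain all pages: feed each nextPageToken back as the next page_token."""
    points, token = [], ""
    while True:
        resp = export_page(page_token=token)
        points.extend(resp["timeSeriesDataPoints"])
        token = resp.get("nextPageToken", "")
        if not token:  # omitted or empty token means no more pages
            return points

assert export_all() == [1, 2, 3]
```

Note the documented requirement: all other request parameters must stay identical across calls in one pagination sequence.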

GoogleCloudAiplatformV1ExportTensorboardTimeSeriesDataResponse

Response message for TensorboardService.ExportTensorboardTimeSeriesData.
Fields
nextPageToken

string

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

timeSeriesDataPoints[]

object (GoogleCloudAiplatformV1TimeSeriesDataPoint)

The returned time series data points.

GoogleCloudAiplatformV1Feature

Feature Metadata information. For example, color is a feature that describes an apple.
Fields
createTime

string (Timestamp format)

Output only. Only applicable for Vertex AI Feature Store (Legacy). Timestamp when this EntityType was created.

description

string

Description of the Feature.

disableMonitoring

boolean

Optional. Only applicable for Vertex AI Feature Store (Legacy). If not set, use the monitoring_config defined for the EntityType this Feature belongs to. Only Features with type (Feature.ValueType) BOOL, STRING, DOUBLE or INT64 can enable monitoring. If set to true, all types of data monitoring are disabled despite the config on EntityType.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

Optional. The labels with user-defined metadata to organize your Features. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one Feature (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

monitoringStatsAnomalies[]

object (GoogleCloudAiplatformV1FeatureMonitoringStatsAnomaly)

Output only. Only applicable for Vertex AI Feature Store (Legacy). The list of historical stats and anomalies with specified objectives.

name

string

Immutable. Name of the Feature. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}/features/{feature} projects/{project}/locations/{location}/featureGroups/{feature_group}/features/{feature} The last part, feature, is assigned by the client. The feature can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscore (_), and ASCII digits 0-9, starting with a letter. The value will be unique given an entity type.

pointOfContact

string

Entity responsible for maintaining this feature. Can be a comma-separated list of email addresses or URIs.

updateTime

string (Timestamp format)

Output only. Only applicable for Vertex AI Feature Store (Legacy). Timestamp when this EntityType was most recently updated.

valueType

enum

Immutable. Only applicable for Vertex AI Feature Store (Legacy). Type of Feature value.

Enum type. Can be one of the following:
VALUE_TYPE_UNSPECIFIED The value type is unspecified.
BOOL Used for Feature that is a boolean.
BOOL_ARRAY Used for Feature that is a list of boolean.
DOUBLE Used for Feature that is double.
DOUBLE_ARRAY Used for Feature that is a list of double.
INT64 Used for Feature that is INT64.
INT64_ARRAY Used for Feature that is a list of INT64.
STRING Used for Feature that is string.
STRING_ARRAY Used for Feature that is a list of String.
BYTES Used for Feature that is bytes.
versionColumnName

string

Only applicable for Vertex AI Feature Store. The name of the BigQuery Table/View column hosting data for this version. If no value is provided, will use feature_id.

GoogleCloudAiplatformV1FeatureGroup

Vertex AI Feature Group.
Fields
bigQuery

object (GoogleCloudAiplatformV1FeatureGroupBigQuery)

Indicates that features for this group come from BigQuery Table/View. By default treats the source as a sparse time series source. The BigQuery source table or view must have at least one entity ID column and a column named feature_timestamp.

createTime

string (Timestamp format)

Output only. Timestamp when this FeatureGroup was created.

description

string

Optional. Description of the FeatureGroup.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

Optional. The labels with user-defined metadata to organize your FeatureGroup. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureGroup (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

name

string

Identifier. Name of the FeatureGroup. Format: projects/{project}/locations/{location}/featureGroups/{featureGroup}

updateTime

string (Timestamp format)

Output only. Timestamp when this FeatureGroup was last updated.

GoogleCloudAiplatformV1FeatureGroupBigQuery

Input source type for BigQuery Tables and Views.
Fields
bigQuerySource

object (GoogleCloudAiplatformV1BigQuerySource)

Required. Immutable. The BigQuery source URI that points to either a BigQuery Table or View.

entityIdColumns[]

string

Optional. Columns to construct entity_id / row keys. If not provided defaults to entity_id.

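Putting the FeatureGroup fields above together, a minimal create-time payload might look like the following sketch. The project, dataset, table, and column names are hypothetical placeholders, not real resources:

```python
# Hypothetical FeatureGroup payload built from the documented fields.
# The BigQuery URI and column names are placeholders.
feature_group = {
    "description": "Customer features",
    "bigQuery": {
        "bigQuerySource": {
            # Must point to a BigQuery Table or View with a
            # feature_timestamp column and at least one entity ID column.
            "inputUri": "bq://my-project.my_dataset.customer_features",
        },
        # Defaults to ["entity_id"] when omitted.
        "entityIdColumns": ["customer_id"],
    },
    "labels": {"team": "ml-platform"},
}

assert feature_group["bigQuery"]["bigQuerySource"]["inputUri"].startswith("bq://")
```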
GoogleCloudAiplatformV1FeatureMonitoringStatsAnomaly

A list of historical SnapshotAnalysis or ImportFeaturesAnalysis stats requested by the user, sorted by FeatureStatsAnomaly.start_time descending.
Fields
featureStatsAnomaly

object (GoogleCloudAiplatformV1FeatureStatsAnomaly)

Output only. The stats and anomalies generated at specific timestamp.

objective

enum

Output only. The objective for each stats.

Enum type. Can be one of the following:
OBJECTIVE_UNSPECIFIED If it's OBJECTIVE_UNSPECIFIED, monitoring_stats will be empty.
IMPORT_FEATURE_ANALYSIS Stats are generated by Import Feature Analysis.
SNAPSHOT_ANALYSIS Stats are generated by Snapshot Analysis.

GoogleCloudAiplatformV1FeatureNoiseSigma

Noise sigma by features. Noise sigma represents the standard deviation of the Gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients.
Fields
noiseSigma[]

object (GoogleCloudAiplatformV1FeatureNoiseSigmaNoiseSigmaForFeature)

Noise sigma per feature. No noise is added to features that are not set.

GoogleCloudAiplatformV1FeatureNoiseSigmaNoiseSigmaForFeature

Noise sigma for a single feature.
Fields
name

string

The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.

sigma

number (float format)

This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.

GoogleCloudAiplatformV1FeatureOnlineStore

Vertex AI Feature Online Store provides a centralized repository for serving ML features and embedding indexes at low latency. The Feature Online Store is a top-level container.
Fields
bigtable

object (GoogleCloudAiplatformV1FeatureOnlineStoreBigtable)

Contains settings for the Cloud Bigtable instance that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore.

createTime

string (Timestamp format)

Output only. Timestamp when this FeatureOnlineStore was created.

dedicatedServingEndpoint

object (GoogleCloudAiplatformV1FeatureOnlineStoreDedicatedServingEndpoint)

Optional. The dedicated serving endpoint for this FeatureOnlineStore, which is different from common Vertex service endpoint.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

Optional. The labels with user-defined metadata to organize your FeatureOnlineStore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore (system labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

name

string

Identifier. Name of the FeatureOnlineStore. Format: projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}

optimized

object (GoogleCloudAiplatformV1FeatureOnlineStoreOptimized)

Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choosing the Optimized storage type, set PrivateServiceConnectConfig.enable_private_service_connect to use a private endpoint; otherwise the public endpoint is used by default.

state

enum

Output only. State of the featureOnlineStore.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Default value. This value is unused.
STABLE State when the featureOnlineStore configuration is not being updated and the fields reflect the current configuration of the featureOnlineStore. The featureOnlineStore is usable in this state.
UPDATING The state of the featureOnlineStore configuration when it is being updated. During an update, the fields reflect either the original configuration or the updated configuration of the featureOnlineStore. The featureOnlineStore is still usable in this state.
updateTime

string (Timestamp format)

Output only. Timestamp when this FeatureOnlineStore was last updated.

GoogleCloudAiplatformV1FeatureOnlineStoreBigtable

(No description provided)
Fields
autoScaling

object (GoogleCloudAiplatformV1FeatureOnlineStoreBigtableAutoScaling)

Required. Autoscaling config applied to Bigtable Instance.

GoogleCloudAiplatformV1FeatureOnlineStoreBigtableAutoScaling

(No description provided)
Fields
cpuUtilizationTarget

integer (int32 format)

Optional. A percentage of the cluster's CPU capacity. Can be from 10% to 80%. When a cluster's CPU utilization exceeds the target that you have set, Bigtable immediately adds nodes to the cluster. When CPU utilization is substantially lower than the target, Bigtable removes nodes. If not set, defaults to 50%.

maxNodeCount

integer (int32 format)

Required. The maximum number of nodes to scale up to. Must be greater than or equal to min_node_count, and less than or equal to 10 times min_node_count.

minNodeCount

integer (int32 format)

Required. The minimum number of nodes to scale down to. Must be greater than or equal to 1.

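The node-count and CPU-target constraints above can be checked client-side before sending a request. A minimal validation sketch (the function name is ours, not part of the API):

```python
def validate_bigtable_autoscaling(min_node_count, max_node_count,
                                  cpu_utilization_target=None):
    """Mirror the documented BigtableAutoScaling constraints."""
    if min_node_count < 1:
        raise ValueError("min_node_count must be >= 1")
    # max must lie in [min_node_count, 10 * min_node_count].
    if not (min_node_count <= max_node_count <= 10 * min_node_count):
        raise ValueError("max_node_count must be within "
                         "[min_node_count, 10 * min_node_count]")
    # Optional CPU target is a percentage between 10 and 80.
    if cpu_utilization_target is not None and not (10 <= cpu_utilization_target <= 80):
        raise ValueError("cpu_utilization_target must be between 10 and 80")

validate_bigtable_autoscaling(1, 10, cpu_utilization_target=50)  # ok
```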
GoogleCloudAiplatformV1FeatureOnlineStoreDedicatedServingEndpoint

The dedicated serving endpoint for this FeatureOnlineStore. Only needs to be set when you choose the Optimized storage type. A public endpoint is provisioned by default.
Fields
publicEndpointDomainName

string

Output only. This field will be populated with the domain name to use for this FeatureOnlineStore.

GoogleCloudAiplatformV1FeatureSelector

Selector for Features of an EntityType.
Fields
idMatcher

object (GoogleCloudAiplatformV1IdMatcher)

Required. Matches Features based on ID.

GoogleCloudAiplatformV1FeatureStatsAnomaly

Stats and Anomaly generated at a specific timestamp for a specific Feature. The start_time and end_time define the time range of the dataset that the current stats belong to, e.g. prediction traffic is bucketed into prediction datasets by time window. If the dataset is not defined by time window, start_time = end_time. The timestamp of the stats and anomalies always refers to end_time. Raw stats and anomalies are stored in stats_uri or anomaly_uri in the TensorFlow-defined protos. Field data_stats contains almost identical information to the raw stats, in a Vertex AI defined proto, for the UI to display.
Fields
anomalyDetectionThreshold

number (double format)

This is the threshold used when detecting anomalies. The threshold can be changed by user, so this one might be different from ThresholdConfig.value.

anomalyUri

string

Path of the anomaly file for current feature values in Cloud Storage bucket. Format: gs:////anomalies. Example: gs://monitoring_bucket/feature_name/anomalies. Anomalies are stored in binary format with Protobuf message tensorflow.metadata.v0.AnomalyInfo.

distributionDeviation

number (double format)

Deviation of the current stats from the baseline stats. 1. For categorical features, the distribution distance is calculated by L-infinity norm. 2. For numerical features, the distribution distance is calculated by Jensen–Shannon divergence.

endTime

string (Timestamp format)

The end timestamp of window where stats were generated. For objectives where time window doesn't make sense (e.g. Featurestore Snapshot Monitoring), end_time indicates the timestamp of the data used to generate stats (e.g. timestamp we take snapshots for feature values).

score

number (double format)

Feature importance score, only populated when cross-feature monitoring is enabled. For now only used to represent feature attribution score within range [0, 1] for ModelDeploymentMonitoringObjectiveType.FEATURE_ATTRIBUTION_SKEW and ModelDeploymentMonitoringObjectiveType.FEATURE_ATTRIBUTION_DRIFT.

startTime

string (Timestamp format)

The start timestamp of window where stats were generated. For objectives where time window doesn't make sense (e.g. Featurestore Snapshot Monitoring), start_time is only used to indicate the monitoring intervals, so it always equals to (end_time - monitoring_interval).

statsUri

string

Path of the stats file for current feature values in Cloud Storage bucket. Format: gs:////stats. Example: gs://monitoring_bucket/feature_name/stats. Stats are stored in binary format with Protobuf message tensorflow.metadata.v0.FeatureNameStatistics.

GoogleCloudAiplatformV1FeatureValue

Value for a feature.
Fields
boolArrayValue

object (GoogleCloudAiplatformV1BoolArray)

A list of bool type feature values.

boolValue

boolean

Bool type feature value.

bytesValue

string (bytes format)

Bytes feature value.

doubleArrayValue

object (GoogleCloudAiplatformV1DoubleArray)

A list of double type feature values.

doubleValue

number (double format)

Double type feature value.

int64ArrayValue

object (GoogleCloudAiplatformV1Int64Array)

A list of int64 type feature values.

int64Value

string (int64 format)

Int64 feature value.

metadata

object (GoogleCloudAiplatformV1FeatureValueMetadata)

Metadata of feature value.

stringArrayValue

object (GoogleCloudAiplatformV1StringArray)

A list of string type feature values.

stringValue

string

String feature value.

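FeatureValue behaves like a oneof: exactly one of the value fields above should be set per message. A small builder sketch (the helper function is ours; note that in the JSON mapping int64 values are carried as decimal strings, and the array wrappers expose a values[] field):

```python
def feature_value(kind, value):
    """Build a FeatureValue payload with exactly one value field set."""
    field = {
        "BOOL": "boolValue",
        "DOUBLE": "doubleValue",
        "INT64": "int64Value",   # int64 travels as a decimal string in JSON
        "STRING": "stringValue",
        "BYTES": "bytesValue",
        "DOUBLE_ARRAY": ("doubleArrayValue", "values"),
        "STRING_ARRAY": ("stringArrayValue", "values"),
    }[kind]
    if isinstance(field, tuple):
        outer, inner = field
        return {outer: {inner: value}}
    return {field: str(value) if kind == "INT64" else value}

assert feature_value("INT64", 42) == {"int64Value": "42"}
assert feature_value("DOUBLE_ARRAY", [1.0, 2.0]) == {
    "doubleArrayValue": {"values": [1.0, 2.0]}}
```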
GoogleCloudAiplatformV1FeatureValueDestination

A destination location for Feature values and format.
Fields
bigqueryDestination

object (GoogleCloudAiplatformV1BigQueryDestination)

Output in BigQuery format. BigQueryDestination.output_uri in FeatureValueDestination.bigquery_destination must refer to a table.

csvDestination

object (GoogleCloudAiplatformV1CsvDestination)

Output in CSV format. Array Feature value types are not allowed in CSV format.

tfrecordDestination

object (GoogleCloudAiplatformV1TFRecordDestination)

Output in TFRecord format. The mapping from Feature value type in Featurestore to Feature value type in TFRecord:

Value type in Featurestore | Value type in TFRecord
DOUBLE, DOUBLE_ARRAY | FLOAT_LIST
INT64, INT64_ARRAY | INT64_LIST
STRING, STRING_ARRAY, BYTES | BYTES_LIST
BOOL, BOOL_ARRAY (true, false) | BYTES_LIST (true -> byte_string("true"), false -> byte_string("false"))

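The type mapping above, expressed as a lookup table for reference:

```python
# Featurestore value type -> TFRecord feature list type, per the mapping above.
TFRECORD_TYPE = {
    "DOUBLE": "FLOAT_LIST",
    "DOUBLE_ARRAY": "FLOAT_LIST",
    "INT64": "INT64_LIST",
    "INT64_ARRAY": "INT64_LIST",
    "STRING": "BYTES_LIST",
    "STRING_ARRAY": "BYTES_LIST",
    "BYTES": "BYTES_LIST",
    "BOOL": "BYTES_LIST",        # true -> b"true", false -> b"false"
    "BOOL_ARRAY": "BYTES_LIST",
}

assert TFRECORD_TYPE["BOOL"] == "BYTES_LIST"
```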
GoogleCloudAiplatformV1FeatureValueList

Container for list of values.
Fields
values[]

object (GoogleCloudAiplatformV1FeatureValue)

A list of feature values. All of them should be the same data type.

GoogleCloudAiplatformV1FeatureValueMetadata

Metadata of feature value.
Fields
generateTime

string (Timestamp format)

Feature generation timestamp. Typically, it is provided by the user at feature ingestion time. If not, the Feature Store will use the system timestamp when the data is ingested into the Feature Store. For streaming ingestion, the time, aligned by days, must be no older than five years (1825 days) and no later than one year (366 days) in the future.

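The streaming-ingestion window on generateTime can be sketched as a simple range check (the function is ours, for illustration only):

```python
from datetime import datetime, timedelta, timezone

def generate_time_in_window(generate_time, now):
    """True if generate_time is no older than 1825 days and no more
    than 366 days in the future, per the documented streaming limits."""
    age = now - generate_time
    return -timedelta(days=366) <= age <= timedelta(days=1825)

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
assert generate_time_in_window(now - timedelta(days=10), now)
assert not generate_time_in_window(now - timedelta(days=2000), now)
```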
GoogleCloudAiplatformV1FeatureView

FeatureView is a representation of the values that the FeatureOnlineStore will serve based on its syncConfig.
Fields
bigQuerySource

object (GoogleCloudAiplatformV1FeatureViewBigQuerySource)

Optional. Configures how data is supposed to be extracted from a BigQuery source to be loaded onto the FeatureOnlineStore.

createTime

string (Timestamp format)

Output only. Timestamp when this FeatureView was created.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

featureRegistrySource

object (GoogleCloudAiplatformV1FeatureViewFeatureRegistrySource)

Optional. Configures the features from a Feature Registry source that need to be loaded onto the FeatureOnlineStore.

indexConfig

object (GoogleCloudAiplatformV1FeatureViewIndexConfig)

Optional. Configuration for index preparation for vector search. It contains the required configurations to create an index from source data, so that approximate nearest neighbor (a.k.a ANN) algorithms search can be performed during online serving.

labels

map (key: string, value: string)

Optional. The labels with user-defined metadata to organize your FeatureViews. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureView (system labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

name

string

Identifier. Name of the FeatureView. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}

syncConfig

object (GoogleCloudAiplatformV1FeatureViewSyncConfig)

Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.

updateTime

string (Timestamp format)

Output only. Timestamp when this FeatureView was last updated.

GoogleCloudAiplatformV1FeatureViewBigQuerySource

(No description provided)
Fields
entityIdColumns[]

string

Required. Columns to construct entity_id / row keys.

uri

string

Required. The BigQuery view URI that will be materialized on each sync trigger based on FeatureView.SyncConfig.

GoogleCloudAiplatformV1FeatureViewDataKey

Lookup key for a feature view.
Fields
compositeKey

object (GoogleCloudAiplatformV1FeatureViewDataKeyCompositeKey)

The actual Entity ID will be composed from this struct. This should match with the way ID is defined in the FeatureView spec.

key

string

String key to use for lookup.

GoogleCloudAiplatformV1FeatureViewDataKeyCompositeKey

ID that is composed of several parts (columns).
Fields
parts[]

string

Parts to construct the Entity ID. Should match the ID columns defined in the FeatureView, in the same order.

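A data key carries either a plain string key or a composite key whose parts line up, in order, with the FeatureView's entity ID columns. A builder sketch (the helper is ours; the key values are placeholders):

```python
def data_key(*parts):
    """Build a FeatureViewDataKey: a single part uses `key`; multiple
    parts use `compositeKey.parts`, in the FeatureView's column order."""
    if len(parts) == 1:
        return {"key": parts[0]}
    return {"compositeKey": {"parts": list(parts)}}

assert data_key("user-123") == {"key": "user-123"}
assert data_key("user-123", "us-east1") == {
    "compositeKey": {"parts": ["user-123", "us-east1"]}}
```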
GoogleCloudAiplatformV1FeatureViewFeatureRegistrySource

A Feature Registry source for features that need to be synced to Online Store.
Fields
featureGroups[]

object (GoogleCloudAiplatformV1FeatureViewFeatureRegistrySourceFeatureGroup)

Required. List of features that need to be synced to Online Store.

projectNumber

string (int64 format)

Optional. The project number of the parent project of the Feature Groups.

GoogleCloudAiplatformV1FeatureViewFeatureRegistrySourceFeatureGroup

Features belonging to a single feature group that will be synced to Online Store.
Fields
featureGroupId

string

Required. Identifier of the feature group.

featureIds[]

string

Required. Identifiers of features under the feature group.

GoogleCloudAiplatformV1FeatureViewIndexConfig

Configuration for vector indexing.
Fields
bruteForceConfig

object (GoogleCloudAiplatformV1FeatureViewIndexConfigBruteForceConfig)

Optional. Configuration options for using brute force search, which simply implements the standard linear search in the database for each query. It is primarily meant for benchmarking and to generate the ground truth for approximate search.

crowdingColumn

string

Optional. Column of crowding. This column contains crowding attribute which is a constraint on a neighbor list produced by FeatureOnlineStoreService.SearchNearestEntities to diversify search results. If NearestNeighborQuery.per_crowding_attribute_neighbor_count is set to K in SearchNearestEntitiesRequest, it's guaranteed that no more than K entities of the same crowding attribute are returned in the response.

distanceMeasureType

enum

Optional. The distance measure used in nearest neighbor search.

Enum type. Can be one of the following:
DISTANCE_MEASURE_TYPE_UNSPECIFIED Should not be set.
SQUARED_L2_DISTANCE Euclidean (L_2) Distance.
COSINE_DISTANCE Cosine Distance. Defined as 1 - cosine similarity. We strongly suggest using DOT_PRODUCT_DISTANCE + UNIT_L2_NORM instead of COSINE distance. Our algorithms have been more optimized for DOT_PRODUCT distance which, when combined with UNIT_L2_NORM, is mathematically equivalent to COSINE distance and results in the same ranking.
DOT_PRODUCT_DISTANCE Dot Product Distance. Defined as a negative of the dot product.
embeddingColumn

string

Optional. Column of embedding. This column contains the source data to create index for vector search. embedding_column must be set when using vector search.

embeddingDimension

integer (int32 format)

Optional. The number of dimensions of the input embedding.

filterColumns[]

string

Optional. Columns of features that are used to filter vector search results.

treeAhConfig

object (GoogleCloudAiplatformV1FeatureViewIndexConfigTreeAHConfig)

Optional. Configuration options for the tree-AH algorithm (Shallow tree + Asymmetric Hashing). Please refer to this paper for more details: https://arxiv.org/abs/1908.10396

GoogleCloudAiplatformV1FeatureViewIndexConfigTreeAHConfig

Configuration options for the tree-AH algorithm.
Fields
leafNodeEmbeddingCount

string (int64 format)

Optional. Number of embeddings on each leaf node. The default value is 1000 if not set.

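Combining the index fields above, a hypothetical vector-search configuration for a FeatureView might look like this (column names are placeholders; exactly one of treeAhConfig and bruteForceConfig should be set):

```python
# Hypothetical FeatureViewIndexConfig payload; column names are placeholders.
index_config = {
    "embeddingColumn": "embedding",            # required for vector search
    "embeddingDimension": 768,
    "filterColumns": ["category"],
    "crowdingColumn": "brand",
    "distanceMeasureType": "DOT_PRODUCT_DISTANCE",
    # Exactly one of treeAhConfig / bruteForceConfig.
    "treeAhConfig": {"leafNodeEmbeddingCount": "1000"},  # int64 as JSON string
}

assert "bruteForceConfig" not in index_config
```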
GoogleCloudAiplatformV1FeatureViewSync

FeatureViewSync is a representation of a sync operation that copies data from the data source to the Feature View in the Online Store.
Fields
createTime

string (Timestamp format)

Output only. Time when this FeatureViewSync was created. Creation of a FeatureViewSync means that the job is pending / waiting for sufficient resources but may not have started the actual data transfer yet.

finalStatus

object (GoogleRpcStatus)

Output only. Final status of the FeatureViewSync.

name

string

Identifier. Name of the FeatureViewSync. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}/featureViewSyncs/{feature_view_sync}

runTime

object (GoogleTypeInterval)

Output only. Time when this FeatureViewSync finished.

syncSummary

object (GoogleCloudAiplatformV1FeatureViewSyncSyncSummary)

Output only. Summary of the sync job.

GoogleCloudAiplatformV1FeatureViewSyncConfig

Configuration for Sync. Only one option is set.
Fields
cron

string

Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone for the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *", or "TZ=America/New_York 1 * * * *".

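Cron strings use the standard five-field syntax (minute hour day-of-month month day-of-week), with an optional timezone prefix. A small builder sketch with a sanity check (not a full cron parser; the helper is ours):

```python
def sync_cron(expr, tz=None):
    """Build a FeatureViewSyncConfig cron string, optionally pinned to an
    IANA timezone via the CRON_TZ prefix."""
    # Sanity check: cron needs 5 fields (min hour dom month dow).
    assert len(expr.split()) == 5, "cron expression needs 5 fields"
    return {"cron": "CRON_TZ=%s %s" % (tz, expr) if tz else expr}

assert sync_cron("1 * * * *") == {"cron": "1 * * * *"}
assert sync_cron("1 * * * *", "America/New_York") == {
    "cron": "CRON_TZ=America/New_York 1 * * * *"}
```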
GoogleCloudAiplatformV1FeatureViewSyncSyncSummary

Summary from the Sync job. For continuous syncs, the summary is updated periodically. For batch syncs, it gets updated on completion of the sync.
Fields
rowSynced

string (int64 format)

Output only. Total number of rows synced.

totalSlot

string (int64 format)

Output only. BigQuery slot milliseconds consumed for the sync job.

GoogleCloudAiplatformV1Featurestore

Vertex AI Feature Store provides a centralized repository for organizing, storing, and serving ML features. The Featurestore is a top-level container for your features and their values.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this Featurestore was created.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Optional. Customer-managed encryption key spec for data storage. If set, both of the online and offline data storage will be secured by this key.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

Optional. The labels with user-defined metadata to organize your Featurestore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one Featurestore (system labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

name

string

Output only. Name of the Featurestore. Format: projects/{project}/locations/{location}/featurestores/{featurestore}

onlineServingConfig

object (GoogleCloudAiplatformV1FeaturestoreOnlineServingConfig)

Optional. Config for online storage resources. This field should not co-exist with OnlineStoreReplicationConfig. If both this field and OnlineStoreReplicationConfig are unset, the feature store will not have an online store and cannot be used for online serving.

onlineStorageTtlDays

integer (int32 format)

Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than online_storage_ttl_days since the feature generation time. Note that online_storage_ttl_days should be less than or equal to offline_storage_ttl_days for each EntityType under a featurestore. If not set, defaults to 4000 days.

state

enum

Output only. State of the featurestore.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Default value. This value is unused.
STABLE State when the featurestore configuration is not being updated and the fields reflect the current configuration of the featurestore. The featurestore is usable in this state.
UPDATING The state of the featurestore configuration when it is being updated. During an update, the fields reflect either the original configuration or the updated configuration of the featurestore. For example, online_serving_config.fixed_node_count can take minutes to update. While the update is in progress, the featurestore is in the UPDATING state, and the value of fixed_node_count can be the original value or the updated value, depending on the progress of the operation. Until the update completes, the actual number of nodes can still be the original value of fixed_node_count. The featurestore is still usable in this state.
updateTime

string (Timestamp format)

Output only. Timestamp when this Featurestore was last updated.

GoogleCloudAiplatformV1FeaturestoreMonitoringConfig

Configuration of how features in Featurestore are monitored.
Fields
categoricalThresholdConfig

object (GoogleCloudAiplatformV1FeaturestoreMonitoringConfigThresholdConfig)

Threshold for categorical features of anomaly detection. This is shared by all types of Featurestore Monitoring for categorical features (i.e. Features with type (Feature.ValueType) BOOL or STRING).

importFeaturesAnalysis

object (GoogleCloudAiplatformV1FeaturestoreMonitoringConfigImportFeaturesAnalysis)

The config for ImportFeatures Analysis Based Feature Monitoring.

numericalThresholdConfig

object (GoogleCloudAiplatformV1FeaturestoreMonitoringConfigThresholdConfig)

Threshold for numerical features of anomaly detection. This is shared by all objectives of Featurestore Monitoring for numerical features (i.e. Features with type (Feature.ValueType) DOUBLE or INT64).

snapshotAnalysis

object (GoogleCloudAiplatformV1FeaturestoreMonitoringConfigSnapshotAnalysis)

The config for Snapshot Analysis Based Feature Monitoring.

GoogleCloudAiplatformV1FeaturestoreMonitoringConfigImportFeaturesAnalysis

Configuration of the Featurestore's ImportFeature Analysis Based Monitoring. This type of analysis generates statistics for values of each Feature imported by every ImportFeatureValues operation.
Fields
anomalyDetectionBaseline

enum

The baseline used to do anomaly detection for the statistics generated by import features analysis.

Enum type. Can be one of the following:
BASELINE_UNSPECIFIED Should not be used.
LATEST_STATS Use the statistics generated by either the most recent snapshot analysis or the previous import features analysis, whichever is later. If neither exists, skip anomaly detection and only generate statistics.
MOST_RECENT_SNAPSHOT_STATS Use the statistics generated by the most recent snapshot analysis, if it exists.
PREVIOUS_IMPORT_FEATURES_STATS Use the statistics generated by the previous import features analysis, if it exists.
state

enum

Whether to enable / disable / inherit the default behavior for import features analysis.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Should not be used.
DEFAULT The default behavior of whether to enable the monitoring. EntityType-level config: disabled. Feature-level config: inherited from the configuration of EntityType this Feature belongs to.
ENABLED Explicitly enables import features analysis. EntityType-level config: by default enables import features analysis for all Features under it. Feature-level config: enables import features analysis regardless of the EntityType-level config.
DISABLED Explicitly disables import features analysis. EntityType-level config: by default disables import features analysis for all Features under it. Feature-level config: disables import features analysis regardless of the EntityType-level config.

GoogleCloudAiplatformV1FeaturestoreMonitoringConfigSnapshotAnalysis

Configuration of the Featurestore's Snapshot Analysis Based Monitoring. This type of analysis generates statistics for each Feature based on a snapshot of the latest feature value of each entity every monitoring_interval.
Fields
disabled

boolean

The monitoring schedule for snapshot analysis. EntityType-level config: unset / disabled = true indicates disabled by default for Features under it; otherwise snapshot analysis monitoring is enabled by default with monitoring_interval for Features under it. Feature-level config: disabled = true indicates disabled regardless of the EntityType-level config; an unset monitoring_interval means the EntityType-level config applies; otherwise snapshot analysis monitoring runs with monitoring_interval regardless of the EntityType-level config. Set disabled = true to explicitly disable snapshot analysis based monitoring.

monitoringIntervalDays

integer (int32 format)

Configuration of the snapshot analysis based monitoring pipeline running interval. The value indicates number of days.

stalenessDays

integer (int32 format)

Customized export features time window for snapshot analysis. Unit is one day. Default value is 21 days (3 weeks). Minimum value is 1 day. Maximum value is 4000 days.

GoogleCloudAiplatformV1FeaturestoreMonitoringConfigThresholdConfig

The config for Featurestore Monitoring threshold.
Fields
value

number (double format)

Specify a threshold value that can trigger the alert. 1. For categorical features, the distribution distance is calculated by L-infinity norm. 2. For numerical features, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.

GoogleCloudAiplatformV1FeaturestoreOnlineServingConfig

OnlineServingConfig specifies the details for provisioning online serving resources.
Fields
fixedNodeCount

integer (int32 format)

The number of nodes for the online store. The number of nodes doesn't scale automatically, but you can manually update the number of nodes. If set to 0, the featurestore will not have an online store and cannot be used for online serving.

scaling

object (GoogleCloudAiplatformV1FeaturestoreOnlineServingConfigScaling)

Online serving scaling configuration. Only one of fixed_node_count and scaling can be set. Setting one will reset the other.

GoogleCloudAiplatformV1FeaturestoreOnlineServingConfigScaling

Online serving scaling configuration. If min_node_count and max_node_count are set to the same value, the cluster will be configured with a fixed number of nodes (no auto-scaling).
Fields
cpuUtilizationTarget

integer (int32 format)

Optional. The CPU utilization that the Autoscaler should be trying to achieve. This number is on a scale from 0 (no utilization) to 100 (total utilization), and is limited between 10 and 80. When a cluster's CPU utilization exceeds the target that you have set, Bigtable immediately adds nodes to the cluster. When CPU utilization is substantially lower than the target, Bigtable removes nodes. If not set or set to 0, defaults to 50.

maxNodeCount

integer (int32 format)

The maximum number of nodes to scale up to. Must be greater than min_node_count, and less than or equal to 10 times min_node_count.

minNodeCount

integer (int32 format)

Required. The minimum number of nodes to scale down to. Must be greater than or equal to 1.

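Since fixed_node_count and scaling are mutually exclusive, a client-side builder can enforce the choice. A sketch (the helper is ours; per the Scaling description above, min_node_count equal to max_node_count yields a fixed-size cluster):

```python
def online_serving_config(fixed_node_count=None, min_nodes=None,
                          max_nodes=None, cpu_target=None):
    """Build an OnlineServingConfig: either a fixed node count or an
    autoscaling range, never both."""
    if fixed_node_count is not None:
        return {"fixedNodeCount": fixed_node_count}
    if not (1 <= min_nodes <= max_nodes <= 10 * min_nodes):
        raise ValueError("need 1 <= min <= max <= 10 * min")
    cfg = {"scaling": {"minNodeCount": min_nodes, "maxNodeCount": max_nodes}}
    if cpu_target:
        cfg["scaling"]["cpuUtilizationTarget"] = cpu_target  # 10..80
    return cfg

assert online_serving_config(fixed_node_count=2) == {"fixedNodeCount": 2}
assert online_serving_config(min_nodes=1, max_nodes=5)["scaling"]["maxNodeCount"] == 5
```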
GoogleCloudAiplatformV1FetchFeatureValuesRequest

Request message for FeatureOnlineStoreService.FetchFeatureValues. All the features under the requested feature view will be returned.
Fields
dataFormat

enum

Optional. Response data format. If not set, FeatureViewDataFormat.KEY_VALUE will be used.

Enum type. Can be one of the following:
FEATURE_VIEW_DATA_FORMAT_UNSPECIFIED Not set. Will be treated as the KeyValue format.
KEY_VALUE Return response data in key-value format.
PROTO_STRUCT Return response data in proto Struct format.
dataKey

object (GoogleCloudAiplatformV1FeatureViewDataKey)

Optional. The request key to fetch feature values for.

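A hypothetical fetch request body combining the fields above, plus a sketch of flattening a KEY_VALUE response into a plain dict (all keys and values are placeholders):

```python
# Hypothetical FetchFeatureValues request body.
fetch_request = {
    "dataKey": {"key": "user-123"},
    # Defaults to KEY_VALUE when omitted.
    "dataFormat": "KEY_VALUE",
}

# Flattening a sample KEY_VALUE response into {name: value}.
sample_response = {
    "keyValues": {"features": [
        {"name": "age", "value": {"int64Value": "42"}},
        {"name": "city", "value": {"stringValue": "Berlin"}},
    ]}
}
flat = {f["name"]: list(f["value"].values())[0]
        for f in sample_response["keyValues"]["features"]}
assert flat == {"age": "42", "city": "Berlin"}
```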
GoogleCloudAiplatformV1FetchFeatureValuesResponse

Response message for FeatureOnlineStoreService.FetchFeatureValues
Fields
dataKey

object (GoogleCloudAiplatformV1FeatureViewDataKey)

The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs.

keyValues

object (GoogleCloudAiplatformV1FetchFeatureValuesResponseFeatureNameValuePairList)

Feature values in KeyValue format.

protoStruct

map (key: string, value: any)

Feature values in proto Struct format.

GoogleCloudAiplatformV1FetchFeatureValuesResponseFeatureNameValuePairList

Response structure in the format of key (feature name) and (feature) value pair.
Fields
features[]

object (GoogleCloudAiplatformV1FetchFeatureValuesResponseFeatureNameValuePairListFeatureNameValuePair)

List of feature names and values.

GoogleCloudAiplatformV1FetchFeatureValuesResponseFeatureNameValuePairListFeatureNameValuePair

Feature name & value pair.
Fields
name

string

Feature short name.

value

object (GoogleCloudAiplatformV1FeatureValue)

Feature value.

GoogleCloudAiplatformV1FileData

URI based data.
Fields
fileUri

string

Required. URI.

mimeType

string

Required. The IANA standard MIME type of the source data.

GoogleCloudAiplatformV1FilterSplit

Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters in this message are meant to match nothing, they can be set to '-' (the minus sign). Supported only for unstructured Datasets.
Fields
testFilter

string

Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to test the Model. A filter with the same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.

trainingFilter

string

Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to train the Model. A filter with the same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.

validationFilter

string

Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to validate the Model. A filter with the same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.

GoogleCloudAiplatformV1FindNeighborsRequest

The request message for MatchService.FindNeighbors.
Fields
deployedIndexId

string

The ID of the DeployedIndex that will serve the request. This request is sent to a specific IndexEndpoint, as per the IndexEndpoint.network. That IndexEndpoint also has IndexEndpoint.deployed_indexes, and each such index has a DeployedIndex.id field. The value of this field must equal one of the DeployedIndex.id fields of the IndexEndpoint that is being called for this request.

queries[]

object (GoogleCloudAiplatformV1FindNeighborsRequestQuery)

The list of queries.

returnFullDatapoint

boolean

If set to true, the full datapoints (including all vector values and restricts) of the nearest neighbors are returned. Note that returning full datapoint will significantly increase the latency and cost of the query.

GoogleCloudAiplatformV1FindNeighborsRequestQuery

A query to find a number of the nearest neighbors (most similar vectors) of a vector.
Fields
approximateNeighborCount

integer (int32 format)

The number of neighbors to find via approximate search before exact reordering is performed. If not set, the default value from the ScaNN config is used; if set, this value must be > 0.

datapoint

object (GoogleCloudAiplatformV1IndexDatapoint)

Required. The datapoint/vector whose nearest neighbors should be searched for.

fractionLeafNodesToSearchOverride

number (double format)

The fraction of leaf nodes to search, set at query time, allows users to tune search performance. Increasing this value increases both search accuracy and latency. The value should be between 0.0 and 1.0. If not set or set to 0.0, the query uses the default value specified in NearestNeighborSearchConfig.TreeAHConfig.fraction_leaf_nodes_to_search.

neighborCount

integer (int32 format)

The number of nearest neighbors to retrieve from the database for each query. If not set, the default from the service configuration is used (https://cloud.google.com/vertex-ai/docs/matching-engine/configuring-indexes#nearest-neighbor-search-config).

perCrowdingAttributeNeighborCount

integer (int32 format)

Crowding is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute. It's used for improving result diversity. This field is the maximum number of matches with the same crowding tag.

rrf

object (GoogleCloudAiplatformV1FindNeighborsRequestQueryRRF)

Optional. Represents RRF algorithm that combines search results.

GoogleCloudAiplatformV1FindNeighborsRequestQueryRRF

Parameters for RRF algorithm that combines search results.
Fields
alpha

number (float format)

Required. Users can provide an alpha value to weight dense versus sparse results. For example, if alpha is 0, only sparse results are returned, and if alpha is 1, only dense results are returned.
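Putting the query fields above together, a FindNeighbors request body might look like the following sketch; the deployed index ID and embedding values are placeholders:

```python
# A FindNeighborsRequest as a JSON-style dict. "my_deployed_index" and the
# featureVector values are hypothetical.
find_neighbors_request = {
    "deployedIndexId": "my_deployed_index",
    "returnFullDatapoint": False,  # full datapoints increase latency and cost
    "queries": [
        {
            "datapoint": {
                "datapointId": "query-0",
                "featureVector": [0.1, 0.2, 0.3],
            },
            "neighborCount": 10,
            "rrf": {"alpha": 0.5},  # equal weight to dense and sparse results
        }
    ],
}

# alpha must lie in [0, 1]: 0 = sparse only, 1 = dense only.
alpha = find_neighbors_request["queries"][0]["rrf"]["alpha"]
assert 0.0 <= alpha <= 1.0
```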

GoogleCloudAiplatformV1FindNeighborsResponse

The response message for MatchService.FindNeighbors.
Fields
nearestNeighbors[]

object (GoogleCloudAiplatformV1FindNeighborsResponseNearestNeighbors)

The nearest neighbors of the query datapoints.

GoogleCloudAiplatformV1FindNeighborsResponseNearestNeighbors

Nearest neighbors for one query.
Fields
id

string

The ID of the query datapoint.

neighbors[]

object (GoogleCloudAiplatformV1FindNeighborsResponseNeighbor)

All its neighbors.

GoogleCloudAiplatformV1FindNeighborsResponseNeighbor

A neighbor of the query vector.
Fields
datapoint

object (GoogleCloudAiplatformV1IndexDatapoint)

The datapoint of the neighbor. Note that full datapoints are returned only when "return_full_datapoint" is set to true. Otherwise, only the "datapoint_id" and "crowding_tag" fields are populated.

distance

number (double format)

The distance between the neighbor and the dense embedding query.

sparseDistance

number (double format)

The distance between the neighbor and the query sparse_embedding.

GoogleCloudAiplatformV1FractionSplit

Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction, and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of the data is used for training, 10% for validation, and 10% for test.
Fields
testFraction

number (double format)

The fraction of the input data that is to be used to evaluate the Model.

trainingFraction

number (double format)

The fraction of the input data that is to be used to train the Model.

validationFraction

number (double format)

The fraction of the input data that is to be used to validate the Model.
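The three fractions above form one FractionSplit object. A minimal sketch as a JSON-style dict, with a check that the fractions sum to at most 1:

```python
# A FractionSplit in JSON form. Fractions may sum to less than 1;
# Vertex AI assigns the remainder as it decides.
fraction_split = {
    "trainingFraction": 0.8,
    "validationFraction": 0.1,
    "testFraction": 0.1,
}

total = sum(fraction_split.values())
# Must sum to at most 1 (small tolerance for float rounding).
assert total <= 1.0 + 1e-9
```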

GoogleCloudAiplatformV1FunctionCall

A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values.
Fields
args

map (key: string, value: any)

Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.

name

string

Required. The name of the function to call. Matches [FunctionDeclaration.name].

GoogleCloudAiplatformV1FunctionDeclaration

Structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a Tool by the model and executed by the client.
Fields
description

string

Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.

name

string

Required. The name of the function to call. Must start with a letter or an underscore. May contain only the characters a-z, A-Z, 0-9, underscores, dots, and dashes, with a maximum length of 64.

parameters

object (GoogleCloudAiplatformV1Schema)

Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the OpenAPI 3.0.3 Parameter Object. Key (string): the name of the parameter; parameter names are case sensitive. Value (Schema): the Schema defining the type used for the parameter. For a function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain the characters a-z, A-Z, 0-9, or underscores, with a maximum length of 64. Example with one required and one optional parameter:

type: OBJECT
properties:
  param1:
    type: STRING
  param2:
    type: INTEGER
required:
  - param1

GoogleCloudAiplatformV1FunctionResponse

The result output from a [FunctionCall]. Contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; this output is used as context to the model. It should contain the result of a [FunctionCall] made based on model prediction.
Fields
name

string

Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].

response

map (key: string, value: any)

Required. The function response in JSON object format.
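The three function-calling types above form a round trip: the client declares a function, the model returns a FunctionCall, and the client executes it and sends back a FunctionResponse. A hedged sketch with a hypothetical get_weather function:

```python
# FunctionDeclaration: what the client advertises to the model.
# get_weather and its parameters are hypothetical.
declaration = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "OBJECT",
        "properties": {"city": {"type": "STRING"}},
        "required": ["city"],
    },
}

# FunctionCall: what the model might predict in a candidate part.
function_call = {"name": "get_weather", "args": {"city": "Paris"}}

# FunctionResponse: what the client returns after executing the call.
function_response = {
    "name": "get_weather",
    "response": {"temperature_c": 18, "conditions": "cloudy"},
}

# The name must match across all three messages.
assert declaration["name"] == function_call["name"] == function_response["name"]
```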

GoogleCloudAiplatformV1GcsDestination

The Google Cloud Storage location where the output is to be written to.
Fields
outputUriPrefix

string

Required. Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.

GoogleCloudAiplatformV1GcsSource

The Google Cloud Storage location for the input content.
Fields
uris[]

string

Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.

GoogleCloudAiplatformV1GenerateContentRequest

Request message for [PredictionService.GenerateContent].
Fields
contents[]

object (GoogleCloudAiplatformV1Content)

Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.

generationConfig

object (GoogleCloudAiplatformV1GenerationConfig)

Optional. Generation config.

safetySettings[]

object (GoogleCloudAiplatformV1SafetySetting)

Optional. Per request settings for blocking unsafe content. Enforced on GenerateContentResponse.candidates.

systemInstruction

object (GoogleCloudAiplatformV1Content)

Optional. The user-provided system instructions for the model. Note: only text should be used in parts, and the content in each part will be placed in a separate paragraph.

tools[]

object (GoogleCloudAiplatformV1Tool)

Optional. A list of Tools the model may use to generate the next response. A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.
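A minimal GenerateContentRequest body combining the fields above can be sketched as a JSON-style dict; the prompt and instruction text are placeholders:

```python
# A minimal GenerateContentRequest. Only "contents" is required; the other
# fields shown are optional.
generate_content_request = {
    "contents": [
        # Single-turn query: one user Content instance.
        {"role": "user", "parts": [{"text": "Summarize this paragraph: ..."}]},
    ],
    "systemInstruction": {
        # Only text parts should be used in system instructions.
        "parts": [{"text": "Answer concisely."}],
    },
    "generationConfig": {"temperature": 0.2, "maxOutputTokens": 256},
}

assert "contents" in generate_content_request
assert len(generate_content_request["contents"]) >= 1
```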

GoogleCloudAiplatformV1GenerateContentResponse

Response message for [PredictionService.GenerateContent].
Fields
candidates[]

object (GoogleCloudAiplatformV1Candidate)

Output only. Generated candidates.

promptFeedback

object (GoogleCloudAiplatformV1GenerateContentResponsePromptFeedback)

Output only. Content filter results for a prompt sent in the request. Note: Sent only in the first stream chunk. Only happens when no candidates were generated due to content violations.

usageMetadata

object (GoogleCloudAiplatformV1GenerateContentResponseUsageMetadata)

Usage metadata about the response(s).

GoogleCloudAiplatformV1GenerateContentResponsePromptFeedback

Content filter results for a prompt sent in the request.
Fields
blockReason

enum

Output only. Blocked reason.

Enum type. Can be one of the following:
BLOCKED_REASON_UNSPECIFIED Unspecified blocked reason.
SAFETY Candidates blocked due to safety.
OTHER Candidates blocked due to other reason.
BLOCKLIST Candidates blocked due to the terms which are included from the terminology blocklist.
PROHIBITED_CONTENT Candidates blocked due to prohibited content.
blockReasonMessage

string

Output only. A readable block reason message.

safetyRatings[]

object (GoogleCloudAiplatformV1SafetyRating)

Output only. Safety ratings.

GoogleCloudAiplatformV1GenerateContentResponseUsageMetadata

Usage metadata about response(s).
Fields
candidatesTokenCount

integer (int32 format)

Number of tokens in the response(s).

promptTokenCount

integer (int32 format)

Number of tokens in the request.

totalTokenCount

integer (int32 format)

Total number of tokens across the request and the response(s).

GoogleCloudAiplatformV1GenerationConfig

Generation config.
Fields
candidateCount

integer (int32 format)

Optional. Number of candidates to generate.

frequencyPenalty

number (float format)

Optional. Frequency penalties.

maxOutputTokens

integer (int32 format)

Optional. The maximum number of output tokens to generate per message.

presencePenalty

number (float format)

Optional. Presence penalties.

responseMimeType

string

Optional. Output response MIME type of the generated candidate text. Supported MIME types: text/plain (default) for text output, and application/json for a JSON response in the candidates. The model needs to be prompted to output the appropriate response type; otherwise the behavior is undefined. This is a preview feature.

responseStyle

enum

Optional. Controls three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED

Enum type. Can be one of the following:
RESPONSE_STYLE_UNSPECIFIED response style unspecified.
RESPONSE_STYLE_PRECISE Precise response.
RESPONSE_STYLE_BALANCED Default response style.
RESPONSE_STYLE_CREATIVE Creative response style.
stopSequences[]

string

Optional. Stop sequences.

temperature

number (float format)

Optional. Controls the randomness of predictions.

topK

number (float format)

Optional. If specified, top-k sampling will be used.

topP

number (float format)

Optional. If specified, nucleus sampling will be used.
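The GenerationConfig fields above are all optional; a representative config with loose sanity checks, as a sketch (the specific values are illustrative, not recommendations):

```python
# A GenerationConfig as a JSON-style dict. Every field is optional.
generation_config = {
    "candidateCount": 1,
    "maxOutputTokens": 1024,
    "temperature": 0.7,        # higher = more random predictions
    "topK": 40.0,              # enables top-k sampling
    "topP": 0.95,              # enables nucleus sampling
    "stopSequences": ["\n\n"],
    "responseMimeType": "text/plain",
}

# topP is a probability mass, so it must lie in [0, 1].
assert 0.0 <= generation_config["topP"] <= 1.0
# Only the two documented response MIME types are supported.
assert generation_config["responseMimeType"] in {"text/plain", "application/json"}
```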

GoogleCloudAiplatformV1GenericOperationMetadata

Generic Metadata shared by all operations.
Fields
createTime

string (Timestamp format)

Output only. Time when the operation was created.

partialFailures[]

object (GoogleRpcStatus)

Output only. Partial failures encountered. E.g. single files that couldn't be read. This field should never exceed 20 entries. Status details field will contain standard Google Cloud error details.

updateTime

string (Timestamp format)

Output only. Time when the operation was updated for the last time. If the operation has finished (successfully or not), this is the finish time.

GoogleCloudAiplatformV1GenieSource

Contains information about the source of the models generated from Generative AI Studio.
Fields
baseModelUri

string

Required. The public base model URI.

GoogleCloudAiplatformV1GroundingMetadata

Metadata returned to client when grounding is enabled.
Fields
searchEntryPoint

object (GoogleCloudAiplatformV1SearchEntryPoint)

Optional. Google search entry point for follow-up web searches.

webSearchQueries[]

string

Optional. Web search queries for follow-up web searches.

GoogleCloudAiplatformV1HyperparameterTuningJob

Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification.
Fields
createTime

string (Timestamp format)

Output only. Time when the HyperparameterTuningJob was created.

displayName

string

Required. The display name of the HyperparameterTuningJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key options for a HyperparameterTuningJob. If this is set, then all resources created by the HyperparameterTuningJob will be encrypted with the provided encryption key.

endTime

string (Timestamp format)

Output only. Time when the HyperparameterTuningJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

error

object (GoogleRpcStatus)

Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize HyperparameterTuningJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

maxFailedTrialCount

integer (int32 format)

The number of failed Trials that need to be seen before failing the HyperparameterTuningJob. If set to 0, Vertex AI decides how many Trials must fail before the whole job fails.

maxTrialCount

integer (int32 format)

Required. The desired total number of Trials.

name

string

Output only. Resource name of the HyperparameterTuningJob.

parallelTrialCount

integer (int32 format)

Required. The desired number of Trials to run in parallel.

startTime

string (Timestamp format)

Output only. Time when the HyperparameterTuningJob for the first time entered the JOB_STATE_RUNNING state.

state

enum

Output only. The detailed state of the job.

Enum type. Can be one of the following:
JOB_STATE_UNSPECIFIED The job state is unspecified.
JOB_STATE_QUEUED The job has been just created or resumed and processing has not yet begun.
JOB_STATE_PENDING The service is preparing to run the job.
JOB_STATE_RUNNING The job is in progress.
JOB_STATE_SUCCEEDED The job completed successfully.
JOB_STATE_FAILED The job failed.
JOB_STATE_CANCELLING The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED, JOB_STATE_FAILED or JOB_STATE_CANCELLED.
JOB_STATE_CANCELLED The job has been cancelled.
JOB_STATE_PAUSED The job has been stopped, and can be resumed.
JOB_STATE_EXPIRED The job has expired.
JOB_STATE_UPDATING The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state.
JOB_STATE_PARTIALLY_SUCCEEDED The job is partially succeeded, some results may be missing due to errors.
studySpec

object (GoogleCloudAiplatformV1StudySpec)

Required. Study configuration of the HyperparameterTuningJob.

trialJobSpec

object (GoogleCloudAiplatformV1CustomJobSpec)

Required. The spec of a trial job. The same spec applies to the CustomJobs created in all the trials.

trials[]

object (GoogleCloudAiplatformV1Trial)

Output only. Trials of the HyperparameterTuningJob.

updateTime

string (Timestamp format)

Output only. Time when the HyperparameterTuningJob was most recently updated.

GoogleCloudAiplatformV1IdMatcher

Matcher for Features of an EntityType by Feature ID.
Fields
ids[]

string

Required. The following are accepted as ids: * A single-element list containing only *, which selects all Features in the target EntityType, or * A list containing only Feature IDs, which selects only Features with those IDs in the target EntityType.

GoogleCloudAiplatformV1ImportDataConfig

Describes the location from where we import data into a Dataset, together with the labels that will be applied to the DataItems and the Annotations.
Fields
annotationLabels

map (key: string, value: string)

Labels that will be applied to newly imported Annotations. If two Annotations are identical, one of them will be deduplicated. Two Annotations are considered identical if their payload, payload_schema_uri, and all of their labels are the same. These labels will be overridden by Annotation labels specified inside the index file referenced by import_schema_uri, e.g. a JSONL file.

dataItemLabels

map (key: string, value: string)

Labels that will be applied to newly imported DataItems. If a DataItem identical to one being imported already exists in the Dataset, these labels will be appended to those of the already existing one; if a label with an identical key was imported before, the old label value will be overwritten. If two DataItems are identical in the same import data operation, the labels will be combined, and if a key collision happens in this case, one of the values will be picked randomly. Two DataItems are considered identical if their content bytes are identical (e.g. image bytes or PDF bytes). These labels will be overridden by Annotation labels specified inside the index file referenced by import_schema_uri, e.g. a JSONL file.

gcsSource

object (GoogleCloudAiplatformV1GcsSource)

The Google Cloud Storage location for the input content.

importSchemaUri

string

Required. Points to a YAML file stored on Google Cloud Storage describing the import format. Validation will be done against the schema. The schema is defined as an OpenAPI 3.0.2 Schema Object.
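An ImportDataConfig combines a GcsSource, an import schema, and optional labels. A sketch as a JSON-style dict; the bucket path and label are hypothetical, and the schema URI is shown only as a representative Vertex AI-provided schema:

```python
# An ImportDataConfig as a JSON-style dict. gs://my-bucket/... is a
# hypothetical location; the importSchemaUri must point to a YAML schema
# on Cloud Storage (this one is representative of Vertex AI's published
# dataset ioformat schemas).
import_data_config = {
    "gcsSource": {"uris": ["gs://my-bucket/data/*.jsonl"]},  # wildcards allowed
    "importSchemaUri": (
        "gs://google-cloud-aiplatform/schema/dataset/ioformat/"
        "image_classification_single_label_io_format_1.0.0.yaml"
    ),
    "dataItemLabels": {"source": "batch-2024-01"},  # hypothetical label
}

# The import format schema is a YAML file on Cloud Storage.
assert import_data_config["importSchemaUri"].endswith(".yaml")
assert len(import_data_config["gcsSource"]["uris"]) >= 1
```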

GoogleCloudAiplatformV1ImportDataOperationMetadata

Runtime operation information for DatasetService.ImportData.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The common part of the operation metadata.

GoogleCloudAiplatformV1ImportDataRequest

Request message for DatasetService.ImportData.
Fields
importConfigs[]

object (GoogleCloudAiplatformV1ImportDataConfig)

Required. The desired input locations. The contents of all input locations will be imported in one batch.

GoogleCloudAiplatformV1ImportFeatureValuesOperationMetadata

Details of operations that perform import Feature values.
Fields
blockingOperationIds[]

string (int64 format)

List of ImportFeatureValues operations running under a single EntityType that are blocking this operation.

genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Featurestore import Feature values.

importedEntityCount

string (int64 format)

Number of entities that have been imported by the operation.

importedFeatureValueCount

string (int64 format)

Number of Feature values that have been imported by the operation.

invalidRowCount

string (int64 format)

The number of rows in the input source that weren't imported due to either: not having any featureValues, having a null entityId, having a null timestamp, or not being parsable (applicable to CSV sources).

sourceUris[]

string

The source URI from where Feature values are imported.

timestampOutsideRetentionRowsCount

string (int64 format)

The number of rows that weren't ingested due to having timestamps outside the retention boundary.

GoogleCloudAiplatformV1ImportFeatureValuesRequest

Request message for FeaturestoreService.ImportFeatureValues.
Fields
avroSource

object (GoogleCloudAiplatformV1AvroSource)

(No description provided)

bigquerySource

object (GoogleCloudAiplatformV1BigQuerySource)

(No description provided)

csvSource

object (GoogleCloudAiplatformV1CsvSource)

(No description provided)

disableIngestionAnalysis

boolean

If true, API doesn't start ingestion analysis pipeline.

disableOnlineServing

boolean

If set, data will not be imported for online serving. This is typically used for backfilling, where Feature generation timestamps are not in the timestamp range needed for online serving.

entityIdField

string

Source column that holds entity IDs. If not provided, entity IDs are extracted from the column named entity_id.

featureSpecs[]

object (GoogleCloudAiplatformV1ImportFeatureValuesRequestFeatureSpec)

Required. Specifications defining which Feature values to import from the entity. The request fails if no feature_specs are provided, and having multiple feature_specs for one Feature is not allowed.

featureTime

string (Timestamp format)

Single Feature timestamp for all entities being imported. The timestamp must not have higher than millisecond precision.

featureTimeField

string

Source column that holds the Feature timestamp for all Feature values in each entity.

workerCount

integer (int32 format)

Specifies the number of workers that are used to write data to the Featurestore. Consider the online serving capacity that you require to achieve the desired import throughput without interfering with online serving. The value must be positive, and less than or equal to 100. If not set, defaults to using 1 worker. The low count ensures minimal impact on online serving performance.

GoogleCloudAiplatformV1ImportFeatureValuesRequestFeatureSpec

Defines the Feature value(s) to import.
Fields
id

string

Required. ID of the Feature to import values of. This Feature must exist in the target EntityType, or the request will fail.

sourceField

string

Source column to get the Feature values from. If not set, uses the column with the same name as the Feature ID.
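An ImportFeatureValuesRequest ties a source, an entity ID column, a timestamp column, and the featureSpecs together. A sketch with hypothetical column and Feature names:

```python
# An ImportFeatureValuesRequest as a JSON-style dict. The bucket path,
# column names, and Feature IDs (age, ltv) are hypothetical.
import_request = {
    "csvSource": {"gcsSource": {"uris": ["gs://my-bucket/features.csv"]}},
    "entityIdField": "customer_id",       # source column holding entity IDs
    "featureTimeField": "event_timestamp",
    "featureSpecs": [
        {"id": "age"},  # source column defaults to the Feature ID ("age")
        {"id": "ltv", "sourceField": "lifetime_value"},  # explicit mapping
    ],
    "workerCount": 1,  # default; must be <= 100
}

# feature_specs is required, and a Feature ID may appear at most once.
ids = [spec["id"] for spec in import_request["featureSpecs"]]
assert len(ids) > 0 and len(ids) == len(set(ids))
```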

GoogleCloudAiplatformV1ImportFeatureValuesResponse

Response message for FeaturestoreService.ImportFeatureValues.
Fields
importedEntityCount

string (int64 format)

Number of entities that have been imported by the operation.

importedFeatureValueCount

string (int64 format)

Number of Feature values that have been imported by the operation.

invalidRowCount

string (int64 format)

The number of rows in the input source that weren't imported due to either: not having any featureValues, having a null entityId, having a null timestamp, or not being parsable (applicable to CSV sources).

timestampOutsideRetentionRowsCount

string (int64 format)

The number of rows that weren't ingested due to having feature timestamps outside the retention boundary.

GoogleCloudAiplatformV1ImportModelEvaluationRequest

Request message for ModelService.ImportModelEvaluation
Fields
modelEvaluation

object (GoogleCloudAiplatformV1ModelEvaluation)

Required. Model evaluation resource to be imported.

GoogleCloudAiplatformV1Index

A representation of a collection of database items organized in a way that allows approximate nearest neighbor (a.k.a. ANN) algorithms to search them.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this Index was created.

deployedIndexes[]

object (GoogleCloudAiplatformV1DeployedIndexRef)

Output only. The pointers to DeployedIndexes created from this Index. An Index can only be deleted if all of its DeployedIndexes have been undeployed first.

description

string

The description of the Index.

displayName

string

Required. The display name of the Index. The name can be up to 128 characters long and can consist of any UTF-8 characters.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Immutable. Customer-managed encryption key spec for an Index. If set, this Index and all sub-resources of this Index will be secured by this key.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

indexStats

object (GoogleCloudAiplatformV1IndexStats)

Output only. Stats of the index resource.

indexUpdateMethod

enum

Immutable. The update method to use with this Index. If not set, BATCH_UPDATE will be used by default.

Enum type. Can be one of the following:
INDEX_UPDATE_METHOD_UNSPECIFIED Should not be used.
BATCH_UPDATE BatchUpdate: user can call UpdateIndex with files on Cloud Storage of Datapoints to update.
STREAM_UPDATE StreamUpdate: user can call UpsertDatapoints/DeleteDatapoints to update the Index and the updates will be applied in corresponding DeployedIndexes in nearly real-time.
labels

map (key: string, value: string)

The labels with user-defined metadata to organize your Indexes. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

metadata

any

Additional information about the Index; the schema of the metadata can be found in metadata_schema.

metadataSchemaUri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. Note: The URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.

name

string

Output only. The resource name of the Index.

updateTime

string (Timestamp format)

Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it.

GoogleCloudAiplatformV1IndexDatapoint

A datapoint of Index.
Fields
crowdingTag

object (GoogleCloudAiplatformV1IndexDatapointCrowdingTag)

Optional. CrowdingTag of the datapoint; the number of neighbors to return per crowding tag can be configured at query time.

datapointId

string

Required. Unique identifier of the datapoint.

featureVector[]

number (float format)

Required. Feature embedding vector for dense index. An array of numbers with the length of [NearestNeighborSearchConfig.dimensions].

numericRestricts[]

object (GoogleCloudAiplatformV1IndexDatapointNumericRestriction)

Optional. List of Restricts of the datapoint, used to perform "restricted searches" where boolean rules are used to filter the subset of the database eligible for matching. This uses numeric comparisons.

restricts[]

object (GoogleCloudAiplatformV1IndexDatapointRestriction)

Optional. List of Restricts of the datapoint, used to perform "restricted searches" where boolean rules are used to filter the subset of the database eligible for matching. This uses categorical tokens. See: https://cloud.google.com/vertex-ai/docs/matching-engine/filtering

sparseEmbedding

object (GoogleCloudAiplatformV1IndexDatapointSparseEmbedding)

Optional. Feature embedding vector for sparse index.
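A complete IndexDatapoint combining the fields above can be sketched as a JSON-style dict; the ID, vector values, namespaces, and tokens are hypothetical:

```python
# An IndexDatapoint for upsert. All names and values are illustrative.
datapoint = {
    "datapointId": "item-42",
    "featureVector": [0.12, -0.07, 0.33],  # dense embedding
    "restricts": [
        # Categorical token filtering in the "color" namespace.
        {"namespace": "color", "allowList": ["red"], "denyList": ["blue"]},
    ],
    "numericRestricts": [
        # Datapoints carry a value only; the comparison operator ("op")
        # is supplied on the query side, never on the datapoint.
        {"namespace": "cost", "valueDouble": 19.99},
    ],
    "crowdingTag": {"crowdingAttribute": "brand-a"},
}

assert "op" not in datapoint["numericRestricts"][0]
```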

GoogleCloudAiplatformV1IndexDatapointCrowdingTag

Crowding tag is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute.
Fields
crowdingAttribute

string

The attribute value used for crowding. The maximum number of neighbors to return per crowding attribute value (per_crowding_attribute_num_neighbors) is configured per-query. This field is ignored if per_crowding_attribute_num_neighbors is larger than the total number of neighbors to return for a given query.

GoogleCloudAiplatformV1IndexDatapointNumericRestriction

This field allows restricts to be based on numeric comparisons rather than categorical tokens.
Fields
namespace

string

The namespace of this restriction. e.g.: cost.

op

enum

This MUST be specified for queries and must NOT be specified for datapoints.

Enum type. Can be one of the following:
OPERATOR_UNSPECIFIED Default value of the enum.
LESS Datapoints are eligible iff their value is < the query's.
LESS_EQUAL Datapoints are eligible iff their value is <= the query's.
EQUAL Datapoints are eligible iff their value is == the query's.
GREATER_EQUAL Datapoints are eligible iff their value is >= the query's.
GREATER Datapoints are eligible iff their value is > the query's.
NOT_EQUAL Datapoints are eligible iff their value is != the query's.
valueDouble

number (double format)

Represents 64 bit float.

valueFloat

number (float format)

Represents 32 bit float.

valueInt

string (int64 format)

Represents 64 bit integer.

GoogleCloudAiplatformV1IndexDatapointRestriction

Restriction of a datapoint, which describes its attributes (tokens) from each of several attribute categories (namespaces).
Fields
allowList[]

string

The attributes to allow in this namespace. e.g.: 'red'

denyList[]

string

The attributes to deny in this namespace. e.g.: 'blue'

namespace

string

The namespace of this restriction. e.g.: color.

GoogleCloudAiplatformV1IndexDatapointSparseEmbedding

Feature embedding vector for sparse index. An array of numbers whose values are located in the specified dimensions.
Fields
dimensions[]

string (int64 format)

Required. The list of indexes for the embedding values of the sparse vector.

values[]

number (float format)

Required. The list of embedding values of the sparse vector.
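Since dimensions and values are parallel lists, each value pairs with one dimension index. A minimal sketch (indices and values are placeholders; note int64 fields serialize as strings in JSON):

```python
# A SparseEmbedding as a JSON-style dict. int64-format fields such as
# "dimensions" are serialized as strings in the REST API.
sparse_embedding = {
    "dimensions": ["3", "17", "256"],
    "values": [0.5, 0.25, 0.125],
}

# Each embedding value corresponds to exactly one dimension index.
assert len(sparse_embedding["dimensions"]) == len(sparse_embedding["values"])
```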

GoogleCloudAiplatformV1IndexEndpoint

An endpoint into which Indexes are deployed. An IndexEndpoint can have multiple DeployedIndexes.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this IndexEndpoint was created.

deployedIndexes[]

object (GoogleCloudAiplatformV1DeployedIndex)

Output only. The indexes deployed in this endpoint.

description

string

The description of the IndexEndpoint.

displayName

string

Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.

enablePrivateServiceConnect

boolean

Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

name

string

Output only. The resource name of the IndexEndpoint.

network

string

Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in '12345', and {network} is a network name.

privateServiceConnectConfig

object (GoogleCloudAiplatformV1PrivateServiceConnectConfig)

Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.

publicEndpointDomainName

string

Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.

publicEndpointEnabled

boolean

Optional. If true, the deployed index will be accessible through a public endpoint.

updateTime

string (Timestamp format)

Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
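As an illustrative sketch, a minimal IndexEndpoint resource body might look like the following (the display name, network path, and labels are placeholder values, not required settings):

```python
# Hypothetical IndexEndpoint resource body; all values are example placeholders.
index_endpoint = {
    "displayName": "my-index-endpoint",  # required; up to 128 UTF-8 characters
    "description": "Serves the product-embeddings index",
    # network and privateServiceConnectConfig are mutually exclusive;
    # here the endpoint is peered with a VPC network instead of using PSC.
    "network": "projects/12345/global/networks/my-vpc",
    "labels": {"team": "search"},
}

# Basic client-side checks mirroring the documented constraints.
assert len(index_endpoint["displayName"]) <= 128
assert not ("network" in index_endpoint
            and "privateServiceConnectConfig" in index_endpoint)
```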

GoogleCloudAiplatformV1IndexPrivateEndpoints

IndexPrivateEndpoints proto is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send request via private service access, use match_grpc_address. To send request via private service connect, use service_attachment.
Fields
matchGrpcAddress

string

Output only. The ip address used to send match gRPC requests.

pscAutomatedEndpoints[]

object (GoogleCloudAiplatformV1PscAutomatedEndpoints)

Output only. PscAutomatedEndpoints is populated if private service connect is enabled and PscAutomatedConfig is set.

serviceAttachment

string

Output only. The name of the service attachment resource. Populated if private service connect is enabled.

GoogleCloudAiplatformV1IndexStats

Stats of the Index.
Fields
shardsCount

integer (int32 format)

Output only. The number of shards in the Index.

sparseVectorsCount

string (int64 format)

Output only. The number of sparse vectors in the Index.

vectorsCount

string (int64 format)

Output only. The number of dense vectors in the Index.

GoogleCloudAiplatformV1InputDataConfig

Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.
Fields
annotationSchemaUri

string

Applicable only to custom training with Datasets that have DataItems and Annotations. Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/ , note that the chosen schema must be consistent with metadata of the Dataset specified by dataset_id. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in respectively training, validation or test role, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.

annotationsFilter

string

Applicable only to Datasets that have DataItems and Annotations. A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in respectively training, validation or test role, depending on the role of the DataItem they are on (for the auto-assigned that role is decided by Vertex AI). A filter with same syntax as the one used in ListAnnotations may be used, but note here it filters across all Annotations of the Dataset, and not just within a single DataItem.

bigqueryDestination

object (GoogleCloudAiplatformV1BigQueryDestination)

Only applicable to custom training with a tabular Dataset that has a BigQuery source. The BigQuery project location where the training data is to be written. In the given project a new dataset is created with the name dataset_<dataset-id>_<annotation-type>_<timestamp>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created: training, validation and test. * AIP_DATA_FORMAT = "bigquery" * AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_<...>.training" * AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_<...>.validation" * AIP_TEST_DATA_URI = "bigquery_destination.dataset_<...>.test"

datasetId

string

Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained; what is compatible should be described in the used TrainingPipeline's training_task_definition. For tabular Datasets, all their data is exported to training, to pick and choose from.

filterSplit

object (GoogleCloudAiplatformV1FilterSplit)

Split based on the provided filters for each set.

fractionSplit

object (GoogleCloudAiplatformV1FractionSplit)

Split based on fractions defining the size of each set.

gcsDestination

object (GoogleCloudAiplatformV1GcsDestination)

The Cloud Storage location where the training data is to be written. In the given directory a new directory is created with the name dataset-<dataset-id>-<annotation-type>-<timestamp>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory. The Vertex AI environment variables representing Cloud Storage data URIs are in the Cloud Storage wildcard format to support sharded data, e.g. "gs://.../training-*.jsonl". * AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data * AIP_TRAINING_DATA_URI = "gcs_destination/dataset-<...>/training-*.${AIP_DATA_FORMAT}" * AIP_VALIDATION_DATA_URI = "gcs_destination/dataset-<...>/validation-*.${AIP_DATA_FORMAT}" * AIP_TEST_DATA_URI = "gcs_destination/dataset-<...>/test-*.${AIP_DATA_FORMAT}"
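Inside a custom training container, these environment variables can be read at runtime. A minimal sketch, assuming the gcsDestination case (the variable names are those documented above; the fallback values here are illustrative):

```python
import os

# Read the data URIs Vertex AI injects into a custom training container.
# These variables are only set when gcsDestination is used.
data_format = os.environ.get("AIP_DATA_FORMAT", "jsonl")
training_uri = os.environ.get("AIP_TRAINING_DATA_URI", "")
validation_uri = os.environ.get("AIP_VALIDATION_DATA_URI", "")
test_uri = os.environ.get("AIP_TEST_DATA_URI", "")

# URIs are in Cloud Storage wildcard form to support sharded files,
# e.g. gs://bucket/dataset-<...>/training-*.jsonl
print(data_format, training_uri or "<unset>")
```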

persistMlUseAssignment

boolean

Whether to persist the ML use assignment to data item system labels.

predefinedSplit

object (GoogleCloudAiplatformV1PredefinedSplit)

Supported only for tabular Datasets. Split based on a predefined key.

savedQueryId

string

Only applicable to Datasets that have SavedQueries. The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id used for filtering Annotations for training. Only Annotations that are associated with this SavedQuery are used for training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter. Only one of saved_query_id and annotation_schema_uri should be specified, as both represent the same thing: the problem type.

stratifiedSplit

object (GoogleCloudAiplatformV1StratifiedSplit)

Supported only for tabular Datasets. Split based on the distribution of the specified column.

timestampSplit

object (GoogleCloudAiplatformV1TimestampSplit)

Supported only for tabular Datasets. Split based on the timestamp of the input data pieces.
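The split fields above are mutually exclusive: exactly one split strategy applies. A sketch of an inputDataConfig choosing fractionSplit (the dataset ID and bucket name are placeholders):

```python
# Hypothetical inputDataConfig for a TrainingPipeline; IDs and URIs are placeholders.
input_data_config = {
    "datasetId": "1234567890",
    # Exactly one split strategy should be set; the fractions must sum to 1.0.
    "fractionSplit": {
        "trainingFraction": 0.8,
        "validationFraction": 0.1,
        "testFraction": 0.1,
    },
    "gcsDestination": {"outputUriPrefix": "gs://my-bucket/training-output"},
}

split = input_data_config["fractionSplit"]
total = (split["trainingFraction"]
         + split["validationFraction"]
         + split["testFraction"])
assert abs(total - 1.0) < 1e-9
```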

GoogleCloudAiplatformV1Int64Array

A list of int64 values.
Fields
values[]

string (int64 format)

A list of int64 values.

GoogleCloudAiplatformV1IntegratedGradientsAttribution

An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
Fields
blurBaselineConfig

object (GoogleCloudAiplatformV1BlurBaselineConfig)

Config for IG with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383

smoothGradConfig

object (GoogleCloudAiplatformV1SmoothGradConfig)

Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf

stepCount

integer (int32 format)

Required. The number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is within the desired error range. The valid range is [1, 100], inclusive.
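The role of the step count can be illustrated with a plain-Python Riemann-sum approximation of the path integral for a toy one-dimensional model. This is only a sketch of the underlying idea, not the service implementation:

```python
def integrated_gradient(f_grad, baseline, x, step_count=50):
    """Approximate the Aumann-Shapley attribution for a scalar input.

    f_grad: derivative of the model output with respect to the input
    baseline: starting point of the path (e.g. 0.0 for a zero baseline)
    x: the input being explained
    """
    total = 0.0
    for k in range(1, step_count + 1):
        # Midpoint of the k-th segment on the straight path baseline -> x.
        alpha = (k - 0.5) / step_count
        total += f_grad(baseline + alpha * (x - baseline))
    return (x - baseline) * total / step_count

# For f(x) = x**2, f'(x) = 2x; the attribution from baseline 0 to input 3
# should approach f(3) - f(0) = 9 (the "sum to diff" property).
attribution = integrated_gradient(lambda v: 2.0 * v, 0.0, 3.0, step_count=50)
```

Increasing step_count tightens the approximation; the [1, 100] range bounds the cost of that refinement.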

GoogleCloudAiplatformV1LargeModelReference

Contains information about the Large Model.
Fields
name

string

Required. The unique name of the large Foundation or pre-built model, such as "chat-bison" or "text-bison", or a model name with a version ID, such as "chat-bison@001" or "text-bison@005".

GoogleCloudAiplatformV1LineageSubgraph

A subgraph of the overall lineage graph. Event edges connect Artifact and Execution nodes.
Fields
artifacts[]

object (GoogleCloudAiplatformV1Artifact)

The Artifact nodes in the subgraph.

events[]

object (GoogleCloudAiplatformV1Event)

The Event edges between Artifacts and Executions in the subgraph.

executions[]

object (GoogleCloudAiplatformV1Execution)

The Execution nodes in the subgraph.

GoogleCloudAiplatformV1ListAnnotationsResponse

Response message for DatasetService.ListAnnotations.
Fields
annotations[]

object (GoogleCloudAiplatformV1Annotation)

A list of Annotations that matches the specified filter in the request.

nextPageToken

string

The standard List next-page token.

GoogleCloudAiplatformV1ListArtifactsResponse

Response message for MetadataService.ListArtifacts.
Fields
artifacts[]

object (GoogleCloudAiplatformV1Artifact)

The Artifacts retrieved from the MetadataStore.

nextPageToken

string

A token, which can be sent as ListArtifactsRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages.
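All of the List* responses in this reference follow the same pagination contract: resend the request with nextPageToken as page_token until the token is absent or empty. A generic sketch of draining such a method (list_page here is a stand-in for whichever API call is used, shown with ListArtifacts-shaped responses):

```python
def list_all(list_page):
    """Drain a paginated List* method.

    list_page: callable taking a page_token and returning a response dict
    with an item list and an optional 'nextPageToken'. It stands in for an
    actual API call such as MetadataService.ListArtifacts.
    """
    items, token = [], ""
    while True:
        response = list_page(token)
        items.extend(response.get("artifacts", []))
        token = response.get("nextPageToken", "")
        if not token:  # an absent/empty token means no subsequent pages
            return items

# Fake two-page backend for illustration.
pages = {
    "": {"artifacts": ["a1", "a2"], "nextPageToken": "p2"},
    "p2": {"artifacts": ["a3"]},
}
all_artifacts = list_all(lambda tok: pages[tok])
```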

GoogleCloudAiplatformV1ListBatchPredictionJobsResponse

Response message for JobService.ListBatchPredictionJobs
Fields
batchPredictionJobs[]

object (GoogleCloudAiplatformV1BatchPredictionJob)

List of BatchPredictionJobs in the requested page.

nextPageToken

string

A token to retrieve the next page of results. Pass to ListBatchPredictionJobsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListContextsResponse

Response message for MetadataService.ListContexts.
Fields
contexts[]

object (GoogleCloudAiplatformV1Context)

The Contexts retrieved from the MetadataStore.

nextPageToken

string

A token, which can be sent as ListContextsRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages.

GoogleCloudAiplatformV1ListCustomJobsResponse

Response message for JobService.ListCustomJobs
Fields
customJobs[]

object (GoogleCloudAiplatformV1CustomJob)

List of CustomJobs in the requested page.

nextPageToken

string

A token to retrieve the next page of results. Pass to ListCustomJobsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListDataItemsResponse

Response message for DatasetService.ListDataItems.
Fields
dataItems[]

object (GoogleCloudAiplatformV1DataItem)

A list of DataItems that matches the specified filter in the request.

nextPageToken

string

The standard List next-page token.

GoogleCloudAiplatformV1ListDataLabelingJobsResponse

Response message for JobService.ListDataLabelingJobs.
Fields
dataLabelingJobs[]

object (GoogleCloudAiplatformV1DataLabelingJob)

A list of DataLabelingJobs that matches the specified filter in the request.

nextPageToken

string

The standard List next-page token.

GoogleCloudAiplatformV1ListDatasetVersionsResponse

Response message for DatasetService.ListDatasetVersions.
Fields
datasetVersions[]

object (GoogleCloudAiplatformV1DatasetVersion)

A list of DatasetVersions that matches the specified filter in the request.

nextPageToken

string

The standard List next-page token.

GoogleCloudAiplatformV1ListDatasetsResponse

Response message for DatasetService.ListDatasets.
Fields
datasets[]

object (GoogleCloudAiplatformV1Dataset)

A list of Datasets that matches the specified filter in the request.

nextPageToken

string

The standard List next-page token.

GoogleCloudAiplatformV1ListDeploymentResourcePoolsResponse

Response message for ListDeploymentResourcePools method.
Fields
deploymentResourcePools[]

object (GoogleCloudAiplatformV1DeploymentResourcePool)

The DeploymentResourcePools from the specified location.

nextPageToken

string

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

GoogleCloudAiplatformV1ListEndpointsResponse

Response message for EndpointService.ListEndpoints.
Fields
endpoints[]

object (GoogleCloudAiplatformV1Endpoint)

List of Endpoints in the requested page.

nextPageToken

string

A token to retrieve the next page of results. Pass to ListEndpointsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListEntityTypesResponse

Response message for FeaturestoreService.ListEntityTypes.
Fields
entityTypes[]

object (GoogleCloudAiplatformV1EntityType)

The EntityTypes matching the request.

nextPageToken

string

A token, which can be sent as ListEntityTypesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

GoogleCloudAiplatformV1ListExecutionsResponse

Response message for MetadataService.ListExecutions.
Fields
executions[]

object (GoogleCloudAiplatformV1Execution)

The Executions retrieved from the MetadataStore.

nextPageToken

string

A token, which can be sent as ListExecutionsRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages.

GoogleCloudAiplatformV1ListFeatureGroupsResponse

Response message for FeatureRegistryService.ListFeatureGroups.
Fields
featureGroups[]

object (GoogleCloudAiplatformV1FeatureGroup)

The FeatureGroups matching the request.

nextPageToken

string

A token, which can be sent as ListFeatureGroupsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

GoogleCloudAiplatformV1ListFeatureOnlineStoresResponse

Response message for FeatureOnlineStoreAdminService.ListFeatureOnlineStores.
Fields
featureOnlineStores[]

object (GoogleCloudAiplatformV1FeatureOnlineStore)

The FeatureOnlineStores matching the request.

nextPageToken

string

A token, which can be sent as ListFeatureOnlineStoresRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

GoogleCloudAiplatformV1ListFeatureViewSyncsResponse

Response message for FeatureOnlineStoreAdminService.ListFeatureViewSyncs.
Fields
featureViewSyncs[]

object (GoogleCloudAiplatformV1FeatureViewSync)

The FeatureViewSyncs matching the request.

nextPageToken

string

A token, which can be sent as ListFeatureViewSyncsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

GoogleCloudAiplatformV1ListFeatureViewsResponse

Response message for FeatureOnlineStoreAdminService.ListFeatureViews.
Fields
featureViews[]

object (GoogleCloudAiplatformV1FeatureView)

The FeatureViews matching the request.

nextPageToken

string

A token, which can be sent as ListFeatureViewsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

GoogleCloudAiplatformV1ListFeaturesResponse

Response message for FeaturestoreService.ListFeatures and FeatureRegistryService.ListFeatures.
Fields
features[]

object (GoogleCloudAiplatformV1Feature)

The Features matching the request.

nextPageToken

string

A token, which can be sent as ListFeaturesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

GoogleCloudAiplatformV1ListFeaturestoresResponse

Response message for FeaturestoreService.ListFeaturestores.
Fields
featurestores[]

object (GoogleCloudAiplatformV1Featurestore)

The Featurestores matching the request.

nextPageToken

string

A token, which can be sent as ListFeaturestoresRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

GoogleCloudAiplatformV1ListHyperparameterTuningJobsResponse

Response message for JobService.ListHyperparameterTuningJobs
Fields
hyperparameterTuningJobs[]

object (GoogleCloudAiplatformV1HyperparameterTuningJob)

List of HyperparameterTuningJobs in the requested page. HyperparameterTuningJob.trials of the jobs will not be returned.

nextPageToken

string

A token to retrieve the next page of results. Pass to ListHyperparameterTuningJobsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListIndexEndpointsResponse

Response message for IndexEndpointService.ListIndexEndpoints.
Fields
indexEndpoints[]

object (GoogleCloudAiplatformV1IndexEndpoint)

List of IndexEndpoints in the requested page.

nextPageToken

string

A token to retrieve next page of results. Pass to ListIndexEndpointsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListIndexesResponse

Response message for IndexService.ListIndexes.
Fields
indexes[]

object (GoogleCloudAiplatformV1Index)

List of indexes in the requested page.

nextPageToken

string

A token to retrieve next page of results. Pass to ListIndexesRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListMetadataSchemasResponse

Response message for MetadataService.ListMetadataSchemas.
Fields
metadataSchemas[]

object (GoogleCloudAiplatformV1MetadataSchema)

The MetadataSchemas found for the MetadataStore.

nextPageToken

string

A token, which can be sent as ListMetadataSchemasRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages.

GoogleCloudAiplatformV1ListMetadataStoresResponse

Response message for MetadataService.ListMetadataStores.
Fields
metadataStores[]

object (GoogleCloudAiplatformV1MetadataStore)

The MetadataStores found for the Location.

nextPageToken

string

A token, which can be sent as ListMetadataStoresRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages.

GoogleCloudAiplatformV1ListModelDeploymentMonitoringJobsResponse

Response message for JobService.ListModelDeploymentMonitoringJobs.
Fields
modelDeploymentMonitoringJobs[]

object (GoogleCloudAiplatformV1ModelDeploymentMonitoringJob)

A list of ModelDeploymentMonitoringJobs that matches the specified filter in the request.

nextPageToken

string

The standard List next-page token.

GoogleCloudAiplatformV1ListModelEvaluationSlicesResponse

Response message for ModelService.ListModelEvaluationSlices.
Fields
modelEvaluationSlices[]

object (GoogleCloudAiplatformV1ModelEvaluationSlice)

List of ModelEvaluations in the requested page.

nextPageToken

string

A token to retrieve next page of results. Pass to ListModelEvaluationSlicesRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListModelEvaluationsResponse

Response message for ModelService.ListModelEvaluations.
Fields
modelEvaluations[]

object (GoogleCloudAiplatformV1ModelEvaluation)

List of ModelEvaluations in the requested page.

nextPageToken

string

A token to retrieve next page of results. Pass to ListModelEvaluationsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListModelVersionsResponse

Response message for ModelService.ListModelVersions
Fields
models[]

object (GoogleCloudAiplatformV1Model)

List of Model versions in the requested page. In the returned Model name field, the version ID, instead of the version alias, is included.

nextPageToken

string

A token to retrieve the next page of results. Pass to ListModelVersionsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListModelsResponse

Response message for ModelService.ListModels
Fields
models[]

object (GoogleCloudAiplatformV1Model)

List of Models in the requested page.

nextPageToken

string

A token to retrieve next page of results. Pass to ListModelsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListNasJobsResponse

Response message for JobService.ListNasJobs
Fields
nasJobs[]

object (GoogleCloudAiplatformV1NasJob)

List of NasJobs in the requested page. NasJob.nas_job_output of the jobs will not be returned.

nextPageToken

string

A token to retrieve the next page of results. Pass to ListNasJobsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListNasTrialDetailsResponse

Response message for JobService.ListNasTrialDetails
Fields
nasTrialDetails[]

object (GoogleCloudAiplatformV1NasTrialDetail)

List of top NasTrials in the requested page.

nextPageToken

string

A token to retrieve the next page of results. Pass to ListNasTrialDetailsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1ListNotebookRuntimeTemplatesResponse

Response message for NotebookService.ListNotebookRuntimeTemplates.
Fields
nextPageToken

string

A token to retrieve next page of results. Pass to ListNotebookRuntimeTemplatesRequest.page_token to obtain that page.

notebookRuntimeTemplates[]

object (GoogleCloudAiplatformV1NotebookRuntimeTemplate)

List of NotebookRuntimeTemplates in the requested page.

GoogleCloudAiplatformV1ListNotebookRuntimesResponse

Response message for NotebookService.ListNotebookRuntimes.
Fields
nextPageToken

string

A token to retrieve next page of results. Pass to ListNotebookRuntimesRequest.page_token to obtain that page.

notebookRuntimes[]

object (GoogleCloudAiplatformV1NotebookRuntime)

List of NotebookRuntimes in the requested page.

GoogleCloudAiplatformV1ListOptimalTrialsResponse

Response message for VizierService.ListOptimalTrials.
Fields
optimalTrials[]

object (GoogleCloudAiplatformV1Trial)

The Pareto-optimal Trials for a multi-objective Study, or the optimal Trial for a single-objective Study. For the definition of Pareto-optimal, see https://en.wikipedia.org/wiki/Pareto_efficiency

GoogleCloudAiplatformV1ListPersistentResourcesResponse

Response message for PersistentResourceService.ListPersistentResources
Fields
nextPageToken

string

A token to retrieve next page of results. Pass to ListPersistentResourcesRequest.page_token to obtain that page.

persistentResources[]

object (GoogleCloudAiplatformV1PersistentResource)

(No description provided)

GoogleCloudAiplatformV1ListPipelineJobsResponse

Response message for PipelineService.ListPipelineJobs
Fields
nextPageToken

string

A token to retrieve the next page of results. Pass to ListPipelineJobsRequest.page_token to obtain that page.

pipelineJobs[]

object (GoogleCloudAiplatformV1PipelineJob)

List of PipelineJobs in the requested page.

GoogleCloudAiplatformV1ListSavedQueriesResponse

Response message for DatasetService.ListSavedQueries.
Fields
nextPageToken

string

The standard List next-page token.

savedQueries[]

object (GoogleCloudAiplatformV1SavedQuery)

A list of SavedQueries that match the specified filter in the request.

GoogleCloudAiplatformV1ListSchedulesResponse

Response message for ScheduleService.ListSchedules
Fields
nextPageToken

string

A token to retrieve the next page of results. Pass to ListSchedulesRequest.page_token to obtain that page.

schedules[]

object (GoogleCloudAiplatformV1Schedule)

List of Schedules in the requested page.

GoogleCloudAiplatformV1ListSpecialistPoolsResponse

Response message for SpecialistPoolService.ListSpecialistPools.
Fields
nextPageToken

string

The standard List next-page token.

specialistPools[]

object (GoogleCloudAiplatformV1SpecialistPool)

A list of SpecialistPools that matches the specified filter in the request.

GoogleCloudAiplatformV1ListStudiesResponse

Response message for VizierService.ListStudies.
Fields
nextPageToken

string

Pass this token as the page_token field of the request for a subsequent call. If this field is omitted, there are no subsequent pages.

studies[]

object (GoogleCloudAiplatformV1Study)

The studies associated with the project.

GoogleCloudAiplatformV1ListTensorboardExperimentsResponse

Response message for TensorboardService.ListTensorboardExperiments.
Fields
nextPageToken

string

A token, which can be sent as ListTensorboardExperimentsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

tensorboardExperiments[]

object (GoogleCloudAiplatformV1TensorboardExperiment)

The TensorboardExperiments matching the request.

GoogleCloudAiplatformV1ListTensorboardRunsResponse

Response message for TensorboardService.ListTensorboardRuns.
Fields
nextPageToken

string

A token, which can be sent as ListTensorboardRunsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

tensorboardRuns[]

object (GoogleCloudAiplatformV1TensorboardRun)

The TensorboardRuns matching the request.

GoogleCloudAiplatformV1ListTensorboardTimeSeriesResponse

Response message for TensorboardService.ListTensorboardTimeSeries.
Fields
nextPageToken

string

A token, which can be sent as ListTensorboardTimeSeriesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

tensorboardTimeSeries[]

object (GoogleCloudAiplatformV1TensorboardTimeSeries)

The TensorboardTimeSeries matching the request.

GoogleCloudAiplatformV1ListTensorboardsResponse

Response message for TensorboardService.ListTensorboards.
Fields
nextPageToken

string

A token, which can be sent as ListTensorboardsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

tensorboards[]

object (GoogleCloudAiplatformV1Tensorboard)

The Tensorboards matching the request.

GoogleCloudAiplatformV1ListTrainingPipelinesResponse

Response message for PipelineService.ListTrainingPipelines
Fields
nextPageToken

string

A token to retrieve the next page of results. Pass to ListTrainingPipelinesRequest.page_token to obtain that page.

trainingPipelines[]

object (GoogleCloudAiplatformV1TrainingPipeline)

List of TrainingPipelines in the requested page.

GoogleCloudAiplatformV1ListTrialsResponse

Response message for VizierService.ListTrials.
Fields
nextPageToken

string

Pass this token as the page_token field of the request for a subsequent call. If this field is omitted, there are no subsequent pages.

trials[]

object (GoogleCloudAiplatformV1Trial)

The Trials associated with the Study.

GoogleCloudAiplatformV1ListTuningJobsResponse

Response message for GenAiTuningService.ListTuningJobs
Fields
nextPageToken

string

A token to retrieve the next page of results. Pass to ListTuningJobsRequest.page_token to obtain that page.

tuningJobs[]

object (GoogleCloudAiplatformV1TuningJob)

List of TuningJobs in the requested page.

GoogleCloudAiplatformV1LookupStudyRequest

Request message for VizierService.LookupStudy.
Fields
displayName

string

Required. The user-defined display name of the Study

GoogleCloudAiplatformV1MachineSpec

Specification of a single machine.
Fields
acceleratorCount

integer (int32 format)

The number of accelerators to attach to the machine.

acceleratorType

enum

Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.

Enum type. Can be one of the following:
ACCELERATOR_TYPE_UNSPECIFIED Unspecified accelerator type, which means no accelerator.
NVIDIA_TESLA_K80 Nvidia Tesla K80 GPU.
NVIDIA_TESLA_P100 Nvidia Tesla P100 GPU.
NVIDIA_TESLA_V100 Nvidia Tesla V100 GPU.
NVIDIA_TESLA_P4 Nvidia Tesla P4 GPU.
NVIDIA_TESLA_T4 Nvidia Tesla T4 GPU.
NVIDIA_TESLA_A100 Nvidia Tesla A100 GPU.
NVIDIA_A100_80GB Nvidia A100 80GB GPU.
NVIDIA_L4 Nvidia L4 GPU.
NVIDIA_H100_80GB Nvidia H100 80GB GPU.
TPU_V2 TPU v2.
TPU_V3 TPU v3.
TPU_V4_POD TPU v4.
TPU_V5_LITEPOD TPU v5.
machineType

string

Immutable. The type of the machine. See the lists of machine types supported for prediction and for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.

tpuTopology

string

Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
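A sketch of two MachineSpec bodies, one GPU-backed and one TPU-backed (the machine type strings are illustrative placeholders; consult the supported machine type lists for valid combinations):

```python
# Hypothetical machineSpec for a GPU-backed deployment; values are examples.
machine_spec = {
    "machineType": "n1-standard-8",
    "acceleratorType": "NVIDIA_TESLA_T4",
    "acceleratorCount": 1,
}

# A TPU spec instead pairs a TPU accelerator type with a topology string.
tpu_spec = {
    "machineType": "ct5lp-hightpu-4t",   # placeholder machine type
    "acceleratorType": "TPU_V5_LITEPOD",
    "tpuTopology": "2x2x1",              # matches the GKE-style topology format
}

assert machine_spec["acceleratorCount"] >= 1
```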

GoogleCloudAiplatformV1ManualBatchTuningParameters

Manual batch tuning parameters.
Fields
batchSize

integer (int32 format)

Immutable. The number of records (e.g. instances) of the operation given in each batch to a machine replica. The machine type and the size of a single record should be considered when setting this parameter: a higher value speeds up the batch operation's execution, but a value that is too high will result in a whole batch not fitting in a machine's memory, and the whole operation will fail. The default value is 64.

GoogleCloudAiplatformV1Measurement

A message representing a Measurement of a Trial. A Measurement contains the metrics obtained by executing the Trial using suggested hyperparameter values.
Fields
elapsedDuration

string (Duration format)

Output only. Time that the Trial has been running at the point of this Measurement.

metrics[]

object (GoogleCloudAiplatformV1MeasurementMetric)

Output only. A list of metrics obtained by evaluating the objective functions using suggested Parameter values.

stepCount

string (int64 format)

Output only. The number of steps the machine learning model has been trained for. Must be non-negative.

GoogleCloudAiplatformV1MeasurementMetric

A message representing a metric in the measurement.
Fields
metricId

string

Output only. The ID of the Metric. The Metric should be defined in StudySpec's Metrics.

value

number (double format)

Output only. The value for this metric.
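Putting the two messages together, a Measurement reported during a Trial has roughly this JSON shape (the metric IDs and values here are made-up examples; note that int64 fields are transmitted as strings in JSON):

```python
# Hypothetical Measurement body for a Vizier Trial; values are examples.
measurement = {
    "stepCount": "200",            # int64 fields are encoded as strings in JSON
    "elapsedDuration": "120.5s",   # Duration format
    "metrics": [
        {"metricId": "accuracy", "value": 0.91},
        {"metricId": "loss", "value": 0.34},
    ],
}

# metricId values should correspond to metrics declared in the StudySpec.
metric_ids = {m["metricId"] for m in measurement["metrics"]}
assert metric_ids == {"accuracy", "loss"}
```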

GoogleCloudAiplatformV1MergeVersionAliasesRequest

Request message for ModelService.MergeVersionAliases.
Fields
versionAliases[]

string

Required. The set of version aliases to merge. An alias should be at most 128 characters and match a-z{0,126}[a-z-0-9]. Adding the '-' prefix to an alias means removing that alias from the version; the '-' is NOT counted in the 128 characters. Example: -golden means removing the golden alias from the version. There is NO ordering among aliases, which means: 1) the aliases returned from the GetModel API might not be in exactly the same order as in this MergeVersionAliases request; 2) adding and deleting the same alias in one request is not recommended, as the two operations cancel each other out.
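The add/remove semantics can be sketched in plain Python. This models the documented '-' prefix behavior under the assumption that a cancelled pair (adding and deleting the same alias) leaves that alias unchanged; it is not the service implementation:

```python
def merge_version_aliases(current, requested):
    """Apply MergeVersionAliases-style semantics to a set of aliases.

    Aliases prefixed with '-' are removed; all others are added. Adding
    and deleting the same alias in one request cancels out, so that alias
    is left as it was (an assumption about the documented behavior).
    """
    adds = {a for a in requested if not a.startswith("-")}
    removes = {a[1:] for a in requested if a.startswith("-")}
    conflicted = adds & removes  # 'x' and '-x' cancel each other out
    return (set(current) | (adds - conflicted)) - (removes - conflicted)

# Add 'candidate' and drop 'golden' in one request.
aliases = merge_version_aliases({"golden", "stable"}, ["candidate", "-golden"])
```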

GoogleCloudAiplatformV1MetadataSchema

Instance of a general MetadataSchema.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this MetadataSchema was created.

description

string

Description of the Metadata Schema

name

string

Output only. The resource name of the MetadataSchema.

schema

string

Required. The raw YAML string representation of the MetadataSchema. The combination of [MetadataSchema.version] and the schema name given by title in [MetadataSchema.schema] must be unique within a MetadataStore. The schema is defined as an OpenAPI 3.0.2 MetadataSchema Object

schemaType

enum

The type of the MetadataSchema. This is a property that identifies which metadata types will use the MetadataSchema.

Enum type. Can be one of the following:
METADATA_SCHEMA_TYPE_UNSPECIFIED Unspecified type for the MetadataSchema.
ARTIFACT_TYPE A type indicating that the MetadataSchema will be used by Artifacts.
EXECUTION_TYPE A type indicating that the MetadataSchema will be used by Executions.
CONTEXT_TYPE A type indicating that the MetadataSchema will be used by Contexts.
schemaVersion

string

The version of the MetadataSchema. The version's format must match the following regular expression: ^[0-9]+.+.+$, which allows ordering and comparing different versions. Example: 1.0.0, 1.0.1, etc.

GoogleCloudAiplatformV1MetadataStore

Instance of a metadata store. Contains a set of metadata that can be queried.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this MetadataStore was created.

description

string

Description of the MetadataStore.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key spec for a Metadata Store. If set, this Metadata Store and all sub-resources of this Metadata Store are secured using this key.

name

string

Output only. The resource name of the MetadataStore instance.

state

object (GoogleCloudAiplatformV1MetadataStoreMetadataStoreState)

Output only. State information of the MetadataStore.

updateTime

string (Timestamp format)

Output only. Timestamp when this MetadataStore was last updated.

GoogleCloudAiplatformV1MetadataStoreMetadataStoreState

Represents state information for a MetadataStore.
Fields
diskUtilizationBytes

string (int64 format)

The disk utilization of the MetadataStore in bytes.

GoogleCloudAiplatformV1MigratableResource

Represents one resource that exists in automl.googleapis.com, datalabeling.googleapis.com or ml.googleapis.com.
Fields
automlDataset

object (GoogleCloudAiplatformV1MigratableResourceAutomlDataset)

Output only. Represents one Dataset in automl.googleapis.com.

automlModel

object (GoogleCloudAiplatformV1MigratableResourceAutomlModel)

Output only. Represents one Model in automl.googleapis.com.

dataLabelingDataset

object (GoogleCloudAiplatformV1MigratableResourceDataLabelingDataset)

Output only. Represents one Dataset in datalabeling.googleapis.com.

lastMigrateTime

string (Timestamp format)

Output only. Timestamp when the last migration attempt on this MigratableResource started. Will not be set if there's no migration attempt on this MigratableResource.

lastUpdateTime

string (Timestamp format)

Output only. Timestamp when this MigratableResource was last updated.

mlEngineModelVersion

object (GoogleCloudAiplatformV1MigratableResourceMlEngineModelVersion)

Output only. Represents one Version in ml.googleapis.com.

GoogleCloudAiplatformV1MigratableResourceAutomlDataset

Represents one Dataset in automl.googleapis.com.
Fields
dataset

string

Full resource name of automl Dataset. Format: projects/{project}/locations/{location}/datasets/{dataset}.

datasetDisplayName

string

The Dataset's display name in automl.googleapis.com.

GoogleCloudAiplatformV1MigratableResourceAutomlModel

Represents one Model in automl.googleapis.com.
Fields
model

string

Full resource name of automl Model. Format: projects/{project}/locations/{location}/models/{model}.

modelDisplayName

string

The Model's display name in automl.googleapis.com.

GoogleCloudAiplatformV1MigratableResourceDataLabelingDataset

Represents one Dataset in datalabeling.googleapis.com.
Fields
dataLabelingAnnotatedDatasets[]

object (GoogleCloudAiplatformV1MigratableResourceDataLabelingDatasetDataLabelingAnnotatedDataset)

The migratable AnnotatedDataset in datalabeling.googleapis.com belongs to the data labeling Dataset.

dataset

string

Full resource name of data labeling Dataset. Format: projects/{project}/datasets/{dataset}.

datasetDisplayName

string

The Dataset's display name in datalabeling.googleapis.com.

GoogleCloudAiplatformV1MigratableResourceDataLabelingDatasetDataLabelingAnnotatedDataset

Represents one AnnotatedDataset in datalabeling.googleapis.com.
Fields
annotatedDataset

string

Full resource name of data labeling AnnotatedDataset. Format: projects/{project}/datasets/{dataset}/annotatedDatasets/{annotated_dataset}.

annotatedDatasetDisplayName

string

The AnnotatedDataset's display name in datalabeling.googleapis.com.

GoogleCloudAiplatformV1MigratableResourceMlEngineModelVersion

Represents one model Version in ml.googleapis.com.
Fields
endpoint

string

The ml.googleapis.com endpoint that this model Version currently lives in. Example values: * ml.googleapis.com * us-central1-ml.googleapis.com * europe-west4-ml.googleapis.com * asia-east1-ml.googleapis.com

version

string

Full resource name of ml engine model Version. Format: projects/{project}/models/{model}/versions/{version}.
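
The legacy resource-name formats listed above can be assembled mechanically. A minimal sketch (the helper names are this example's own, not part of the API):

```python
# Sketch: builders for the legacy resource-name formats documented above.
def automl_dataset_name(project: str, location: str, dataset: str) -> str:
    """automl.googleapis.com Dataset resource name."""
    return f"projects/{project}/locations/{location}/datasets/{dataset}"

def ml_engine_version_name(project: str, model: str, version: str) -> str:
    """ml.googleapis.com model Version resource name (no location segment)."""
    return f"projects/{project}/models/{model}/versions/{version}"
```

Note that the ml.googleapis.com format, unlike the automl one, carries no location segment; the region is implied by the endpoint field instead.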

GoogleCloudAiplatformV1MigrateResourceRequest

Config of migrating one resource from automl.googleapis.com, datalabeling.googleapis.com and ml.googleapis.com to Vertex AI.
Fields
migrateAutomlDatasetConfig

object (GoogleCloudAiplatformV1MigrateResourceRequestMigrateAutomlDatasetConfig)

Config for migrating Dataset in automl.googleapis.com to Vertex AI's Dataset.

migrateAutomlModelConfig

object (GoogleCloudAiplatformV1MigrateResourceRequestMigrateAutomlModelConfig)

Config for migrating Model in automl.googleapis.com to Vertex AI's Model.

migrateDataLabelingDatasetConfig

object (GoogleCloudAiplatformV1MigrateResourceRequestMigrateDataLabelingDatasetConfig)

Config for migrating Dataset in datalabeling.googleapis.com to Vertex AI's Dataset.

migrateMlEngineModelVersionConfig

object (GoogleCloudAiplatformV1MigrateResourceRequestMigrateMlEngineModelVersionConfig)

Config for migrating Version in ml.googleapis.com to Vertex AI's Model.

GoogleCloudAiplatformV1MigrateResourceRequestMigrateAutomlDatasetConfig

Config for migrating Dataset in automl.googleapis.com to Vertex AI's Dataset.
Fields
dataset

string

Required. Full resource name of automl Dataset. Format: projects/{project}/locations/{location}/datasets/{dataset}.

datasetDisplayName

string

Required. Display name of the Dataset in Vertex AI. System will pick a display name if unspecified.

GoogleCloudAiplatformV1MigrateResourceRequestMigrateAutomlModelConfig

Config for migrating Model in automl.googleapis.com to Vertex AI's Model.
Fields
model

string

Required. Full resource name of automl Model. Format: projects/{project}/locations/{location}/models/{model}.

modelDisplayName

string

Optional. Display name of the model in Vertex AI. System will pick a display name if unspecified.

GoogleCloudAiplatformV1MigrateResourceRequestMigrateDataLabelingDatasetConfig

Config for migrating Dataset in datalabeling.googleapis.com to Vertex AI's Dataset.
Fields
dataset

string

Required. Full resource name of data labeling Dataset. Format: projects/{project}/datasets/{dataset}.

datasetDisplayName

string

Optional. Display name of the Dataset in Vertex AI. System will pick a display name if unspecified.

migrateDataLabelingAnnotatedDatasetConfigs[]

object (GoogleCloudAiplatformV1MigrateResourceRequestMigrateDataLabelingDatasetConfigMigrateDataLabelingAnnotatedDatasetConfig)

Optional. Configs for migrating AnnotatedDataset in datalabeling.googleapis.com to Vertex AI's SavedQuery. The specified AnnotatedDatasets have to belong to the datalabeling Dataset.

GoogleCloudAiplatformV1MigrateResourceRequestMigrateDataLabelingDatasetConfigMigrateDataLabelingAnnotatedDatasetConfig

Config for migrating AnnotatedDataset in datalabeling.googleapis.com to Vertex AI's SavedQuery.
Fields
annotatedDataset

string

Required. Full resource name of data labeling AnnotatedDataset. Format: projects/{project}/datasets/{dataset}/annotatedDatasets/{annotated_dataset}.

GoogleCloudAiplatformV1MigrateResourceRequestMigrateMlEngineModelVersionConfig

Config for migrating version in ml.googleapis.com to Vertex AI's Model.
Fields
endpoint

string

Required. The ml.googleapis.com endpoint that this model version should be migrated from. Example values: * ml.googleapis.com * us-central1-ml.googleapis.com * europe-west4-ml.googleapis.com * asia-east1-ml.googleapis.com

modelDisplayName

string

Required. Display name of the model in Vertex AI. System will pick a display name if unspecified.

modelVersion

string

Required. Full resource name of ml engine model version. Format: projects/{project}/models/{model}/versions/{version}.
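
The four migrate*Config fields of MigrateResourceRequest act as a union: each request migrates one resource, so exactly one config is set. A sketch of a request body migrating a single ml.googleapis.com model version (the project, model, and display-name values are hypothetical placeholders):

```python
# Sketch: a MigrateResourceRequest body. Only one of the migrate*Config
# fields may be populated per request.
migrate_request = {
    "migrateMlEngineModelVersionConfig": {
        "endpoint": "us-central1-ml.googleapis.com",
        "modelVersion": "projects/my-project/models/my-model/versions/v1",
        "modelDisplayName": "my-migrated-model",
    }
}

# Union semantics: exactly one config key per request.
assert len(migrate_request) == 1
```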

GoogleCloudAiplatformV1MigrateResourceResponse

Describes a successfully migrated resource.
Fields
dataset

string

Migrated Dataset's resource name.

migratableResource

object (GoogleCloudAiplatformV1MigratableResource)

Before migration, the identifier in ml.googleapis.com, automl.googleapis.com or datalabeling.googleapis.com.

model

string

Migrated Model's resource name.

GoogleCloudAiplatformV1Model

A trained machine learning Model.
Fields
artifactUri

string

Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not required for AutoML Models.

baseModelSource

object (GoogleCloudAiplatformV1ModelBaseModelSource)

Optional. User input field to specify the base model source. Currently it only supports specifying Model Garden models and Genie models.

containerSpec

object (GoogleCloudAiplatformV1ModelContainerSpec)

Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI. Not required for AutoML Models.

createTime

string (Timestamp format)

Output only. Timestamp when this Model was uploaded into Vertex AI.

dataStats

object (GoogleCloudAiplatformV1ModelDataStats)

Stats of data used for training or evaluating the Model. Only populated when the Model is trained by a TrainingPipeline with data_input_config.

deployedModels[]

object (GoogleCloudAiplatformV1DeployedModelRef)

Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.

description

string

The description of the Model.

displayName

string

Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

explanationSpec

object (GoogleCloudAiplatformV1ExplanationSpec)

The default explanation specification for this Model. If populated, the Model can be used for requesting explanation after being deployed, and for batch explanation. All fields of the explanation_spec can be overridden by the explanation_spec of DeployModelRequest.deployed_model, or the explanation_spec of BatchPredictionJob. If the default explanation specification is not set for this Model, the Model can still be used for requesting explanation by setting the explanation_spec of DeployModelRequest.deployed_model, and for batch explanation by setting the explanation_spec of BatchPredictionJob.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
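
A client-side check for these constraints can be sketched as follows. The doc also allows international characters; this check covers only the ASCII subset (lowercase letters, digits, underscores, dashes), so it is stricter than the API:

```python
import re

# Sketch: ASCII-only validation of the label constraints described above
# (at most 64 characters; lowercase letters, digits, underscores, dashes).
LABEL_RE = re.compile(r"^[a-z0-9_-]{1,64}$")

def is_valid_label(key: str, value: str) -> bool:
    """True if both key and value satisfy the ASCII subset of the rules."""
    return bool(LABEL_RE.fullmatch(key)) and bool(LABEL_RE.fullmatch(value))
```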

metadata

any

Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema. Unset if the Model does not have any additional information.

metadataArtifact

string

Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.

metadataSchemaUri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: the URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.

modelSourceInfo

object (GoogleCloudAiplatformV1ModelSourceInfo)

Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or a model saved and tuned from Genie or Model Garden.

name

string

The resource name of the Model.

originalModelInfo

object (GoogleCloudAiplatformV1ModelOriginalModelInfo)

Output only. If this Model is a copy of another Model, this contains info about the original.

pipelineJob

string

Optional. This field is populated if the model is produced by a pipeline job.

predictSchemata

object (GoogleCloudAiplatformV1PredictSchemata)

The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.

supportedDeploymentResourcesTypes[]

string

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.

supportedExportFormats[]

object (GoogleCloudAiplatformV1ModelExportFormat)

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

supportedInputStorageFormats[]

string

Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are: * jsonl The JSON Lines format, where each instance is a single line. Uses GcsSource. * csv The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource. * tf-record The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource. * tf-record-gzip Similar to tf-record, but the file is gzipped. Uses GcsSource. * bigquery Each instance is a single row in BigQuery. Uses BigQuerySource. * file-list Each line of the file is the location of an instance to process, uses gcs_source field of the InputConfig object. If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

supportedOutputStorageFormats[]

string

Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are: * jsonl The JSON Lines format, where each prediction is a single line. Uses GcsDestination. * csv The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination. * bigquery Each prediction is a single row in a BigQuery table, uses BigQueryDestination . If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

trainingPipeline

string

Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.

updateTime

string (Timestamp format)

Output only. Timestamp when this Model was most recently updated.

versionAliases[]

string

User provided version aliases so that a model version can be referenced via alias (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_alias}) instead of the auto-generated version id (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_id}). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9], which distinguishes aliases from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.
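
As a sketch, a model version can be referenced by appending @{alias} or @{version_id} to the model resource name. The alias pattern below is an assumed reconstruction ([a-z][a-zA-Z0-9-]{0,126}[a-z0-9]); the published text appears to have had its bracket groups stripped:

```python
import re

# Assumed alias pattern: starts with a lowercase letter, ends with a lowercase
# letter or digit, so an alias can never look like a numeric version_id.
ALIAS_RE = re.compile(r"[a-z][a-zA-Z0-9-]{0,126}[a-z0-9]")

def versioned_model_name(project: str, location: str,
                         model_id: str, ref: str) -> str:
    """Reference a model version by alias or by numeric version id."""
    return f"projects/{project}/locations/{location}/models/{model_id}@{ref}"
```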

versionCreateTime

string (Timestamp format)

Output only. Timestamp when this version was created.

versionDescription

string

The description of this version.

versionId

string

Output only. Immutable. The version ID of the model. A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.

versionUpdateTime

string (Timestamp format)

Output only. Timestamp when this version was most recently updated.

GoogleCloudAiplatformV1ModelBaseModelSource

User input field to specify the base model source. Currently it only supports specifying Model Garden models and Genie models.
Fields
genieSource

object (GoogleCloudAiplatformV1GenieSource)

Information about the base model of Genie models.

modelGardenSource

object (GoogleCloudAiplatformV1ModelGardenSource)

Source information of Model Garden models.

GoogleCloudAiplatformV1ModelContainerSpec

Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification.
Fields
args[]

string

Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact. In this field, you can reference environment variables set by Vertex AI and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME) This field corresponds to the args field of the Kubernetes Containers v1 core API.

command[]

string

Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. In this field, you can reference environment variables set by Vertex AI and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME) This field corresponds to the command field of the Kubernetes Containers v1 core API.

deploymentTimeout

string (Duration format)

Immutable. Deployment timeout. Limit for deployment timeout is 2 hours.

env[]

object (GoogleCloudAiplatformV1EnvVar)

Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable VAR_2 to have the value foo bar: json [ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ] If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API.
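
The sequential, order-dependent expansion described above (later entries may reference earlier ones; $$ escapes; unresolved references are left unchanged) can be sketched as follows. This is an illustrative re-implementation, not Vertex AI's own code:

```python
import re

def expand_env(env_list):
    """Sequentially expand $(NAME) references in an env list.

    Mirrors the documented behavior: later entries can reference earlier
    ones, $$(NAME) escapes to a literal $(NAME), and a reference that
    cannot be resolved is left in the string unchanged.
    """
    resolved = {}
    pattern = re.compile(r"\$(\$?)\(\s*([A-Za-z_][A-Za-z0-9_]*)\)")
    for item in env_list:
        def sub(m):
            if m.group(1):                       # "$$(" -> literal "$(NAME)"
                return "$(" + m.group(2) + ")"
            return resolved.get(m.group(2), m.group(0))  # unresolved: as-is
        resolved[item["name"]] = pattern.sub(sub, item["value"])
    return resolved
```

With the doc's own example, VAR_2 expands to "foo bar"; with the order of the two variables switched, no expansion occurs, because VAR_1 is not yet resolved when VAR_2 is processed.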

grpcPorts[]

object (GoogleCloudAiplatformV1Port)

Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port. If you do not specify this field, gRPC requests to the container will be disabled. Vertex AI does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.

healthProbe

object (GoogleCloudAiplatformV1Probe)

Immutable. Specification for Kubernetes readiness probe.

healthRoute

string

Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then Vertex AI intermittently sends a GET request to the /bar path on the port of your container specified by the first value of this ModelContainerSpec's ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following endpoints/) of the Endpoint.name field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the AIP_ENDPOINT_ID environment variable.) * DEPLOYED_MODEL: DeployedModel.id of the DeployedModel. (Vertex AI makes this value available to your container code as the AIP_DEPLOYED_MODEL_ID environment variable.)

imageUri

string

Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the container publishing requirements, including permissions requirements for the Vertex AI Service Agent. The container image is ingested upon ModelService.UploadModel, stored internally, and this original path is afterwards not used. To learn about the requirements for the Docker image itself, see Custom container requirements. You can use the URI of one of Vertex AI's pre-built container images for prediction in this field.

ports[]

object (GoogleCloudAiplatformV1Port)

Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port. If you do not specify this field, it defaults to following value: json [ { "containerPort": 8080 } ] Vertex AI does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.

predictRoute

string

Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to /foo, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of this ModelContainerSpec's ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following endpoints/) of the Endpoint.name field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the AIP_ENDPOINT_ID environment variable.) * DEPLOYED_MODEL: DeployedModel.id of the DeployedModel. (Vertex AI makes this value available to your container code as the AIP_DEPLOYED_MODEL_ID environment variable.)
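
The default-route substitution described above amounts to formatting the endpoint id and deployed model id into a fixed template. A minimal sketch (the helper name and argument values are this example's own):

```python
# Sketch: the default predictRoute value, with the ENDPOINT and
# DEPLOYED_MODEL placeholders substituted as described above.
def default_predict_route(endpoint_id: str, deployed_model_id: str) -> str:
    """Default route Vertex AI uses when predictRoute is unspecified."""
    return (f"/v1/endpoints/{endpoint_id}"
            f"/deployedModels/{deployed_model_id}:predict")
```

Inside the container, the same two values are available as the AIP_ENDPOINT_ID and AIP_DEPLOYED_MODEL_ID environment variables.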

sharedMemorySizeMb

string (int64 format)

Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes.

startupProbe

object (GoogleCloudAiplatformV1Probe)

Immutable. Specification for Kubernetes startup probe.

GoogleCloudAiplatformV1ModelDataStats

Stats of data used to train or evaluate the Model.
Fields
testAnnotationsCount

string (int64 format)

Number of Annotations that are used for evaluating this Model. If the Model is evaluated multiple times, this will be the number of test Annotations used by the first evaluation. If the Model is not evaluated, the number is 0.

testDataItemsCount

string (int64 format)

Number of DataItems that were used for evaluating this Model. If the Model is evaluated multiple times, this will be the number of test DataItems used by the first evaluation. If the Model is not evaluated, the number is 0.

trainingAnnotationsCount

string (int64 format)

Number of Annotations that are used for training this Model.

trainingDataItemsCount

string (int64 format)

Number of DataItems that were used for training this Model.

validationAnnotationsCount

string (int64 format)

Number of Annotations that are used for validating this Model during training.

validationDataItemsCount

string (int64 format)

Number of DataItems that were used for validating this Model during training.

GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTable

ModelDeploymentMonitoringBigQueryTable specifies the BigQuery table name as well as some information of the logs stored in this table.
Fields
bigqueryTablePath

string

The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._

logSource

enum

The source of log.

Enum type. Can be one of the following:
LOG_SOURCE_UNSPECIFIED Unspecified source.
TRAINING Logs coming from Training dataset.
SERVING Logs coming from Serving traffic.
logType

enum

The type of log.

Enum type. Can be one of the following:
LOG_TYPE_UNSPECIFIED Unspecified type.
PREDICT Predict logs.
EXPLAIN Explain logs.
requestResponseLoggingSchemaVersion

string

Output only. The schema version of the request/response logging BigQuery table. Default to v1 if unset.

GoogleCloudAiplatformV1ModelDeploymentMonitoringJob

Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors.
Fields
analysisInstanceSchemaUri

string

YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all fields of the predict instance formatted as string.

bigqueryTables[]

object (GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTable)

Output only. The created BigQuery tables for the job under the customer project. Customers can run their own queries and analysis. There can be at most 4 log tables, one per combination of log source and log type: 1. Training data logging predict request/response 2. Serving data logging predict request/response 3. Training data logging explain request/response 4. Serving data logging explain request/response

createTime

string (Timestamp format)

Output only. Timestamp when this ModelDeploymentMonitoringJob was created.

displayName

string

Required. The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

enableMonitoringPipelineLogs

boolean

If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Note that these logs incur costs, which are subject to Cloud Logging pricing.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.

endpoint

string

Required. Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

error

object (GoogleRpcStatus)

Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

latestMonitoringPipelineMetadata

object (GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadata)

Output only. Latest triggered monitoring pipeline metadata.

logTtl

string (Duration format)

The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL: the value is ceil(TTL / 86400), i.e. rounded up to whole days. For example, { seconds: 3600 } indicates a TTL of 1 day.
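
The rounding described above, with a day as the basic unit, can be sketched as:

```python
import math

# Sketch: the documented TTL rounding, ceil(TTL / 86400), where 86400
# seconds is one day. A Duration of { seconds: 3600 } yields a 1-day TTL.
def log_ttl_days(duration_seconds: int) -> int:
    """Round a Duration (in seconds) up to whole days."""
    return math.ceil(duration_seconds / 86400)
```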

loggingSamplingStrategy

object (GoogleCloudAiplatformV1SamplingStrategy)

Required. Sampling strategy for logging.

modelDeploymentMonitoringObjectiveConfigs[]

object (GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfig)

Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.

modelDeploymentMonitoringScheduleConfig

object (GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfig)

Required. Schedule config for running the monitoring job.

modelMonitoringAlertConfig

object (GoogleCloudAiplatformV1ModelMonitoringAlertConfig)

Alert config for model monitoring.

name

string

Output only. Resource name of a ModelDeploymentMonitoringJob.

nextScheduleTime

string (Timestamp format)

Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.

predictInstanceSchemaUri

string

YAML schema file URI describing the format of a single instance as given in this Endpoint's prediction (and explanation) requests. If not set, the predict schema is generated from collected predict requests.

samplePredictInstance

any

Sample Predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema is generated from collected predict requests.

scheduleState

enum

Output only. Schedule state when the monitoring job is in Running state.

Enum type. Can be one of the following:
MONITORING_SCHEDULE_STATE_UNSPECIFIED Unspecified state.
PENDING The pipeline has been picked up and is waiting to run.
OFFLINE The pipeline is offline and will be scheduled for next run.
RUNNING The pipeline is running.
state

enum

Output only. The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state becomes 'RUNNING'. When the job is paused, the state becomes 'PAUSED'; when it is resumed, the state returns to 'RUNNING'.

Enum type. Can be one of the following:
JOB_STATE_UNSPECIFIED The job state is unspecified.
JOB_STATE_QUEUED The job has been just created or resumed and processing has not yet begun.
JOB_STATE_PENDING The service is preparing to run the job.
JOB_STATE_RUNNING The job is in progress.
JOB_STATE_SUCCEEDED The job completed successfully.
JOB_STATE_FAILED The job failed.
JOB_STATE_CANCELLING The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED, JOB_STATE_FAILED or JOB_STATE_CANCELLED.
JOB_STATE_CANCELLED The job has been cancelled.
JOB_STATE_PAUSED The job has been stopped, and can be resumed.
JOB_STATE_EXPIRED The job has expired.
JOB_STATE_UPDATING The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state.
JOB_STATE_PARTIALLY_SUCCEEDED The job is partially succeeded, some results may be missing due to errors.
statsAnomaliesBaseDirectory

object (GoogleCloudAiplatformV1GcsDestination)

Stats anomalies base folder path.

updateTime

string (Timestamp format)

Output only. Timestamp when this ModelDeploymentMonitoringJob was updated most recently.

GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadata

All metadata of most recent monitoring pipelines.
Fields
runTime

string (Timestamp format)

The time of the most recent monitoring pipeline run related to this job.

status

object (GoogleRpcStatus)

The status of the most recent monitoring pipeline.

GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfig

ModelDeploymentMonitoringObjectiveConfig contains the pair of deployed_model_id to ModelMonitoringObjectiveConfig.
Fields
deployedModelId

string

The DeployedModel ID of the objective config.

objectiveConfig

object (GoogleCloudAiplatformV1ModelMonitoringObjectiveConfig)

The objective config for the model monitoring job of this deployed model.

GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfig

The config for scheduling monitoring job.
Fields
monitorInterval

string (Duration format)

Required. The model monitoring job scheduling interval. It will be rounded up to next full hour. This defines how often the monitoring jobs are triggered.

monitorWindow

string (Duration format)

The time window of the prediction data included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
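The worked example above can be reproduced with a small sketch; the schedule_config dict and the aggregation_window helper are illustrative, not API calls:

```python
from datetime import datetime, timedelta

# Hypothetical schedule config mirroring the JSON fields above.
schedule_config = {
    "monitorInterval": "3600s",  # rounded up to the next full hour by the service
    "monitorWindow": "3600s",
}

def aggregation_window(cutoff: datetime, monitor_window_s: int):
    """Return the (start, end) window of prediction data aggregated
    for one monitoring run, per the description above."""
    return cutoff - timedelta(seconds=monitor_window_s), cutoff

start, end = aggregation_window(datetime(2022, 1, 8, 14, 30), 3600)
print(start, "to", end)  # 2022-01-08 13:30:00 to 2022-01-08 14:30:00
```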

GoogleCloudAiplatformV1ModelEvaluation

A collection of metrics calculated by comparing Model's predictions on all of the test data against annotations from the test data.
Fields
annotationSchemaUri

string

Points to a YAML file stored on Google Cloud Storage describing EvaluatedDataItemView.predictions, EvaluatedDataItemView.ground_truths, EvaluatedAnnotation.predictions, and EvaluatedAnnotation.ground_truths. The schema is defined as an OpenAPI 3.0.2 Schema Object. This field is not populated if there are neither EvaluatedDataItemViews nor EvaluatedAnnotations under this ModelEvaluation.

createTime

string (Timestamp format)

Output only. Timestamp when this ModelEvaluation was created.

dataItemSchemaUri

string

Points to a YAML file stored on Google Cloud Storage describing EvaluatedDataItemView.data_item_payload and EvaluatedAnnotation.data_item_payload. The schema is defined as an OpenAPI 3.0.2 Schema Object. This field is not populated if there are neither EvaluatedDataItemViews nor EvaluatedAnnotations under this ModelEvaluation.

displayName

string

The display name of the ModelEvaluation.

explanationSpecs[]

object (GoogleCloudAiplatformV1ModelEvaluationModelEvaluationExplanationSpec)

Describes the values of ExplanationSpec that are used for explaining the predicted values on the evaluated data.

metadata

any

The metadata of the ModelEvaluation. For the ModelEvaluation uploaded from Managed Pipeline, metadata contains a structured value with keys of "pipeline_job_id", "evaluation_dataset_type", "evaluation_dataset_path", "row_based_metrics_path".

metrics

any

Evaluation metrics of the Model. The schema of the metrics is stored in metrics_schema_uri

metricsSchemaUri

string

Points to a YAML file stored on Google Cloud Storage describing the metrics of this ModelEvaluation. The schema is defined as an OpenAPI 3.0.2 Schema Object.

modelExplanation

object (GoogleCloudAiplatformV1ModelExplanation)

Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for AutoML tabular Models.

name

string

Output only. The resource name of the ModelEvaluation.

sliceDimensions[]

string

All possible dimensions of ModelEvaluationSlices. The dimensions can be used as the filter of the ModelService.ListModelEvaluationSlices request, in the form of slice.dimension = <dimension>.

GoogleCloudAiplatformV1ModelEvaluationModelEvaluationExplanationSpec

(No description provided)
Fields
explanationSpec

object (GoogleCloudAiplatformV1ExplanationSpec)

Explanation spec details.

explanationType

string

Explanation type. For AutoML Image Classification models, possible values are: * image-integrated-gradients * image-xrai

GoogleCloudAiplatformV1ModelEvaluationSlice

A collection of metrics calculated by comparing Model's predictions on a slice of the test data against ground truth annotations.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this ModelEvaluationSlice was created.

metrics

any

Output only. Sliced evaluation metrics of the Model. The schema of the metrics is stored in metrics_schema_uri

metricsSchemaUri

string

Output only. Points to a YAML file stored on Google Cloud Storage describing the metrics of this ModelEvaluationSlice. The schema is defined as an OpenAPI 3.0.2 Schema Object.

modelExplanation

object (GoogleCloudAiplatformV1ModelExplanation)

Output only. Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for tabular Models.

name

string

Output only. The resource name of the ModelEvaluationSlice.

slice

object (GoogleCloudAiplatformV1ModelEvaluationSliceSlice)

Output only. The slice of the test data that is used to evaluate the Model.

GoogleCloudAiplatformV1ModelEvaluationSliceSlice

Definition of a slice.
Fields
dimension

string

Output only. The dimension of the slice. Well-known dimensions are: * annotationSpec: This slice is on the test data that has either ground truth or prediction with AnnotationSpec.display_name equals to value. * slice: This slice is a user customized slice defined by its SliceSpec.

sliceSpec

object (GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpec)

Output only. Specification for how the data was sliced.

value

string

Output only. The value of the dimension in this slice.

GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpec

Specification for how the data should be sliced.
Fields
configs

map (key: string, value: object (GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpecSliceConfig))

Mapping configuration for this SliceSpec. The key is the name of the feature. By default, the key will be prefixed by "instance" as a dictionary prefix for Vertex Batch Predictions output format.

GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpecRange

A range of values for slice(s). low is inclusive, high is exclusive.
Fields
high

number (float format)

Exclusive high value for the range.

low

number (float format)

Inclusive low value for the range.

GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpecSliceConfig

Specification message containing the config for this SliceSpec. When kind is selected as value and/or range, only a single slice will be computed. When all_values is present, a separate slice will be computed for each possible label/value of the corresponding key in the config.

Examples, with a feature zip_code with values 12345, 23334, 88888 and a feature country with values "US", "Canada", "Mexico" in the dataset:

Example 1: { "zip_code": { "value": { "float_value": 12345.0 } } } A single slice for any data with zip_code 12345 in the dataset.

Example 2: { "zip_code": { "range": { "low": 12345, "high": 20000 } } } A single slice containing data where the zip_code is between 12345 and 20000. For this example, data with a zip_code of 12345 will be in this slice.

Example 3: { "zip_code": { "range": { "low": 10000, "high": 20000 } }, "country": { "value": { "string_value": "US" } } } A single slice containing data where the zip_code is between 10000 and 20000 and the country is "US". For this example, data with a zip_code of 12345 and country "US" will be in this slice.

Example 4: { "country": { "all_values": { "value": true } } } Three slices are computed, one for each unique country in the dataset.

Example 5: { "country": { "all_values": { "value": true } }, "zip_code": { "value": { "float_value": 12345.0 } } } Three slices are computed, one for each unique country in the dataset where the zip_code is also 12345. For this example, data with zip_code 12345 and country "US" will be in one slice, zip_code 12345 and country "Canada" in another slice, and zip_code 12345 and country "Mexico" in another slice, totaling 3 slices.
Fields
allValues

boolean

If all_values is set to true, then all possible labels of the keyed feature will have another slice computed. Example: {"all_values":{"value":true}}

range

object (GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpecRange)

A range of values for a numerical feature. Example: {"range":{"low":10000.0,"high":50000.0}} will capture 12345 and 23334 in the slice.

value

object (GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpecValue)

A unique specific value for a given feature. Example: { "value": { "string_value": "12345" } }
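A minimal sketch of the SliceConfig matching semantics described above (value, range with inclusive low and exclusive high, all_values); in_slice is a hypothetical helper, not part of the API:

```python
def in_slice(config: dict, feature_value) -> bool:
    """Check whether one feature value falls in a SliceConfig.
    `value` matches exactly, `range` is inclusive-low / exclusive-high,
    and `all_values` matches everything (the service then produces one
    slice per distinct value)."""
    if "value" in config:
        v = config["value"]
        target = v.get("float_value", v.get("string_value"))
        return feature_value == target
    if "range" in config:
        return config["range"]["low"] <= feature_value < config["range"]["high"]
    if "all_values" in config:
        return bool(config["all_values"]["value"])
    return False

# Example 2 from the description: zip codes in [12345, 20000)
cfg = {"range": {"low": 12345, "high": 20000}}
print(in_slice(cfg, 12345))  # True: 12345 is included (low is inclusive)
print(in_slice(cfg, 20000))  # False: high is exclusive
```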

GoogleCloudAiplatformV1ModelEvaluationSliceSliceSliceSpecValue

Single value that supports strings and floats.
Fields
floatValue

number (float format)

Float type.

stringValue

string

String type.

GoogleCloudAiplatformV1ModelExplanation

Aggregated explanation metrics for a Model over a set of instances.
Fields
meanAttributions[]

object (GoogleCloudAiplatformV1Attribution)

Output only. Aggregated attributions explaining the Model's prediction outputs over the set of instances. The attributions are grouped by outputs. For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. The baselineOutputValue, instanceOutputValue and featureAttributions fields are averaged over the test data. NOTE: Currently AutoML tabular classification Models produce only one attribution, which averages attributions over all the classes it predicts. Attribution.approximation_error is not populated.

GoogleCloudAiplatformV1ModelExportFormat

Represents export format supported by the Model. All formats export to Google Cloud Storage.
Fields
exportableContents[]

string

Output only. The content of this Model that may be exported.

id

string

Output only. The ID of the export format. The possible format IDs are: * tflite Used for Android mobile devices. * edgetpu-tflite Used for Edge TPU devices. * tf-saved-model A TensorFlow model in SavedModel format. * tf-js A TensorFlow.js model that can be used in the browser and in Node.js using JavaScript. * core-ml Used for iOS mobile devices. * custom-trained A Model that was uploaded or trained by custom code.

GoogleCloudAiplatformV1ModelGardenSource

Contains information about the source of the models generated from Model Garden.
Fields
publicModelName

string

Required. The model garden source model resource name.

GoogleCloudAiplatformV1ModelMonitoringAlertConfig

The alert config for model monitoring.
Fields
emailAlertConfig

object (GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfig)

Email alert config.

enableLogging

boolean

Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other services supported by Cloud Logging.

notificationChannels[]

string

Resource names of the NotificationChannels to send the alert to. Must be of the format projects/<project_id_or_number>/notificationChannels/<channel_id>.

GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfig

The config for email alert.
Fields
userEmails[]

string

The email addresses to send the alert to.
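A hypothetical JSON payload combining ModelMonitoringAlertConfig and its email sub-config; the email address and channel ID are placeholders:

```python
import json

# Hypothetical ModelMonitoringAlertConfig payload assembled from the
# fields documented above. All identifiers are made up.
alert_config = {
    "emailAlertConfig": {"userEmails": ["ml-oncall@example.com"]},
    "enableLogging": True,  # anomalies also dumped to Cloud Logging
    "notificationChannels": [
        "projects/my-project/notificationChannels/123"  # placeholder channel
    ],
}

print(json.dumps(alert_config, indent=2))
```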

GoogleCloudAiplatformV1ModelMonitoringObjectiveConfig

The objective configuration for model monitoring, including the information needed to detect anomalies for one particular model.
Fields
explanationConfig

object (GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfig)

The config for integrating with Vertex Explainable AI.

predictionDriftDetectionConfig

object (GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig)

The config for drift of prediction data.

trainingDataset

object (GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDataset)

Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.

trainingPredictionSkewDetectionConfig

object (GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig)

The config for skew between training data and prediction data.

GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfig

The config for integrating with Vertex Explainable AI. Only applicable if the Model has explanation_spec populated.
Fields
enableFeatureAttributes

boolean

Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and perform skew/drift detection on them.

explanationBaseline

object (GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline)

Predictions generated by the BatchPredictionJob using baseline dataset.

GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline

Output from BatchPredictionJob for Model Monitoring baseline dataset, which can be used to generate baseline attribution scores.
Fields
bigquery

object (GoogleCloudAiplatformV1BigQueryDestination)

BigQuery location for BatchExplain output.

gcs

object (GoogleCloudAiplatformV1GcsDestination)

Cloud Storage location for BatchExplain output.

predictionFormat

enum

The storage format of the predictions generated by the BatchPrediction job.

Enum type. Can be one of the following:
PREDICTION_FORMAT_UNSPECIFIED Should not be set.
JSONL Predictions are in JSONL files.
BIGQUERY Predictions are in BigQuery.

GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig

The config for Prediction data drift detection.
Fields
attributionScoreDriftThresholds

map (key: string, value: object (GoogleCloudAiplatformV1ThresholdConfig))

Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.

defaultDriftThreshold

object (GoogleCloudAiplatformV1ThresholdConfig)

Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.

driftThresholds

map (key: string, value: object (GoogleCloudAiplatformV1ThresholdConfig))

Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.

GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDataset

Training Dataset information.
Fields
bigquerySource

object (GoogleCloudAiplatformV1BigQuerySource)

The BigQuery table of the unmanaged Dataset used to train this Model.

dataFormat

string

Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.

dataset

string

The resource name of the Dataset used to train this Model.

gcsSource

object (GoogleCloudAiplatformV1GcsSource)

The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.

loggingSamplingStrategy

object (GoogleCloudAiplatformV1SamplingStrategy)

Strategy to sample data from Training Dataset. If not set, we process the whole dataset.

targetField

string

The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.

GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig

The config for Training & Prediction data skew detection. It specifies the training dataset sources and the skew detection parameters.
Fields
attributionScoreSkewThresholds

map (key: string, value: object (GoogleCloudAiplatformV1ThresholdConfig))

Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.

defaultSkewThreshold

object (GoogleCloudAiplatformV1ThresholdConfig)

Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.

skewThresholds

map (key: string, value: object (GoogleCloudAiplatformV1ThresholdConfig))

Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
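The skew and drift sections above can be combined into a single ModelMonitoringObjectiveConfig payload. A hedged sketch: the resource name, feature names, and threshold values are all placeholders, and ThresholdConfig is represented by its documented value field:

```python
# Hypothetical ModelMonitoringObjectiveConfig combining the training
# dataset, skew detection, and drift detection fields documented above.
objective_config = {
    "trainingDataset": {
        "dataset": "projects/123/locations/us-central1/datasets/456",  # placeholder
        "targetField": "label",  # excluded from Predict/Explain on training data
    },
    "trainingPredictionSkewDetectionConfig": {
        "skewThresholds": {"age": {"value": 0.3}},  # per-feature threshold
        "defaultSkewThreshold": {"value": 0.5},     # fallback for other features
    },
    "predictionDriftDetectionConfig": {
        "driftThresholds": {"age": {"value": 0.3}},
    },
}

print(sorted(objective_config))
```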

GoogleCloudAiplatformV1ModelMonitoringStatsAnomalies

Statistics and anomalies generated by Model Monitoring.
Fields
anomalyCount

integer (int32 format)

Number of anomalies within all stats.

deployedModelId

string

Deployed Model ID.

featureStats[]

object (GoogleCloudAiplatformV1ModelMonitoringStatsAnomaliesFeatureHistoricStatsAnomalies)

A list of historical Stats and Anomalies generated for all Features.

objective

enum

The Model Monitoring Objective these stats and anomalies belong to.

Enum type. Can be one of the following:
MODEL_DEPLOYMENT_MONITORING_OBJECTIVE_TYPE_UNSPECIFIED Default value, should not be set.
RAW_FEATURE_SKEW Raw feature values' stats to detect skew between Training-Prediction datasets.
RAW_FEATURE_DRIFT Raw feature values' stats to detect drift between Serving-Prediction datasets.
FEATURE_ATTRIBUTION_SKEW Feature attribution scores to detect skew between Training-Prediction datasets.
FEATURE_ATTRIBUTION_DRIFT Feature attribution scores to detect skew between Prediction datasets collected within different time windows.

GoogleCloudAiplatformV1ModelMonitoringStatsAnomaliesFeatureHistoricStatsAnomalies

Historical Stats (and Anomalies) for a specific Feature.
Fields
featureDisplayName

string

Display Name of the Feature.

predictionStats[]

object (GoogleCloudAiplatformV1FeatureStatsAnomaly)

A list of historical stats generated by different time window's Prediction Dataset.

threshold

object (GoogleCloudAiplatformV1ThresholdConfig)

Threshold for anomaly detection.

trainingStats

object (GoogleCloudAiplatformV1FeatureStatsAnomaly)

Stats calculated for the Training Dataset.

GoogleCloudAiplatformV1ModelOriginalModelInfo

Contains information about the original Model if this Model is a copy.
Fields
model

string

Output only. The resource name of the Model this Model is a copy of, including the revision. Format: projects/{project}/locations/{location}/models/{model_id}@{version_id}

GoogleCloudAiplatformV1ModelSourceInfo

Detail description of the source information of the model.
Fields
copy

boolean

Whether this Model is a copy of another Model. If true, then source_type pertains to the original Model.

sourceType

enum

Type of the model source.

Enum type. Can be one of the following:
MODEL_SOURCE_TYPE_UNSPECIFIED Should not be used.
AUTOML The Model is uploaded by automl training pipeline.
CUSTOM The Model is uploaded by user or custom training pipeline.
BQML The Model is registered and synced from BigQuery ML.
MODEL_GARDEN The Model is saved or tuned from Model Garden.
GENIE The Model is saved or tuned from Genie.
CUSTOM_TEXT_EMBEDDING The Model is uploaded by text embedding finetuning pipeline.
MARKETPLACE The Model is saved or tuned from Marketplace.

GoogleCloudAiplatformV1MutateDeployedIndexOperationMetadata

Runtime operation information for IndexEndpointService.MutateDeployedIndex.
Fields
deployedIndexId

string

The unique index ID specified by the user.

genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1MutateDeployedIndexResponse

Response message for IndexEndpointService.MutateDeployedIndex.
Fields
deployedIndex

object (GoogleCloudAiplatformV1DeployedIndex)

The DeployedIndex that had been updated in the IndexEndpoint.

GoogleCloudAiplatformV1MutateDeployedModelOperationMetadata

Runtime operation information for EndpointService.MutateDeployedModel.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1MutateDeployedModelRequest

Request message for EndpointService.MutateDeployedModel.
Fields
deployedModel

object (GoogleCloudAiplatformV1DeployedModel)

Required. The DeployedModel to be mutated within the Endpoint. Only the following fields can be mutated: * min_replica_count in either DedicatedResources or AutomaticResources * max_replica_count in either DedicatedResources or AutomaticResources * autoscaling_metric_specs * disable_container_logging (v1 only) * enable_container_logging (v1beta1 only)

updateMask

string (FieldMask format)

Required. The update mask applies to the resource. See google.protobuf.FieldMask.
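A sketch of a MutateDeployedModelRequest body that touches only fields the description above lists as mutable; the deployed model ID and replica counts are placeholders:

```python
# Hypothetical MutateDeployedModelRequest payload. The update mask names
# the fields being changed, per google.protobuf.FieldMask semantics.
request = {
    "deployedModel": {
        "id": "1234567890",  # placeholder DeployedModel ID
        "dedicatedResources": {
            "minReplicaCount": 1,
            "maxReplicaCount": 4,
        },
    },
    "updateMask": (
        "dedicated_resources.min_replica_count,"
        "dedicated_resources.max_replica_count"
    ),
}

print(request["updateMask"])
```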

GoogleCloudAiplatformV1MutateDeployedModelResponse

Response message for EndpointService.MutateDeployedModel.
Fields
deployedModel

object (GoogleCloudAiplatformV1DeployedModel)

The DeployedModel that's being mutated.

GoogleCloudAiplatformV1NasJob

Represents a Neural Architecture Search (NAS) job.
Fields
createTime

string (Timestamp format)

Output only. Time when the NasJob was created.

displayName

string

Required. The display name of the NasJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

enableRestrictedImageTraining

boolean

Optional. Enable a separation of custom model training and restricted image training for the tenant project.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key options for a NasJob. If this is set, then all resources created by the NasJob will be encrypted with the provided encryption key.

endTime

string (Timestamp format)

Output only. Time when the NasJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

error

object (GoogleRpcStatus)

Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize NasJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

name

string

Output only. Resource name of the NasJob.

nasJobOutput

object (GoogleCloudAiplatformV1NasJobOutput)

Output only. Output of the NasJob.

nasJobSpec

object (GoogleCloudAiplatformV1NasJobSpec)

Required. The specification of a NasJob.

startTime

string (Timestamp format)

Output only. Time when the NasJob for the first time entered the JOB_STATE_RUNNING state.

state

enum

Output only. The detailed state of the job.

Enum type. Can be one of the following:
JOB_STATE_UNSPECIFIED The job state is unspecified.
JOB_STATE_QUEUED The job has been just created or resumed and processing has not yet begun.
JOB_STATE_PENDING The service is preparing to run the job.
JOB_STATE_RUNNING The job is in progress.
JOB_STATE_SUCCEEDED The job completed successfully.
JOB_STATE_FAILED The job failed.
JOB_STATE_CANCELLING The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED, JOB_STATE_FAILED or JOB_STATE_CANCELLED.
JOB_STATE_CANCELLED The job has been cancelled.
JOB_STATE_PAUSED The job has been stopped, and can be resumed.
JOB_STATE_EXPIRED The job has expired.
JOB_STATE_UPDATING The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state.
JOB_STATE_PARTIALLY_SUCCEEDED The job is partially succeeded, some results may be missing due to errors.
updateTime

string (Timestamp format)

Output only. Time when the NasJob was most recently updated.

GoogleCloudAiplatformV1NasJobOutput

Represents a uCAIP NasJob output.
Fields
multiTrialJobOutput

object (GoogleCloudAiplatformV1NasJobOutputMultiTrialJobOutput)

Output only. The output of this multi-trial Neural Architecture Search (NAS) job.

GoogleCloudAiplatformV1NasJobOutputMultiTrialJobOutput

The output of a multi-trial Neural Architecture Search (NAS) job.
Fields
searchTrials[]

object (GoogleCloudAiplatformV1NasTrial)

Output only. List of NasTrials that were started as part of search stage.

trainTrials[]

object (GoogleCloudAiplatformV1NasTrial)

Output only. List of NasTrials that were started as part of train stage.

GoogleCloudAiplatformV1NasJobSpec

Represents the spec of a NasJob.
Fields
multiTrialAlgorithmSpec

object (GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpec)

The spec of multi-trial algorithms.

resumeNasJobId

string

The ID of an existing NasJob in the same Project and Location that will be used to resume the search. search_space_spec and nas_algorithm_spec are obtained from the previous NasJob, so they should not be provided again for this NasJob.

searchSpaceSpec

string

Defines the search space for Neural Architecture Search (NAS).

GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpec

The spec of multi-trial Neural Architecture Search (NAS).
Fields
metric

object (GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpecMetricSpec)

Metric specs for the NAS job. Validation for this field is done at multi_trial_algorithm_spec field.

multiTrialAlgorithm

enum

The multi-trial Neural Architecture Search (NAS) algorithm type. Defaults to REINFORCEMENT_LEARNING.

Enum type. Can be one of the following:
MULTI_TRIAL_ALGORITHM_UNSPECIFIED Defaults to REINFORCEMENT_LEARNING.
REINFORCEMENT_LEARNING The Reinforcement Learning Algorithm for Multi-trial Neural Architecture Search (NAS).
GRID_SEARCH The Grid Search Algorithm for Multi-trial Neural Architecture Search (NAS).
searchTrialSpec

object (GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpec)

Required. Spec for search trials.

trainTrialSpec

object (GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpec)

Spec for train trials. Top N [TrainTrialSpec.max_parallel_trial_count] search trials will be trained for every M [TrainTrialSpec.frequency] trials searched.

GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpecMetricSpec

Represents a metric to optimize.
Fields
goal

enum

Required. The optimization goal of the metric.

Enum type. Can be one of the following:
GOAL_TYPE_UNSPECIFIED Goal Type will default to maximize.
MAXIMIZE Maximize the goal metric.
MINIMIZE Minimize the goal metric.
metricId

string

Required. The ID of the metric. Must not contain whitespaces.

GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpec

Represent spec for search trials.
Fields
maxFailedTrialCount

integer (int32 format)

The number of failed trials that need to be seen before failing the NasJob. If set to 0, Vertex AI decides how many trials must fail before the whole job fails.

maxParallelTrialCount

integer (int32 format)

Required. The maximum number of trials to run in parallel.

maxTrialCount

integer (int32 format)

Required. The maximum number of Neural Architecture Search (NAS) trials to run.

searchTrialJobSpec

object (GoogleCloudAiplatformV1CustomJobSpec)

Required. The spec of a search trial job. The same spec applies to all search trials.

GoogleCloudAiplatformV1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpec

Represent spec for train trials.
Fields
frequency

integer (int32 format)

Required. Frequency of search trials to start train stage. Top N [TrainTrialSpec.max_parallel_trial_count] search trials will be trained for every M [TrainTrialSpec.frequency] trials searched.

maxParallelTrialCount

integer (int32 format)

Required. The maximum number of trials to run in parallel.

trainTrialJobSpec

object (GoogleCloudAiplatformV1CustomJobSpec)

Required. The spec of a train trial job. The same spec applies to all train trials.
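Putting the multi-trial fields together, a hypothetical NasJobSpec payload might look like the following; the metric ID, trial counts, and the elided CustomJobSpec bodies are all placeholders:

```python
# Hypothetical NasJobSpec sketch assembled from the fields documented above.
nas_job_spec = {
    "multiTrialAlgorithmSpec": {
        "multiTrialAlgorithm": "REINFORCEMENT_LEARNING",  # the default
        "metric": {"metricId": "top_1_accuracy", "goal": "MAXIMIZE"},
        "searchTrialSpec": {
            "maxTrialCount": 100,
            "maxParallelTrialCount": 5,
            "searchTrialJobSpec": {},  # CustomJobSpec body omitted for brevity
        },
        "trainTrialSpec": {
            "frequency": 10,  # train the top N trials after every 10 searched
            "maxParallelTrialCount": 2,
            "trainTrialJobSpec": {},  # CustomJobSpec body omitted for brevity
        },
    },
    "searchSpaceSpec": "...",  # search space definition omitted
}

print(nas_job_spec["multiTrialAlgorithmSpec"]["multiTrialAlgorithm"])
```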

GoogleCloudAiplatformV1NasTrial

Represents a uCAIP NasJob trial.
Fields
endTime

string (Timestamp format)

Output only. Time when the NasTrial's status changed to SUCCEEDED or INFEASIBLE.

finalMeasurement

object (GoogleCloudAiplatformV1Measurement)

Output only. The final measurement containing the objective value.

id

string

Output only. The identifier of the NasTrial assigned by the service.

startTime

string (Timestamp format)

Output only. Time when the NasTrial was started.

state

enum

Output only. The detailed state of the NasTrial.

Enum type. Can be one of the following:
STATE_UNSPECIFIED The NasTrial state is unspecified.
REQUESTED Indicates that a specific NasTrial has been requested, but it has not yet been suggested by the service.
ACTIVE Indicates that the NasTrial has been suggested.
STOPPING Indicates that the NasTrial should stop according to the service.
SUCCEEDED Indicates that the NasTrial is completed successfully.
INFEASIBLE Indicates that the NasTrial should not be attempted again. The service will set a NasTrial to INFEASIBLE when it's done but missing the final_measurement.

GoogleCloudAiplatformV1NasTrialDetail

Represents a NasTrial details along with its parameters. If there is a corresponding train NasTrial, the train NasTrial is also returned.
Fields
name

string

Output only. Resource name of the NasTrialDetail.

parameters

string

The parameters for the NasJob NasTrial.

searchTrial

object (GoogleCloudAiplatformV1NasTrial)

The requested search NasTrial.

trainTrial

object (GoogleCloudAiplatformV1NasTrial)

The train NasTrial corresponding to search_trial. Only populated if search_trial is used for training.

GoogleCloudAiplatformV1NearestNeighborQuery

A query to find a number of similar entities.
Fields
embedding

object (GoogleCloudAiplatformV1NearestNeighborQueryEmbedding)

Optional. The embedding vector to be used for similarity search.

entityId

string

Optional. The entity id whose similar entities should be searched for. If embedding is set, search will use embedding instead of entity_id.

neighborCount

integer (int32 format)

Optional. The number of similar entities to be retrieved from the feature view for each query.

parameters

object (GoogleCloudAiplatformV1NearestNeighborQueryParameters)

Optional. Parameters that can be set to tune query on the fly.

perCrowdingAttributeNeighborCount

integer (int32 format)

Optional. Crowding is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than per_crowding_attribute_neighbor_count of the k neighbors returned have the same value of crowding_attribute. It's used for improving result diversity.

stringFilters[]

object (GoogleCloudAiplatformV1NearestNeighborQueryStringFilter)

Optional. The list of string filters.

GoogleCloudAiplatformV1NearestNeighborQueryEmbedding

The embedding vector.
Fields
value[]

number (float format)

Optional. Individual value in the embedding.

GoogleCloudAiplatformV1NearestNeighborQueryParameters

Parameters that can be overridden in each query to tune query latency and recall.
Fields
approximateNeighborCandidates

integer (int32 format)

Optional. The number of neighbors to find via approximate search before exact reordering is performed; if set, this value must be > neighbor_count.

leafNodesSearchFraction

number (double format)

Optional. The fraction of leaf nodes to search, set at query time to let the user tune search performance. Increasing this value increases both search accuracy and latency. The value should be between 0.0 and 1.0.

GoogleCloudAiplatformV1NearestNeighborQueryStringFilter

A string filter is used to search a subset of the entities by applying boolean rules on string columns. For example: if a query specifies the string filter 'name = color, allow_tokens = {red, blue}, deny_tokens = {purple}', then the query matches entities that are red or blue; but if those entities are also purple, they are excluded even if they are red/blue. Only string filters are supported for now; numeric filters will be supported in the near future.
Fields
allowTokens[]

string

Optional. The allowed tokens.

denyTokens[]

string

Optional. The denied tokens.

name

string

Required. Column names in BigQuery that are used as filters.
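As a hedged illustration, the allow/deny rule described above and the NearestNeighborQuery fields can be sketched as plain JSON-style dicts. Field names mirror this reference; the values, tokens, and the `matches_string_filter` helper are hypothetical — the real filtering happens server-side.

```python
# Client-side sketch of the allow/deny token rule described above; the
# real filtering happens server-side. Values are illustrative only.

def matches_string_filter(entity_tokens, allow_tokens, deny_tokens):
    """An entity matches if it has an allowed token and no denied token."""
    tokens = set(entity_tokens)
    if tokens & set(deny_tokens):
        return False  # a denied token excludes the entity outright
    return bool(tokens & set(allow_tokens))

# Example NearestNeighborQuery body combining the fields documented above.
query = {
    "neighborCount": 10,
    "embedding": {"value": [0.12, -0.35, 0.78]},
    "parameters": {
        "approximateNeighborCandidates": 100,  # must be > neighborCount
        "leafNodesSearchFraction": 0.05,       # between 0.0 and 1.0
    },
    "stringFilters": [
        {"name": "color",
         "allowTokens": ["red", "blue"],
         "denyTokens": ["purple"]},
    ],
}

f = query["stringFilters"][0]
```

Per the filter semantics, a red entity matches, while a red-and-purple entity is excluded even though it carries an allowed token.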

GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadata

Runtime operation metadata with regard to Matching Engine Index.
Fields
contentValidationStats[]

object (GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadataContentValidationStats)

The validation stats of the content (per file) to be inserted or updated on the Matching Engine Index resource. Populated if contentsDeltaUri is provided as part of Index.metadata. Note that stats are currently not available for files that are broken or have an unsupported file format.

dataBytesCount

string (int64 format)

The ingested data size in bytes.

GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadataContentValidationStats

(No description provided)
Fields
invalidRecordCount

string (int64 format)

Number of records in this file that were skipped due to validation errors.

invalidSparseRecordCount

string (int64 format)

Number of sparse records in this file that were skipped due to validation errors.

partialErrors[]

object (GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadataRecordError)

The detailed information of the partial failures encountered for those invalid records that couldn't be parsed. Up to 50 partial errors will be reported.

sourceGcsUri

string

Cloud Storage URI pointing to the original file in user's bucket.

validRecordCount

string (int64 format)

Number of records in this file that were successfully processed.

validSparseRecordCount

string (int64 format)

Number of sparse records in this file that were successfully processed.

GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadataRecordError

(No description provided)
Fields
embeddingId

string

Empty if the embedding id failed to parse.

errorMessage

string

A human-readable message that is shown to the user to help them fix the error. Note that this message may change from time to time; your code should check against error_type as the source of truth.

errorType

enum

The error type of this record.

Enum type. Can be one of the following:
ERROR_TYPE_UNSPECIFIED Default, shall not be used.
EMPTY_LINE The record is empty.
INVALID_JSON_SYNTAX Invalid json format.
INVALID_CSV_SYNTAX Invalid csv format.
INVALID_AVRO_SYNTAX Invalid avro format.
INVALID_EMBEDDING_ID The embedding id is not valid.
EMBEDDING_SIZE_MISMATCH The size of the dense embedding vectors does not match with the specified dimension.
NAMESPACE_MISSING The namespace field is missing.
PARSING_ERROR Generic catch-all error. Only used for validation failure where the root cause cannot be easily retrieved programmatically.
DUPLICATE_NAMESPACE There are multiple restricts with the same namespace value.
OP_IN_DATAPOINT Numeric restrict has operator specified in datapoint.
MULTIPLE_VALUES Numeric restrict has multiple values specified.
INVALID_NUMERIC_VALUE Numeric restrict has invalid numeric value specified.
INVALID_ENCODING File is not in UTF_8 format.
INVALID_SPARSE_DIMENSIONS Error parsing sparse dimensions field.
INVALID_TOKEN_VALUE Token restrict value is invalid.
INVALID_SPARSE_EMBEDDING Invalid sparse embedding.
rawRecord

string

The original content of this record.

sourceGcsUri

string

Cloud Storage URI pointing to the original file in user's bucket.

GoogleCloudAiplatformV1NearestNeighbors

Nearest neighbors for one query.
Fields
neighbors[]

object (GoogleCloudAiplatformV1NearestNeighborsNeighbor)

All its neighbors.

GoogleCloudAiplatformV1NearestNeighborsNeighbor

A neighbor of the query vector.
Fields
distance

number (double format)

The distance between the neighbor and the query vector.

entityId

string

The id of the similar entity.

entityKeyValues

object (GoogleCloudAiplatformV1FetchFeatureValuesResponse)

The attributes of the neighbor, e.g. filters, crowding, and metadata. Note that full entities are returned only when "return_full_entity" is set to true. Otherwise, only the "entity_id" and "distance" fields are populated.
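As a hedged illustration, a NearestNeighbors message can be consumed as a plain decoded-JSON dict. The response payload below is hypothetical, and the "smaller distance = closer" ranking assumed here holds for distance metrics (not for dot-product similarity, where larger can mean closer).

```python
# Hypothetical NearestNeighbors message as decoded JSON; entity ids and
# distances are made up.
response = {
    "neighbors": [
        {"entityId": "item_42", "distance": 0.18},
        {"entityId": "item_7", "distance": 0.05},
        {"entityId": "item_13", "distance": 0.31},
    ]
}

# Rank neighbors; smaller distance = closer, assuming a distance metric
# (for dot-product similarity the ordering would be reversed).
ranked = sorted(response["neighbors"], key=lambda n: n["distance"])
closest = ranked[0]["entityId"]
```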

GoogleCloudAiplatformV1Neighbor

Neighbors for example-based explanations.
Fields
neighborDistance

number (double format)

Output only. The neighbor distance.

neighborId

string

Output only. The neighbor id.

GoogleCloudAiplatformV1NetworkSpec

Network spec.
Fields
enableInternetAccess

boolean

Whether to enable public internet access. Default false.

network

string

The full name of the Google Compute Engine network.

subnetwork

string

The name of the subnet that this instance is in. Format: projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}

GoogleCloudAiplatformV1NfsMount

Represents a mount configuration for Network File System (NFS) to mount.
Fields
mountPoint

string

Required. Destination mount path. The NFS will be mounted for the user under /mnt/nfs/

path

string

Required. Source path exported from the NFS server. Must start with '/'. Combined with the IP address, it indicates the source mount path in the form server:path.

server

string

Required. IP address of the NFS server.
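A hedged NfsMount sketch following the field descriptions above; the server address and paths are hypothetical.

```python
# Illustrative NfsMount entry; values are placeholders.
nfs_mount = {
    "server": "10.0.0.5",          # IP address of the NFS server
    "path": "/exports/datasets",   # source path; must start with '/'
    "mountPoint": "datasets",      # mounted for the user under /mnt/nfs/
}

# server + path combine into the source mount in server:path form,
# and mountPoint resolves under /mnt/nfs/ per the field descriptions.
mount_source = f"{nfs_mount['server']}:{nfs_mount['path']}"
mount_target = f"/mnt/nfs/{nfs_mount['mountPoint']}"
```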

GoogleCloudAiplatformV1NotebookEucConfig

The EUC configuration of NotebookRuntimeTemplate.
Fields
bypassActasCheck

boolean

Output only. Whether the ActAs check is bypassed for the service account attached to the VM. If false, the ActAs check is needed for the default Compute Engine service account: when a Runtime is created, a VM is allocated using the default Compute Engine service account, and any user requesting to use this Runtime requires the Service Account User (ActAs) permission over this SA. If true, the Runtime owner is using EUC and does not require the above permission, as the VM no longer uses the default Compute Engine SA but a P4SA.

eucDisabled

boolean

Input only. Whether EUC is disabled in this NotebookRuntimeTemplate. In proto3, the default value of a boolean is false, so by default EUC is enabled for NotebookRuntimeTemplate.

GoogleCloudAiplatformV1NotebookIdleShutdownConfig

The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field.
Fields
idleShutdownDisabled

boolean

Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate.

idleTimeout

string (Duration format)

Required. The duration is accurate to the second. In Notebooks, the idle timeout is accurate to the minute, so the range of idle_timeout (in seconds) is 10 * 60 to 1440 * 60.
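The documented range (10 minutes to 24 hours, minute granularity) can be sketched as a small validator; the `idle_timeout_duration` helper is hypothetical, and the Duration wire format is assumed to be a seconds string such as "600s" per the standard protobuf JSON mapping.

```python
# Bounds from the reference above: 10 * 60 to 1440 * 60 seconds.
IDLE_TIMEOUT_MIN_S = 10 * 60     # 600 s
IDLE_TIMEOUT_MAX_S = 1440 * 60   # 86400 s (24 hours)

def idle_timeout_duration(minutes: int) -> str:
    """Validate an idle timeout in minutes and render it as a Duration string."""
    seconds = minutes * 60
    if not IDLE_TIMEOUT_MIN_S <= seconds <= IDLE_TIMEOUT_MAX_S:
        raise ValueError(f"idle_timeout must be 10-1440 minutes, got {minutes}")
    return f"{seconds}s"
```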

GoogleCloudAiplatformV1NotebookReservationAffinity

Notebook Reservation Affinity for consuming Zonal reservation.
Fields
consumeReservationType

enum

Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples.

Enum type. Can be one of the following:
RESERVATION_AFFINITY_TYPE_UNSPECIFIED Default type.
RESERVATION_NONE Do not consume from any allocated capacity.
RESERVATION_ANY Consume any reservation available.
RESERVATION_SPECIFIC Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
key

string

Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value.

values[]

string

Optional. Corresponds to the label values of a reservation resource. This must be the full path name of the Reservation.
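Illustrative NotebookReservationAffinity payloads based on the enum above; the documented label key is compute.googleapis.com/reservation-name, while the project, zone, and reservation name in the path are made-up examples.

```python
# Consume any available reservation (the documented default).
any_reservation = {"consumeReservationType": "RESERVATION_ANY"}

# Target one specific reservation by name.
specific_reservation = {
    "consumeReservationType": "RESERVATION_SPECIFIC",
    # Documented label key for targeting a reservation by name.
    "key": "compute.googleapis.com/reservation-name",
    # values[] takes the full path name of the Reservation (hypothetical here).
    "values": ["projects/my-project/zones/us-central1-a/reservations/my-reservation"],
}
```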

GoogleCloudAiplatformV1NotebookRuntime

A runtime is a virtual machine allocated to a particular user for a particular Notebook file on a temporary basis, with a lifetime limited to 24 hours.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this NotebookRuntime was created.

description

string

The description of the NotebookRuntime.

displayName

string

Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters.

expirationTime

string (Timestamp format)

Output only. Timestamp when this NotebookRuntime expires: 1. System-predefined NotebookRuntime: 24 hours after creation. After expiration, the system-predefined runtime is deleted. 2. User-created NotebookRuntime: 6 months after the last upgrade. After expiration, the user-created runtime is stopped and can be upgraded.

healthState

enum

Output only. The health state of the NotebookRuntime.

Enum type. Can be one of the following:
HEALTH_STATE_UNSPECIFIED Unspecified health state.
HEALTHY NotebookRuntime is in healthy state. Applies to ACTIVE state.
UNHEALTHY NotebookRuntime is in unhealthy state. Applies to ACTIVE state.
isUpgradable

boolean

Output only. Whether NotebookRuntime is upgradable.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your NotebookRuntime. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. No more than 64 user labels can be associated with one NotebookRuntime (system labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System-reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. The following system labels exist for NotebookRuntime: * "aiplatform.googleapis.com/notebook_runtime_gce_instance_id": output only, its value is the Compute Engine instance id. * "aiplatform.googleapis.com/colab_enterprise_entry_service": its value is either "bigquery" or "vertex"; if absent, it should be "vertex". This describes the entry service, either BigQuery or Vertex.

name

string

Output only. The resource name of the NotebookRuntime.

networkTags[]

string

Optional. The Compute Engine tags to add to runtime (see Tagging instances).

notebookRuntimeTemplateRef

object (GoogleCloudAiplatformV1NotebookRuntimeTemplateRef)

Output only. The pointer to NotebookRuntimeTemplate this NotebookRuntime is created from.

notebookRuntimeType

enum

Output only. The type of the notebook runtime.

Enum type. Can be one of the following:
NOTEBOOK_RUNTIME_TYPE_UNSPECIFIED Unspecified notebook runtime type, NotebookRuntimeType will default to USER_DEFINED.
USER_DEFINED Runtime or template with customized configurations from the user.
ONE_CLICK Runtime or template with system-defined configurations.
proxyUri

string

Output only. The proxy endpoint used to access the NotebookRuntime.

reservationAffinity

object (GoogleCloudAiplatformV1NotebookReservationAffinity)

Output only. Reservation Affinity of the notebook runtime.

runtimeState

enum

Output only. The runtime (instance) state of the NotebookRuntime.

Enum type. Can be one of the following:
RUNTIME_STATE_UNSPECIFIED Unspecified runtime state.
RUNNING NotebookRuntime is in running state.
BEING_STARTED NotebookRuntime is in starting state.
BEING_STOPPED NotebookRuntime is in stopping state.
STOPPED NotebookRuntime is in stopped state.
BEING_UPGRADED NotebookRuntime is in upgrading state. It is in the middle of upgrading process.
ERROR NotebookRuntime was unable to start/stop properly.
INVALID NotebookRuntime is in invalid state. Cannot be recovered.
runtimeUser

string

Required. The user email of the NotebookRuntime.

satisfiesPzi

boolean

Output only. Reserved for future use.

satisfiesPzs

boolean

Output only. Reserved for future use.

serviceAccount

string

Output only. The service account that the NotebookRuntime workload runs as.

updateTime

string (Timestamp format)

Output only. Timestamp when this NotebookRuntime was most recently updated.

version

string

Output only. The VM OS image version of the NotebookRuntime.

GoogleCloudAiplatformV1NotebookRuntimeTemplate

A template that specifies runtime configurations such as machine type, runtime version, network configurations, etc. Multiple runtimes can be created from a runtime template.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this NotebookRuntimeTemplate was created.

dataPersistentDiskSpec

object (GoogleCloudAiplatformV1PersistentDiskSpec)

Optional. The specification of persistent disk attached to the runtime as data disk storage.

description

string

The description of the NotebookRuntimeTemplate.

displayName

string

Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

eucConfig

object (GoogleCloudAiplatformV1NotebookEucConfig)

EUC configuration of the NotebookRuntimeTemplate.

idleShutdownConfig

object (GoogleCloudAiplatformV1NotebookIdleShutdownConfig)

The idle shutdown configuration of NotebookRuntimeTemplate. This config will only be set when idle shutdown is enabled.

isDefault

boolean

Output only. Whether this is the default template, used when no template is specified.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize the NotebookRuntimeTemplates. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

machineSpec

object (GoogleCloudAiplatformV1MachineSpec)

Optional. Immutable. The specification of a single machine for the template.

name

string

The resource name of the NotebookRuntimeTemplate.

networkSpec

object (GoogleCloudAiplatformV1NetworkSpec)

Optional. Network spec.

networkTags[]

string

Optional. The Compute Engine tags to add to runtime (see Tagging instances).

notebookRuntimeType

enum

Optional. Immutable. The type of the notebook runtime template.

Enum type. Can be one of the following:
NOTEBOOK_RUNTIME_TYPE_UNSPECIFIED Unspecified notebook runtime type, NotebookRuntimeType will default to USER_DEFINED.
USER_DEFINED Runtime or template with customized configurations from the user.
ONE_CLICK Runtime or template with system-defined configurations.
reservationAffinity

object (GoogleCloudAiplatformV1NotebookReservationAffinity)

Optional. Reservation Affinity of the notebook runtime template.

serviceAccount

string

The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the Compute Engine default service account is used.

shieldedVmConfig

object (GoogleCloudAiplatformV1ShieldedVmConfig)

Optional. Immutable. Runtime Shielded VM spec.

updateTime

string (Timestamp format)

Output only. Timestamp when this NotebookRuntimeTemplate was most recently updated.

GoogleCloudAiplatformV1NotebookRuntimeTemplateRef

Points to a NotebookRuntimeTemplate.
Fields
notebookRuntimeTemplate

string

Immutable. A resource name of the NotebookRuntimeTemplate.

GoogleCloudAiplatformV1Part

A datatype containing media that is part of a multi-part Content message. A Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data. A Part must have a fixed IANA MIME type identifying the type and subtype of the media if inline_data or file_data field is filled with raw bytes.
Fields
fileData

object (GoogleCloudAiplatformV1FileData)

Optional. URI based data.

functionCall

object (GoogleCloudAiplatformV1FunctionCall)

Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.

functionResponse

object (GoogleCloudAiplatformV1FunctionResponse)

Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.

inlineData

object (GoogleCloudAiplatformV1Blob)

Optional. Inlined bytes data.

text

string

Optional. Text part (can be code).

videoMetadata

object (GoogleCloudAiplatformV1VideoMetadata)

Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
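Hedged, illustrative Part payloads as plain dicts mirroring the JSON field names above. A Part carries exactly one of the accepted data fields, and inline_data/file_data require a fixed IANA MIME type; the URIs, bytes, and offsets below are placeholders, not real resources.

```python
import base64

# A text part (can be code).
text_part = {"text": "Describe this image."}

# An inline-bytes part; JSON carries the bytes base64-encoded.
png_bytes = b"\x89PNG..."  # placeholder bytes, not a real image
image_part = {
    "inlineData": {
        "mimeType": "image/png",
        "data": base64.b64encode(png_bytes).decode("ascii"),
    }
}

# A URI-based part; videoMetadata may accompany inline_data or
# file_data for video, per the note above.
file_part = {
    "fileData": {"mimeType": "video/mp4", "fileUri": "gs://my-bucket/clip.mp4"},
    "videoMetadata": {"startOffset": "0s", "endOffset": "10s"},
}
```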

GoogleCloudAiplatformV1PersistentDiskSpec

Represents the spec of persistent disk options.
Fields
diskSizeGb

string (int64 format)

Size in GB of the disk (default is 100GB).

diskType

string

Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk)
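A minimal PersistentDiskSpec sketch using the valid disk types and defaults listed above; note that int64-formatted fields travel as JSON strings.

```python
# Valid disk types per the field description above.
VALID_DISK_TYPES = {"pd-ssd", "pd-standard", "pd-balanced", "pd-extreme"}

disk_spec = {
    "diskType": "pd-ssd",   # default is "pd-standard"
    "diskSizeGb": "200",    # int64 as a JSON string; default is 100 GB
}
```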

GoogleCloudAiplatformV1PersistentResource

Represents long-lasting resources that are dedicated to users for running custom workloads. A PersistentResource can have multiple node pools, and each node pool can have its own machine spec.
Fields
createTime

string (Timestamp format)

Output only. Time when the PersistentResource was created.

displayName

string

Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key.

error

object (GoogleRpcStatus)

Output only. Only populated when the persistent resource's state is STOPPING or ERROR.

labels

map (key: string, value: string)

Optional. The labels with user-defined metadata to organize PersistentResource. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

name

string

Immutable. Resource name of a PersistentResource.

network

string

Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the resources aren't peered with any network.

reservedIpRanges[]

string

Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource. If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].

resourcePools[]

object (GoogleCloudAiplatformV1ResourcePool)

Required. The spec of the pools of different resources.

resourceRuntime

object (GoogleCloudAiplatformV1ResourceRuntime)

Output only. Runtime information of the Persistent Resource.

resourceRuntimeSpec

object (GoogleCloudAiplatformV1ResourceRuntimeSpec)

Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration.

startTime

string (Timestamp format)

Output only. Time when the PersistentResource for the first time entered the RUNNING state.

state

enum

Output only. The detailed state of the PersistentResource.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Not set.
PROVISIONING The PROVISIONING state indicates the persistent resources is being created.
RUNNING The RUNNING state indicates the persistent resource is healthy and fully usable.
STOPPING The STOPPING state indicates the persistent resource is being deleted.
ERROR The ERROR state indicates the persistent resource may be unusable. Details can be found in the error field.
REBOOTING The REBOOTING state indicates the persistent resource is being rebooted (PR is not available right now but is expected to be ready again later).
UPDATING The UPDATING state indicates the persistent resource is being updated.
updateTime

string (Timestamp format)

Output only. Time when the PersistentResource was most recently updated.

GoogleCloudAiplatformV1PipelineJob

An instance of a machine learning PipelineJob.
Fields
createTime

string (Timestamp format)

Output only. Pipeline creation time.

displayName

string

The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key spec for a pipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.

endTime

string (Timestamp format)

Output only. Pipeline end time.

error

object (GoogleRpcStatus)

Output only. The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED.

jobDetail

object (GoogleCloudAiplatformV1PipelineJobDetail)

Output only. The details of pipeline run. Not available in the list view.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize PipelineJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that there are reserved label keys for Vertex AI Pipelines: for vertex-ai-pipelines-run-billing-id, any user-set value will be overridden.

name

string

Output only. The resource name of the PipelineJob.

network

string

The full name of the Compute Engine network to which the Pipeline Job's workload should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. The Pipeline Job will apply the network configuration to the Google Cloud resources being launched, if applicable, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network.

pipelineSpec

map (key: string, value: any)

The spec of the pipeline.

reservedIpRanges[]

string

A list of names for the reserved ip ranges under the VPC network that can be used for this Pipeline Job's workload. If set, we will deploy the Pipeline Job's workload within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].

runtimeConfig

object (GoogleCloudAiplatformV1PipelineJobRuntimeConfig)

Runtime config of the pipeline.

scheduleName

string

Output only. The schedule resource name. Only returned if the Pipeline is created by Schedule API.

serviceAccount

string

The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project will be used. See https://cloud.google.com/compute/docs/access/service-accounts#default_service_account Users starting the pipeline must have the iam.serviceAccounts.actAs permission on this service account.

startTime

string (Timestamp format)

Output only. Pipeline start time.

state

enum

Output only. The detailed state of the job.

Enum type. Can be one of the following:
PIPELINE_STATE_UNSPECIFIED The pipeline state is unspecified.
PIPELINE_STATE_QUEUED The pipeline has been created or resumed, and processing has not yet begun.
PIPELINE_STATE_PENDING The service is preparing to run the pipeline.
PIPELINE_STATE_RUNNING The pipeline is in progress.
PIPELINE_STATE_SUCCEEDED The pipeline completed successfully.
PIPELINE_STATE_FAILED The pipeline failed.
PIPELINE_STATE_CANCELLING The pipeline is being cancelled. From this state, the pipeline may only go to either PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED.
PIPELINE_STATE_CANCELLED The pipeline has been cancelled.
PIPELINE_STATE_PAUSED The pipeline has been stopped, and can be resumed.
templateMetadata

object (GoogleCloudAiplatformV1PipelineTemplateMetadata)

Output only. Pipeline template metadata. Fields are populated if PipelineJob.template_uri is from a supported template registry.

templateUri

string

A template URI from which PipelineJob.pipeline_spec, if empty, will be downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template.

updateTime

string (Timestamp format)

Output only. Timestamp when this PipelineJob was most recently updated.

GoogleCloudAiplatformV1PipelineJobDetail

The runtime detail of PipelineJob.
Fields
pipelineContext

object (GoogleCloudAiplatformV1Context)

Output only. The context of the pipeline.

pipelineRunContext

object (GoogleCloudAiplatformV1Context)

Output only. The context of the current pipeline run.

taskDetails[]

object (GoogleCloudAiplatformV1PipelineTaskDetail)

Output only. The runtime details of the tasks under the pipeline.

GoogleCloudAiplatformV1PipelineJobRuntimeConfig

The runtime config of a PipelineJob.
Fields
failurePolicy

enum

Represents the failure policy of a pipeline. Currently, the default of a pipeline is that the pipeline will continue to run until no more tasks can be executed, also known as PIPELINE_FAILURE_POLICY_FAIL_SLOW. However, if a pipeline is set to PIPELINE_FAILURE_POLICY_FAIL_FAST, it will stop scheduling any new tasks when a task has failed. Any scheduled tasks will continue to completion.

Enum type. Can be one of the following:
PIPELINE_FAILURE_POLICY_UNSPECIFIED Default value, and follows fail slow behavior.
PIPELINE_FAILURE_POLICY_FAIL_SLOW Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed.
PIPELINE_FAILURE_POLICY_FAIL_FAST Indicates that the pipeline should stop scheduling new tasks after a task has failed.
gcsOutputDirectory

string

Required. A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.

inputArtifacts

map (key: string, value: object (GoogleCloudAiplatformV1PipelineJobRuntimeConfigInputArtifact))

The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is an InputArtifact object.

parameterValues

map (key: string, value: any)

The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.

parameters

map (key: string, value: object (GoogleCloudAiplatformV1Value))

Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.
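Putting the fields above together, a hedged runtimeConfig sketch, plus a helper for the documented {job_id}/{task_id}/{output_key} artifact path pattern under gcsOutputDirectory. The bucket name, job/task IDs, parameter names, and the `artifact_path` helper are all hypothetical.

```python
# Illustrative PipelineJobRuntimeConfig body; field names follow the
# reference above, values are made up.
runtime_config = {
    "gcsOutputDirectory": "gs://my-bucket/pipeline-root",
    "failurePolicy": "PIPELINE_FAILURE_POLICY_FAIL_FAST",
    # Preferred over the deprecated `parameters` map for
    # pipeline_spec.schema_version 2.1.0 and higher.
    "parameterValues": {"learning_rate": 0.01, "epochs": 10},
    "inputArtifacts": {"training_data": {"artifactId": "1234567890"}},
}

def artifact_path(gcs_output_directory: str, job_id: str,
                  task_id: str, output_key: str) -> str:
    """Sketch of the {job_id}/{task_id}/{output_key} sub-path pattern
    under the root output directory, per the gcsOutputDirectory docs."""
    return f"{gcs_output_directory.rstrip('/')}/{job_id}/{task_id}/{output_key}"

path = artifact_path(runtime_config["gcsOutputDirectory"],
                     "job-123", "task-456", "model")
```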

GoogleCloudAiplatformV1PipelineJobRuntimeConfigInputArtifact

The type of an input artifact.
Fields
artifactId

string

Artifact resource id from MLMD, which is the last portion of an artifact resource name: projects/{project}/locations/{location}/metadataStores/default/artifacts/{artifact_id}. The artifact must stay within the same project, location, and default metadata store as the pipeline.

GoogleCloudAiplatformV1PipelineTaskDetail

The runtime detail of a task execution.
Fields
createTime

string (Timestamp format)

Output only. Task create time.

endTime

string (Timestamp format)

Output only. Task end time.

error

object (GoogleRpcStatus)

Output only. The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED.

execution

object (GoogleCloudAiplatformV1Execution)

Output only. The execution metadata of the task.

executorDetail

object (GoogleCloudAiplatformV1PipelineTaskExecutorDetail)

Output only. The detailed execution info.

inputs

map (key: string, value: object (GoogleCloudAiplatformV1PipelineTaskDetailArtifactList))

Output only. The runtime input artifacts of the task.

outputs

map (key: string, value: object (GoogleCloudAiplatformV1PipelineTaskDetailArtifactList))

Output only. The runtime output artifacts of the task.

parentTaskId

string (int64 format)

Output only. The id of the parent task if the task is within a component scope. Empty if the task is at the root level.

pipelineTaskStatus[]

object (GoogleCloudAiplatformV1PipelineTaskDetailPipelineTaskStatus)

Output only. A list of task statuses. This field keeps a record of the task status evolving over time.

startTime

string (Timestamp format)

Output only. Task start time.

state

enum

Output only. State of the task.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Unspecified.
PENDING Specifies pending state for the task.
RUNNING Specifies task is being executed.
SUCCEEDED Specifies task completed successfully.
CANCEL_PENDING Specifies Task cancel is in pending state.
CANCELLING Specifies task is being cancelled.
CANCELLED Specifies task was cancelled.
FAILED Specifies task failed.
SKIPPED Specifies task was skipped due to cache hit.
NOT_TRIGGERED Specifies that the task was not triggered because the task's trigger policy is not satisfied. The trigger policy is specified in the condition field of PipelineJob.pipeline_spec.
taskId

string (int64 format)

Output only. The system generated ID of the task.

taskName

string

Output only. The user specified name of the task that is defined in pipeline_spec.

GoogleCloudAiplatformV1PipelineTaskDetailArtifactList

A list of artifact metadata.
Fields
artifacts[]

object (GoogleCloudAiplatformV1Artifact)

Output only. A list of artifact metadata.

GoogleCloudAiplatformV1PipelineTaskDetailPipelineTaskStatus

A single record of the task status.
Fields
error

object (GoogleRpcStatus)

Output only. The error that occurred during the state. May be set when the state is any of the non-final states (PENDING/RUNNING/CANCELLING) or the FAILED state. If the state is FAILED, the error here is final and will not be retried. If the state is a non-final state, the error indicates a system error that is being retried.

state

enum

Output only. The state of the task.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Unspecified.
PENDING Specifies pending state for the task.
RUNNING Specifies task is being executed.
SUCCEEDED Specifies task completed successfully.
CANCEL_PENDING Specifies Task cancel is in pending state.
CANCELLING Specifies task is being cancelled.
CANCELLED Specifies task was cancelled.
FAILED Specifies task failed.
SKIPPED Specifies task was skipped due to cache hit.
NOT_TRIGGERED Specifies that the task was not triggered because the task's trigger policy is not satisfied. The trigger policy is specified in the condition field of PipelineJob.pipeline_spec.
updateTime

string (Timestamp format)

Output only. Update time of this status.

GoogleCloudAiplatformV1PipelineTaskExecutorDetail

The runtime detail of a pipeline executor.
Fields
containerDetail

object (GoogleCloudAiplatformV1PipelineTaskExecutorDetailContainerDetail)

Output only. The detailed info for a container executor.

customJobDetail

object (GoogleCloudAiplatformV1PipelineTaskExecutorDetailCustomJobDetail)

Output only. The detailed info for a custom job executor.

GoogleCloudAiplatformV1PipelineTaskExecutorDetailContainerDetail

The detail of a container execution. It contains the job names of the lifecycle of a container execution.
Fields
failedMainJobs[]

string

Output only. The names of the previously failed CustomJobs for the main container executions. The list includes all attempts in chronological order.

failedPreCachingCheckJobs[]

string

Output only. The names of the previously failed CustomJobs for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events. The list includes all attempts in chronological order.

mainJob

string

Output only. The name of the CustomJob for the main container execution.

preCachingCheckJob

string

Output only. The name of the CustomJob for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events.

GoogleCloudAiplatformV1PipelineTaskExecutorDetailCustomJobDetail

The detailed info for a custom job executor.
Fields
failedJobs[]

string

Output only. The names of the previously failed CustomJobs. The list includes all attempts in chronological order.

job

string

Output only. The name of the CustomJob.

GoogleCloudAiplatformV1PipelineTemplateMetadata

Pipeline template metadata if PipelineJob.template_uri is from supported template registry. Currently, the only supported registry is Artifact Registry.
Fields
version

string

The version_name in Artifact Registry. Always present in the output if PipelineJob.template_uri is from a supported template registry. Format is "sha256:abcdef123456...".

GoogleCloudAiplatformV1Port

Represents a network port in a container.
Fields
containerPort

integer (int32 format)

The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive.

GoogleCloudAiplatformV1PredefinedSplit

Assigns input data to training, validation, and test sets based on the value of a provided key. Supported only for tabular Datasets.
Fields
key

string

Required. The key is a name of one of the Dataset's data columns. The value of the key (either the label's value or value in the column) must be one of {training, validation, test}, and it defines to which set the given piece of data is assigned. If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline.
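The key-based assignment described above can be sketched as follows. The column name ml_use and the rows are hypothetical; only the field name key comes from this reference.

```python
# Hypothetical tabular rows; the "ml_use" column name is an example choice.
rows = [
    {"feature_a": 1.0, "ml_use": "training"},
    {"feature_a": 2.0, "ml_use": "validation"},
    {"feature_a": 3.0, "ml_use": "test"},
    {"feature_a": 4.0, "ml_use": "holdout"},  # invalid value: row is ignored
    {"feature_a": 5.0},                       # key missing: row is ignored
]

# GoogleCloudAiplatformV1PredefinedSplit body.
predefined_split = {"key": "ml_use"}

VALID_SETS = {"training", "validation", "test"}

def assign(rows, split):
    """Mimic the documented assignment: rows with a missing or invalid key
    value are ignored by the pipeline."""
    buckets = {s: [] for s in VALID_SETS}
    for row in rows:
        value = row.get(split["key"])
        if value in VALID_SETS:
            buckets[value].append(row)
    return buckets

buckets = assign(rows, predefined_split)
```

Only three of the five rows land in a set; the other two are dropped, matching the documented behavior for absent or invalid key values.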

GoogleCloudAiplatformV1PredictRequest

Request message for PredictionService.Predict.
Fields
instances[]

any

Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request, and when it is exceeded the prediction call errors in case of AutoML Models, or, in case of customer created Models, the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.

parameters

any

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.
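As a sketch, a Predict request body using these two fields might look like the following. The field names instances and parameters come from this reference; the instance shape and the confidenceThreshold parameter are hypothetical, since the real schemas are defined per model by instance_schema_uri and parameters_schema_uri.

```python
import json

# Hypothetical request body for PredictionService.Predict.
predict_request = {
    "instances": [
        {"values": [1.2, 3.4, 5.6]},  # example instance; schema varies by model
        {"values": [7.8, 9.0, 1.1]},
    ],
    "parameters": {"confidenceThreshold": 0.5},  # hypothetical parameter
}

# The body is sent as JSON over REST.
payload = json.dumps(predict_request)
```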

GoogleCloudAiplatformV1PredictRequestResponseLoggingConfig

Configuration for logging request-response to a BigQuery table.
Fields
bigqueryDestination

object (GoogleCloudAiplatformV1BigQueryDestination)

BigQuery table for logging. If only a project is given, a new dataset will be created with the name logging_<endpoint-display-name>_<endpoint-id>, where <endpoint-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores). If no table name is given, a new table will be created with the name request_response_logging.

enabled

boolean

If logging is enabled or not.

samplingRate

number (double format)

Percentage of requests to be logged, expressed as a fraction in the range (0, 1].
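A minimal sketch of this logging config as a JSON body, with the constraint on samplingRate checked explicitly. The destination URI is hypothetical; its format is defined by GoogleCloudAiplatformV1BigQueryDestination.

```python
# Hypothetical GoogleCloudAiplatformV1PredictRequestResponseLoggingConfig body.
logging_config = {
    "enabled": True,
    "samplingRate": 0.1,  # log ~10% of requests; must be in (0, 1]
    "bigqueryDestination": {
        # Hypothetical table URI; see GoogleCloudAiplatformV1BigQueryDestination.
        "outputUri": "bq://my-project.my_dataset.request_response_logging"
    },
}

# samplingRate outside (0, 1] is invalid per the field description.
assert 0 < logging_config["samplingRate"] <= 1
```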

GoogleCloudAiplatformV1PredictResponse

Response message for PredictionService.Predict.
Fields
deployedModelId

string

ID of the Endpoint's DeployedModel that served this prediction.

metadata

any

Output only. Request-level metadata returned by the model. The metadata type will be dependent upon the model implementation.

model

string

Output only. The resource name of the Model which is deployed as the DeployedModel that this prediction hits.

modelDisplayName

string

Output only. The display name of the Model which is deployed as the DeployedModel that this prediction hits.

modelVersionId

string

Output only. The version ID of the Model which is deployed as the DeployedModel that this prediction hits.

predictions[]

any

The predictions that are the output of the predictions call. The schema of any single prediction may be specified via Endpoint's DeployedModels' Model's PredictSchemata's prediction_schema_uri.
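A hypothetical Predict response illustrating how these fields arrive together; the prediction shape is an example only, since the real schema is set by prediction_schema_uri.

```python
# Hypothetical PredictionService.Predict response body.
predict_response = {
    "predictions": [{"label": "cat", "score": 0.92}],  # model-specific schema
    "deployedModelId": "1234567890",
    "model": "projects/my-project/locations/us-central1/models/9876543210",
    "modelVersionId": "1",
    "modelDisplayName": "my-classifier",
}

# A client can pair each prediction with the identity of the serving model,
# which is useful when traffic is split across multiple DeployedModels.
served_by = (predict_response["model"], predict_response["modelVersionId"])
```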

GoogleCloudAiplatformV1PredictSchemata

Contains the schemata used in Model's predictions and explanations via PredictionService.Predict, PredictionService.Explain and BatchPredictionJob.
Fields
instanceSchemaUri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing the format of a single instance, which is used in PredictRequest.instances, ExplainRequest.instances and BatchPredictionJob.input_config. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI. Note: the URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.

parametersSchemaUri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing the parameters of prediction and explanation via PredictRequest.parameters, ExplainRequest.parameters and BatchPredictionJob.model_parameters. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no parameters are supported, it is set to an empty string. Note: the URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.

predictionSchemaUri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing the format of a single prediction produced by this Model, which is returned via PredictResponse.predictions, ExplainResponse.explanations, and BatchPredictionJob.output_config. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI. Note: the URI given on output will be immutable and probably different, including the URI scheme, from the one given on input. The output URI will point to a location where the user only has read access.

GoogleCloudAiplatformV1Presets

Preset configuration for example-based explanations
Fields
modality

enum

The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.

Enum type. Can be one of the following:
MODALITY_UNSPECIFIED Should not be set. Added as a recommended best practice for enums
IMAGE IMAGE modality
TEXT TEXT modality
TABULAR TABULAR modality
query

enum

Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to PRECISE.

Enum type. Can be one of the following:
PRECISE More precise neighbors as a trade-off against slower response.
FAST Faster response as a trade-off against less precise neighbors.

GoogleCloudAiplatformV1PrivateEndpoints

PrivateEndpoints provides paths for users to send requests privately. To send requests via private service access, use predict_http_uri, explain_http_uri or health_http_uri. To send requests via Private Service Connect, use service_attachment.
Fields
explainHttpUri

string

Output only. Http(s) path to send explain requests.

healthHttpUri

string

Output only. Http(s) path to send health check requests.

predictHttpUri

string

Output only. Http(s) path to send prediction requests.

serviceAttachment

string

Output only. The name of the service attachment resource. Populated if private service connect is enabled.

GoogleCloudAiplatformV1PrivateServiceConnectConfig

Represents configuration for private service connect.
Fields
enablePrivateServiceConnect

boolean

Required. If true, expose the IndexEndpoint via private service connect.

projectAllowlist[]

string

A list of Projects from which the forwarding rule will target the service attachment.

GoogleCloudAiplatformV1Probe

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.
Fields
exec

object (GoogleCloudAiplatformV1ProbeExecAction)

Exec specifies the action to take.

periodSeconds

integer (int32 format)

How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'.

timeoutSeconds

integer (int32 format)

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'.

GoogleCloudAiplatformV1ProbeExecAction

ExecAction specifies a command to execute.
Fields
command[]

string

Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
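A sketch of a Probe with an ExecAction, illustrating the no-shell caveat: because the command is exec'd directly, shell features require invoking a shell explicitly, as in the hypothetical health-check command below.

```python
# Hypothetical GoogleCloudAiplatformV1Probe body with an exec action.
probe = {
    "exec": {
        # The command is exec'd, not run in a shell, so pipes and flags like
        # curl's -sf only work if you call out to a shell explicitly.
        "command": ["/bin/sh", "-c", "curl -sf http://localhost:8080/health"]
    },
    "periodSeconds": 10,   # probe every 10 seconds (the default)
    "timeoutSeconds": 10,  # must be >= periodSeconds per the field docs
}

# Exit status 0 from the command is treated as healthy; non-zero is unhealthy.
assert probe["timeoutSeconds"] >= probe["periodSeconds"]
```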

GoogleCloudAiplatformV1PscAutomatedEndpoints

PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.
Fields
matchAddress

string

IP address created by the automated forwarding rule.

network

string

Corresponding network in pscAutomationConfigs.

projectId

string

Corresponding project_id in pscAutomationConfigs

GoogleCloudAiplatformV1PublisherModel

A Model Garden Publisher Model.
Fields
frameworks[]

string

Optional. Additional information about the model's Frameworks.

launchStage

enum

Optional. Indicates the launch stage of the model.

Enum type. Can be one of the following:
LAUNCH_STAGE_UNSPECIFIED The model launch stage is unspecified.
EXPERIMENTAL Used to indicate the PublisherModel is at Experimental launch stage, available to a small set of customers.
PRIVATE_PREVIEW Used to indicate the PublisherModel is at Private Preview launch stage, only available to a small set of customers, although a larger set of customers than an Experimental launch. Previews are the first launch stage used to get feedback from customers.
PUBLIC_PREVIEW Used to indicate the PublisherModel is at Public Preview launch stage, available to all customers, although not supported for production workloads.
GA Used to indicate the PublisherModel is at GA launch stage, available to all customers and ready for production workload.
name

string

Output only. The resource name of the PublisherModel.

openSourceCategory

enum

Required. Indicates the open source category of the publisher model.

Enum type. Can be one of the following:
OPEN_SOURCE_CATEGORY_UNSPECIFIED The open source category is unspecified, which should not be used.
PROPRIETARY Used to indicate the PublisherModel is not open sourced.
GOOGLE_OWNED_OSS_WITH_GOOGLE_CHECKPOINT Used to indicate the PublisherModel is a Google-owned open source model w/ Google checkpoint.
THIRD_PARTY_OWNED_OSS_WITH_GOOGLE_CHECKPOINT Used to indicate the PublisherModel is a 3p-owned open source model w/ Google checkpoint.
GOOGLE_OWNED_OSS Used to indicate the PublisherModel is a Google-owned pure open source model.
THIRD_PARTY_OWNED_OSS Used to indicate the PublisherModel is a 3p-owned pure open source model.
predictSchemata

object (GoogleCloudAiplatformV1PredictSchemata)

Optional. The schemata that describes formats of the PublisherModel's predictions and explanations as given and returned via PredictionService.Predict.

publisherModelTemplate

string

Optional. Output only. Immutable. Used to indicate this model has a publisher model and provide the template of the publisher model resource name.

supportedActions

object (GoogleCloudAiplatformV1PublisherModelCallToAction)

Optional. Supported call-to-action options.

versionId

string

Output only. Immutable. The version ID of the PublisherModel. A new version is committed when a new model version is uploaded under an existing model id. It is an auto-incrementing decimal number in string representation.

versionState

enum

Optional. Indicates the state of the model version.

Enum type. Can be one of the following:
VERSION_STATE_UNSPECIFIED The version state is unspecified.
VERSION_STATE_STABLE Used to indicate the version is stable.
VERSION_STATE_UNSTABLE Used to indicate the version is unstable.

GoogleCloudAiplatformV1PublisherModelCallToAction

Actions that can be taken on this Publisher Model.
Fields
createApplication

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Optional. Create application using the PublisherModel.

deploy

object (GoogleCloudAiplatformV1PublisherModelCallToActionDeploy)

Optional. Deploy the PublisherModel to Vertex Endpoint.

deployGke

object (GoogleCloudAiplatformV1PublisherModelCallToActionDeployGke)

Optional. Deploy PublisherModel to Google Kubernetes Engine.

fineTune

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Optional. Fine tune the PublisherModel with the third-party model tuning UI.

openEvaluationPipeline

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Optional. Open evaluation pipeline of the PublisherModel.

openFineTuningPipeline

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Optional. Open fine-tuning pipeline of the PublisherModel.

openFineTuningPipelines

object (GoogleCloudAiplatformV1PublisherModelCallToActionOpenFineTuningPipelines)

Optional. Open fine-tuning pipelines of the PublisherModel.

openGenerationAiStudio

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Optional. Open in Generation AI Studio.

openGenie

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Optional. Open Genie / Playground.

openNotebook

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Optional. Open notebook of the PublisherModel.

openNotebooks

object (GoogleCloudAiplatformV1PublisherModelCallToActionOpenNotebooks)

Optional. Open notebooks of the PublisherModel.

openPromptTuningPipeline

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Optional. Open prompt-tuning pipeline of the PublisherModel.

requestAccess

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Optional. Request for access.

viewRestApi

object (GoogleCloudAiplatformV1PublisherModelCallToActionViewRestApi)

Optional. To view Rest API docs.

GoogleCloudAiplatformV1PublisherModelCallToActionDeploy

Model metadata that is needed for UploadModel or DeployModel/CreateEndpoint requests.
Fields
artifactUri

string

Optional. The path to the directory containing the Model artifact and any of its supporting files.

automaticResources

object (GoogleCloudAiplatformV1AutomaticResources)

A description of resources that, to a large degree, are decided by Vertex AI and require only a modest additional configuration.

containerSpec

object (GoogleCloudAiplatformV1ModelContainerSpec)

Optional. The specification of the container that is to be used when deploying this Model in Vertex AI. Not present for Large Models.

dedicatedResources

object (GoogleCloudAiplatformV1DedicatedResources)

A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.

deployTaskName

string

Optional. The name of the deploy task (e.g., "text to image generation").

largeModelReference

object (GoogleCloudAiplatformV1LargeModelReference)

Optional. Large model reference. When this is set, model_artifact_spec is not needed.

modelDisplayName

string

Optional. Default model display name.

publicArtifactUri

string

Optional. The signed URI for ephemeral Cloud Storage access to model artifact.

sharedResources

string

The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

title

string

Required. The title of the regional resource reference.

GoogleCloudAiplatformV1PublisherModelCallToActionDeployGke

Configurations for PublisherModel GKE deployment
Fields
gkeYamlConfigs[]

string

Optional. GKE deployment configuration in yaml format.

GoogleCloudAiplatformV1PublisherModelCallToActionOpenFineTuningPipelines

Open fine tuning pipelines.
Fields
fineTuningPipelines[]

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Required. Regional resource references to fine tuning pipelines.

GoogleCloudAiplatformV1PublisherModelCallToActionOpenNotebooks

Open notebooks.
Fields
notebooks[]

object (GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences)

Required. Regional resource references to notebooks.

GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences

The regional resource name or the URI. Key is the region, e.g., us-central1, europe-west2, global, etc.
Fields
references

map (key: string, value: object (GoogleCloudAiplatformV1PublisherModelResourceReference))

Required.

resourceDescription

string

Optional. Description of the resource.

resourceTitle

string

Optional. Title of the resource.

resourceUseCase

string

Optional. Use case (CUJ) of the resource.

title

string

Required.

GoogleCloudAiplatformV1PublisherModelCallToActionViewRestApi

Rest API docs.
Fields
documentations[]

object (GoogleCloudAiplatformV1PublisherModelDocumentation)

Required.

title

string

Required. The title of the view rest API.

GoogleCloudAiplatformV1PublisherModelDocumentation

A named piece of documentation.
Fields
content

string

Required. Content of this piece of document (in Markdown format).

title

string

Required. E.g., OVERVIEW, USE CASES, DOCUMENTATION, SDK & SAMPLES, JAVA, NODE.JS, etc.

GoogleCloudAiplatformV1PublisherModelResourceReference

Reference to a resource.
Fields
description

string

Description of the resource.

resourceName

string

The resource name of the Google Cloud resource.

uri

string

The URI of the resource.

useCase

string

Use case (CUJ) of the resource.

GoogleCloudAiplatformV1PurgeArtifactsMetadata

Details of operations that perform MetadataService.PurgeArtifacts.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for purging Artifacts.

GoogleCloudAiplatformV1PurgeArtifactsRequest

Request message for MetadataService.PurgeArtifacts.
Fields
filter

string

Required. A required filter matching the Artifacts to be purged. E.g., update_time <= 2020-11-19T11:30:00-04:00.

force

boolean

Optional. Flag to indicate whether to actually perform the purge. If force is set to false, the method returns a sample of the Artifact names that would be deleted.
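The force flag makes a dry-run workflow natural: issue the request with force set to false first, inspect the returned sample, then repeat with force set to true. The filter string below is taken from the example in this reference; the workflow itself is a sketch.

```python
# Hypothetical dry-run purge: with force=False, the service only returns a
# sample of the Artifact names that *would* be deleted (up to 100 names).
dry_run_request = {
    "filter": 'update_time <= "2020-11-19T11:30:00-04:00"',
    "force": False,
}

# Once the dry-run sample looks right, resend with force=True to delete.
real_request = dict(dry_run_request, force=True)
```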

GoogleCloudAiplatformV1PurgeArtifactsResponse

Response message for MetadataService.PurgeArtifacts.
Fields
purgeCount

string (int64 format)

The number of Artifacts that this request deleted (or, if force is false, the number of Artifacts that will be deleted). This can be an estimate.

purgeSample[]

string

A sample of the Artifact names that will be deleted. Only populated if force is set to false. The maximum number of samples is 100 (it is possible to return fewer).

GoogleCloudAiplatformV1PurgeContextsMetadata

Details of operations that perform MetadataService.PurgeContexts.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for purging Contexts.

GoogleCloudAiplatformV1PurgeContextsRequest

Request message for MetadataService.PurgeContexts.
Fields
filter

string

Required. A required filter matching the Contexts to be purged. E.g., update_time <= 2020-11-19T11:30:00-04:00.

force

boolean

Optional. Flag to indicate whether to actually perform the purge. If force is set to false, the method returns a sample of the Context names that would be deleted.

GoogleCloudAiplatformV1PurgeContextsResponse

Response message for MetadataService.PurgeContexts.
Fields
purgeCount

string (int64 format)

The number of Contexts that this request deleted (or, if force is false, the number of Contexts that will be deleted). This can be an estimate.

purgeSample[]

string

A sample of the Context names that will be deleted. Only populated if force is set to false. The maximum number of samples is 100 (it is possible to return fewer).

GoogleCloudAiplatformV1PurgeExecutionsMetadata

Details of operations that perform MetadataService.PurgeExecutions.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for purging Executions.

GoogleCloudAiplatformV1PurgeExecutionsRequest

Request message for MetadataService.PurgeExecutions.
Fields
filter

string

Required. A required filter matching the Executions to be purged. E.g., update_time <= 2020-11-19T11:30:00-04:00.

force

boolean

Optional. Flag to indicate whether to actually perform the purge. If force is set to false, the method returns a sample of the Execution names that would be deleted.

GoogleCloudAiplatformV1PurgeExecutionsResponse

Response message for MetadataService.PurgeExecutions.
Fields
purgeCount

string (int64 format)

The number of Executions that this request deleted (or, if force is false, the number of Executions that will be deleted). This can be an estimate.

purgeSample[]

string

A sample of the Execution names that will be deleted. Only populated if force is set to false. The maximum number of samples is 100 (it is possible to return fewer).

GoogleCloudAiplatformV1PythonPackageSpec

The spec of a Python packaged code.
Fields
args[]

string

Command line arguments to be passed to the Python task.

env[]

object (GoogleCloudAiplatformV1EnvVar)

Environment variables to be passed to the python module. Maximum limit is 100.

executorImageUri

string

Required. The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.

packageUris[]

string

Required. The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.

pythonModule

string

Required. The Python module name to run after installing the packages.
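Putting the fields together, a PythonPackageSpec might look like the sketch below. The image URI, bucket path, and module name are hypothetical; only the field names and the documented limits come from this reference.

```python
# Hypothetical GoogleCloudAiplatformV1PythonPackageSpec body.
python_package_spec = {
    # Hypothetical pre-built training container image URI.
    "executorImageUri": "us-docker.pkg.dev/vertex-ai/training/tf-cpu:latest",
    # Cloud Storage URIs of the training package(s); at most 100.
    "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
    # Module to run after the packages are installed.
    "pythonModule": "trainer.task",
    "args": ["--epochs=10", "--learning-rate=0.01"],
    # Environment variables; at most 100.
    "env": [{"name": "EXPERIMENT", "value": "baseline"}],
}

assert len(python_package_spec["packageUris"]) <= 100
assert len(python_package_spec["env"]) <= 100
```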

GoogleCloudAiplatformV1QueryDeployedModelsResponse

Response message for QueryDeployedModels method.
Fields
deployedModelRefs[]

object (GoogleCloudAiplatformV1DeployedModelRef)

References to the DeployedModels that share the specified deploymentResourcePool.

deployedModels[]

object (GoogleCloudAiplatformV1DeployedModel)

Deprecated. Use deployed_model_refs instead.

nextPageToken

string

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

totalDeployedModelCount

integer (int32 format)

The total number of DeployedModels on this DeploymentResourcePool.

totalEndpointCount

integer (int32 format)

The total number of Endpoints that have DeployedModels on this DeploymentResourcePool.

GoogleCloudAiplatformV1RawPredictRequest

Request message for PredictionService.RawPredict.
Fields
httpBody

object (GoogleApiHttpBody)

The prediction input. Supports HTTP headers and an arbitrary data payload. A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model. You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the RawPredict method.
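Because the httpBody is a GoogleApiHttpBody, its data field is base64-encoded when the request is expressed as JSON. A sketch of building such a body follows; the payload itself is hypothetical.

```python
import base64
import json

# Hypothetical raw payload; rawPredict accepts arbitrary bytes.
payload = json.dumps({"instances": [[1.0, 2.0, 3.0]]}).encode("utf-8")

# Hypothetical PredictionService.RawPredict request body.
raw_predict_request = {
    "httpBody": {
        "contentType": "application/json",
        # GoogleApiHttpBody.data carries base64-encoded bytes in JSON transport.
        "data": base64.b64encode(payload).decode("ascii"),
    }
}

# Round-trip check: decoding recovers the original payload.
decoded = json.loads(base64.b64decode(raw_predict_request["httpBody"]["data"]))
```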

GoogleCloudAiplatformV1ReadFeatureValuesRequest

Request message for FeaturestoreOnlineServingService.ReadFeatureValues.
Fields
entityId

string

Required. ID for a specific entity. For example, for a machine learning model predicting user clicks on a website, an entity ID could be user_123.

featureSelector

object (GoogleCloudAiplatformV1FeatureSelector)

Required. Selector choosing Features of the target EntityType.
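A sketch of a ReadFeatureValues request body. The entity ID follows the user_123 example from this reference; the feature IDs are hypothetical, and the idMatcher shape is assumed from GoogleCloudAiplatformV1FeatureSelector.

```python
# Hypothetical FeaturestoreOnlineServingService.ReadFeatureValues request body.
read_request = {
    # E.g., a user entity for a click-prediction model.
    "entityId": "user_123",
    "featureSelector": {
        # Assumed selector shape (id_matcher) per
        # GoogleCloudAiplatformV1FeatureSelector; feature IDs are examples.
        "idMatcher": {"ids": ["age", "country", "last_purchase_amount"]}
    },
}
```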

GoogleCloudAiplatformV1ReadFeatureValuesResponse

Response message for FeaturestoreOnlineServingService.ReadFeatureValues.
Fields
entityView

object (GoogleCloudAiplatformV1ReadFeatureValuesResponseEntityView)

Entity view with Feature values. This may be the entity in the Featurestore if values for all Features were requested, or a projection of the entity in the Featurestore if values for only some Features were requested.

header

object (GoogleCloudAiplatformV1ReadFeatureValuesResponseHeader)

Response header.

GoogleCloudAiplatformV1ReadFeatureValuesResponseEntityView

Entity view with Feature values.
Fields
data[]

object (GoogleCloudAiplatformV1ReadFeatureValuesResponseEntityViewData)

Each piece of data holds the k requested values for one requested Feature. If no values for the requested Feature exist, the corresponding cell will be empty. This has the same size and is in the same order as the features from the header ReadFeatureValuesResponse.header.

entityId

string

ID of the requested entity.

GoogleCloudAiplatformV1ReadFeatureValuesResponseEntityViewData

Container to hold value(s), successive in time, for one Feature from the request.
Fields
value

object (GoogleCloudAiplatformV1FeatureValue)

Feature value if a single value is requested.

values

object (GoogleCloudAiplatformV1FeatureValueList)

Feature values list if values, successive in time, are requested. If the requested number of values is greater than the number of existing Feature values, nonexistent values are omitted instead of being returned as empty.

GoogleCloudAiplatformV1ReadFeatureValuesResponseFeatureDescriptor

Metadata for requested Features.
Fields
id

string

Feature ID.

GoogleCloudAiplatformV1ReadFeatureValuesResponseHeader

Response header with metadata for the requested ReadFeatureValuesRequest.entity_type and Features.
Fields
entityType

string

The resource name of the EntityType from the ReadFeatureValuesRequest. Value format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}.

featureDescriptors[]

object (GoogleCloudAiplatformV1ReadFeatureValuesResponseFeatureDescriptor)

List of Feature metadata corresponding to each piece of ReadFeatureValuesResponse.EntityView.data.

GoogleCloudAiplatformV1ReadIndexDatapointsRequest

The request message for MatchService.ReadIndexDatapoints.
Fields
deployedIndexId

string

The ID of the DeployedIndex that will serve the request.

ids[]

string

IDs of the datapoints to be searched for.

GoogleCloudAiplatformV1ReadIndexDatapointsResponse

The response message for MatchService.ReadIndexDatapoints.
Fields
datapoints[]

object (GoogleCloudAiplatformV1IndexDatapoint)

The result list of datapoints.

GoogleCloudAiplatformV1ReadTensorboardBlobDataResponse

Response message for TensorboardService.ReadTensorboardBlobData.
Fields
blobs[]

object (GoogleCloudAiplatformV1TensorboardBlob)

Blob messages containing blob bytes.

GoogleCloudAiplatformV1ReadTensorboardSizeResponse

Response message for TensorboardService.ReadTensorboardSize.
Fields
storageSizeByte

string (int64 format)

Payload storage size for the TensorBoard

GoogleCloudAiplatformV1ReadTensorboardTimeSeriesDataResponse

Response message for TensorboardService.ReadTensorboardTimeSeriesData.
Fields
timeSeriesData

object (GoogleCloudAiplatformV1TimeSeriesData)

The returned time series data.

GoogleCloudAiplatformV1ReadTensorboardUsageResponse

Response message for TensorboardService.ReadTensorboardUsage.
Fields
monthlyUsageData

map (key: string, value: object (GoogleCloudAiplatformV1ReadTensorboardUsageResponsePerMonthUsageData))

Maps year-month (YYYYMM) string to per month usage data.

GoogleCloudAiplatformV1ReadTensorboardUsageResponsePerMonthUsageData

Per month usage data
Fields
userUsageData[]

object (GoogleCloudAiplatformV1ReadTensorboardUsageResponsePerUserUsageData)

Usage data for each user in the given month.

GoogleCloudAiplatformV1ReadTensorboardUsageResponsePerUserUsageData

Per user usage data.
Fields
username

string

User's username

viewCount

string (int64 format)

Number of times the user has read data within the Tensorboard.

GoogleCloudAiplatformV1RebootPersistentResourceOperationMetadata

Details of operations that perform reboot PersistentResource.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for PersistentResource.

progressMessage

string

Progress message for the reboot long-running operation.

GoogleCloudAiplatformV1RemoveContextChildrenRequest

Request message for MetadataService.RemoveContextChildren.
Fields
childContexts[]

string

The resource names of the child Contexts.

GoogleCloudAiplatformV1RemoveDatapointsRequest

Request message for IndexService.RemoveDatapoints.
Fields
datapointIds[]

string

A list of datapoint ids to be deleted.

GoogleCloudAiplatformV1ResourcePool

Represents the spec of a group of resources of the same type, for example machine type, disk, and accelerators, in a PersistentResource.
Fields
autoscalingSpec

object (GoogleCloudAiplatformV1ResourcePoolAutoscalingSpec)

Optional. Spec to configure GKE autoscaling.

diskSpec

object (GoogleCloudAiplatformV1DiskSpec)

Optional. Disk spec for the machine in this node pool.

id

string

Immutable. The unique ID in a PersistentResource for referring to this resource pool. User can specify it if necessary. Otherwise, it's generated automatically.

machineSpec

object (GoogleCloudAiplatformV1MachineSpec)

Required. Immutable. The specification of a single machine.

replicaCount

string (int64 format)

Optional. The total number of machines to use for this resource pool.

usedReplicaCount

string (int64 format)

Output only. The number of machines currently in use by training jobs for this resource pool. Will replace idle_replica_count.

GoogleCloudAiplatformV1ResourcePoolAutoscalingSpec

The minimum and maximum number of replicas allowed when autoscaling is enabled.
Fields
maxReplicaCount

string (int64 format)

Optional. Maximum number of replicas in the node pool. Must be ≥ replica_count and > min_replica_count; otherwise an error is thrown.

minReplicaCount

string (int64 format)

Optional. Minimum number of replicas in the node pool. Must be ≤ replica_count and < max_replica_count; otherwise an error is thrown.
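The constraints on the two fields can be checked together before sending a request; a minimal sketch (the function name is hypothetical, not part of the API):

```python
def validate_autoscaling(replica_count, min_replicas, max_replicas):
    """Check the documented constraints: min_replica_count < max_replica_count
    and min_replica_count <= replica_count <= max_replica_count."""
    if not min_replicas < max_replicas:
        raise ValueError("min_replica_count must be < max_replica_count")
    if not min_replicas <= replica_count <= max_replicas:
        raise ValueError("replica_count must lie in [min, max]")
```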

GoogleCloudAiplatformV1ResourceRuntimeSpec

Configuration for the runtime on a PersistentResource instance, including but not limited to: * Service accounts used to run the workloads. * Whether to make it a dedicated Ray Cluster.
Fields
raySpec

object (GoogleCloudAiplatformV1RaySpec)

Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.

serviceAccountSpec

object (GoogleCloudAiplatformV1ServiceAccountSpec)

Optional. Configures the use of workload identity on the PersistentResource.

GoogleCloudAiplatformV1ResourcesConsumed

Statistics information about resource consumption.
Fields
replicaHours

number (double format)

Output only. The number of replica hours used. Note that many replicas may run in parallel, and additionally any given work may be queued for some time. Therefore this value is not strictly related to wall time.

GoogleCloudAiplatformV1RestoreDatasetVersionOperationMetadata

Runtime operation information for DatasetService.RestoreDatasetVersion.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The common part of the operation metadata.

GoogleCloudAiplatformV1ResumeScheduleRequest

Request message for ScheduleService.ResumeSchedule.
Fields
catchUp

boolean

Optional. Whether to backfill missed runs when the schedule is resumed from PAUSED state. If set to true, all missed runs will be scheduled. New runs will be scheduled after the backfill is complete. This will also update the Schedule.catch_up field. Defaults to false.

GoogleCloudAiplatformV1Retrieval

Defines a retrieval tool that model can call to access external knowledge.
Fields
disableAttribution

boolean

Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.

vertexAiSearch

object (GoogleCloudAiplatformV1VertexAISearch)

Set to use data source powered by Vertex AI Search.

GoogleCloudAiplatformV1SafetyRating

Safety rating corresponding to the generated content.
Fields
blocked

boolean

Output only. Indicates whether the content was filtered out because of this rating.

category

enum

Output only. Harm category.

Enum type. Can be one of the following:
HARM_CATEGORY_UNSPECIFIED The harm category is unspecified.
HARM_CATEGORY_HATE_SPEECH The harm category is hate speech.
HARM_CATEGORY_DANGEROUS_CONTENT The harm category is dangerous content.
HARM_CATEGORY_HARASSMENT The harm category is harassment.
HARM_CATEGORY_SEXUALLY_EXPLICIT The harm category is sexually explicit content.
probability

enum

Output only. Harm probability levels in the content.

Enum type. Can be one of the following:
HARM_PROBABILITY_UNSPECIFIED Harm probability unspecified.
NEGLIGIBLE Negligible level of harm.
LOW Low level of harm.
MEDIUM Medium level of harm.
HIGH High level of harm.
probabilityScore

number (float format)

Output only. Harm probability score.

severity

enum

Output only. Harm severity levels in the content.

Enum type. Can be one of the following:
HARM_SEVERITY_UNSPECIFIED Harm severity unspecified.
HARM_SEVERITY_NEGLIGIBLE Negligible level of harm severity.
HARM_SEVERITY_LOW Low level of harm severity.
HARM_SEVERITY_MEDIUM Medium level of harm severity.
HARM_SEVERITY_HIGH High level of harm severity.
severityScore

number (float format)

Output only. Harm severity score.

GoogleCloudAiplatformV1SafetySetting

Safety settings.
Fields
category

enum

Required. Harm category.

Enum type. Can be one of the following:
HARM_CATEGORY_UNSPECIFIED The harm category is unspecified.
HARM_CATEGORY_HATE_SPEECH The harm category is hate speech.
HARM_CATEGORY_DANGEROUS_CONTENT The harm category is dangerous content.
HARM_CATEGORY_HARASSMENT The harm category is harassment.
HARM_CATEGORY_SEXUALLY_EXPLICIT The harm category is sexually explicit content.
method

enum

Optional. Specify if the threshold is used for probability or severity score. If not specified, the threshold is used for probability score.

Enum type. Can be one of the following:
HARM_BLOCK_METHOD_UNSPECIFIED The harm block method is unspecified.
SEVERITY The harm block method uses both probability and severity scores.
PROBABILITY The harm block method uses the probability score.
threshold

enum

Required. The harm block threshold.

Enum type. Can be one of the following:
HARM_BLOCK_THRESHOLD_UNSPECIFIED Unspecified harm block threshold.
BLOCK_LOW_AND_ABOVE Block low threshold and above (i.e. block more).
BLOCK_MEDIUM_AND_ABOVE Block medium threshold and above.
BLOCK_ONLY_HIGH Block only high threshold (i.e. block less).
BLOCK_NONE Block none.
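Putting the three fields together, a request's safety settings might be represented as a list like the following (a hypothetical sketch of the JSON payload, not an official client call; values are illustrative):

```python
# One SafetySetting entry per harm category to override; method is
# optional and defaults to thresholding on the probability score.
safety_settings = [
    {"category": "HARM_CATEGORY_HATE_SPEECH",
     "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
     "threshold": "BLOCK_ONLY_HIGH",
     "method": "SEVERITY"},
]
```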

GoogleCloudAiplatformV1SampleConfig

Active learning data sampling config. For every active learning labeling iteration, it will select a batch of data based on the sampling strategy.
Fields
followingBatchSamplePercentage

integer (int32 format)

The percentage of data needed to be labeled in each following batch (except the first batch).

initialBatchSamplePercentage

integer (int32 format)

The percentage of data needed to be labeled in the first batch.

sampleStrategy

enum

Field to choose sampling strategy. Sampling strategy will decide which data should be selected for human labeling in every batch.

Enum type. Can be one of the following:
SAMPLE_STRATEGY_UNSPECIFIED Default will be treated as UNCERTAINTY.
UNCERTAINTY Sample the most uncertain data to label.
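For intuition, the two percentage fields translate into per-iteration batch sizes roughly like this (a hypothetical helper to illustrate the arithmetic, not part of the API):

```python
def batch_sizes(total_items, initial_pct, following_pct, iterations):
    """Items labeled per active-learning iteration: the first batch uses
    initialBatchSamplePercentage, every later batch uses
    followingBatchSamplePercentage."""
    sizes = [total_items * initial_pct // 100]
    sizes += [total_items * following_pct // 100] * (iterations - 1)
    return sizes
```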

GoogleCloudAiplatformV1SampledShapleyAttribution

An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
Fields
pathCount

integer (int32 format)

Required. The number of feature permutations to consider when approximating the Shapley values. The valid range is [1, 50], inclusive.
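The sampling idea behind pathCount can be sketched as plain permutation-sampling Shapley (assumed mechanics for illustration; the service's actual implementation may differ):

```python
import random

def sampled_shapley(predict, instance, baseline, path_count, seed=0):
    """Approximate Shapley values: walk path_count random feature
    permutations, flipping features from baseline to instance and
    crediting each feature with the change in the prediction."""
    rng = random.Random(seed)
    n = len(instance)
    contrib = [0.0] * n
    for _ in range(path_count):
        perm = list(range(n))
        rng.shuffle(perm)
        current = list(baseline)
        prev = predict(current)
        for i in perm:
            current[i] = instance[i]
            cur = predict(current)
            contrib[i] += cur - prev
            prev = cur
    return [c / path_count for c in contrib]
```

For an additive model the approximation is exact regardless of path_count, which makes a handy sanity check.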

GoogleCloudAiplatformV1SamplingStrategy

Sampling Strategy for logging, can be for both training and prediction dataset.
Fields
randomSampleConfig

object (GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfig)

Random sample config. Will support more sampling strategies later.

GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfig

Requests are randomly selected.
Fields
sampleRate

number (double format)

Sample rate in the range (0, 1].

GoogleCloudAiplatformV1SavedQuery

A SavedQuery is a view of the dataset. It references a subset of annotations by problem type and filters.
Fields
annotationFilter

string

Output only. Filters on the Annotations in the dataset.

annotationSpecCount

integer (int32 format)

Output only. Number of AnnotationSpecs in the context of the SavedQuery.

createTime

string (Timestamp format)

Output only. Timestamp when this SavedQuery was created.

displayName

string

Required. The user-defined name of the SavedQuery. The name can be up to 128 characters long and can consist of any UTF-8 characters.

etag

string

Used to perform a consistent read-modify-write update. If not set, a blind "overwrite" update happens.

metadata

any

Some additional information about the SavedQuery.

name

string

Output only. Resource name of the SavedQuery.

problemType

string

Required. Problem type of the SavedQuery. Allowed values: * IMAGE_CLASSIFICATION_SINGLE_LABEL * IMAGE_CLASSIFICATION_MULTI_LABEL * IMAGE_BOUNDING_POLY * IMAGE_BOUNDING_BOX * TEXT_CLASSIFICATION_SINGLE_LABEL * TEXT_CLASSIFICATION_MULTI_LABEL * TEXT_EXTRACTION * TEXT_SENTIMENT * VIDEO_CLASSIFICATION * VIDEO_OBJECT_TRACKING

supportAutomlTraining

boolean

Output only. If the Annotations belonging to the SavedQuery can be used for AutoML training.

updateTime

string (Timestamp format)

Output only. Timestamp when SavedQuery was last updated.

GoogleCloudAiplatformV1Scalar

One point viewable on a scalar metric plot.
Fields
value

number (double format)

Value of the point at this step / timestamp.

GoogleCloudAiplatformV1Schedule

An instance of a Schedule periodically schedules runs to make API calls based on user specified time specification and API request type.
Fields
allowQueueing

boolean

Optional. Whether new scheduled runs can be queued when the max_concurrent_runs limit is reached. If set to true, new runs will be queued instead of skipped. Defaults to false.

catchUp

boolean

Output only. Whether to backfill missed runs when the schedule is resumed from PAUSED state. If set to true, all missed runs will be scheduled. New runs will be scheduled after the backfill is complete. Defaults to false.

createPipelineJobRequest

object (GoogleCloudAiplatformV1CreatePipelineJobRequest)

Request for PipelineService.CreatePipelineJob. CreatePipelineJobRequest.parent field is required (format: projects/{project}/locations/{location}).

createTime

string (Timestamp format)

Output only. Timestamp when this Schedule was created.

cron

string

Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a time zone for the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *" or "TZ=America/New_York 1 * * * *".
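Building such a prefixed cron string can be sketched as follows (the helper name is hypothetical; standard cron uses five fields: minute, hour, day-of-month, month, day-of-week):

```python
def tz_cron(tz, cron):
    """Prefix a 5-field cron expression with an explicit IANA time zone,
    in the CRON_TZ form described above."""
    if len(cron.split()) != 5:
        raise ValueError("expected 5 cron fields")
    return f"CRON_TZ={tz} {cron}"
```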

displayName

string

Required. User provided name of the Schedule. The name can be up to 128 characters long and can consist of any UTF-8 characters.

endTime

string (Timestamp format)

Optional. Timestamp after which no new runs can be scheduled. If specified, the schedule will be completed when either end_time is reached or when scheduled_run_count >= max_run_count. If not specified, new runs will keep getting scheduled until this Schedule is paused or deleted. Already scheduled runs will be allowed to complete. Unset if not specified.

lastPauseTime

string (Timestamp format)

Output only. Timestamp when this Schedule was last paused. Unset if never paused.

lastResumeTime

string (Timestamp format)

Output only. Timestamp when this Schedule was last resumed. Unset if never resumed from pause.

lastScheduledRunResponse

object (GoogleCloudAiplatformV1ScheduleRunResponse)

Output only. Response of the last scheduled run. This is the response for starting the scheduled requests and not the execution of the operations/jobs created by the requests (if applicable). Unset if no run has been scheduled yet.

maxConcurrentRunCount

string (int64 format)

Required. Maximum number of runs that can be started concurrently for this Schedule. This is the limit for starting the scheduled requests and not the execution of the operations/jobs created by the requests (if applicable).

maxRunCount

string (int64 format)

Optional. Maximum run count of the schedule. If specified, the schedule will be completed when either started_run_count >= max_run_count or when end_time is reached. If not specified, new runs will keep getting scheduled until this Schedule is paused or deleted. Already scheduled runs will be allowed to complete. Unset if not specified.

name

string

Immutable. The resource name of the Schedule.

nextRunTime

string (Timestamp format)

Output only. Timestamp when this Schedule should schedule the next run. Having a next_run_time in the past means the runs are being started behind schedule.

startTime

string (Timestamp format)

Optional. Timestamp after which the first run can be scheduled. Defaults to the Schedule creation time if not specified.

startedRunCount

string (int64 format)

Output only. The number of runs started by this schedule.

state

enum

Output only. The state of this Schedule.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Unspecified.
ACTIVE The Schedule is active. Runs are being scheduled on the user-specified timespec.
PAUSED The schedule is paused. No new runs will be created until the schedule is resumed. Already started runs will be allowed to complete.
COMPLETED The Schedule is completed. No new runs will be scheduled. Already started runs will be allowed to complete. Schedules in completed state cannot be paused or resumed.
updateTime

string (Timestamp format)

Output only. Timestamp when this Schedule was updated.

GoogleCloudAiplatformV1ScheduleRunResponse

Status of a scheduled run.
Fields
runResponse

string

The response of the scheduled run.

scheduledRunTime

string (Timestamp format)

The scheduled run time based on the user-specified schedule.

GoogleCloudAiplatformV1Scheduling

All parameters related to queuing and scheduling of custom jobs.
Fields
disableRetries

boolean

Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.

restartJobOnWorkerRestart

boolean

Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.

timeout

string (Duration format)

The maximum job running time. The default is 7 days.

GoogleCloudAiplatformV1Schema

Schema is used to define the format of input/output data. Represents a select subset of an OpenAPI 3.0 schema object. More fields may be added in the future as needed.
Fields
default

any

Optional. Default value of the data.

description

string

Optional. The description of the data.

enum[]

string

Optional. Possible values of the element of Type.STRING with enum format. For example, we can define a Direction enum as: {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}.

example

any

Optional. Example of the object. Will only be populated when the object is the root.

format

string

Optional. The format of the data. Supported formats: for NUMBER type: "float", "double"; for INTEGER type: "int32", "int64"; for STRING type: "email", "byte", etc.

items

object (GoogleCloudAiplatformV1Schema)

Optional. Schema of the elements of Type.ARRAY.

maxItems

string (int64 format)

Optional. Maximum number of the elements for Type.ARRAY.

maxLength

string (int64 format)

Optional. Maximum length of the Type.STRING.

maxProperties

string (int64 format)

Optional. Maximum number of the properties for Type.OBJECT.

maximum

number (double format)

Optional. Maximum value of the Type.INTEGER and Type.NUMBER.

minItems

string (int64 format)

Optional. Minimum number of the elements for Type.ARRAY.

minLength

string (int64 format)

Optional. Minimum length of the Type.STRING.

minProperties

string (int64 format)

Optional. Minimum number of the properties for Type.OBJECT.

minimum

number (double format)

Optional. Minimum value of the Type.INTEGER and Type.NUMBER.

nullable

boolean

Optional. Indicates if the value may be null.

pattern

string

Optional. Pattern of the Type.STRING to restrict a string to a regular expression.

properties

map (key: string, value: object (GoogleCloudAiplatformV1Schema))

Optional. Properties of Type.OBJECT.

required[]

string

Optional. Required properties of Type.OBJECT.

title

string

Optional. The title of the Schema.

type

enum

Optional. The type of the data.

Enum type. Can be one of the following:
TYPE_UNSPECIFIED Not specified, should not be used.
STRING OpenAPI string type
NUMBER OpenAPI number type
INTEGER OpenAPI integer type
BOOLEAN OpenAPI boolean type
ARRAY OpenAPI array type
OBJECT OpenAPI object type
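Assembled from the fields above, a Schema for a hypothetical function-call parameter object might look like the following (the field names match this reference; the values and the weather scenario are illustrative only):

```python
# A Schema describing an OBJECT with one required STRING property,
# an enum-formatted STRING, and a bounded INTEGER.
weather_schema = {
    "type": "OBJECT",
    "properties": {
        "location": {"type": "STRING", "description": "City name."},
        "unit": {"type": "STRING", "format": "enum",
                 "enum": ["CELSIUS", "FAHRENHEIT"]},
        "days": {"type": "INTEGER", "format": "int32",
                 "minimum": 1, "maximum": 14},
    },
    "required": ["location"],
}
```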

GoogleCloudAiplatformV1SchemaAnnotationSpecColor

An entry of mapping between color and AnnotationSpec. The mapping is used in segmentation mask.
Fields
color

object (GoogleTypeColor)

The color of the AnnotationSpec in a segmentation mask.

displayName

string

The display name of the AnnotationSpec represented by the color in the segmentation mask.

id

string

The ID of the AnnotationSpec represented by the color in the segmentation mask.

GoogleCloudAiplatformV1SchemaImageBoundingBoxAnnotation

Annotation details specific to image object detection.
Fields
annotationSpecId

string

The resource Id of the AnnotationSpec that this Annotation pertains to.

displayName

string

The display name of the AnnotationSpec that this Annotation pertains to.

xMax

number (double format)

The rightmost coordinate of the bounding box.

xMin

number (double format)

The leftmost coordinate of the bounding box.

yMax

number (double format)

The bottommost coordinate of the bounding box.

yMin

number (double format)

The topmost coordinate of the bounding box.
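These four coordinates are the inputs to intersection-over-union (IoU) computations used in the bounding-box evaluation metrics; a minimal sketch:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as dicts with
    xMin/xMax/yMin/yMax keys, matching the annotation fields above."""
    ix = max(0.0, min(a["xMax"], b["xMax"]) - max(a["xMin"], b["xMin"]))
    iy = max(0.0, min(a["yMax"], b["yMax"]) - max(a["yMin"], b["yMin"]))
    inter = ix * iy
    area = lambda r: (r["xMax"] - r["xMin"]) * (r["yMax"] - r["yMin"])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```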

GoogleCloudAiplatformV1SchemaImageClassificationAnnotation

Annotation details specific to image classification.
Fields
annotationSpecId

string

The resource Id of the AnnotationSpec that this Annotation pertains to.

displayName

string

The display name of the AnnotationSpec that this Annotation pertains to.

GoogleCloudAiplatformV1SchemaImageDataItem

Payload of Image DataItem.
Fields
gcsUri

string

Required. Google Cloud Storage URI that points to the original image in the user's bucket. The image can be up to 30MB in size.

mimeType

string

Output only. The MIME type of the image content. Only images with the following MIME types are supported: image/jpeg, image/gif, image/png, image/webp, image/bmp, image/tiff, image/vnd.microsoft.icon.

GoogleCloudAiplatformV1SchemaImageDatasetMetadata

The metadata of Datasets that contain Image DataItems.
Fields
dataItemSchemaUri

string

Points to a YAML file stored on Google Cloud Storage describing payload of the Image DataItems that belong to this Dataset.

gcsBucket

string

Google Cloud Storage Bucket name that contains the blob data of this Dataset.

GoogleCloudAiplatformV1SchemaImageSegmentationAnnotation

Annotation details specific to image segmentation.
Fields
maskAnnotation

object (GoogleCloudAiplatformV1SchemaImageSegmentationAnnotationMaskAnnotation)

Mask based segmentation annotation. Only one mask annotation can exist for one image.

polygonAnnotation

object (GoogleCloudAiplatformV1SchemaImageSegmentationAnnotationPolygonAnnotation)

Polygon annotation.

polylineAnnotation

object (GoogleCloudAiplatformV1SchemaImageSegmentationAnnotationPolylineAnnotation)

Polyline annotation.

GoogleCloudAiplatformV1SchemaImageSegmentationAnnotationMaskAnnotation

The mask based segmentation annotation.
Fields
annotationSpecColors[]

object (GoogleCloudAiplatformV1SchemaAnnotationSpecColor)

The mapping between color and AnnotationSpec for this Annotation.

maskGcsUri

string

Google Cloud Storage URI that points to the mask image. The image must be in PNG format. It must have the same size as the DataItem's image. Each pixel in the mask represents the AnnotationSpec that the corresponding pixel in the image DataItem belongs to. Each color is mapped to one AnnotationSpec based on annotation_spec_colors.

GoogleCloudAiplatformV1SchemaImageSegmentationAnnotationPolygonAnnotation

Represents a polygon in image.
Fields
annotationSpecId

string

The resource Id of the AnnotationSpec that this Annotation pertains to.

displayName

string

The display name of the AnnotationSpec that this Annotation pertains to.

vertexes[]

object (GoogleCloudAiplatformV1SchemaVertex)

The vertexes are connected one by one and the last vertex is connected to the first one to represent a polygon.

GoogleCloudAiplatformV1SchemaImageSegmentationAnnotationPolylineAnnotation

Represents a polyline in image.
Fields
annotationSpecId

string

The resource Id of the AnnotationSpec that this Annotation pertains to.

displayName

string

The display name of the AnnotationSpec that this Annotation pertains to.

vertexes[]

object (GoogleCloudAiplatformV1SchemaVertex)

The vertexes are connected one by one, and the last vertex is not connected to the first one.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsBoundingBoxMetrics

Bounding box matching model metrics for a single intersection-over-union threshold and multiple label match confidence thresholds.
Fields
confidenceMetrics[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsBoundingBoxMetricsConfidenceMetrics)

Metrics for each label-match confidence_threshold from 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99. The precision-recall curve is derived from them.

iouThreshold

number (float format)

The intersection-over-union threshold value used to compute this metrics entry.

meanAveragePrecision

number (float format)

The mean average precision, most often close to auPrc.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsBoundingBoxMetricsConfidenceMetrics

Metrics for a single confidence threshold.
Fields
confidenceThreshold

number (float format)

The confidence threshold value used to compute the metrics.

f1Score

number (float format)

The harmonic mean of recall and precision.

precision

number (float format)

Precision under the given confidence threshold.

recall

number (float format)

Recall under the given confidence threshold.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetrics

Metrics for classification evaluation results.
Fields
auPrc

number (float format)

The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.

auRoc

number (float format)

The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.

confidenceMetrics[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics)

Metrics for each confidenceThreshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and positionThreshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics are derived from them. The confidence metrics entries may also be supplied for additional values of positionThreshold, but from these no aggregated metrics are computed.

confusionMatrix

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix)

Confusion matrix of the evaluation.

logLoss

number (float format)

The Log Loss metric.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics

(No description provided)
Fields
confidenceThreshold

number (float format)

Metrics are computed with an assumption that the Model never returns predictions with score lower than this value.

confusionMatrix

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix)

Confusion matrix of the evaluation for this confidence_threshold.

f1Score

number (float format)

The harmonic mean of recall and precision. For summary metrics, it computes the micro-averaged F1 score.

f1ScoreAt1

number (float format)

The harmonic mean of recallAt1 and precisionAt1.

f1ScoreMacro

number (float format)

Macro-averaged F1 Score.

f1ScoreMicro

number (float format)

Micro-averaged F1 Score.

falseNegativeCount

string (int64 format)

The number of ground truth labels that are not matched by a Model created label.

falsePositiveCount

string (int64 format)

The number of Model created labels that do not match a ground truth label.

falsePositiveRate

number (float format)

False Positive Rate for the given confidence threshold.

falsePositiveRateAt1

number (float format)

The False Positive Rate when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.

maxPredictions

integer (int32 format)

Metrics are computed with an assumption that the Model always returns at most this many predictions (ordered by score, descending), but they all still need to meet the confidenceThreshold.

precision

number (float format)

Precision for the given confidence threshold.

precisionAt1

number (float format)

The precision when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.

recall

number (float format)

Recall (True Positive Rate) for the given confidence threshold.

recallAt1

number (float format)

The Recall (True Positive Rate) when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.

trueNegativeCount

string (int64 format)

The number of labels that were not created by the Model but that, had they been created, would not have matched a ground truth label.

truePositiveCount

string (int64 format)

The number of Model created labels that match a ground truth label.
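To make the micro/macro distinction in the F1 fields above concrete: micro-averaging pools the TP/FP/FN counts across classes before computing F1, while macro-averaging computes F1 per class and then averages. A sketch:

```python
def micro_macro_f1(per_class):
    """per_class: list of (tp, fp, fn) count triples, one per class.
    Returns (micro-F1, macro-F1)."""
    def f1(tp, fp, fn):
        denom = 2 * tp + fp + fn
        return 2 * tp / denom if denom else 0.0
    tp = sum(c[0] for c in per_class)
    fp = sum(c[1] for c in per_class)
    fn = sum(c[2] for c in per_class)
    micro = f1(tp, fp, fn)                               # pooled counts
    macro = sum(f1(*c) for c in per_class) / len(per_class)  # mean of per-class F1
    return micro, macro
```

Micro-F1 weights classes by their frequency; macro-F1 treats all classes equally, so rare classes pull it down when they score poorly.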

GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix

(No description provided)
Fields
annotationSpecs[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrixAnnotationSpecRef)

AnnotationSpecs used in the confusion matrix. For AutoML Text Extraction, a special negative AnnotationSpec with empty id and displayName of "NULL" will be added as the last element.

rows[]

array

Rows in the confusion matrix. The number of rows is equal to the size of annotationSpecs. rows[i][j] is the number of DataItems that have ground truth of annotationSpecs[i] and are predicted as annotationSpecs[j] by the Model being evaluated. For Text Extraction, when annotationSpecs[i] is the last element in annotationSpecs, i.e. the special negative AnnotationSpec, rows[i][j] is the number of predicted entities of annotationSpecs[j] that are not labeled as any of the ground truth AnnotationSpecs. When annotationSpecs[j] is the special negative AnnotationSpec, rows[i][j] is the number of entities that have ground truth of annotationSpecs[i] but are not predicted as an entity by the Model. The value of the cell where i == j and annotationSpecs[i] is the special negative AnnotationSpec is always 0.
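Given such a matrix, with entry (i, j) counting DataItems whose ground truth is class i and prediction is class j, per-class precision and recall follow directly; a sketch:

```python
def per_class_precision_recall(rows):
    """rows: N x N confusion matrix, ground truth on rows, predictions
    on columns. Returns a list of (precision, recall) per class."""
    n = len(rows)
    out = []
    for k in range(n):
        tp = rows[k][k]
        fp = sum(rows[i][k] for i in range(n)) - tp  # column minus diagonal
        fn = sum(rows[k]) - tp                       # row minus diagonal
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        out.append((prec, rec))
    return out
```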

GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrixAnnotationSpecRef

(No description provided)
Fields
displayName

string

Display name of the AnnotationSpec.

id

string

ID of the AnnotationSpec.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsForecastingEvaluationMetrics

Metrics for forecasting evaluation results.
Fields
meanAbsoluteError

number (float format)

Mean Absolute Error (MAE).

meanAbsolutePercentageError

number (float format)

Mean absolute percentage error. Infinity when there are zeros in the ground truth.

quantileMetrics[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsForecastingEvaluationMetricsQuantileMetricsEntry)

The quantile metrics entries for each quantile.

rSquared

number (float format)

Coefficient of determination as Pearson correlation coefficient. Undefined when ground truth or predictions are constant or near constant.

rootMeanSquaredError

number (float format)

Root Mean Squared Error (RMSE).

rootMeanSquaredLogError

number (float format)

Root mean squared log error. Undefined when there are negative ground truth values or predictions.

rootMeanSquaredPercentageError

number (float format)

Root Mean Square Percentage Error. Square root of MSPE. Undefined/imaginary when MSPE is negative.

weightedAbsolutePercentageError

number (float format)

Weighted Absolute Percentage Error. Despite the name, the metric does not use weights. Undefined if the actual values sum to zero; will be very large if the actual values sum to a very small number.
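The point metrics above are standard; a self-contained sketch of three of them, keyed by the field names from this section (MAPE is reported as a percentage and blows up when the ground truth contains zeros, as noted above):

```python
import math

def forecasting_metrics(actual, predicted):
    """MAE, RMSE, and MAPE for paired actual/predicted sequences."""
    n = len(actual)
    errs = [p - a for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    # Percentage error is undefined (infinite) when an actual value is 0.
    mape = sum(abs(e / a) for a, e in zip(actual, errs)) / n * 100
    return {"meanAbsoluteError": mae,
            "rootMeanSquaredError": rmse,
            "meanAbsolutePercentageError": mape}
```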

GoogleCloudAiplatformV1SchemaModelevaluationMetricsForecastingEvaluationMetricsQuantileMetricsEntry

Entry for the Quantiles loss type optimization objective.
Fields
observedQuantile

number (double format)

This is a custom metric that calculates the percentage of true values that were less than the predicted value for that quantile. Only populated when optimization_objective is minimize-quantile-loss; each entry corresponds to an entry in quantiles. The percent value can be compared with the quantile value, which is the target value.

quantile

number (double format)

The quantile for this entry.

scaledPinballLoss

number (float format)

The scaled pinball loss of this quantile.
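The quantities in this entry derive from standard pinball (quantile) loss and the empirical coverage of the predicted quantile; a sketch of the unscaled versions (how the service scales the loss is not specified here):

```python
def pinball_loss(actual, predicted, quantile):
    """Mean pinball loss: under-predictions are weighted by the quantile,
    over-predictions by (quantile - 1)."""
    total = 0.0
    for a, p in zip(actual, predicted):
        diff = a - p
        total += quantile * diff if diff >= 0 else (quantile - 1.0) * diff
    return total / len(actual)

def observed_quantile(actual, predicted):
    """Fraction of true values below the predicted quantile value
    (the doc field reports this as a percentage)."""
    return sum(a < p for a, p in zip(actual, predicted)) / len(actual)
```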

GoogleCloudAiplatformV1SchemaModelevaluationMetricsGeneralTextGenerationEvaluationMetrics

(No description provided)
Fields
bleu

number (float format)

BLEU (bilingual evaluation understudy) scores based on sacrebleu implementation.

rougeLSum

number (float format)

ROUGE-L (Longest Common Subsequence) scoring at summary level.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsImageObjectDetectionEvaluationMetrics

Metrics for image object detection evaluation results.
Fields
boundingBoxMeanAveragePrecision

number (float format)

The single metric for bounding boxes evaluation: the meanAveragePrecision averaged over all boundingBoxMetricsEntries.

boundingBoxMetrics[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsBoundingBoxMetrics)

The bounding boxes match metrics for each intersection-over-union threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and each label confidence threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 pair.

evaluatedBoundingBoxCount

integer (int32 format)

The total number of ground truth bounding boxes (i.e., summed over all images) used to create this evaluation.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsImageSegmentationEvaluationMetrics

Metrics for image segmentation evaluation results.
Fields
confidenceMetricsEntries[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsImageSegmentationEvaluationMetricsConfidenceMetricsEntry)

Metrics for each confidenceThreshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99. The precision-recall curve can be derived from them.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsImageSegmentationEvaluationMetricsConfidenceMetricsEntry

(No description provided)
Fields
confidenceThreshold

number (float format)

Metrics are computed with an assumption that the model never returns predictions with score lower than this value.

confusionMatrix

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix)

Confusion matrix for the given confidence threshold.

diceScoreCoefficient

number (float format)

DSC (Dice similarity coefficient), or the F1 score: the harmonic mean of recall and precision.

iouScore

number (float format)

The intersection-over-union score: the measure of overlap of the annotation's category mask with the ground truth category mask on the DataItem.

precision

number (float format)

Precision for the given confidence threshold.

recall

number (float format)

Recall (True Positive Rate) for the given confidence threshold.
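The diceScoreCoefficient and iouScore fields above are closely related; as a hedged sketch (helper names hypothetical), DSC can be derived either from precision and recall or directly from IoU:

```python
def dice_from_precision_recall(precision, recall):
    # DSC is the F1 score: the harmonic mean of precision and recall.
    total = precision + recall
    return 2 * precision * recall / total if total else 0.0

def dice_from_iou(iou_score):
    # Equivalent identity: DSC = 2 * IoU / (1 + IoU).
    return 2 * iou_score / (1 + iou_score)
```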

GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Metrics for general pairwise text generation evaluation results.
Fields
accuracy

number (float format)

Fraction of cases where the autorater agreed with the human raters.

baselineModelWinRate

number (float format)

Percentage of time the autorater decided the baseline model had the better response.

cohensKappa

number (float format)

A measurement of agreement between the autorater and human raters that takes the likelihood of random agreement into account.

f1Score

number (float format)

Harmonic mean of precision and recall.

falseNegativeCount

string (int64 format)

Number of examples where the autorater chose the baseline model, but humans preferred the model.

falsePositiveCount

string (int64 format)

Number of examples where the autorater chose the model, but humans preferred the baseline model.

humanPreferenceBaselineModelWinRate

number (float format)

Percentage of time humans decided the baseline model had the better response.

humanPreferenceModelWinRate

number (float format)

Percentage of time humans decided the model had the better response.

modelWinRate

number (float format)

Percentage of time the autorater decided the model had the better response.

precision

number (float format)

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the autorater thought the model had a better response. True positive divided by all positive.

recall

number (float format)

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the humans thought the model had a better response.

trueNegativeCount

string (int64 format)

Number of examples where both the autorater and humans decided that the model had the worse response.

truePositiveCount

string (int64 format)

Number of examples where both the autorater and humans decided that the model had the better response.
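The four count fields above (which arrive as int64-formatted strings) determine the derived agreement metrics. A minimal sketch of how accuracy, precision, recall, f1Score, and cohensKappa relate to the counts, treating "autorater chose the model" as the positive class:

```python
def pairwise_agreement(tp, fp, tn, fn):
    # Counts arrive as int64-formatted strings in the API response.
    tp, fp, tn, fn = int(tp), int(fp), int(tn), int(fn)
    n = tp + fp + tn + fn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)   # true positive divided by all positive
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa discounts the likelihood of agreeing by chance.
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1Score": f1, "cohensKappa": kappa}
```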

GoogleCloudAiplatformV1SchemaModelevaluationMetricsQuestionAnsweringEvaluationMetrics

(No description provided)
Fields
exactMatch

number (float format)

The rate at which the input predicted strings exactly match their references.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsRegressionEvaluationMetrics

Metrics for regression evaluation results.
Fields
meanAbsoluteError

number (float format)

Mean Absolute Error (MAE).

meanAbsolutePercentageError

number (float format)

Mean absolute percentage error. Infinity when there are zeros in the ground truth.

rSquared

number (float format)

Coefficient of determination as Pearson correlation coefficient. Undefined when ground truth or predictions are constant or near constant.

rootMeanSquaredError

number (float format)

Root Mean Squared Error (RMSE).

rootMeanSquaredLogError

number (float format)

Root mean squared log error. Undefined when there are negative ground truth values or predictions.
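For reference, the headline regression metrics can be reproduced from predictions and ground truth as in this sketch (MAPE becomes infinite when the ground truth contains zeros, matching the note above):

```python
import math

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    # MAPE is infinite when any ground-truth value is zero.
    if any(t == 0 for t in y_true):
        mape = math.inf
    else:
        mape = sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n * 100
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    return {"meanAbsoluteError": mae,
            "meanAbsolutePercentageError": mape,
            "rootMeanSquaredError": rmse}
```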

GoogleCloudAiplatformV1SchemaModelevaluationMetricsSummarizationEvaluationMetrics

(No description provided)
Fields
rougeLSum

number (float format)

ROUGE-L (Longest Common Subsequence) scoring at summary level.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsTextExtractionEvaluationMetrics

Metrics for text extraction evaluation results.
Fields
confidenceMetrics[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsTextExtractionEvaluationMetricsConfidenceMetrics)

Metrics that have confidence thresholds. Precision-recall curve can be derived from them.

confusionMatrix

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix)

Confusion matrix of the evaluation. Only set for Models where number of AnnotationSpecs is no more than 10. Only set for ModelEvaluations, not for ModelEvaluationSlices.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsTextExtractionEvaluationMetricsConfidenceMetrics

(No description provided)
Fields
confidenceThreshold

number (float format)

Metrics are computed with an assumption that the Model never returns predictions with score lower than this value.

f1Score

number (float format)

The harmonic mean of recall and precision.

precision

number (float format)

Precision for the given confidence threshold.

recall

number (float format)

Recall (True Positive Rate) for the given confidence threshold.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsTextSentimentEvaluationMetrics

Model evaluation metrics for text sentiment problems.
Fields
confusionMatrix

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix)

Confusion matrix of the evaluation. Only set for ModelEvaluations, not for ModelEvaluationSlices.

f1Score

number (float format)

The harmonic mean of recall and precision.

linearKappa

number (float format)

Linear weighted kappa. Only set for ModelEvaluations, not for ModelEvaluationSlices.

meanAbsoluteError

number (float format)

Mean absolute error. Only set for ModelEvaluations, not for ModelEvaluationSlices.

meanSquaredError

number (float format)

Mean squared error. Only set for ModelEvaluations, not for ModelEvaluationSlices.

precision

number (float format)

Precision.

quadraticKappa

number (float format)

Quadratic weighted kappa. Only set for ModelEvaluations, not for ModelEvaluationSlices.

recall

number (float format)

Recall.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsTrackMetrics

UNIMPLEMENTED. Track matching model metrics for a single track match threshold and multiple label match confidence thresholds.
Fields
confidenceMetrics[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsTrackMetricsConfidenceMetrics)

Metrics for each label-match confidenceThreshold from 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99. Precision-recall curve is derived from them.

iouThreshold

number (float format)

The intersection-over-union threshold value between bounding boxes across frames used to compute this metric entry.

meanBoundingBoxIou

number (float format)

The mean bounding box iou over all confidence thresholds.

meanMismatchRate

number (float format)

The mean mismatch rate over all confidence thresholds.

meanTrackingAveragePrecision

number (float format)

The mean average precision over all confidence thresholds.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsTrackMetricsConfidenceMetrics

Metrics for a single confidence threshold.
Fields
boundingBoxIou

number (float format)

Bounding box intersection-over-union precision. Measures how well the bounding boxes overlap between each other (e.g. complete overlap or just barely above iou_threshold).

confidenceThreshold

number (float format)

The confidence threshold value used to compute the metrics.

mismatchRate

number (float format)

Mismatch rate, which measures the tracking consistency, i.e. correctness of instance ID continuity.

trackingPrecision

number (float format)

Tracking precision.

trackingRecall

number (float format)

Tracking recall.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsVideoActionMetrics

The Evaluation metrics given a specific precision_window_length.
Fields
confidenceMetrics[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsVideoActionMetricsConfidenceMetrics)

Metrics for each label-match confidence_threshold from 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99.

meanAveragePrecision

number (float format)

The mean average precision.

precisionWindowLength

string (Duration format)

This VideoActionMetrics is calculated based on this prediction window length. If the predicted action's timestamp falls inside a time window of this length centered on the ground truth action's timestamp, the prediction result is treated as a true positive.
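The true-positive rule above reduces to a timestamp check; a hypothetical helper (window_s is the precisionWindowLength expressed in seconds):

```python
def in_precision_window(pred_ts, truth_ts, window_s):
    # True positive when the predicted timestamp falls inside the window of
    # length window_s centered on the ground-truth action's timestamp.
    return abs(pred_ts - truth_ts) <= window_s / 2
```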

GoogleCloudAiplatformV1SchemaModelevaluationMetricsVideoActionMetricsConfidenceMetrics

Metrics for a single confidence threshold.
Fields
confidenceThreshold

number (float format)

Output only. The confidence threshold value used to compute the metrics.

f1Score

number (float format)

Output only. The harmonic mean of recall and precision.

precision

number (float format)

Output only. Precision for the given confidence threshold.

recall

number (float format)

Output only. Recall for the given confidence threshold.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsVideoActionRecognitionMetrics

Model evaluation metrics for video action recognition.
Fields
evaluatedActionCount

integer (int32 format)

The number of ground truth actions used to create this evaluation.

videoActionMetrics[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsVideoActionMetrics)

The metric entries for precision window lengths: 1s,2s,3s.

GoogleCloudAiplatformV1SchemaModelevaluationMetricsVideoObjectTrackingMetrics

Model evaluation metrics for video object tracking problems. Evaluates prediction quality of both labeled bounding boxes and labeled tracks (i.e. series of bounding boxes sharing same label and instance ID).
Fields
boundingBoxMeanAveragePrecision

number (float format)

The single metric for bounding boxes evaluation: the meanAveragePrecision averaged over all boundingBoxMetrics.

boundingBoxMetrics[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsBoundingBoxMetrics)

The bounding boxes match metrics for each intersection-over-union threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and each label confidence threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 pair.

evaluatedBoundingBoxCount

integer (int32 format)

UNIMPLEMENTED. The total number of bounding boxes (summed over all frames) in the ground truth used to create this evaluation.

evaluatedFrameCount

integer (int32 format)

UNIMPLEMENTED. The number of video frames used to create this evaluation.

evaluatedTrackCount

integer (int32 format)

UNIMPLEMENTED. The total number of tracks (as seen across all frames) in the ground truth used to create this evaluation.

trackMeanAveragePrecision

number (float format)

UNIMPLEMENTED. The single metric for tracks accuracy evaluation: the meanAveragePrecision averaged over all trackMetrics.

trackMeanBoundingBoxIou

number (float format)

UNIMPLEMENTED. The single metric for tracks bounding box iou evaluation: the meanBoundingBoxIou averaged over all trackMetrics.

trackMeanMismatchRate

number (float format)

UNIMPLEMENTED. The single metric for tracking consistency evaluation: the meanMismatchRate averaged over all trackMetrics.

trackMetrics[]

object (GoogleCloudAiplatformV1SchemaModelevaluationMetricsTrackMetrics)

UNIMPLEMENTED. The tracks match metrics for each intersection-over-union threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and each label confidence threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 pair.

GoogleCloudAiplatformV1SchemaPredictInstanceImageClassificationPredictionInstance

Prediction input format for Image Classification.
Fields
content

string

The image bytes or Cloud Storage URI to make the prediction on.

mimeType

string

The MIME type of the content of the image. Only images in the MIME types listed below are supported. - image/jpeg - image/gif - image/png - image/webp - image/bmp - image/tiff - image/vnd.microsoft.icon
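A minimal sketch of building such an instance in Python (the helper name is hypothetical); content carries either base64-encoded image bytes or a Cloud Storage URI:

```python
import base64

def image_classification_instance(image_bytes, mime_type="image/jpeg"):
    # content: base64-encoded image bytes (a gs:// URI string also works).
    return {"content": base64.b64encode(image_bytes).decode("utf-8"),
            "mimeType": mime_type}
```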

GoogleCloudAiplatformV1SchemaPredictInstanceImageObjectDetectionPredictionInstance

Prediction input format for Image Object Detection.
Fields
content

string

The image bytes or Cloud Storage URI to make the prediction on.

mimeType

string

The MIME type of the content of the image. Only images in the MIME types listed below are supported. - image/jpeg - image/gif - image/png - image/webp - image/bmp - image/tiff - image/vnd.microsoft.icon

GoogleCloudAiplatformV1SchemaPredictInstanceImageSegmentationPredictionInstance

Prediction input format for Image Segmentation.
Fields
content

string

The image bytes to make the predictions on.

mimeType

string

The MIME type of the content of the image. Only images in the MIME types listed below are supported. - image/jpeg - image/png

GoogleCloudAiplatformV1SchemaPredictInstanceTextClassificationPredictionInstance

Prediction input format for Text Classification.
Fields
content

string

The text snippet to make the predictions on.

mimeType

string

The MIME type of the text snippet. The supported MIME types are listed below. - text/plain

GoogleCloudAiplatformV1SchemaPredictInstanceTextExtractionPredictionInstance

Prediction input format for Text Extraction.
Fields
content

string

The text snippet to make the predictions on.

key

string

This field is only used for batch prediction. If a key is provided, the batch prediction result will be mapped to this key. If omitted, then the batch prediction result will contain the entire input instance. Vertex AI will not check if keys in the request are duplicates, so it is up to the caller to ensure the keys are unique.

mimeType

string

The MIME type of the text snippet. The supported MIME types are listed below. - text/plain

GoogleCloudAiplatformV1SchemaPredictInstanceTextSentimentPredictionInstance

Prediction input format for Text Sentiment.
Fields
content

string

The text snippet to make the predictions on.

mimeType

string

The MIME type of the text snippet. The supported MIME types are listed below. - text/plain

GoogleCloudAiplatformV1SchemaPredictInstanceVideoActionRecognitionPredictionInstance

Prediction input format for Video Action Recognition.
Fields
content

string

The Google Cloud Storage location of the video on which to perform the prediction.

mimeType

string

The MIME type of the content of the video. Only the following are supported: video/mp4 video/avi video/quicktime

timeSegmentEnd

string

The end, exclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision, and "inf" or "Infinity" is allowed, which means the end of the video.

timeSegmentStart

string

The beginning, inclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision.

GoogleCloudAiplatformV1SchemaPredictInstanceVideoClassificationPredictionInstance

Prediction input format for Video Classification.
Fields
content

string

The Google Cloud Storage location of the video on which to perform the prediction.

mimeType

string

The MIME type of the content of the video. Only the following are supported: video/mp4 video/avi video/quicktime

timeSegmentEnd

string

The end, exclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision, and "inf" or "Infinity" is allowed, which means the end of the video.

timeSegmentStart

string

The beginning, inclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision.

GoogleCloudAiplatformV1SchemaPredictInstanceVideoObjectTrackingPredictionInstance

Prediction input format for Video Object Tracking.
Fields
content

string

The Google Cloud Storage location of the video on which to perform the prediction.

mimeType

string

The MIME type of the content of the video. Only the following are supported: video/mp4 video/avi video/quicktime

timeSegmentEnd

string

The end, exclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision, and "inf" or "Infinity" is allowed, which means the end of the video.

timeSegmentStart

string

The beginning, inclusive, of the video's time segment on which to perform the prediction. Expressed as a number of seconds as measured from the start of the video, with "s" appended at the end. Fractions are allowed, up to a microsecond precision.
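The three video prediction instance types above share the same shape. As an illustrative sketch (helper name hypothetical), the time segment strings are seconds with an "s" suffix, with "Infinity" meaning the end of the video:

```python
def video_instance(gcs_uri, mime_type="video/mp4", start_s=0.0, end_s=None):
    # timeSegmentEnd of "Infinity" means the end of the video.
    end = "Infinity" if end_s is None else f"{end_s}s"
    return {"content": gcs_uri,
            "mimeType": mime_type,
            "timeSegmentStart": f"{start_s}s",
            "timeSegmentEnd": end}
```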

GoogleCloudAiplatformV1SchemaPredictParamsGroundingConfig

The configuration for grounding checking.
Fields
disableAttribution

boolean

If set, skip finding claim attributions (i.e., do not generate grounding citations).

sources[]

object (GoogleCloudAiplatformV1SchemaPredictParamsGroundingConfigSourceEntry)

The sources for the grounding checking.

GoogleCloudAiplatformV1SchemaPredictParamsGroundingConfigSourceEntry

Single source entry for the grounding checking.
Fields
enterpriseDatastore

string

The URI of the Vertex AI Search data source. Deprecated. Use vertex_ai_search_datastore instead.

inlineContext

string

The grounding text passed inline with the Predict API. It can support up to 1 million bytes.

type

enum

The type of the grounding checking source.

Enum type. Can be one of the following:
UNSPECIFIED (No description provided)
WEB Uses Web Search to check the grounding.
ENTERPRISE Uses Vertex AI Search to check the grounding. Deprecated. Use VERTEX_AI_SEARCH instead.
VERTEX_AI_SEARCH Uses Vertex AI Search to check the grounding.
INLINE Uses inline context to check the grounding.
vertexAiSearchDatastore

string

The URI of the Vertex AI Search data source.
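A sketch of assembling a grounding config from these source entries (helper name hypothetical; field names as documented above):

```python
def grounding_config(datastore_uri=None, inline_text=None, disable_attribution=False):
    sources = []
    if datastore_uri:
        # Preferred over the deprecated ENTERPRISE / enterpriseDatastore pair.
        sources.append({"type": "VERTEX_AI_SEARCH",
                        "vertexAiSearchDatastore": datastore_uri})
    if inline_text:
        # Inline grounding text can be up to 1 million bytes.
        sources.append({"type": "INLINE", "inlineContext": inline_text})
    return {"disableAttribution": disable_attribution, "sources": sources}
```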

GoogleCloudAiplatformV1SchemaPredictParamsImageClassificationPredictionParams

Prediction model parameters for Image Classification.
Fields
confidenceThreshold

number (float format)

The Model only returns predictions with at least this confidence score. Default value is 0.0.

maxPredictions

integer (int32 format)

The Model only returns up to that many top, by confidence score, predictions per instance. If this number is very high, the Model may return fewer predictions. Default value is 10.

GoogleCloudAiplatformV1SchemaPredictParamsImageObjectDetectionPredictionParams

Prediction model parameters for Image Object Detection.
Fields
confidenceThreshold

number (float format)

The Model only returns predictions with at least this confidence score. Default value is 0.0.

maxPredictions

integer (int32 format)

The Model only returns up to that many top, by confidence score, predictions per instance. Note that the number of returned predictions is also limited by the metadata's predictionsLimit. Default value is 10.

GoogleCloudAiplatformV1SchemaPredictParamsImageSegmentationPredictionParams

Prediction model parameters for Image Segmentation.
Fields
confidenceThreshold

number (float format)

When the model predicts category of pixels of the image, it will only provide predictions for pixels that it is at least this much confident about. All other pixels will be classified as background. Default value is 0.5.

GoogleCloudAiplatformV1SchemaPredictParamsVideoActionRecognitionPredictionParams

Prediction model parameters for Video Action Recognition.
Fields
confidenceThreshold

number (float format)

The Model only returns predictions with at least this confidence score. Default value is 0.0.

maxPredictions

integer (int32 format)

The model only returns up to that many top, by confidence score, predictions per frame of the video. If this number is very high, the Model may return fewer predictions per frame. Default value is 50.

GoogleCloudAiplatformV1SchemaPredictParamsVideoClassificationPredictionParams

Prediction model parameters for Video Classification.
Fields
confidenceThreshold

number (float format)

The Model only returns predictions with at least this confidence score. Default value is 0.0.

maxPredictions

integer (int32 format)

The Model only returns up to that many top, by confidence score, predictions per instance. If this number is very high, the Model may return fewer predictions. Default value is 10,000.

oneSecIntervalClassification

boolean

Set to true to request classification for a video at one-second intervals. Vertex AI returns labels and their confidence scores for each second of the entire time segment of the video that the user specified in the input. WARNING: Model evaluation is not done for this classification type; its quality depends on the training data, and there are no metrics provided to describe that quality. Default value is false.

segmentClassification

boolean

Set to true to request segment-level classification. Vertex AI returns labels and their confidence scores for the entire time segment of the video that the user specified in the input instance. Default value is true.

shotClassification

boolean

Set to true to request shot-level classification. Vertex AI determines the boundaries for each camera shot in the entire time segment of the video that the user specified in the input instance. Vertex AI then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. WARNING: Model evaluation is not done for this classification type; its quality depends on the training data, and there are no metrics provided to describe that quality. Default value is false.

GoogleCloudAiplatformV1SchemaPredictParamsVideoObjectTrackingPredictionParams

Prediction model parameters for Video Object Tracking.
Fields
confidenceThreshold

number (float format)

The Model only returns predictions with at least this confidence score. Default value is 0.0.

maxPredictions

integer (int32 format)

The model only returns up to that many top, by confidence score, predictions per frame of the video. If this number is very high, the Model may return fewer predictions per frame. Default value is 50.

minBoundingBoxSize

number (float format)

Only bounding boxes whose shortest edge is at least this long, relative to the video frame size, are returned. Default value is 0.0.

GoogleCloudAiplatformV1SchemaPredictPredictionClassificationPredictionResult

Prediction output format for Image and Text Classification.
Fields
confidences[]

number (float format)

The Model's confidences in correctness of the predicted IDs, higher value means higher confidence. Order matches the Ids.

displayNames[]

string

The display names of the AnnotationSpecs that had been identified, order matches the IDs.

ids[]

string (int64 format)

The resource IDs of the AnnotationSpecs that had been identified.

GoogleCloudAiplatformV1SchemaPredictPredictionImageObjectDetectionPredictionResult

Prediction output format for Image Object Detection.
Fields
bboxes[]

array

Bounding boxes, i.e. the rectangles over the image, that pinpoint the found AnnotationSpecs. Given in the order that matches the IDs. Each bounding box is an array of 4 numbers xMin, xMax, yMin, and yMax, which represent the extremal coordinates of the box. They are relative to the image size, and the point 0,0 is in the top left of the image.

confidences[]

number (float format)

The Model's confidences in correctness of the predicted IDs, higher value means higher confidence. Order matches the Ids.

displayNames[]

string

The display names of the AnnotationSpecs that had been identified, order matches the IDs.

ids[]

string (int64 format)

The resource IDs of the AnnotationSpecs that had been identified, in descending order of confidence score.

GoogleCloudAiplatformV1SchemaPredictPredictionImageSegmentationPredictionResult

Prediction output format for Image Segmentation.
Fields
categoryMask

string

A PNG image where each pixel in the mask represents the category to which the pixel in the original image was predicted to belong. The size of this image will be the same as the original image. The mapping between the AnnotationSpec and the color can be found in the model's metadata. The model chooses the most likely category; if none of the categories reach the confidence threshold, the pixel is marked as background.

confidenceMask

string

A one-channel image encoded as an 8-bit lossless PNG. The size of the image will be the same as the original image. For a specific pixel, a darker color means less confidence in the correctness of the category in the categoryMask for the corresponding pixel. Black means no confidence and white means complete confidence.

GoogleCloudAiplatformV1SchemaPredictPredictionTabularClassificationPredictionResult

Prediction output format for Tabular Classification.
Fields
classes[]

string

The name of the classes being classified, contains all possible values of the target column.

scores[]

number (float format)

The model's confidence in each class being correct, higher value means higher confidence. The N-th score corresponds to the N-th class in classes.
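Since classes[] and scores[] are parallel arrays, picking the predicted class is an argmax over the scores; a minimal sketch:

```python
def top_class(result):
    # classes[] and scores[] are parallel arrays; take the highest-scoring class.
    best = max(range(len(result["scores"])), key=lambda i: result["scores"][i])
    return result["classes"][best], result["scores"][best]
```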

GoogleCloudAiplatformV1SchemaPredictPredictionTabularRegressionPredictionResult

Prediction output format for Tabular Regression.
Fields
lowerBound

number (float format)

The lower bound of the prediction interval.

quantilePredictions[]

number (float format)

Quantile predictions, in 1-1 correspondence with quantile_values.

quantileValues[]

number (float format)

Quantile values.

upperBound

number (float format)

The upper bound of the prediction interval.

value

number (float format)

The regression value.

GoogleCloudAiplatformV1SchemaPredictPredictionTextExtractionPredictionResult

Prediction output format for Text Extraction.
Fields
confidences[]

number (float format)

The Model's confidences in correctness of the predicted IDs, higher value means higher confidence. Order matches the Ids.

displayNames[]

string

The display names of the AnnotationSpecs that had been identified, order matches the IDs.

ids[]

string (int64 format)

The resource IDs of the AnnotationSpecs that had been identified, in descending order of confidence score.

textSegmentEndOffsets[]

string (int64 format)

The end offsets, inclusive, of the text segment in which the AnnotationSpec has been identified. Expressed as a zero-based number of characters as measured from the start of the text snippet.

textSegmentStartOffsets[]

string (int64 format)

The start offsets, inclusive, of the text segment in which the AnnotationSpec has been identified. Expressed as a zero-based number of characters as measured from the start of the text snippet.
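Because the offsets are inclusive, zero-based, and int64-formatted strings, recovering the extracted text requires an end + 1 slice; an illustrative helper:

```python
def extract_segments(text, result):
    segments = []
    for name, start, end in zip(result["displayNames"],
                                result["textSegmentStartOffsets"],
                                result["textSegmentEndOffsets"]):
        # Offsets are int64-formatted strings; end offsets are inclusive.
        segments.append((name, text[int(start):int(end) + 1]))
    return segments
```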

GoogleCloudAiplatformV1SchemaPredictPredictionTextSentimentPredictionResult

Prediction output format for Text Sentiment
Fields
sentiment

integer (int32 format)

The integer sentiment label, between 0 (inclusive) and sentimentMax (inclusive), where 0 maps to the least positive sentiment and sentimentMax maps to the most positive one. The higher the score, the more positive the sentiment in the text snippet. Note: sentimentMax is an integer value between 1 (inclusive) and 10 (inclusive).

GoogleCloudAiplatformV1SchemaPredictPredictionTftFeatureImportance

(No description provided)
Fields
attributeColumns[]

string

(No description provided)

attributeWeights[]

number (float format)

(No description provided)

contextColumns[]

string

(No description provided)

contextWeights[]

number (float format)

TFT feature importance values. Each pair for {context/horizon/attribute} should have the same shape since the weight corresponds to the column names.

horizonColumns[]

string

(No description provided)

horizonWeights[]

number (float format)

(No description provided)

GoogleCloudAiplatformV1SchemaPredictPredictionTimeSeriesForecastingPredictionResult

Prediction output format for Time Series Forecasting.
Fields
quantilePredictions[]

number (float format)

Quantile predictions, in 1-1 correspondence with quantile_values.

quantileValues[]

number (float format)

Quantile values.

tftFeatureImportance

object (GoogleCloudAiplatformV1SchemaPredictPredictionTftFeatureImportance)

Only use these if TFT is enabled.

value

number (float format)

The regression value.

GoogleCloudAiplatformV1SchemaPredictPredictionVideoActionRecognitionPredictionResult

Prediction output format for Video Action Recognition.
Fields
confidence

number (float format)

The Model's confidence in the correctness of this prediction; a higher value means higher confidence.

displayName

string

The display name of the AnnotationSpec that had been identified.

id

string

The resource ID of the AnnotationSpec that had been identified.

timeSegmentEnd

string (Duration format)

The end, exclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end.

timeSegmentStart

string (Duration format)

The beginning, inclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end.

GoogleCloudAiplatformV1SchemaPredictPredictionVideoClassificationPredictionResult

Prediction output format for Video Classification.
Fields
confidence

number (float format)

The Model's confidence in the correctness of this prediction; a higher value means higher confidence.

displayName

string

The display name of the AnnotationSpec that had been identified.

id

string

The resource ID of the AnnotationSpec that had been identified.

timeSegmentEnd

string (Duration format)

The end, exclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. Note that for 'segment-classification' prediction type, this equals the original 'timeSegmentEnd' from the input instance, for other types it is the end of a shot or a 1 second interval respectively.

timeSegmentStart

string (Duration format)

The beginning, inclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. Note that for 'segment-classification' prediction type, this equals the original 'timeSegmentStart' from the input instance, for other types it is the start of a shot or a 1 second interval respectively.

type

string

The type of the prediction. The requested types can be configured via parameters. This will be one of - segment-classification - shot-classification - one-sec-interval-classification

GoogleCloudAiplatformV1SchemaPredictPredictionVideoObjectTrackingPredictionResult

Prediction output format for Video Object Tracking.
Fields
confidence

number (float format)

The Model's confidence in the correctness of this prediction; a higher value means higher confidence.

displayName

string

The display name of the AnnotationSpec that had been identified.

frames[]

object (GoogleCloudAiplatformV1SchemaPredictPredictionVideoObjectTrackingPredictionResultFrame)

All of the frames of the video in which a single object instance has been detected. The bounding boxes in the frames identify the same object.

id

string

The resource ID of the AnnotationSpec that had been identified.

timeSegmentEnd

string (Duration format)

The end, inclusive, of the video's time segment in which the object instance has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end.

timeSegmentStart

string (Duration format)

The beginning, inclusive, of the video's time segment in which the object instance has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end.

GoogleCloudAiplatformV1SchemaPredictPredictionVideoObjectTrackingPredictionResultFrame

The fields xMin, xMax, yMin, and yMax refer to a bounding box, i.e. the rectangle over the video frame pinpointing the found AnnotationSpec. The coordinates are relative to the frame size, and the point 0,0 is in the top left of the frame.
Fields
timeOffset

string (Duration format)

A time (frame) of a video in which the object has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end.

xMax

number (float format)

The rightmost coordinate of the bounding box.

xMin

number (float format)

The leftmost coordinate of the bounding box.

yMax

number (float format)

The bottommost coordinate of the bounding box.

yMin

number (float format)

The topmost coordinate of the bounding box.
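
The relative-coordinate convention above can be illustrated with a small helper that converts a frame's bounding box to pixel coordinates. This is a sketch: the frame dict mirrors the xMin/xMax/yMin/yMax fields, and the frame dimensions are assumed inputs, not part of the API response.

```python
# Sketch: convert a relative bounding box (coordinates in [0, 1], with
# (0, 0) at the top-left of the frame) to pixel coordinates. The
# frame_width/frame_height values must come from the source video itself.
def to_pixels(frame, frame_width, frame_height):
    return {
        "left": frame["xMin"] * frame_width,
        "right": frame["xMax"] * frame_width,
        "top": frame["yMin"] * frame_height,
        "bottom": frame["yMax"] * frame_height,
    }

box = to_pixels({"xMin": 0.25, "xMax": 0.75, "yMin": 0.0, "yMax": 1.0}, 640, 480)
```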

GoogleCloudAiplatformV1SchemaPredictionResult

Represents a line of JSONL in the batch prediction output file.
Fields
error

object (GoogleCloudAiplatformV1SchemaPredictionResultError)

The error result. Do not set prediction if this is set.

instance

map (key: string, value: any)

User's input instance. Struct is used here instead of Any so that JsonFormat does not append an extra "@type" field when we convert the proto to JSON.

key

string

Optional user-provided key from the input instance.

prediction

any

The prediction result. Value is used here instead of Any so that JsonFormat does not append an extra "@type" field when we convert the proto to JSON and so we can represent array of objects. Do not set error if this is set.
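
Since each line of the batch prediction output file is one PredictionResult serialized as JSON, and exactly one of prediction or error is set, the output can be split into successes and failures with a few lines of client code. A minimal sketch, assuming the JSONL text has already been downloaded:

```python
import json

# Sketch: parse batch-prediction JSONL output where each non-empty line
# is one PredictionResult. Lines carrying an "error" object are failures;
# all other lines carry a "prediction" value.
def parse_results(jsonl_text):
    successes, failures = [], []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        result = json.loads(line)
        if "error" in result:
            failures.append(result)
        else:
            successes.append(result)
    return successes, failures
```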

GoogleCloudAiplatformV1SchemaPredictionResultError

(No description provided)
Fields
message

string

Error message with additional details.

status

enum

Error status. This will be serialized into the enum name e.g. "NOT_FOUND".

Enum type. Can be one of the following:
OK Not an error; returned on success. HTTP Mapping: 200 OK
CANCELLED The operation was cancelled, typically by the caller. HTTP Mapping: 499 Client Closed Request
UNKNOWN Unknown error. For example, this error may be returned when a Status value received from another address space belongs to an error space that is not known in this address space. Also errors raised by APIs that do not return enough error information may be converted to this error. HTTP Mapping: 500 Internal Server Error
INVALID_ARGUMENT The client specified an invalid argument. Note that this differs from FAILED_PRECONDITION. INVALID_ARGUMENT indicates arguments that are problematic regardless of the state of the system (e.g., a malformed file name). HTTP Mapping: 400 Bad Request
DEADLINE_EXCEEDED The deadline expired before the operation could complete. For operations that change the state of the system, this error may be returned even if the operation has completed successfully. For example, a successful response from a server could have been delayed long enough for the deadline to expire. HTTP Mapping: 504 Gateway Timeout
NOT_FOUND Some requested entity (e.g., file or directory) was not found. Note to server developers: if a request is denied for an entire class of users, such as gradual feature rollout or undocumented allowlist, NOT_FOUND may be used. If a request is denied for some users within a class of users, such as user-based access control, PERMISSION_DENIED must be used. HTTP Mapping: 404 Not Found
ALREADY_EXISTS The entity that a client attempted to create (e.g., file or directory) already exists. HTTP Mapping: 409 Conflict
PERMISSION_DENIED The caller does not have permission to execute the specified operation. PERMISSION_DENIED must not be used for rejections caused by exhausting some resource (use RESOURCE_EXHAUSTED instead for those errors). PERMISSION_DENIED must not be used if the caller can not be identified (use UNAUTHENTICATED instead for those errors). This error code does not imply the request is valid or the requested entity exists or satisfies other pre-conditions. HTTP Mapping: 403 Forbidden
UNAUTHENTICATED The request does not have valid authentication credentials for the operation. HTTP Mapping: 401 Unauthorized
RESOURCE_EXHAUSTED Some resource has been exhausted, perhaps a per-user quota, or perhaps the entire file system is out of space. HTTP Mapping: 429 Too Many Requests
FAILED_PRECONDITION The operation was rejected because the system is not in a state required for the operation's execution. For example, the directory to be deleted is non-empty, an rmdir operation is applied to a non-directory, etc. Service implementors can use the following guidelines to decide between FAILED_PRECONDITION, ABORTED, and UNAVAILABLE: (a) Use UNAVAILABLE if the client can retry just the failing call. (b) Use ABORTED if the client should retry at a higher level. For example, when a client-specified test-and-set fails, indicating the client should restart a read-modify-write sequence. (c) Use FAILED_PRECONDITION if the client should not retry until the system state has been explicitly fixed. For example, if an "rmdir" fails because the directory is non-empty, FAILED_PRECONDITION should be returned since the client should not retry unless the files are deleted from the directory. HTTP Mapping: 400 Bad Request
ABORTED The operation was aborted, typically due to a concurrency issue such as a sequencer check failure or transaction abort. See the guidelines above for deciding between FAILED_PRECONDITION, ABORTED, and UNAVAILABLE. HTTP Mapping: 409 Conflict
OUT_OF_RANGE The operation was attempted past the valid range. E.g., seeking or reading past end-of-file. Unlike INVALID_ARGUMENT, this error indicates a problem that may be fixed if the system state changes. For example, a 32-bit file system will generate INVALID_ARGUMENT if asked to read at an offset that is not in the range [0,2^32-1], but it will generate OUT_OF_RANGE if asked to read from an offset past the current file size. There is a fair bit of overlap between FAILED_PRECONDITION and OUT_OF_RANGE. We recommend using OUT_OF_RANGE (the more specific error) when it applies so that callers who are iterating through a space can easily look for an OUT_OF_RANGE error to detect when they are done. HTTP Mapping: 400 Bad Request
UNIMPLEMENTED The operation is not implemented or is not supported/enabled in this service. HTTP Mapping: 501 Not Implemented
INTERNAL Internal errors. This means that some invariants expected by the underlying system have been broken. This error code is reserved for serious errors. HTTP Mapping: 500 Internal Server Error
UNAVAILABLE The service is currently unavailable. This is most likely a transient condition, which can be corrected by retrying with a backoff. Note that it is not always safe to retry non-idempotent operations. See the guidelines above for deciding between FAILED_PRECONDITION, ABORTED, and UNAVAILABLE. HTTP Mapping: 503 Service Unavailable
DATA_LOSS Unrecoverable data loss or corruption. HTTP Mapping: 500 Internal Server Error
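
The retry guidance embedded in the descriptions above (UNAVAILABLE is explicitly retryable with backoff; DEADLINE_EXCEEDED and RESOURCE_EXHAUSTED are commonly treated the same way, while FAILED_PRECONDITION should not be retried until state is fixed) can be condensed into a small helper. The classification here is an interpretation of that guidance, not an official mapping:

```python
# Sketch: decide whether a failed prediction line is worth retrying,
# based on the serialized enum name in the "status" field.
RETRYABLE_STATUSES = {"UNAVAILABLE", "DEADLINE_EXCEEDED", "RESOURCE_EXHAUSTED"}

def should_retry(status: str) -> bool:
    """True when the status suggests retrying the same call with backoff."""
    return status in RETRYABLE_STATUSES
```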

GoogleCloudAiplatformV1SchemaTablesDatasetMetadata

The metadata of Datasets that contain tables data.
Fields
inputConfig

object (GoogleCloudAiplatformV1SchemaTablesDatasetMetadataInputConfig)

(No description provided)

GoogleCloudAiplatformV1SchemaTablesDatasetMetadataBigQuerySource

(No description provided)
Fields
uri

string

The URI of a BigQuery table. e.g. bq://projectId.bqDatasetId.bqTableId
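
A URI of the documented bq://projectId.bqDatasetId.bqTableId form can be split into its components with a short helper. This is an illustrative sketch, not an official parser:

```python
# Sketch: split a BigQuery table URI into (project, dataset, table),
# assuming the bq://projectId.bqDatasetId.bqTableId form documented above.
def parse_bq_uri(uri: str):
    if not uri.startswith("bq://"):
        raise ValueError("expected a bq:// URI")
    project, dataset, table = uri[len("bq://"):].split(".", 2)
    return project, dataset, table
```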

GoogleCloudAiplatformV1SchemaTablesDatasetMetadataGcsSource

(No description provided)
Fields
uri[]

string

Cloud Storage URI of one or more files. Only CSV files are supported. The first line of the CSV file is used as the header. If there are multiple files, the header is the first line of the lexicographically first file; the other files must either contain the exact same header or omit the header.

GoogleCloudAiplatformV1SchemaTablesDatasetMetadataInputConfig

The tables Dataset's data source. The Dataset doesn't store the data directly, but only pointer(s) to its data.
Fields
bigquerySource

object (GoogleCloudAiplatformV1SchemaTablesDatasetMetadataBigQuerySource)

(No description provided)

gcsSource

object (GoogleCloudAiplatformV1SchemaTablesDatasetMetadataGcsSource)

(No description provided)

GoogleCloudAiplatformV1SchemaTextClassificationAnnotation

Annotation details specific to text classification.
Fields
annotationSpecId

string

The resource Id of the AnnotationSpec that this Annotation pertains to.

displayName

string

The display name of the AnnotationSpec that this Annotation pertains to.

GoogleCloudAiplatformV1SchemaTextDataItem

Payload of Text DataItem.
Fields
gcsUri

string

Output only. Google Cloud Storage URI that points to the original text in the user's bucket. The text file is up to 10MB in size.

GoogleCloudAiplatformV1SchemaTextDatasetMetadata

The metadata of Datasets that contain Text DataItems.
Fields
dataItemSchemaUri

string

Points to a YAML file stored on Google Cloud Storage describing the payload of the Text DataItems that belong to this Dataset.

gcsBucket

string

Google Cloud Storage Bucket name that contains the blob data of this Dataset.

GoogleCloudAiplatformV1SchemaTextExtractionAnnotation

Annotation details specific to text extraction.
Fields
annotationSpecId

string

The resource Id of the AnnotationSpec that this Annotation pertains to.

displayName

string

The display name of the AnnotationSpec that this Annotation pertains to.

textSegment

object (GoogleCloudAiplatformV1SchemaTextSegment)

The segment of the text content.

GoogleCloudAiplatformV1SchemaTextPromptDatasetMetadata

The metadata of Datasets that contain Text Prompt data.
Fields
candidateCount

string (int64 format)

Number of candidates.

gcsUri

string

The Google Cloud Storage URI that stores the prompt data.

groundingConfig

object (GoogleCloudAiplatformV1SchemaPredictParamsGroundingConfig)

Grounding checking configuration.

maxOutputTokens

string (int64 format)

The maximum number of tokens to generate, set when the dataset was saved.

note

string

User-created prompt note. Note size limit is 2KB.

promptType

string

Type of the prompt dataset.

stopSequences[]

string

Customized stop sequences.

systemInstructionGcsUri

string

The Google Cloud Storage URI that stores the system instruction, starting with gs://.

temperature

number (float format)

Temperature value used for sampling, set when the dataset was saved. This value is used to tune the degree of randomness.

text

string

The content of the prompt dataset.

topK

string (int64 format)

Top K value set when the dataset was saved. This value determines how many of the highest-probability tokens from the vocabulary are considered at each decoding step.

topP

number (float format)

Top P value set when the dataset was saved. Given the topK tokens for decoding, candidates are selected in order of probability until the sum of their probabilities reaches topP.
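
The interaction between topK and topP described above can be sketched as a candidate filter: first keep the topK most probable tokens, then keep the smallest prefix of those whose probabilities sum to at least topP. This is an illustrative re-implementation of the documented behavior, not the service's actual decoding code:

```python
# Sketch: apply top-k then top-p filtering to a token probability map.
def filter_candidates(probs, top_k, top_p):
    # Keep the top_k most probable tokens, most probable first.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, total = [], 0.0
    for token, p in ranked:
        kept.append(token)
        total += p
        if total >= top_p:  # stop once cumulative probability reaches top_p
            break
    return kept
```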

GoogleCloudAiplatformV1SchemaTextSegment

The text segment inside of DataItem.
Fields
content

string

Output only. The text content in the segment.

endOffset

string (uint64 format)

Zero-based character index of the first character past the end of the text segment (counting character from the beginning of the text). The character at the end_offset is NOT included in the text segment.

startOffset

string (uint64 format)

Zero-based character index of the first character of the text segment (counting characters from the beginning of the text).
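
Because endOffset is exclusive and startOffset inclusive, recovering the annotated substring maps directly onto Python slicing. A minimal sketch; note that the API represents the offsets as strings (uint64 format), so they are cast to int first:

```python
# Sketch: extract the text covered by a TextSegment from the full text.
def extract_segment(text, segment):
    start = int(segment["startOffset"])
    end = int(segment["endOffset"])  # exclusive, matching Python slicing
    return text[start:end]

extract_segment("hello world", {"startOffset": "6", "endOffset": "11"})  # -> "world"
```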

GoogleCloudAiplatformV1SchemaTextSentimentAnnotation

Annotation details specific to text sentiment.
Fields
annotationSpecId

string

The resource Id of the AnnotationSpec that this Annotation pertains to.

displayName

string

The display name of the AnnotationSpec that this Annotation pertains to.

sentiment

integer (int32 format)

The sentiment score for text.

sentimentMax

integer (int32 format)

The sentiment max score for text.

GoogleCloudAiplatformV1SchemaTextSentimentSavedQueryMetadata

The metadata of SavedQuery contains TextSentiment Annotations.
Fields
sentimentMax

integer (int32 format)

The maximum sentiment score of the sentiment Annotations in this SavedQuery.

GoogleCloudAiplatformV1SchemaTimeSegment

A time period inside of a DataItem that has a time dimension (e.g. video).
Fields
endTimeOffset

string (Duration format)

End of the time segment (exclusive), represented as the duration since the start of the DataItem.

startTimeOffset

string (Duration format)

Start of the time segment (inclusive), represented as the duration since the start of the DataItem.

GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadata

The metadata of Datasets that contain time series data.
Fields
inputConfig

object (GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadataInputConfig)

(No description provided)

timeColumn

string

The column name of the time column that identifies time order in the time series.

timeSeriesIdentifierColumn

string

The column name of the time series identifier column that identifies the time series.

GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadataBigQuerySource

(No description provided)
Fields
uri

string

The URI of a BigQuery table.

GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadataGcsSource

(No description provided)
Fields
uri[]

string

Cloud Storage URI of one or more files. Only CSV files are supported. The first line of the CSV file is used as the header. If there are multiple files, the header is the first line of the lexicographically first file; the other files must either contain the exact same header or omit the header.

GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadataInputConfig

The time series Dataset's data source. The Dataset doesn't store the data directly, but only pointer(s) to its data.
Fields
bigquerySource

object (GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadataBigQuerySource)

(No description provided)

gcsSource

object (GoogleCloudAiplatformV1SchemaTimeSeriesDatasetMetadataGcsSource)

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecasting

A TrainingJob that trains and uploads an AutoML Forecasting Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputs)

The input parameters of this TrainingJob.

metadata

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingMetadata)

The metadata information.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputs

(No description provided)
Fields
additionalExperiments[]

string

Additional experiment flags for the time series forecasting training.

availableAtForecastColumns[]

string

Names of columns that are available and provided when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column column) that is known at forecast. For example, predicted weather for a specific day.

contextWindow

string (int64 format)

The amount of time into the past that training and prediction data is used for model training and prediction respectively. Expressed in the number of units defined by the data_granularity field.

dataGranularity

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsGranularity)

Expected difference in time granularity between rows in the data.

enableProbabilisticInference

boolean

If probabilistic inference is enabled, the model will fit a distribution that captures the uncertainty of a prediction. At inference time, the predictive distribution is used to make a point prediction that minimizes the optimization objective. For example, the mean of a predictive distribution is the point prediction that minimizes RMSE loss. If quantiles are specified, then the quantiles of the distribution are also returned. The optimization objective cannot be minimize-quantile-loss.

exportEvaluatedDataItemsConfig

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionExportEvaluatedDataItemsConfig)

Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed.

forecastHorizon

string (int64 format)

The amount of time into the future for which forecasted values for the target are returned. Expressed in number of units defined by the data_granularity field.

hierarchyConfig

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHierarchyConfig)

Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies.

holidayRegions[]

string

The geographical region based on which the holiday effect is applied in modeling, by adding a holiday categorical array feature that includes all holidays matching the date. This option is only allowed when data_granularity is day. By default, holiday effect modeling is disabled. To turn it on, specify the holiday region using this option.

optimizationObjective

string

Objective function the model is optimizing towards. The training process creates a model that optimizes the value of the objective function over the validation set. The supported optimization objectives: * "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). * "minimize-mae" - Minimize mean-absolute error (MAE). * "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). * "minimize-rmspe" - Minimize root-mean-squared percentage error (RMSPE). * "minimize-wape-mae" - Minimize the combination of weighted absolute percentage error (WAPE) and mean-absolute-error (MAE). * "minimize-quantile-loss" - Minimize the quantile loss at the quantiles defined in quantiles. * "minimize-mape" - Minimize the mean absolute percentage error.

quantiles[]

number (double format)

Quantiles to use for minimize-quantile-loss optimization_objective, or for probabilistic inference. Up to 5 quantiles are allowed of values between 0 and 1, exclusive. Required if the value of optimization_objective is minimize-quantile-loss. Represents the percent quantiles to use for that objective. Quantiles must be unique.
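
The constraints on the quantiles field (up to 5 values, each strictly between 0 and 1, all unique) can be checked client-side before submitting a TrainingJob. An illustrative helper only, not part of the API surface:

```python
# Sketch: validate a quantiles list against the documented constraints.
def quantiles_valid(quantiles) -> bool:
    return (
        len(quantiles) <= 5                         # up to 5 quantiles
        and len(set(quantiles)) == len(quantiles)   # all unique
        and all(0 < q < 1 for q in quantiles)       # between 0 and 1, exclusive
    )
```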

targetColumn

string

The name of the column that the Model is to predict values for. This column must be unavailable at forecast.

timeColumn

string

The name of the column that identifies time order in the time series. This column must be available at forecast.

timeSeriesAttributeColumns[]

string

Column names that should be used as attribute columns. The value of these columns does not vary as a function of time. For example, store ID or item color.

timeSeriesIdentifierColumn

string

The name of the column that identifies the time series.

trainBudgetMilliNodeHours

string (int64 format)

Required. The train budget of creating this model, expressed in milli node hours i.e. 1,000 value in this field means 1 node hour. The training cost of the model will not exceed this budget. The final cost will be attempted to be close to the budget, though may end up being (even) noticeably smaller - at the backend's discretion. This especially may happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive.
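
Since 1,000 milli node hours equals 1 node hour, a budget value can be range-checked and converted before submission. A minimal sketch of the documented forecasting bounds (1,000 to 72,000 milli node hours, inclusive):

```python
# Sketch: validate a forecasting train budget and convert it to node hours.
def budget_to_node_hours(milli_node_hours: int) -> float:
    if not 1_000 <= milli_node_hours <= 72_000:
        raise ValueError(
            "train budget must be between 1,000 and 72,000 milli node hours"
        )
    return milli_node_hours / 1_000  # 1,000 milli node hours == 1 node hour
```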

transformations[]

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformation)

Each transformation applies a transform function to the given input column, and the result is used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter.

unavailableAtForecastColumns[]

string

Names of columns that are unavailable when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column) that is unknown before the forecast. For example, actual weather on a given day.

validationOptions

string

Validation options for the data validation component. The available options are: * "fail-pipeline" (default) - validate and fail the pipeline if validation fails. * "ignore-validation" - ignore the results of the validation and continue.

weightColumn

string

Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000 inclusively; 0 means the row is ignored for training. If weight column field is not set, then all rows are assumed to have equal weight of 1.

windowConfig

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionWindowConfig)

Config containing strategy for generating sliding windows.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsGranularity

A duration of time expressed in time granularity units.
Fields
quantity

string (int64 format)

The number of granularity_units between data points in the training data. If granularity_unit is minute, can be 1, 5, 10, 15, or 30. For all other values of granularity_unit, must be 1.

unit

string

The time granularity unit of this time period. The supported units are: * "minute" * "hour" * "day" * "week" * "month" * "year"

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformation

(No description provided)
Fields
auto

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationAutoTransformation)

(No description provided)

categorical

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationCategoricalTransformation)

(No description provided)

numeric

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationNumericTransformation)

(No description provided)

text

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationTextTransformation)

(No description provided)

timestamp

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationTimestampTransformation)

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationAutoTransformation

The training pipeline will infer the proper transformation based on the statistics of the dataset.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationCategoricalTransformation

The training pipeline will perform the following transformation functions. * The categorical string as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear less than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationNumericTransformation

The training pipeline will perform the following transformation functions. * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * z_score of log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * A boolean value that indicates whether the value is valid.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationTextTransformation

The training pipeline will perform the following transformation functions. * The text as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingInputsTransformationTimestampTransformation

The training pipeline will perform the following transformation functions. * Apply the transformation functions for Numerical columns. * Determine the year, month, day, and weekday. Treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.
Fields
columnName

string

(No description provided)

timeFormat

string

The format in which that time field is expressed. The time_format must either be one of: * unix-seconds * unix-milliseconds * unix-microseconds * unix-nanoseconds (for respectively number of seconds, milliseconds, microseconds and nanoseconds since start of the Unix epoch); or be written in strftime syntax. If time_format is not set, then the default format is RFC 3339 date-time format, where time-offset = "Z" (e.g. 1985-04-12T23:20:50.52Z)
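
The time_format conventions above can be sketched as a small parser. This is an illustration of the documented formats (the unix-* variants, strftime syntax, and the RFC 3339 default), not an official implementation:

```python
from datetime import datetime, timezone

# Sketch: interpret a timestamp value according to a time_format string.
def parse_timestamp(value, time_format=None):
    unix_scales = {
        "unix-seconds": 1,
        "unix-milliseconds": 1_000,
        "unix-microseconds": 1_000_000,
        "unix-nanoseconds": 1_000_000_000,
    }
    if time_format in unix_scales:
        # Number of units since the start of the Unix epoch.
        return datetime.fromtimestamp(
            int(value) / unix_scales[time_format], tz=timezone.utc
        )
    if time_format is None:
        # Default: RFC 3339 date-time with time-offset "Z".
        return datetime.fromisoformat(value.replace("Z", "+00:00"))
    # Otherwise treat time_format as strftime/strptime syntax.
    return datetime.strptime(value, time_format)
```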

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlForecastingMetadata

Model metadata specific to AutoML Forecasting.
Fields
evaluatedDataItemsBigqueryUri

string

BigQuery destination uri for exported evaluated examples.

trainCostMilliNodeHours

string (int64 format)

Output only. The actual training cost of the model, expressed in milli node hours, i.e. 1,000 value in this field means 1 node hour. Guaranteed to not exceed the train budget.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageClassification

A TrainingJob that trains and uploads an AutoML Image Classification Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageClassificationInputs)

The input parameters of this TrainingJob.

metadata

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageClassificationMetadata)

The metadata information.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageClassificationInputs

(No description provided)
Fields
baseModelId

string

The ID of the base model. If it is specified, the new model will be trained based on the base model. Otherwise, the new model will be trained from scratch. The base model must be in the same Project and Location as the new Model to train, and have the same modelType.

budgetMilliNodeHours

string (int64 format)

The training budget of creating this model, expressed in milli node hours i.e. 1,000 value in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the metadata.successfulStopReason will be model-converged. Note, node_hour = actual_hour * number_of_nodes_involved. For modelType cloud(default), the budget must be between 8,000 and 800,000 milli node hours, inclusive. The default value is 192,000 which represents one day in wall time, considering 8 nodes are used. For model types mobile-tf-low-latency-1, mobile-tf-versatile-1, mobile-tf-high-accuracy-1, the training budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000 which represents one day in wall time on a single node that is used.

disableEarlyStopping

boolean

Use the entire training budget. This disables the early stopping feature. When false the early stopping feature is enabled, which means that AutoML Image Classification might stop training before the entire training budget has been used.

modelType

enum

(No description provided)

Enum type. Can be one of the following:
MODEL_TYPE_UNSPECIFIED Should not be set.
CLOUD A Model best tailored to be used within Google Cloud, and which cannot be exported. Default.
CLOUD_1 A model type best tailored to be used within Google Cloud, which cannot be exported externally. Compared to the CLOUD model above, it is expected to have higher prediction accuracy.
MOBILE_TF_LOW_LATENCY_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as TensorFlow or Core ML model and used on a mobile or edge device afterwards. Expected to have low latency, but may have lower prediction quality than other mobile models.
MOBILE_TF_VERSATILE_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as TensorFlow or Core ML model and used on a mobile or edge device afterwards.
MOBILE_TF_HIGH_ACCURACY_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as TensorFlow or Core ML model and used on a mobile or edge device afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other mobile models.
EFFICIENTNET EfficientNet model for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally.
MAXVIT MaxViT model for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally.
VIT ViT model for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally.
COCA CoCa model for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally.
multiLabel

boolean

If false, a single-label (multi-class) Model will be trained (i.e. assuming that for each image just up to one annotation may be applicable). If true, a multi-label Model will be trained (i.e. assuming that for each image multiple annotations may be applicable).

tunableParameter

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutomlImageTrainingTunableParameter)

Trainer type for Vision TrainRequest.

uptrainBaseModelId

string

The ID of the base model for upTraining. If it is specified, the new model will be upTrained based on the base model for upTraining. Otherwise, the new model will be trained from scratch. The base model for upTraining must be in the same Project and Location as the new Model to train, and have the same modelType.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageClassificationMetadata

(No description provided)
Fields
costMilliNodeHours

string (int64 format)

The actual training cost of creating this model, expressed in milli node hours, i.e. 1,000 value in this field means 1 node hour. Guaranteed to not exceed inputs.budgetMilliNodeHours.

successfulStopReason

enum

For successful job completions, this is the reason why the job has finished.

Enum type. Can be one of the following:
SUCCESSFUL_STOP_REASON_UNSPECIFIED Should not be set.
BUDGET_REACHED The inputs.budgetMilliNodeHours had been reached.
MODEL_CONVERGED Further training of the Model ceased to increase its quality, since it already has converged.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageObjectDetection

A TrainingJob that trains and uploads an AutoML Image Object Detection Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageObjectDetectionInputs)

The input parameters of this TrainingJob.

metadata

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageObjectDetectionMetadata)

The metadata information

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageObjectDetectionInputs

(No description provided)
Fields
budgetMilliNodeHours

string (int64 format)

The training budget of creating this model, expressed in milli node hours i.e. 1,000 value in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the metadata.successfulStopReason will be model-converged. Note, node_hour = actual_hour * number_of_nodes_involved. For modelType cloud(default), the budget must be between 20,000 and 900,000 milli node hours, inclusive. The default value is 216,000 which represents one day in wall time, considering 9 nodes are used. For model types mobile-tf-low-latency-1, mobile-tf-versatile-1, mobile-tf-high-accuracy-1 the training budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000 which represents one day in wall time on a single node that is used.

disableEarlyStopping

boolean

Use the entire training budget. This disables the early stopping feature. When false, the early stopping feature is enabled, which means that AutoML Image Object Detection might stop training before the entire training budget has been used.

modelType

enum

(No description provided)

Enum type. Can be one of the following:
MODEL_TYPE_UNSPECIFIED Should not be set.
CLOUD_HIGH_ACCURACY_1 A model best tailored to be used within Google Cloud, and which cannot be exported. Expected to have a higher latency, but should also have a higher prediction quality than other cloud models.
CLOUD_LOW_LATENCY_1 A model best tailored to be used within Google Cloud, and which cannot be exported. Expected to have a low latency, but may have lower prediction quality than other cloud models.
CLOUD_1 A model best tailored to be used within Google Cloud, and which cannot be exported. Compared to the CLOUD_HIGH_ACCURACY_1 and CLOUD_LOW_LATENCY_1 models above, it is expected to have higher prediction quality and lower latency.
MOBILE_TF_LOW_LATENCY_1 A model that, in addition to being available within Google Cloud can also be exported (see ModelService.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other mobile models.
MOBILE_TF_VERSATILE_1 A model that, in addition to being available within Google Cloud can also be exported (see ModelService.ExportModel) and used on a mobile or edge device with TensorFlow afterwards.
MOBILE_TF_HIGH_ACCURACY_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other mobile models.
CLOUD_STREAMING_1 A model best tailored to be used within Google Cloud, and which cannot be exported. Expected to best support predictions in streaming with lower latency and lower prediction quality than other cloud models.
SPINENET SpineNet for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally.
YOLO YOLO for Model Garden training with customizable hyperparameters. Best tailored to be used within Google Cloud, and cannot be exported externally.
tunableParameter

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutomlImageTrainingTunableParameter)

Trainer type for Vision TrainRequest.

uptrainBaseModelId

string

The ID of the base model for upTraining. If it is specified, the new model will be upTrained based on the base model for upTraining. Otherwise, the new model will be trained from scratch. The base model for upTraining must be in the same Project and Location as the new Model to train, and have the same modelType.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageObjectDetectionMetadata

(No description provided)
Fields
costMilliNodeHours

string (int64 format)

The actual training cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed not to exceed inputs.budgetMilliNodeHours.

successfulStopReason

enum

For successful job completions, this is the reason why the job has finished.

Enum type. Can be one of the following:
SUCCESSFUL_STOP_REASON_UNSPECIFIED Should not be set.
BUDGET_REACHED The inputs.budgetMilliNodeHours had been reached.
MODEL_CONVERGED Further training of the Model ceased to increase its quality, since it already has converged.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageSegmentation

A TrainingJob that trains and uploads an AutoML Image Segmentation Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageSegmentationInputs)

The input parameters of this TrainingJob.

metadata

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageSegmentationMetadata)

The metadata information.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageSegmentationInputs

(No description provided)
Fields
baseModelId

string

The ID of the base model. If it is specified, the new model will be trained based on the base model. Otherwise, the new model will be trained from scratch. The base model must be in the same Project and Location as the new Model to train, and have the same modelType.

budgetMilliNodeHours

string (int64 format)

The training budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal to or less than this value. If further model training ceases to provide any improvements, training will stop without using the full budget and metadata.successfulStopReason will be model-converged. Note: node_hour = actual_hour * number_of_nodes_involved, or equivalently actual_wall_clock_hours = train_budget_milli_node_hours / (number_of_nodes_involved * 1000). For modelType cloud-high-accuracy-1 (default), the budget must be between 20,000 and 2,000,000 milli node hours, inclusive. The default value is 192,000, which represents one day in wall time (1,000 milli * 24 hours * 8 nodes).
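The wall-clock formula stated above can be checked with a short sketch; the helper name is hypothetical:

```python
def wall_clock_hours(budget_milli_node_hours: int, node_count: int) -> float:
    # actual_wall_clock_hours = train_budget_milli_node_hours / (number_of_nodes_involved * 1000)
    return budget_milli_node_hours / (node_count * 1000)

# The documented default of 192,000 milli node hours on 8 nodes is one day of wall time:
assert wall_clock_hours(192_000, 8) == 24.0
```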

modelType

enum

(No description provided)

Enum type. Can be one of the following:
MODEL_TYPE_UNSPECIFIED Should not be set.
CLOUD_HIGH_ACCURACY_1 A model to be used via prediction calls to uCAIP API. Expected to have a higher latency, but should also have a higher prediction quality than other models.
CLOUD_LOW_ACCURACY_1 A model to be used via prediction calls to uCAIP API. Expected to have a lower latency but relatively lower prediction quality.
MOBILE_TF_LOW_LATENCY_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as TensorFlow model and used on a mobile or edge device afterwards. Expected to have low latency, but may have lower prediction quality than other mobile models.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlImageSegmentationMetadata

(No description provided)
Fields
costMilliNodeHours

string (int64 format)

The actual training cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed not to exceed inputs.budgetMilliNodeHours.

successfulStopReason

enum

For successful job completions, this is the reason why the job has finished.

Enum type. Can be one of the following:
SUCCESSFUL_STOP_REASON_UNSPECIFIED Should not be set.
BUDGET_REACHED The inputs.budgetMilliNodeHours had been reached.
MODEL_CONVERGED Further training of the Model ceased to increase its quality, since it already has converged.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTables

A TrainingJob that trains and uploads an AutoML Tables Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputs)

The input parameters of this TrainingJob.

metadata

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesMetadata)

The metadata information.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputs

(No description provided)
Fields
additionalExperiments[]

string

Additional experiment flags for the Tables training pipeline.

disableEarlyStopping

boolean

Use the entire training budget. This disables the early stopping feature. By default, the early stopping feature is enabled, which means that AutoML Tables might stop training before the entire training budget has been used.

exportEvaluatedDataItemsConfig

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionExportEvaluatedDataItemsConfig)

Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed.

optimizationObjective

string

Objective function the model is optimizing towards. The training process creates a model that maximizes/minimizes the value of the objective function over the validation set. The supported optimization objectives depend on the prediction type. If the field is not set, a default objective function is used. classification (binary): "maximize-au-roc" (default) - Maximize the area under the receiver operating characteristic (ROC) curve. "minimize-log-loss" - Minimize log loss. "maximize-au-prc" - Maximize the area under the precision-recall curve. "maximize-precision-at-recall" - Maximize precision for a specified recall value. "maximize-recall-at-precision" - Maximize recall for a specified precision value. classification (multi-class): "minimize-log-loss" (default) - Minimize log loss. regression: "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). "minimize-mae" - Minimize mean-absolute error (MAE). "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE).

optimizationObjectivePrecisionValue

number (float format)

Required when optimization_objective is "maximize-recall-at-precision". Must be between 0 and 1, inclusive.

optimizationObjectiveRecallValue

number (float format)

Required when optimization_objective is "maximize-precision-at-recall". Must be between 0 and 1, inclusive.
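The pairing between the two value fields above and their objectives can be checked client-side before submitting a job. The validator below is an illustrative sketch, not part of the API:

```python
def validate_objective_inputs(inputs: dict) -> None:
    """Check that the companion value field required by the chosen
    optimization objective is present and within [0, 1], inclusive."""
    companions = {
        "maximize-recall-at-precision": "optimizationObjectivePrecisionValue",
        "maximize-precision-at-recall": "optimizationObjectiveRecallValue",
    }
    field = companions.get(inputs.get("optimizationObjective"))
    if field is None:
        return  # this objective has no companion value field
    value = inputs.get(field)
    if value is None or not 0 <= value <= 1:
        raise ValueError(f"{field} must be set and between 0 and 1, inclusive")

validate_objective_inputs({
    "optimizationObjective": "maximize-recall-at-precision",
    "optimizationObjectivePrecisionValue": 0.9,
})  # passes silently
```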

predictionType

string

The type of prediction the Model is to produce. "classification" - Predict one out of multiple target values for each row. "regression" - Predict a value based on its relation to other values. This type is available only to columns that contain semantically numeric values, i.e. integers or floating-point numbers, even if stored as e.g. strings.

targetColumn

string

The column name of the target column that the model is to predict.

trainBudgetMilliNodeHours

string (int64 format)

Required. The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The backend will attempt to keep the final cost close to the budget, though it may end up noticeably smaller, at the backend's discretion. This especially may happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive.

transformations[]

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformation)

Each transformation applies a transform function to the given input column, and the result is used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter.
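A transformations[] payload might look like the following sketch; the column names are hypothetical:

```python
# Illustrative transformations[] payload; every entry names exactly one
# transformation type, keyed by the field names documented below.
transformations = [
    {"auto": {"columnName": "age"}},
    {"categorical": {"columnName": "store_id"}},
    {"numeric": {"columnName": "price", "invalidValuesAllowed": True}},
    # A BigQuery Struct column is flattened with "." as the delimiter:
    {"text": {"columnName": "address.street"}},
]

# Each entry carries a single transformation type:
assert all(len(entry) == 1 for entry in transformations)
```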

weightColumnName

string

Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000, inclusive; 0 means the row is ignored for training. If the weight column field is not set, then all rows are assumed to have an equal weight of 1.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformation

(No description provided)
Fields
auto

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationAutoTransformation)

(No description provided)

categorical

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationCategoricalTransformation)

(No description provided)

numeric

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationNumericTransformation)

(No description provided)

repeatedCategorical

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationCategoricalArrayTransformation)

(No description provided)

repeatedNumeric

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationNumericArrayTransformation)

(No description provided)

repeatedText

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTextArrayTransformation)

(No description provided)

text

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTextTransformation)

(No description provided)

timestamp

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTimestampTransformation)

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationAutoTransformation

The training pipeline will infer the proper transformation based on the statistics of the dataset.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationCategoricalArrayTransformation

Treats the column as a categorical array and performs the following transformation functions. * For each element in the array, convert the category name to a dictionary lookup index and generate an embedding for each index. Combine the embeddings of all elements into a single embedding using the mean. * Empty arrays are treated as an embedding of zeroes.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationCategoricalTransformation

The training pipeline will perform the following transformation functions. * The categorical string as is--no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear less than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationNumericArrayTransformation

Treats the column as a numerical array and performs the following transformation functions. * All transformations for Numerical types are applied to the average of all elements. * The average of empty arrays is treated as zero.
Fields
columnName

string

(No description provided)

invalidValuesAllowed

boolean

If invalid values are allowed, the training pipeline will create a boolean feature that indicates whether the value is valid. Otherwise, the training pipeline will discard the input row from the training data.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationNumericTransformation

The training pipeline will perform the following transformation functions. * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * z_score of log(value+1) when the value is greater than or equal to 0. Otherwise, this transformation is not applied and the value is considered a missing value. * A boolean value that indicates whether the value is valid.
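The numeric features listed above can be re-created locally as a rough sketch. The function name, signature, and output keys are hypothetical, and a single mean/std pair is reused for both z-scores purely for illustration:

```python
import math

def numeric_features(value: float, mean: float, std: float) -> dict:
    """Illustrative re-creation of the documented NumericTransformation
    outputs; not part of the Vertex AI API."""
    log1p = math.log(value + 1) if value >= 0 else None  # missing when value < 0
    return {
        "float_value": float(value),
        "z_score": (value - mean) / std,
        "log1p": log1p,
        "z_score_log1p": (log1p - mean) / std if log1p is not None else None,
        "is_valid": True,
    }

assert numeric_features(0.0, mean=0.0, std=1.0)["log1p"] == 0.0
assert numeric_features(-2.0, mean=0.0, std=1.0)["log1p"] is None
```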
Fields
columnName

string

(No description provided)

invalidValuesAllowed

boolean

If invalid values are allowed, the training pipeline will create a boolean feature that indicates whether the value is valid. Otherwise, the training pipeline will discard the input row from the training data.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTextArrayTransformation

Treats the column as a text array and performs the following transformation functions. * Concatenate all text values in the array into a single text value using a space (" ") as a delimiter, and then treat the result as a single text value. Apply the transformations for Text columns. * Empty arrays are treated as empty text.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTextTransformation

The training pipeline will perform the following transformation functions. * The text as is--no change to case, punctuation, spelling, tense, and so on. * Tokenize text to words. Convert each word to a dictionary lookup index and generate an embedding for each index. Combine the embeddings of all elements into a single embedding using the mean. * Tokenization is based on unicode script boundaries. * Missing values get their own lookup index and resulting embedding. * Stop-words receive no special treatment and are not removed.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesInputsTransformationTimestampTransformation

The training pipeline will perform the following transformation functions. * Apply the transformation functions for Numerical columns. * Determine the year, month, day, and weekday. Treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.
Fields
columnName

string

(No description provided)

invalidValuesAllowed

boolean

If invalid values are allowed, the training pipeline will create a boolean feature that indicates whether the value is valid. Otherwise, the training pipeline will discard the input row from the training data.

timeFormat

string

The format in which that time field is expressed. The time_format must either be one of: * unix-seconds * unix-milliseconds * unix-microseconds * unix-nanoseconds (for the number of seconds, milliseconds, microseconds, and nanoseconds, respectively, since the start of the Unix epoch); or be written in strftime syntax. If time_format is not set, then the default format is the RFC 3339 date-time format, where time-offset = "Z" (e.g. 1985-04-12T23:20:50.52Z)
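The documented time_format values can be exercised with a small parser sketch; the function name is hypothetical and the implementation is only illustrative:

```python
from datetime import datetime, timezone

def parse_time_column(raw: str, time_format: str = None) -> datetime:
    """Illustrative parser for the documented time_format values;
    not part of the Vertex AI API."""
    scales = {
        "unix-seconds": 1,
        "unix-milliseconds": 1_000,
        "unix-microseconds": 1_000_000,
        "unix-nanoseconds": 1_000_000_000,
    }
    if time_format in scales:
        return datetime.fromtimestamp(int(raw) / scales[time_format], tz=timezone.utc)
    if time_format is None:
        # Default: RFC 3339 date-time with time-offset "Z".
        return datetime.fromisoformat(raw.replace("Z", "+00:00"))
    return datetime.strptime(raw, time_format)  # strftime syntax

assert parse_time_column("0", "unix-seconds").year == 1970
assert parse_time_column("1985-04-12T23:20:50Z").year == 1985
```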

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTablesMetadata

Model metadata specific to AutoML Tables.
Fields
evaluatedDataItemsBigqueryUri

string

BigQuery destination URI for exported evaluated examples.

trainCostMilliNodeHours

string (int64 format)

Output only. The actual training cost of the model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed not to exceed the train budget.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextClassification

A TrainingJob that trains and uploads an AutoML Text Classification Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextClassificationInputs)

The input parameters of this TrainingJob.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextClassificationInputs

(No description provided)
Fields
multiLabel

boolean

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextExtraction

A TrainingJob that trains and uploads an AutoML Text Extraction Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextExtractionInputs)

The input parameters of this TrainingJob.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextSentiment

A TrainingJob that trains and uploads an AutoML Text Sentiment Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextSentimentInputs)

The input parameters of this TrainingJob.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlTextSentimentInputs

(No description provided)
Fields
sentimentMax

integer (int32 format)

A sentiment is expressed as an integer ordinal, where a higher value means a more positive sentiment. The range of sentiments that will be used is between 0 and sentimentMax (inclusive on both ends), and all values in the range must be represented in the dataset before a model can be created. Only the Annotations with this sentimentMax will be used for training. The sentimentMax value must be between 1 and 10 (inclusive).
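The two constraints stated above (the allowed sentimentMax range, and full coverage of every ordinal in the dataset) can be pre-checked client-side. This validator is an illustrative sketch, not part of the API:

```python
def validate_sentiment_dataset(sentiments, sentiment_max: int) -> None:
    """Illustrative pre-flight check of the documented sentimentMax
    constraints; not part of the Vertex AI API."""
    if not 1 <= sentiment_max <= 10:
        raise ValueError("sentimentMax must be between 1 and 10, inclusive")
    missing = set(range(sentiment_max + 1)) - set(sentiments)
    if missing:
        raise ValueError(
            f"every ordinal 0..{sentiment_max} must appear in the dataset; "
            f"missing {sorted(missing)}"
        )

validate_sentiment_dataset([0, 1, 2, 1, 0], sentiment_max=2)  # passes silently
```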

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoActionRecognition

A TrainingJob that trains and uploads an AutoML Video Action Recognition Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoActionRecognitionInputs)

The input parameters of this TrainingJob.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoActionRecognitionInputs

(No description provided)
Fields
modelType

enum

(No description provided)

Enum type. Can be one of the following:
MODEL_TYPE_UNSPECIFIED Should not be set.
CLOUD A model best tailored to be used within Google Cloud, and which cannot be exported. Default.
MOBILE_VERSATILE_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as a TensorFlow or TensorFlow Lite model and used on a mobile or edge device afterwards.
MOBILE_JETSON_VERSATILE_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) to a Jetson device afterwards.
MOBILE_CORAL_VERSATILE_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as a TensorFlow or TensorFlow Lite model and used on a Coral device afterwards.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoClassification

A TrainingJob that trains and uploads an AutoML Video Classification Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoClassificationInputs)

The input parameters of this TrainingJob.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoClassificationInputs

(No description provided)
Fields
modelType

enum

(No description provided)

Enum type. Can be one of the following:
MODEL_TYPE_UNSPECIFIED Should not be set.
CLOUD A model best tailored to be used within Google Cloud, and which cannot be exported. Default.
MOBILE_VERSATILE_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as a TensorFlow or TensorFlow Lite model and used on a mobile or edge device afterwards.
MOBILE_JETSON_VERSATILE_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) to a Jetson device afterwards.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoObjectTracking

A TrainingJob that trains and uploads an AutoML Video ObjectTracking Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoObjectTrackingInputs)

The input parameters of this TrainingJob.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutoMlVideoObjectTrackingInputs

(No description provided)
Fields
modelType

enum

(No description provided)

Enum type. Can be one of the following:
MODEL_TYPE_UNSPECIFIED Should not be set.
CLOUD A model best tailored to be used within Google Cloud, and which cannot be exported. Default.
MOBILE_VERSATILE_1 A model that, in addition to being available within Google Cloud, can also be exported (see ModelService.ExportModel) as a TensorFlow or TensorFlow Lite model and used on a mobile or edge device afterwards.
MOBILE_CORAL_VERSATILE_1 A versatile model that is meant to be exported (see ModelService.ExportModel) and used on a Google Coral device.
MOBILE_CORAL_LOW_LATENCY_1 A model that trades off quality for low latency, to be exported (see ModelService.ExportModel) and used on a Google Coral device.
MOBILE_JETSON_VERSATILE_1 A versatile model that is meant to be exported (see ModelService.ExportModel) and used on an NVIDIA Jetson device.
MOBILE_JETSON_LOW_LATENCY_1 A model that trades off quality for low latency, to be exported (see ModelService.ExportModel) and used on an NVIDIA Jetson device.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionAutomlImageTrainingTunableParameter

A wrapper class which contains the tunable parameters in an AutoML Image training job.
Fields
checkpointName

string

Optional. A unique name of a pretrained model checkpoint provided in Model Garden; it will be mapped to a GCS location internally.

datasetConfig

map (key: string, value: string)

Customizable dataset settings, used in the model_garden_trainer.

studySpec

object (GoogleCloudAiplatformV1StudySpec)

Optional. StudySpec of the hyperparameter tuning job. Required for model_garden_trainer.

trainerConfig

map (key: string, value: string)

Customizable trainer settings, used in the model_garden_trainer.

trainerType

enum

(No description provided)

Enum type. Can be one of the following:
TRAINER_TYPE_UNSPECIFIED Default value.
AUTOML_TRAINER (No description provided)
MODEL_GARDEN_TRAINER (No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionCustomJobMetadata

(No description provided)
Fields
backingCustomJob

string

The resource name of the CustomJob that has been created to carry out this custom task.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionCustomTask

A TrainingJob that trains a custom code Model.
Fields
inputs

object (GoogleCloudAiplatformV1CustomJobSpec)

The input parameters of this CustomTask.

metadata

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionCustomJobMetadata)

The metadata information.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionExportEvaluatedDataItemsConfig

Configuration for exporting test set predictions to a BigQuery table.
Fields
destinationBigqueryUri

string

URI of desired destination BigQuery table. Expected format: bq://{project_id}:{dataset_id}:{table} If not specified, then results are exported to the following auto-created BigQuery table: {project_id}:export_evaluated_examples_{model_name}_{yyyy_MM_dd'T'HH_mm_ss_SSS'Z'}.evaluated_examples
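The documented bq://{project_id}:{dataset_id}:{table} shape can be split with a small helper; this is an illustrative sketch, not part of the API:

```python
def parse_bq_uri(uri: str) -> tuple:
    """Split a destination URI of the documented
    bq://{project_id}:{dataset_id}:{table} shape into its parts."""
    prefix = "bq://"
    if not uri.startswith(prefix):
        raise ValueError("expected a bq:// URI")
    parts = uri[len(prefix):].split(":")
    if len(parts) != 3:
        raise ValueError("expected bq://{project_id}:{dataset_id}:{table}")
    return tuple(parts)

assert parse_bq_uri("bq://my-project:my_dataset:my_table") == (
    "my-project", "my_dataset", "my_table"
)
```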

overrideExistingTable

boolean

If true and an export destination is specified, then the contents of the destination are overwritten. Otherwise, if the export destination already exists, then the export operation fails.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHierarchyConfig

Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies.
Fields
groupColumns[]

string

A list of time series attribute column names that define the time series hierarchy. Only one level of hierarchy is supported, e.g. 'region' for a hierarchy of stores or 'department' for a hierarchy of products. If multiple columns are specified, time series will be grouped by their combined values, e.g. ('blue', 'large') for 'color' and 'size'; up to 5 columns are accepted. If no group columns are specified, all time series are considered to be part of the same group.

groupTemporalTotalWeight

number (double format)

The weight of the loss for predictions aggregated over both the horizon and time series in the same hierarchy group.

groupTotalWeight

number (double format)

The weight of the loss for predictions aggregated over time series in the same group.

temporalTotalWeight

number (double format)

The weight of the loss for predictions aggregated over the horizon for a single time series.
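One plausible reading of how the three weight fields above combine the aggregated losses is a weighted sum, sketched below. The actual training objective is internal to the service, so treat this purely as an illustration; all names are hypothetical:

```python
def combined_loss(base, group, temporal, group_temporal,
                  group_total_weight=0.0, temporal_total_weight=0.0,
                  group_temporal_total_weight=0.0):
    """Hypothetical weighted sum of the per-prediction loss and the three
    aggregated losses named by the HierarchyConfig weight fields."""
    return (base
            + group_total_weight * group
            + temporal_total_weight * temporal
            + group_temporal_total_weight * group_temporal)

# With all weights left at zero, only the base loss contributes:
assert combined_loss(1.0, 5.0, 5.0, 5.0) == 1.0
```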

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHyperparameterTuningJobMetadata

(No description provided)
Fields
backingHyperparameterTuningJob

string

The resource name of the HyperparameterTuningJob that has been created to carry out this HyperparameterTuning task.

bestTrialBackingCustomJob

string

The resource name of the CustomJob that has been created to run the best Trial of this HyperparameterTuning task.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHyperparameterTuningJobSpec

(No description provided)
Fields
maxFailedTrialCount

integer (int32 format)

The number of failed Trials that need to be seen before failing the HyperparameterTuningJob. If set to 0, Vertex AI decides how many Trials must fail before the whole job fails.

maxTrialCount

integer (int32 format)

The desired total number of Trials.

parallelTrialCount

integer (int32 format)

The desired number of Trials to run in parallel.

studySpec

object (GoogleCloudAiplatformV1StudySpec)

Study configuration of the HyperparameterTuningJob.

trialJobSpec

object (GoogleCloudAiplatformV1CustomJobSpec)

The spec of a trial job. The same spec applies to the CustomJobs created in all the trials.
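An assembled HyperparameterTuningJobSpec payload might look like the following sketch; the values are hypothetical and the nested specs are left as empty placeholders:

```python
# Illustrative HyperparameterTuningJobSpec payload; values are hypothetical.
tuning_spec = {
    "maxTrialCount": 20,
    "parallelTrialCount": 5,
    "maxFailedTrialCount": 0,  # 0 lets Vertex AI decide the failure threshold
    "studySpec": {},           # GoogleCloudAiplatformV1StudySpec placeholder
    "trialJobSpec": {},        # GoogleCloudAiplatformV1CustomJobSpec, shared by all trials
}

# Running more trials in parallel than the desired total would be inconsistent:
assert tuning_spec["parallelTrialCount"] <= tuning_spec["maxTrialCount"]
```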

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHyperparameterTuningTask

A TrainingJob that tunes Hyperparameters of a custom code Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHyperparameterTuningJobSpec)

The input parameters of this HyperparameterTuningTask.

metadata

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHyperparameterTuningJobMetadata)

The metadata information.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecasting

A TrainingJob that trains and uploads an AutoML Forecasting Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputs)

The input parameters of this TrainingJob.

metadata

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingMetadata)

The metadata information.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputs

(No description provided)
Fields
additionalExperiments[]

string

Additional experiment flags for the time series forecasting training.

availableAtForecastColumns[]

string

Names of columns that are available and provided when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column column) that is known at forecast. For example, predicted weather for a specific day.

contextWindow

string (int64 format)

The amount of time into the past training and prediction data is used for model training and prediction respectively. Expressed in number of units defined by the data_granularity field.

dataGranularity

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsGranularity)

Expected difference in time granularity between rows in the data.

exportEvaluatedDataItemsConfig

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionExportEvaluatedDataItemsConfig)

Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed.

forecastHorizon

string (int64 format)

The amount of time into the future for which forecasted values for the target are returned. Expressed in number of units defined by the data_granularity field.

hierarchyConfig

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHierarchyConfig)

Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies.

holidayRegions[]

string

The geographical region based on which the holiday effect is applied in modeling, by adding a holiday categorical array feature that includes all holidays matching the date. This option is only allowed when data_granularity is day. By default, holiday effect modeling is disabled. To turn it on, specify the holiday region using this option.

optimizationObjective

string

Objective function the model is optimizing towards. The training process creates a model that optimizes the value of the objective function over the validation set. The supported optimization objectives: * "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). * "minimize-mae" - Minimize mean-absolute error (MAE). * "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). * "minimize-rmspe" - Minimize root-mean-squared percentage error (RMSPE). * "minimize-wape-mae" - Minimize the combination of weighted absolute percentage error (WAPE) and mean-absolute-error (MAE). * "minimize-quantile-loss" - Minimize the quantile loss at the quantiles defined in quantiles. * "minimize-mape" - Minimize the mean absolute percentage error.

quantiles[]

number (double format)

Quantiles to use for the minimize-quantile-loss optimization_objective. Up to 5 quantiles are allowed, with values between 0 and 1, exclusive. Required if the value of optimization_objective is minimize-quantile-loss. Represents the percent quantiles to use for that objective. Quantiles must be unique.
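The quantiles[] constraints stated above can be checked client-side; this validator is an illustrative sketch, not part of the API:

```python
def validate_quantiles(quantiles) -> None:
    """Illustrative check of the documented quantiles[] constraints;
    not part of the Vertex AI API."""
    if not 1 <= len(quantiles) <= 5:
        raise ValueError("up to 5 quantiles are allowed")
    if len(set(quantiles)) != len(quantiles):
        raise ValueError("quantiles must be unique")
    if any(not 0 < q < 1 for q in quantiles):
        raise ValueError("quantile values must be between 0 and 1, exclusive")

validate_quantiles([0.1, 0.5, 0.9])  # passes silently
```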

targetColumn

string

The name of the column that the Model is to predict values for. This column must be unavailable at forecast.

timeColumn

string

The name of the column that identifies time order in the time series. This column must be available at forecast.

timeSeriesAttributeColumns[]

string

Column names that should be used as attribute columns. The value of these columns does not vary as a function of time. For example, store ID or item color.

timeSeriesIdentifierColumn

string

The name of the column that identifies the time series.

trainBudgetMilliNodeHours

string (int64 format)

Required. The train budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The backend attempts to bring the final cost close to the budget, though it may end up being noticeably smaller, especially when further model training ceases to provide any improvement. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive.

transformations[]

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformation)

Each transformation applies its transform function to the given input column, and the result is used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter.

unavailableAtForecastColumns[]

string

Names of columns that are unavailable when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column) that is unknown before the forecast. For example, actual weather on a given day.

validationOptions

string

Validation options for the data validation component. The available options are: * "fail-pipeline" (default) - run the validation and fail the pipeline if validation fails. * "ignore-validation" - ignore the validation results and continue.

weightColumn

string

Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000, inclusive; 0 means the row is ignored for training. If the weight column field is not set, all rows are assumed to have an equal weight of 1. This column must be available at forecast.

windowConfig

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionWindowConfig)

Config containing strategy for generating sliding windows.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsGranularity

A duration of time expressed in time granularity units.
Fields
quantity

string (int64 format)

The number of granularity_units between data points in the training data. If granularity_unit is "minute", the value can be 1, 5, 10, 15, or 30. For all other values of granularity_unit, it must be 1.

unit

string

The time granularity unit of this time period. The supported units are: * "minute" * "hour" * "day" * "week" * "month" * "year"
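The quantity and unit rules above can be enforced together before building the request body. A sketch, assuming int64-format fields are serialized as JSON strings (the `validate_granularity` helper is illustrative):

```python
VALID_UNITS = {"minute", "hour", "day", "week", "month", "year"}
VALID_MINUTE_QUANTITIES = {1, 5, 10, 15, 30}

def validate_granularity(unit, quantity):
    """Enforce the documented dataGranularity rules: quantity may be
    1, 5, 10, 15, or 30 for "minute" and must be 1 for all other units."""
    if unit not in VALID_UNITS:
        raise ValueError(f"unsupported unit: {unit}")
    if unit == "minute" and quantity not in VALID_MINUTE_QUANTITIES:
        raise ValueError("minute granularity must be 1, 5, 10, 15, or 30")
    if unit != "minute" and quantity != 1:
        raise ValueError(f"quantity must be 1 when unit is {unit!r}")
    # int64-format fields travel as JSON strings in the REST API.
    return {"unit": unit, "quantity": str(quantity)}
```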

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformation

(No description provided)
Fields
auto

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationAutoTransformation)

(No description provided)

categorical

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationCategoricalTransformation)

(No description provided)

numeric

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationNumericTransformation)

(No description provided)

text

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationTextTransformation)

(No description provided)

timestamp

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationTimestampTransformation)

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationAutoTransformation

The training pipeline infers the proper transformation based on the statistics of the dataset.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationCategoricalTransformation

The training pipeline performs the following transformations: * The categorical string as is - no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear fewer than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationNumericTransformation

The training pipeline performs the following transformations: * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0; otherwise, this transformation is not applied and the value is treated as missing. * z_score of log(value+1) when the value is greater than or equal to 0; otherwise, this transformation is not applied and the value is treated as missing.
Fields
columnName

string

(No description provided)
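The four derived numeric values listed above can be sketched outside the pipeline. This is an illustration of the documented behavior, not the pipeline's actual implementation (`numeric_features` is a hypothetical helper):

```python
import math
import statistics

def numeric_features(column_values, x):
    """Compute the four documented numeric transformations for one value
    `x`, deriving means and standard deviations from the training column.
    Negative values yield None (missing) for the log-based features,
    per the docs."""
    mean = statistics.mean(column_values)
    stdev = statistics.pstdev(column_values)
    logs = [math.log(v + 1) for v in column_values if v >= 0]
    log_mean = statistics.mean(logs)
    log_stdev = statistics.pstdev(logs)
    z = (x - mean) / stdev
    if x >= 0:
        log1p = math.log(x + 1)
        z_log1p = (log1p - log_mean) / log_stdev
    else:
        log1p = z_log1p = None  # treated as a missing value
    return {"value": float(x), "z_score": z, "log1p": log1p, "z_log1p": z_log1p}
```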

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationTextTransformation

The training pipeline performs the following transformations: * The text as is - no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingInputsTransformationTimestampTransformation

The training pipeline performs the following transformations: * Apply the transformation functions for Numeric columns. * Determine the year, month, day, and weekday; treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.
Fields
columnName

string

(No description provided)

timeFormat

string

The format in which the time field is expressed. The time_format must either be one of: * unix-seconds * unix-milliseconds * unix-microseconds * unix-nanoseconds (the number of seconds, milliseconds, microseconds, or nanoseconds since the start of the Unix epoch); or be written in strftime syntax. If time_format is not set, the default format is the RFC 3339 date-time format, where time-offset = "Z" (e.g. 1985-04-12T23:20:50.52Z).

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionSeq2SeqPlusForecastingMetadata

Model metadata specific to Seq2Seq Plus Forecasting.
Fields
evaluatedDataItemsBigqueryUri

string

BigQuery destination uri for exported evaluated examples.

trainCostMilliNodeHours

string (int64 format)

Output only. The actual training cost of the model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed not to exceed the train budget.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecasting

A TrainingJob that trains and uploads an AutoML Forecasting Model.
Fields
inputs

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputs)

The input parameters of this TrainingJob.

metadata

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingMetadata)

The metadata information.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputs

(No description provided)
Fields
additionalExperiments[]

string

Additional experiment flags for the time series forecasting training.

availableAtForecastColumns[]

string

Names of columns that are available and provided when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column column) that is known at forecast. For example, predicted weather for a specific day.

contextWindow

string (int64 format)

The amount of time into the past training and prediction data is used for model training and prediction respectively. Expressed in number of units defined by the data_granularity field.

dataGranularity

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsGranularity)

Expected difference in time granularity between rows in the data.

exportEvaluatedDataItemsConfig

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionExportEvaluatedDataItemsConfig)

Configuration for exporting test set predictions to a BigQuery table. If this configuration is absent, then the export is not performed.

forecastHorizon

string (int64 format)

The amount of time into the future for which forecasted values for the target are returned. Expressed in number of units defined by the data_granularity field.

hierarchyConfig

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionHierarchyConfig)

Configuration that defines the hierarchical relationship of time series and parameters for hierarchical forecasting strategies.

holidayRegions[]

string

The geographical region based on which the holiday effect is applied in modeling, by adding a holiday categorical array feature that includes all holidays matching the date. This option is only allowed when data_granularity is day. By default, holiday effect modeling is disabled. To turn it on, specify the holiday region using this option.

optimizationObjective

string

Objective function the model is optimizing towards. The training process creates a model that optimizes the value of the objective function over the validation set. The supported optimization objectives: * "minimize-rmse" (default) - Minimize root-mean-squared error (RMSE). * "minimize-mae" - Minimize mean-absolute error (MAE). * "minimize-rmsle" - Minimize root-mean-squared log error (RMSLE). * "minimize-rmspe" - Minimize root-mean-squared percentage error (RMSPE). * "minimize-wape-mae" - Minimize the combination of weighted absolute percentage error (WAPE) and mean-absolute-error (MAE). * "minimize-quantile-loss" - Minimize the quantile loss at the quantiles defined in quantiles. * "minimize-mape" - Minimize the mean absolute percentage error.

quantiles[]

number (double format)

Quantiles to use for the minimize-quantile-loss optimization_objective. Up to 5 quantiles are allowed, with values strictly between 0 and 1. Required if the value of optimization_objective is minimize-quantile-loss. Represents the percent quantiles to use for that objective. Quantiles must be unique.

targetColumn

string

The name of the column that the Model is to predict values for. This column must be unavailable at forecast.

timeColumn

string

The name of the column that identifies time order in the time series. This column must be available at forecast.

timeSeriesAttributeColumns[]

string

Column names that should be used as attribute columns. The value of these columns does not vary as a function of time. For example, store ID or item color.

timeSeriesIdentifierColumn

string

The name of the column that identifies the time series.

trainBudgetMilliNodeHours

string (int64 format)

Required. The train budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The backend attempts to bring the final cost close to the budget, though it may end up being noticeably smaller, especially when further model training ceases to provide any improvement. If the budget is set to a value known to be insufficient to train a model for the given dataset, the training won't be attempted and will error. The train budget must be between 1,000 and 72,000 milli node hours, inclusive.

transformations[]

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformation)

Each transformation applies its transform function to the given input column, and the result is used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using "." as the delimiter.

unavailableAtForecastColumns[]

string

Names of columns that are unavailable when a forecast is requested. These columns contain information for the given entity (identified by the time_series_identifier_column) that is unknown before the forecast. For example, actual weather on a given day.

validationOptions

string

Validation options for the data validation component. The available options are: * "fail-pipeline" (default) - run the validation and fail the pipeline if validation fails. * "ignore-validation" - ignore the validation results and continue.

weightColumn

string

Column name that should be used as the weight column. Higher values in this column give more importance to the row during model training. The column must have numeric values between 0 and 10000, inclusive; 0 means the row is ignored for training. If the weight column field is not set, all rows are assumed to have an equal weight of 1. This column must be available at forecast.

windowConfig

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionWindowConfig)

Config containing strategy for generating sliding windows.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsGranularity

A duration of time expressed in time granularity units.
Fields
quantity

string (int64 format)

The number of granularity_units between data points in the training data. If granularity_unit is "minute", the value can be 1, 5, 10, 15, or 30. For all other values of granularity_unit, it must be 1.

unit

string

The time granularity unit of this time period. The supported units are: * "minute" * "hour" * "day" * "week" * "month" * "year"

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformation

(No description provided)
Fields
auto

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationAutoTransformation)

(No description provided)

categorical

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationCategoricalTransformation)

(No description provided)

numeric

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationNumericTransformation)

(No description provided)

text

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationTextTransformation)

(No description provided)

timestamp

object (GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationTimestampTransformation)

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationAutoTransformation

The training pipeline infers the proper transformation based on the statistics of the dataset.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationCategoricalTransformation

The training pipeline performs the following transformations: * The categorical string as is - no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index. * Categories that appear fewer than 5 times in the training dataset are treated as the "unknown" category. The "unknown" category gets its own special lookup index and resulting embedding.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationNumericTransformation

The training pipeline performs the following transformations: * The value converted to float32. * The z_score of the value. * log(value+1) when the value is greater than or equal to 0; otherwise, this transformation is not applied and the value is treated as missing. * z_score of log(value+1) when the value is greater than or equal to 0; otherwise, this transformation is not applied and the value is treated as missing.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationTextTransformation

The training pipeline performs the following transformations: * The text as is - no change to case, punctuation, spelling, tense, and so on. * Convert the category name to a dictionary lookup index and generate an embedding for each index.
Fields
columnName

string

(No description provided)

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingInputsTransformationTimestampTransformation

The training pipeline performs the following transformations: * Apply the transformation functions for Numeric columns. * Determine the year, month, day, and weekday; treat each value from the timestamp as a Categorical column. * Invalid numerical values (for example, values that fall outside of a typical timestamp range, or are extreme values) receive no special treatment and are not removed.
Fields
columnName

string

(No description provided)

timeFormat

string

The format in which the time field is expressed. The time_format must either be one of: * unix-seconds * unix-milliseconds * unix-microseconds * unix-nanoseconds (the number of seconds, milliseconds, microseconds, or nanoseconds since the start of the Unix epoch); or be written in strftime syntax. If time_format is not set, the default format is the RFC 3339 date-time format, where time-offset = "Z" (e.g. 1985-04-12T23:20:50.52Z).

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionTftForecastingMetadata

Model metadata specific to TFT Forecasting.
Fields
evaluatedDataItemsBigqueryUri

string

BigQuery destination uri for exported evaluated examples.

trainCostMilliNodeHours

string (int64 format)

Output only. The actual training cost of the model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed not to exceed the train budget.

GoogleCloudAiplatformV1SchemaTrainingjobDefinitionWindowConfig

Config that contains the strategy used to generate sliding windows in time series training. A window is a series of rows that comprise the context up to the time of prediction, followed by the horizon. The corresponding row for each window marks the start of the forecast horizon. Each window is used as an input example for training/evaluation.
Fields
column

string

Name of the column that should be used to generate sliding windows. The column should contain either booleans or string booleans; if the value of the row is True, generate a sliding window with the horizon starting at that row. The column will not be used as a feature in training.

maxCount

string (int64 format)

Maximum number of windows that should be generated across all time series.

strideLength

string (int64 format)

Stride length used to generate input examples. Within one time series, every {$STRIDE_LENGTH} rows will be used to generate a sliding window.
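The strideLength and maxCount interaction can be illustrated with a small sketch that returns the row indices where windows would start (this is an approximation of the documented behavior, not the backend's implementation):

```python
def sliding_window_starts(n_rows, stride_length, max_count=None):
    """Return the row indices (within one time series) at which sliding
    windows would start: every stride_length-th row marks the start of a
    forecast horizon, optionally capped by max_count."""
    starts = list(range(0, n_rows, stride_length))
    if max_count is not None:
        starts = starts[:max_count]
    return starts
```

For example, a 10-row series with a stride of 3 yields window starts at rows 0, 3, 6, and 9.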

GoogleCloudAiplatformV1SchemaVertex

A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
Fields
x

number (double format)

X coordinate.

y

number (double format)

Y coordinate.

GoogleCloudAiplatformV1SchemaVideoActionRecognitionAnnotation

Annotation details specific to video action recognition.
Fields
annotationSpecId

string

The resource Id of the AnnotationSpec that this Annotation pertains to.

displayName

string

The display name of the AnnotationSpec that this Annotation pertains to.

timeSegment

object (GoogleCloudAiplatformV1SchemaTimeSegment)

This Annotation applies to the time period represented by the TimeSegment. If it's not set, the Annotation applies to the whole video.

GoogleCloudAiplatformV1SchemaVideoClassificationAnnotation

Annotation details specific to video classification.
Fields
annotationSpecId

string

The resource Id of the AnnotationSpec that this Annotation pertains to.

displayName

string

The display name of the AnnotationSpec that this Annotation pertains to.

timeSegment

object (GoogleCloudAiplatformV1SchemaTimeSegment)

This Annotation applies to the time period represented by the TimeSegment. If it's not set, the Annotation applies to the whole video.

GoogleCloudAiplatformV1SchemaVideoDataItem

Payload of Video DataItem.
Fields
gcsUri

string

Required. A Google Cloud Storage URI that points to the original video in the user's bucket. The video can be up to 50 GB in size and up to 3 hours in duration.

mimeType

string

Output only. The MIME type of the video content. Only videos in the following MIME types are supported: video/mp4, video/avi, video/quicktime.

GoogleCloudAiplatformV1SchemaVideoDatasetMetadata

The metadata of Datasets that contain Video DataItems.
Fields
dataItemSchemaUri

string

Points to a YAML file stored on Google Cloud Storage describing payload of the Video DataItems that belong to this Dataset.

gcsBucket

string

Google Cloud Storage Bucket name that contains the blob data of this Dataset.

GoogleCloudAiplatformV1SchemaVideoObjectTrackingAnnotation

Annotation details specific to video object tracking.
Fields
annotationSpecId

string

The resource Id of the AnnotationSpec that this Annotation pertains to.

displayName

string

The display name of the AnnotationSpec that this Annotation pertains to.

instanceId

string (int64 format)

The instance of the object, expressed as a positive integer. Used to track the same object across different frames.

timeOffset

string (Duration format)

A time (frame) of a video to which this annotation pertains. Represented as the duration since the video's start.

xMax

number (double format)

The rightmost coordinate of the bounding box.

xMin

number (double format)

The leftmost coordinate of the bounding box.

yMax

number (double format)

The bottommost coordinate of the bounding box.

yMin

number (double format)

The topmost coordinate of the bounding box.

GoogleCloudAiplatformV1SchemaVisualInspectionClassificationLabelSavedQueryMetadata

(No description provided)
Fields
multiLabel

boolean

Whether or not the classification label is multi_label.

GoogleCloudAiplatformV1SearchDataItemsResponse

Response message for DatasetService.SearchDataItems.
Fields
dataItemViews[]

object (GoogleCloudAiplatformV1DataItemView)

The DataItemViews read.

nextPageToken

string

A token to retrieve the next page of results. Pass it to SearchDataItemsRequest.page_token to obtain that page.

GoogleCloudAiplatformV1SearchEntryPoint

Google search entry point.
Fields
renderedContent

string

Optional. Web content snippet that can be embedded in a web page or an app webview.

sdkBlob

string (bytes format)

Optional. Base64-encoded JSON representing an array of tuples.

GoogleCloudAiplatformV1SearchFeaturesResponse

Response message for FeaturestoreService.SearchFeatures.
Fields
features[]

object (GoogleCloudAiplatformV1Feature)

The Features matching the request. Fields returned: * name * description * labels * create_time * update_time

nextPageToken

string

A token, which can be sent as SearchFeaturesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.
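This page-token contract is the standard loop for draining all results. A sketch against plain dict responses (the `search_page` callable stands in for the real API client method):

```python
def search_all_features(search_page):
    """Drain a paginated Search/List response: `search_page` takes a
    page_token and returns a dict with "features" and, while more pages
    remain, "nextPageToken". An omitted token means no subsequent pages."""
    features, token = [], None
    while True:
        page = search_page(page_token=token)
        features.extend(page.get("features", []))
        token = page.get("nextPageToken")
        if not token:
            return features
```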

GoogleCloudAiplatformV1SearchMigratableResourcesRequest

Request message for MigrationService.SearchMigratableResources.
Fields
filter

string

A filter for your search. You can use the following types of filters: * Resource type filters. The following strings filter for a specific type of MigratableResource: * ml_engine_model_version:* * automl_model:* * automl_dataset:* * data_labeling_dataset:* * "Migrated or not" filters. The following strings filter for resources that either have or have not already been migrated: * last_migrate_time:* filters for migrated resources. * NOT last_migrate_time:* filters for not yet migrated resources.

pageSize

integer (int32 format)

The standard page size. The default and maximum value is 100.

pageToken

string

The standard page token.

GoogleCloudAiplatformV1SearchMigratableResourcesResponse

Response message for MigrationService.SearchMigratableResources.
Fields
migratableResources[]

object (GoogleCloudAiplatformV1MigratableResource)

All migratable resources that can be migrated to the location specified in the request.

nextPageToken

string

The standard next-page token. The migratable_resources may not fill page_size in SearchMigratableResourcesRequest even when there are subsequent pages.

GoogleCloudAiplatformV1SearchModelDeploymentMonitoringStatsAnomaliesRequest

Request message for JobService.SearchModelDeploymentMonitoringStatsAnomalies.
Fields
deployedModelId

string

Required. The DeployedModel ID of the [ModelDeploymentMonitoringObjectiveConfig.deployed_model_id].

endTime

string (Timestamp format)

The latest timestamp of stats being generated. If not set, indicates fetching stats up to the latest possible one.

featureDisplayName

string

The feature display name. If specified, only return the stats belonging to this feature. Format: ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies.feature_display_name, example: "user_destination".

objectives[]

object (GoogleCloudAiplatformV1SearchModelDeploymentMonitoringStatsAnomaliesRequestStatsAnomaliesObjective)

Required. Objectives of the stats to retrieve.

pageSize

integer (int32 format)

The standard list page size.

pageToken

string

A page token received from a previous JobService.SearchModelDeploymentMonitoringStatsAnomalies call.

startTime

string (Timestamp format)

The earliest timestamp of stats being generated. If not set, indicates fetching stats back to the earliest possible one.

GoogleCloudAiplatformV1SearchModelDeploymentMonitoringStatsAnomaliesRequestStatsAnomaliesObjective

Stats requested for specific objective.
Fields
topFeatureCount

integer (int32 format)

If set, all attribution scores between SearchModelDeploymentMonitoringStatsAnomaliesRequest.start_time and SearchModelDeploymentMonitoringStatsAnomaliesRequest.end_time are fetched, and the page token does not take effect in this case. Only used to retrieve attribution scores for the top Features, which have the highest attribution scores in the latest monitoring run.

type

enum

(No description provided)

Enum type. Can be one of the following:
MODEL_DEPLOYMENT_MONITORING_OBJECTIVE_TYPE_UNSPECIFIED Default value, should not be set.
RAW_FEATURE_SKEW Raw feature values' stats to detect skew between Training-Prediction datasets.
RAW_FEATURE_DRIFT Raw feature values' stats to detect drift between Serving-Prediction datasets.
FEATURE_ATTRIBUTION_SKEW Feature attribution scores to detect skew between Training-Prediction datasets.
FEATURE_ATTRIBUTION_DRIFT Feature attribution scores to detect skew between Prediction datasets collected within different time windows.

GoogleCloudAiplatformV1SearchModelDeploymentMonitoringStatsAnomaliesResponse

Response message for JobService.SearchModelDeploymentMonitoringStatsAnomalies.
Fields
monitoringStats[]

object (GoogleCloudAiplatformV1ModelMonitoringStatsAnomalies)

Stats retrieved for requested objectives. There are at most 1000 ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies.prediction_stats in the response.

nextPageToken

string

The page token that can be used by the next JobService.SearchModelDeploymentMonitoringStatsAnomalies call.

GoogleCloudAiplatformV1SearchNearestEntitiesRequest

The request message for FeatureOnlineStoreService.SearchNearestEntities.
Fields
query

object (GoogleCloudAiplatformV1NearestNeighborQuery)

Required. The query.

returnFullEntity

boolean

Optional. If set to true, the full entities (including all vector values and metadata) of the nearest neighbors are returned; otherwise only the entity IDs of the nearest neighbors are returned. Note that returning full entities significantly increases the latency and cost of the query.

GoogleCloudAiplatformV1SearchNearestEntitiesResponse

Response message for FeatureOnlineStoreService.SearchNearestEntities
Fields
nearestNeighbors

object (GoogleCloudAiplatformV1NearestNeighbors)

The nearest neighbors of the query entity.

GoogleCloudAiplatformV1ServiceAccountSpec

Configuration for the use of custom service account to run the workloads.
Fields
enableCustomServiceAccount

boolean

Required. If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.

serviceAccount

string

Optional. Required when all of the following conditions are met: * enable_custom_service_account is true; * any runtime is specified via ResourceRuntimeSpec at creation time, for example, Ray. The user must have the iam.serviceAccounts.actAs permission on this service account, and the specified runtime containers then run as it. Do not set this field if you want to submit jobs using a custom service account to this PersistentResource after creation; instead, specify the service_account only inside the job.

GoogleCloudAiplatformV1ShieldedVmConfig

A set of Shielded Instance options. See Images using supported Shielded VM features.
Fields
enableSecureBoot

boolean

Defines whether the instance has Secure Boot enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.

GoogleCloudAiplatformV1SmoothGradConfig

Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf
Fields
featureNoiseSigma

object (GoogleCloudAiplatformV1FeatureNoiseSigma)

This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.

noiseSigma

number (float format)

This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about normalization. For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1. If the distribution is different per feature, set feature_noise_sigma instead for each feature.

noisySampleCount

integer (int32 format)

The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.
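Putting the two knobs together, SmoothGrad averages gradients over noisy copies of the input. A toy sketch with plain Python lists (`grad_fn` stands in for the model's gradient function; the 15% heuristic in `recommended_sigma` is the midpoint of the documented 10%-20% range):

```python
import random
import statistics

def recommended_sigma(feature_values):
    """Midpoint (15%) of the documented 10%-20%-of-standard-deviation
    guidance for noise_sigma on a normalized feature."""
    return 0.15 * statistics.pstdev(feature_values)

def smoothgrad(grad_fn, x, noise_sigma=0.1, noisy_sample_count=3, seed=0):
    """Average grad_fn over noisy copies of the input x; defaults mirror
    the documented noise_sigma (0.1) and noisy_sample_count (3)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(noisy_sample_count):
        noisy = [xi + rng.gauss(0.0, noise_sigma) for xi in x]
        samples.append(grad_fn(noisy))
    # element-wise mean across the noisy gradient samples
    return [statistics.mean(g[i] for g in samples) for i in range(len(x))]
```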

GoogleCloudAiplatformV1SpecialistPool

SpecialistPool represents a customer's own workforce for working on their data labeling jobs. It includes a group of specialist managers and workers. Managers are responsible for managing the workers in this pool as well as the customer's data labeling jobs associated with this pool. Customers create a specialist pool and start data labeling jobs on Cloud; managers and workers handle the jobs using the CrowdCompute console.
Fields
displayName

string

Required. The user-defined name of the SpecialistPool. The name can be up to 128 characters long and can consist of any UTF-8 characters. This field should be unique at the project level.

name

string

Required. The resource name of the SpecialistPool.

pendingDataLabelingJobs[]

string

Output only. The resource name of the pending data labeling jobs.

specialistManagerEmails[]

string

The email addresses of the managers in the SpecialistPool.

specialistManagersCount

integer (int32 format)

Output only. The number of managers in this SpecialistPool.

specialistWorkerEmails[]

string

The email addresses of workers in the SpecialistPool.

GoogleCloudAiplatformV1StartNotebookRuntimeOperationMetadata

Metadata information for NotebookService.StartNotebookRuntime.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

progressMessage

string

A human-readable message that shows the intermediate progress details of NotebookRuntime.

GoogleCloudAiplatformV1StratifiedSplit

Assigns input data to the training, validation, and test sets so that the distribution of values found in the categorical column (as specified by the key field) is mirrored within each split. The fraction values determine the relative sizes of the splits. For example, if the specified column has three values, with 50% of the rows having value "A", 25% value "B", and 25% value "C", and the split fractions are specified as 80/10/10, then the training set will constitute 80% of the input data, with about 50% of the training set rows having the value "A" for the specified column, about 25% having the value "B", and about 25% having the value "C". Only the top 500 occurring values are used; any values not in the top 500 are randomly assigned to a split. If fewer than three rows contain a specific value, those rows are randomly assigned. Supported only for tabular Datasets.
Fields
key

string

Required. The key is a name of one of the Dataset's data columns. The key provided must be for a categorical column.

testFraction

number (double format)

The fraction of the input data that is to be used to evaluate the Model.

trainingFraction

number (double format)

The fraction of the input data that is to be used to train the Model.

validationFraction

number (double format)

The fraction of the input data that is to be used to validate the Model.
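The stratified split above can be sketched as a JSON body (shown as a Python dict). The column name "customer_segment" is hypothetical; the fractions reproduce the 80/10/10 example from the type description, and the small calculation below illustrates how the column's value distribution is mirrored inside the training split.

```python
# Illustrative StratifiedSplit config matching the 80/10/10 example above.
stratified_split = {
    "key": "customer_segment",   # hypothetical categorical column
    "trainingFraction": 0.8,
    "validationFraction": 0.1,
    "testFraction": 0.1,
}

# Each split mirrors the column's value distribution. With 50% "A",
# 25% "B", 25% "C" overall, a 1000-row dataset yields ~800 training rows:
# ~400 with "A", ~200 with "B", ~200 with "C".
rows = 1000
training_rows = {
    value: rows * stratified_split["trainingFraction"] * share
    for value, share in [("A", 0.5), ("B", 0.25), ("C", 0.25)]
}
```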

GoogleCloudAiplatformV1StreamRawPredictRequest

Request message for PredictionService.StreamRawPredict.
Fields
httpBody

object (GoogleApiHttpBody)

The prediction input. Supports HTTP headers and arbitrary data payload.

GoogleCloudAiplatformV1StreamingPredictRequest

Request message for PredictionService.StreamingPredict. The first message must contain endpoint field and optionally input. The subsequent messages must contain input.
Fields
inputs[]

object (GoogleCloudAiplatformV1Tensor)

The prediction input.

parameters

object (GoogleCloudAiplatformV1Tensor)

The parameters that govern the prediction.
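The first-message requirement described above can be sketched as a message sequence (shown as Python dicts mirroring the JSON bodies). The endpoint resource name and tensor values are hypothetical; the Tensor shape follows the GoogleCloudAiplatformV1Tensor fields documented later in this reference.

```python
# Sketch of a StreamingPredict message sequence: the first message carries
# the endpoint, subsequent messages carry only inputs.
first_message = {
    "endpoint": "projects/my-proj/locations/us-central1/endpoints/123",
    "inputs": [{"dtype": "FLOAT", "shape": ["1", "3"],
                "floatVal": [0.1, 0.2, 0.3]}],
}
followup_message = {
    "inputs": [{"dtype": "FLOAT", "shape": ["1", "3"],
                "floatVal": [0.4, 0.5, 0.6]}],
}
```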

GoogleCloudAiplatformV1StreamingPredictResponse

Response message for PredictionService.StreamingPredict.
Fields
outputs[]

object (GoogleCloudAiplatformV1Tensor)

The prediction output.

parameters

object (GoogleCloudAiplatformV1Tensor)

The parameters that govern the prediction.

GoogleCloudAiplatformV1StreamingReadFeatureValuesRequest

Request message for FeaturestoreOnlineServingService.StreamingReadFeatureValues.
Fields
entityIds[]

string

Required. IDs of entities to read Feature values of. The maximum number of IDs is 100. For example, for a machine learning model predicting user clicks on a website, an entity ID could be user_123.

featureSelector

object (GoogleCloudAiplatformV1FeatureSelector)

Required. Selector choosing Features of the target EntityType. Feature IDs will be deduplicated.

GoogleCloudAiplatformV1StringArray

A list of string values.
Fields
values[]

string

A list of string values.

GoogleCloudAiplatformV1Study

A message representing a Study.
Fields
createTime

string (Timestamp format)

Output only. Time at which the study was created.

displayName

string

Required. Describes the Study; the default value is an empty string.

inactiveReason

string

Output only. A human-readable reason why the Study is inactive. This should be empty if a study is ACTIVE or COMPLETED.

name

string

Output only. The name of a study. The study's globally unique identifier. Format: projects/{project}/locations/{location}/studies/{study}

state

enum

Output only. The detailed state of a Study.

Enum type. Can be one of the following:
STATE_UNSPECIFIED The study state is unspecified.
ACTIVE The study is active.
INACTIVE The study is stopped due to an internal error.
COMPLETED The study is done when the service exhausts the parameter search space or max_trial_count is reached.
studySpec

object (GoogleCloudAiplatformV1StudySpec)

Required. Configuration of the Study.

GoogleCloudAiplatformV1StudySpec

Represents specification of a Study.
Fields
algorithm

enum

The search algorithm specified for the Study.

Enum type. Can be one of the following:
ALGORITHM_UNSPECIFIED The default algorithm used by Vertex AI for hyperparameter tuning and Vertex AI Vizier.
GRID_SEARCH Simple grid search within the feasible space. To use grid search, all parameters must be INTEGER, CATEGORICAL, or DISCRETE.
RANDOM_SEARCH Simple random search within the feasible space.
convexAutomatedStoppingSpec

object (GoogleCloudAiplatformV1StudySpecConvexAutomatedStoppingSpec)

The automated early stopping spec using convex stopping rule.

decayCurveStoppingSpec

object (GoogleCloudAiplatformV1StudySpecDecayCurveAutomatedStoppingSpec)

The automated early stopping spec using decay curve rule.

measurementSelectionType

enum

Describes which measurement selection type will be used.

Enum type. Can be one of the following:
MEASUREMENT_SELECTION_TYPE_UNSPECIFIED Will be treated as LAST_MEASUREMENT.
LAST_MEASUREMENT Use the last measurement reported.
BEST_MEASUREMENT Use the best measurement reported.
medianAutomatedStoppingSpec

object (GoogleCloudAiplatformV1StudySpecMedianAutomatedStoppingSpec)

The automated early stopping spec using median rule.

metrics[]

object (GoogleCloudAiplatformV1StudySpecMetricSpec)

Required. Metric specs for the Study.

observationNoise

enum

The observation noise level of the study. Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.

Enum type. Can be one of the following:
OBSERVATION_NOISE_UNSPECIFIED The default noise level chosen by Vertex AI.
LOW Vertex AI assumes that the objective function is (nearly) perfectly reproducible, and will never repeat the same Trial parameters.
HIGH Vertex AI will estimate the amount of noise in metric evaluations; it may repeat the same Trial parameters more than once.
parameters[]

object (GoogleCloudAiplatformV1StudySpecParameterSpec)

Required. The set of parameters to tune.

studyStoppingConfig

object (GoogleCloudAiplatformV1StudySpecStudyStoppingConfig)

Conditions for automated stopping of a Study. Enable automated stopping by configuring at least one condition.

GoogleCloudAiplatformV1StudySpecConvexAutomatedStoppingSpec

Configuration for ConvexAutomatedStoppingSpec. When there are enough completed trials (configured by min_measurement_count), for pending trials with enough measurements and steps, the policy first computes an overestimate of the objective value at max_num_steps according to the slope of the incomplete objective value curve. No prediction can be made if the curve is completely flat. If the overestimation is worse than the best objective value of the completed trials, this pending trial will be early-stopped, but a last measurement will be added to the pending trial with max_num_steps and predicted objective value from the autoregression model.
Fields
learningRateParameterName

string

The hyper-parameter name used in the tuning job that stands for learning rate. Leave it blank if learning rate is not a parameter in the tuning job. The learning_rate is used to estimate the objective value of the ongoing trial.

maxStepCount

string (int64 format)

Steps used in predicting the final objective for early stopped trials. In general, it's set to be the same as the defined steps in training / tuning. If not defined, it will be learned from the completed trials. When use_elapsed_duration is true, this field is set to the maximum elapsed seconds.

minMeasurementCount

string (int64 format)

The minimal number of measurements in a Trial. Early-stopping checks will not trigger if there are fewer than min_measurement_count+1 completed trials, or if a pending trial has fewer than min_measurement_count measurements. If not defined, the default value is 5.

minStepCount

string (int64 format)

Minimum number of steps for a trial to complete. Trials which do not have a measurement with step_count > min_step_count won't be considered for early stopping. It's ok to set it to 0, and a trial can be early stopped at any stage. By default, min_step_count is set to be one-tenth of the max_step_count. When use_elapsed_duration is true, this field is set to the minimum elapsed seconds.

updateAllStoppedTrials

boolean

ConvexAutomatedStoppingSpec by default only updates the trials that need to be early stopped using a newly trained auto-regressive model. When this flag is set to True, all stopped trials from the beginning are potentially updated in terms of their final_measurement. Also note that the training logic of autoregressive models is different in this case. Enabling this option has shown better results, and it may become the default option in the future.

useElapsedDuration

boolean

This bool determines whether the rule is applied based on elapsed_secs or steps. If use_elapsed_duration==false, the early stopping decision is made from the predicted objective values at the target steps. If use_elapsed_duration==true, elapsed_secs is used instead of steps. In this case, the parameters max_num_steps and min_num_steps are overloaded to contain max_elapsed_seconds and min_elapsed_seconds.

GoogleCloudAiplatformV1StudySpecDecayCurveAutomatedStoppingSpec

The decay curve automated stopping rule builds a Gaussian Process Regressor to predict the final objective value of a Trial based on the already completed Trials and the intermediate measurements of the current Trial. Early stopping is requested for the current Trial if there is very low probability to exceed the optimal value found so far.
Fields
useElapsedDuration

boolean

True if Measurement.elapsed_duration is used as the x-axis of each Trial's decay curve. Otherwise, Measurement.step_count will be used as the x-axis.

GoogleCloudAiplatformV1StudySpecMedianAutomatedStoppingSpec

The median automated stopping rule stops a pending Trial if the Trial's best objective_value is strictly below the median 'performance' of all completed Trials reported up to the Trial's last measurement. Currently, 'performance' refers to the running average of the objective values reported by the Trial in each measurement.
Fields
useElapsedDuration

boolean

True if the median automated stopping rule applies to Measurement.elapsed_duration. In that case, the elapsed_duration field of the current Trial's latest measurement is used to compute the median objective value across the completed Trials.

GoogleCloudAiplatformV1StudySpecMetricSpec

Represents a metric to optimize.
Fields
goal

enum

Required. The optimization goal of the metric.

Enum type. Can be one of the following:
GOAL_TYPE_UNSPECIFIED Goal Type will default to maximize.
MAXIMIZE Maximize the goal metric.
MINIMIZE Minimize the goal metric.
metricId

string

Required. The ID of the metric. Must not contain whitespaces and must be unique amongst all MetricSpecs.

safetyConfig

object (GoogleCloudAiplatformV1StudySpecMetricSpecSafetyMetricConfig)

Used for safe search. In this case, the metric will be a safety metric. You must provide a separate metric for the objective metric.
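As a sketch, a metrics list combining one objective metric with one safety metric could look like the following (field names from this reference; metric IDs, threshold, and fraction values are hypothetical, and the safetyConfig fields follow GoogleCloudAiplatformV1StudySpecMetricSpecSafetyMetricConfig below):

```python
# Sketch: one objective metric plus a separate safety metric.
metrics = [
    {"metricId": "accuracy", "goal": "MAXIMIZE"},   # objective metric
    {
        "metricId": "latency_ms",                   # hypothetical safety metric
        "goal": "MINIMIZE",
        "safetyConfig": {
            "safetyThreshold": 200.0,               # safe/unsafe boundary
            "desiredMinSafeTrialsFraction": 0.8,    # target >=80% safe trials
        },
    },
]
```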

GoogleCloudAiplatformV1StudySpecMetricSpecSafetyMetricConfig

Used in safe optimization to specify threshold levels and risk tolerance.
Fields
desiredMinSafeTrialsFraction

number (double format)

Desired minimum fraction of safe trials (over total number of trials) that should be targeted by the algorithm at any time during the study (best effort). This should be between 0.0 and 1.0 and a value of 0.0 means that there is no minimum and an algorithm proceeds without targeting any specific fraction. A value of 1.0 means that the algorithm attempts to only Suggest safe Trials.

safetyThreshold

number (double format)

Safety threshold (boundary value between safe and unsafe). NOTE that if you leave SafetyMetricConfig unset, a default value of 0 will be used.

GoogleCloudAiplatformV1StudySpecParameterSpec

Represents a single parameter to optimize.
Fields
categoricalValueSpec

object (GoogleCloudAiplatformV1StudySpecParameterSpecCategoricalValueSpec)

The value spec for a 'CATEGORICAL' parameter.

conditionalParameterSpecs[]

object (GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpec)

A conditional parameter node is active if the parameter's value matches the conditional node's parent_value_condition. If two items in conditional_parameter_specs have the same name, they must have disjoint parent_value_condition.

discreteValueSpec

object (GoogleCloudAiplatformV1StudySpecParameterSpecDiscreteValueSpec)

The value spec for a 'DISCRETE' parameter.

doubleValueSpec

object (GoogleCloudAiplatformV1StudySpecParameterSpecDoubleValueSpec)

The value spec for a 'DOUBLE' parameter.

integerValueSpec

object (GoogleCloudAiplatformV1StudySpecParameterSpecIntegerValueSpec)

The value spec for an 'INTEGER' parameter.

parameterId

string

Required. The ID of the parameter. Must not contain whitespaces and must be unique amongst all ParameterSpecs.

scaleType

enum

How the parameter should be scaled. Leave unset for CATEGORICAL parameters.

Enum type. Can be one of the following:
SCALE_TYPE_UNSPECIFIED By default, no scaling is applied.
UNIT_LINEAR_SCALE Scales the feasible space to (0, 1) linearly.
UNIT_LOG_SCALE Scales the feasible space logarithmically to (0, 1). The entire feasible space must be strictly positive.
UNIT_REVERSE_LOG_SCALE Scales the feasible space "reverse" logarithmically to (0, 1). The result is that values close to the top of the feasible space are spread out more than points near the bottom. The entire feasible space must be strictly positive.
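The value-spec and scale-type fields above can be sketched as a parameters list (shown as Python dicts mirroring the JSON bodies). Parameter IDs and ranges are illustrative; note that UNIT_LOG_SCALE requires a strictly positive feasible space, and CATEGORICAL parameters omit scaleType.

```python
# Sketch of three parameter specs of different types.
parameters = [
    {"parameterId": "learning_rate",
     "doubleValueSpec": {"minValue": 1e-5, "maxValue": 1e-1},
     "scaleType": "UNIT_LOG_SCALE"},   # feasible space strictly positive
    {"parameterId": "batch_size",
     "discreteValueSpec": {"values": [16.0, 32.0, 64.0]}},  # increasing order
    {"parameterId": "optimizer",
     "categoricalValueSpec": {"values": ["adam", "sgd"]}},  # no scaleType
]
```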

GoogleCloudAiplatformV1StudySpecParameterSpecCategoricalValueSpec

Value specification for a parameter in CATEGORICAL type.
Fields
defaultValue

string

A default value for a CATEGORICAL parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point. Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.

values[]

string

Required. The list of possible categories.

GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpec

Represents a parameter spec with condition from its parent parameter.
Fields
parameterSpec

object (GoogleCloudAiplatformV1StudySpecParameterSpec)

Required. The spec for a conditional parameter.

parentCategoricalValues

object (GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpecCategoricalValueCondition)

The spec for matching values from a parent parameter of CATEGORICAL type.

parentDiscreteValues

object (GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpecDiscreteValueCondition)

The spec for matching values from a parent parameter of DISCRETE type.

parentIntValues

object (GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpecIntValueCondition)

The spec for matching values from a parent parameter of INTEGER type.

GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpecCategoricalValueCondition

Represents the spec to match categorical values from parent parameter.
Fields
values[]

string

Required. Matches values of the parent parameter of 'CATEGORICAL' type. All values must exist in categorical_value_spec of parent parameter.

GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpecDiscreteValueCondition

Represents the spec to match discrete values from parent parameter.
Fields
values[]

number (double format)

Required. Matches values of the parent parameter of 'DISCRETE' type. All values must exist in discrete_value_spec of parent parameter. The Epsilon of the value matching is 1e-10.

GoogleCloudAiplatformV1StudySpecParameterSpecConditionalParameterSpecIntValueCondition

Represents the spec to match integer values from parent parameter.
Fields
values[]

string (int64 format)

Required. Matches values of the parent parameter of 'INTEGER' type. All values must lie in integer_value_spec of parent parameter.
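Putting the conditional-spec pieces together, a child parameter that is only active for certain parent values can be sketched as follows (all parameter names and ranges are illustrative):

```python
# Sketch: "momentum" is only tuned when the parent CATEGORICAL parameter
# "optimizer" takes the value "sgd".
optimizer_spec = {
    "parameterId": "optimizer",
    "categoricalValueSpec": {"values": ["adam", "sgd"]},
    "conditionalParameterSpecs": [{
        # "sgd" must exist in the parent's categoricalValueSpec.values
        "parentCategoricalValues": {"values": ["sgd"]},
        "parameterSpec": {
            "parameterId": "momentum",
            "doubleValueSpec": {"minValue": 0.0, "maxValue": 0.99},
        },
    }],
}
```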

GoogleCloudAiplatformV1StudySpecParameterSpecDiscreteValueSpec

Value specification for a parameter in DISCRETE type.
Fields
defaultValue

number (double format)

A default value for a DISCRETE parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point. It automatically rounds to the nearest feasible discrete point. Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.

values[]

number (double format)

Required. A list of possible values. The list should be in increasing order and at least 1e-10 apart. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values.

GoogleCloudAiplatformV1StudySpecParameterSpecDoubleValueSpec

Value specification for a parameter in DOUBLE type.
Fields
defaultValue

number (double format)

A default value for a DOUBLE parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point. Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.

maxValue

number (double format)

Required. Inclusive maximum value of the parameter.

minValue

number (double format)

Required. Inclusive minimum value of the parameter.

GoogleCloudAiplatformV1StudySpecParameterSpecIntegerValueSpec

Value specification for a parameter in INTEGER type.
Fields
defaultValue

string (int64 format)

A default value for an INTEGER parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point. Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.

maxValue

string (int64 format)

Required. Inclusive maximum value of the parameter.

minValue

string (int64 format)

Required. Inclusive minimum value of the parameter.

GoogleCloudAiplatformV1StudySpecStudyStoppingConfig

The configuration (stopping conditions) for automated stopping of a Study. Conditions include trial budgets, time budgets, and convergence detection.
Fields
maxDurationNoProgress

string (Duration format)

If the objective value has not improved for this much time, stop the study. WARNING: Effective only for single-objective studies.

maxNumTrials

integer (int32 format)

If there are more than this many trials, stop the study.

maxNumTrialsNoProgress

integer (int32 format)

If the objective value has not improved for this many consecutive trials, stop the study. WARNING: Effective only for single-objective studies.

maximumRuntimeConstraint

object (GoogleCloudAiplatformV1StudyTimeConstraint)

If the specified time or duration has passed, stop the study.

minNumTrials

integer (int32 format)

If there are fewer than this many COMPLETED trials, do not stop the study.

minimumRuntimeConstraint

object (GoogleCloudAiplatformV1StudyTimeConstraint)

Each "stopping rule" in this proto specifies an "if" condition. Before Vizier would generate a new suggestion, it first checks each specified stopping rule, from top to bottom in this list. Note that the first few rules (e.g. minimum_runtime_constraint, min_num_trials) will prevent other stopping rules from being evaluated until they are met. For example, setting min_num_trials=5 and always_stop_after= 1 hour means that the Study will ONLY stop after it has 5 COMPLETED trials, even if more than an hour has passed since its creation. It follows the first applicable rule (whose "if" condition is satisfied) to make a stopping decision. If none of the specified rules are applicable, then Vizier decides that the study should not stop. If Vizier decides that the study should stop, the study enters STOPPING state (or STOPPING_ASAP if should_stop_asap = true). IMPORTANT: The automatic study state transition happens precisely as described above; that is, deleting trials or updating StudyConfig NEVER automatically moves the study state back to ACTIVE. If you want to resume a Study that was stopped, 1) change the stopping conditions if necessary, 2) activate the study, and then 3) ask for suggestions. If the specified time or duration has not passed, do not stop the study.

shouldStopAsap

boolean

If true, a Study enters STOPPING_ASAP whenever it would normally enter the STOPPING state. The bottom line: set this to true if you want to interrupt ongoing evaluations of Trials as soon as the study stopping condition is met. (See the Study.State documentation for the source of truth.)
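A combined stopping config using the fields above might look like the following sketch (all budget values are illustrative; Duration fields use the protobuf JSON "Ns" string form):

```python
# Sketch: stop after 100 trials, 20 no-progress trials, or 24 hours,
# but never before 10 COMPLETED trials; interrupt running Trials on stop.
study_stopping_config = {
    "minNumTrials": 10,                 # floor: don't stop before this
    "maxNumTrials": 100,                # trial budget
    "maxNumTrialsNoProgress": 20,       # single-objective studies only
    "maximumRuntimeConstraint": {"maxDuration": "86400s"},  # 24 hours
    "shouldStopAsap": True,             # enter STOPPING_ASAP, not STOPPING
}
```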

GoogleCloudAiplatformV1StudyTimeConstraint

Time-based Constraint for Study
Fields
endTime

string (Timestamp format)

Compares the wallclock time to this time. Must use UTC timezone.

maxDuration

string (Duration format)

Counts the wallclock time passed since the creation of this Study.

GoogleCloudAiplatformV1SuggestTrialsMetadata

Details of operations that perform Trials suggestion.
Fields
clientId

string

The identifier of the client that is requesting the suggestion. If multiple SuggestTrialsRequests have the same client_id, the service will return the identical suggested Trial if the Trial is pending, and provide a new Trial if the last suggested Trial was completed.

genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for suggesting Trials.

GoogleCloudAiplatformV1SuggestTrialsRequest

Request message for VizierService.SuggestTrials.
Fields
clientId

string

Required. The identifier of the client that is requesting the suggestion. If multiple SuggestTrialsRequests have the same client_id, the service will return the identical suggested Trial if the Trial is pending, and provide a new Trial if the last suggested Trial was completed.

contexts[]

object (GoogleCloudAiplatformV1TrialContext)

Optional. This allows you to specify the "context" for a Trial; a context is a slice (a subspace) of the search space. Typical uses for contexts:

1) You are using Vizier to tune a server for best performance, but there's a strong weekly cycle. The context specifies the day-of-week, allowing Tuesday to generalize from Wednesday without assuming that everything is identical.
2) Imagine you're optimizing some medical treatment for people. As they walk in the door, you know certain facts about them (e.g. sex, weight, height, blood pressure). Put that information in the context, and Vizier will adapt its suggestions to the patient.
3) You want to run a fair A/B test efficiently. Specify the "A" and "B" conditions as contexts, and Vizier will generalize between the "A" and "B" conditions. If they are similar, this allows Vizier to converge to the optimum faster than if "A" and "B" were separate Studies.

NOTE: You can also enter contexts as REQUESTED Trials, e.g. via the CreateTrial() RPC; that's the asynchronous option, where you don't need a close association between contexts and suggestions.
NOTE: All the Parameters you set in a context MUST be defined in the Study.
NOTE: You must supply 0 or $suggestion_count contexts. If you don't supply any contexts, Vizier will make suggestions from the full search space specified in the StudySpec; if you supply a full set of contexts, each suggestion will match the corresponding context.
NOTE: A Context with no features set matches anything, and allows suggestions from the full search space.
NOTE: Contexts MUST lie within the search space specified in the StudySpec; it's an error if they don't.
NOTE: Contexts preferentially match ACTIVE then REQUESTED Trials before new suggestions are generated.
NOTE: Generation of suggestions involves a match between a Context and (optionally) a REQUESTED Trial; if that match is not fully specified, a suggestion will be generated in the merged subspace.

suggestionCount

integer (int32 format)

Required. The number of suggestions requested. It must be positive.
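A minimal request body for the fields above could be sketched as follows (the client_id value is caller-chosen and illustrative):

```python
# Sketch of a SuggestTrials request body. Reusing the same clientId returns
# the identical pending Trial rather than generating a new suggestion.
suggest_request = {
    "clientId": "worker-0",   # hypothetical client identifier
    "suggestionCount": 2,     # must be positive
}
```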

GoogleCloudAiplatformV1SuggestTrialsResponse

Response message for VizierService.SuggestTrials.
Fields
endTime

string (Timestamp format)

The time at which operation processing completed.

startTime

string (Timestamp format)

The time at which the operation was started.

studyState

enum

The state of the Study.

Enum type. Can be one of the following:
STATE_UNSPECIFIED The study state is unspecified.
ACTIVE The study is active.
INACTIVE The study is stopped due to an internal error.
COMPLETED The study is done when the service exhausts the parameter search space or max_trial_count is reached.
trials[]

object (GoogleCloudAiplatformV1Trial)

A list of Trials.

GoogleCloudAiplatformV1SupervisedHyperParameters

Hyperparameters for SFT.
Fields
adapterSize

enum

Optional. Adapter size for tuning.

Enum type. Can be one of the following:
ADAPTER_SIZE_UNSPECIFIED Adapter size is unspecified.
ADAPTER_SIZE_ONE Adapter size 1.
ADAPTER_SIZE_FOUR Adapter size 4.
ADAPTER_SIZE_EIGHT Adapter size 8.
ADAPTER_SIZE_SIXTEEN Adapter size 16.
epochCount

string (int64 format)

Optional. Number of complete passes the model makes over the entire training dataset during training.

learningRateMultiplier

number (double format)

Optional. Multiplier for adjusting the default learning rate.

GoogleCloudAiplatformV1SupervisedTuningDataStats

Tuning data statistics for Supervised Tuning.
Fields
totalBillableCharacterCount

string (int64 format)

Output only. Number of billable characters in the tuning dataset.

totalTuningCharacterCount

string (int64 format)

Output only. Number of tuning characters in the tuning dataset.

tuningDatasetExampleCount

string (int64 format)

Output only. Number of examples in the tuning dataset.

tuningStepCount

string (int64 format)

Output only. Number of tuning steps for this Tuning Job.

userDatasetExamples[]

object (GoogleCloudAiplatformV1Content)

Output only. Sample user messages in the training dataset URI.

userInputTokenDistribution

object (GoogleCloudAiplatformV1SupervisedTuningDatasetDistribution)

Output only. Dataset distributions for the user input tokens.

userMessagePerExampleDistribution

object (GoogleCloudAiplatformV1SupervisedTuningDatasetDistribution)

Output only. Dataset distributions for the messages per example.

userOutputTokenDistribution

object (GoogleCloudAiplatformV1SupervisedTuningDatasetDistribution)

Output only. Dataset distributions for the user output tokens.

GoogleCloudAiplatformV1SupervisedTuningDatasetDistribution

Dataset distribution for Supervised Tuning.
Fields
buckets[]

object (GoogleCloudAiplatformV1SupervisedTuningDatasetDistributionDatasetBucket)

Output only. Defines the histogram bucket.

max

number (double format)

Output only. The maximum of the population values.

mean

number (double format)

Output only. The arithmetic mean of the values in the population.

median

number (double format)

Output only. The median of the values in the population.

min

number (double format)

Output only. The minimum of the population values.

p5

number (double format)

Output only. The 5th percentile of the values in the population.

p95

number (double format)

Output only. The 95th percentile of the values in the population.

sum

string (int64 format)

Output only. Sum of a given population of values.

GoogleCloudAiplatformV1SupervisedTuningDatasetDistributionDatasetBucket

Dataset bucket used to create a histogram for the distribution given a population of values.
Fields
count

number (double format)

Output only. Number of values in the bucket.

left

number (double format)

Output only. Left bound of the bucket.

right

number (double format)

Output only. Right bound of the bucket.

GoogleCloudAiplatformV1SupervisedTuningSpec

Tuning Spec for Supervised Tuning.
Fields
hyperParameters

object (GoogleCloudAiplatformV1SupervisedHyperParameters)

Optional. Hyperparameters for SFT.

trainingDatasetUri

string

Required. Cloud Storage path to file containing training dataset for tuning. The dataset must be formatted as a JSONL file.

validationDatasetUri

string

Optional. Cloud Storage path to file containing validation dataset for tuning. The dataset must be formatted as a JSONL file.
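The tuning spec and its nested hyperparameters can be sketched as one JSON body (shown as a Python dict). The bucket paths are hypothetical; note that int64-format fields such as epochCount are carried as JSON strings.

```python
# Sketch of a supervised tuning spec with optional hyperparameters.
supervised_tuning_spec = {
    "trainingDatasetUri": "gs://my-bucket/train.jsonl",   # JSONL required
    "validationDatasetUri": "gs://my-bucket/val.jsonl",   # optional
    "hyperParameters": {
        "epochCount": "3",                   # int64 fields are JSON strings
        "learningRateMultiplier": 0.5,       # scales the default LR
        "adapterSize": "ADAPTER_SIZE_FOUR",
    },
}
```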

GoogleCloudAiplatformV1SyncFeatureViewResponse

Response message for FeatureOnlineStoreAdminService.SyncFeatureView.
Fields
featureViewSync

string

Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}/featureViewSyncs/{feature_view_sync}

GoogleCloudAiplatformV1TFRecordDestination

The storage details for TFRecord output content.
Fields
gcsDestination

object (GoogleCloudAiplatformV1GcsDestination)

Required. Google Cloud Storage location.

GoogleCloudAiplatformV1Tensor

A tensor value type.
Fields
boolVal[]

boolean

Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order. BOOL

bytesVal[]

string (bytes format)

STRING

doubleVal[]

number (double format)

DOUBLE

dtype

enum

The data type of tensor.

Enum type. Can be one of the following:
DATA_TYPE_UNSPECIFIED Not a legal value for DataType. Used to indicate a DataType field has not been set.
BOOL Data types that all computation devices are expected to be capable to support.
STRING (No description provided)
FLOAT (No description provided)
DOUBLE (No description provided)
INT8 (No description provided)
INT16 (No description provided)
INT32 (No description provided)
INT64 (No description provided)
UINT8 (No description provided)
UINT16 (No description provided)
UINT32 (No description provided)
UINT64 (No description provided)
floatVal[]

number (float format)

FLOAT

int64Val[]

string (int64 format)

INT64

intVal[]

integer (int32 format)

INT_8 INT_16 INT_32

listVal[]

object (GoogleCloudAiplatformV1Tensor)

A list of tensor values.

shape[]

string (int64 format)

Shape of the tensor.

stringVal[]

string

STRING

structVal

map (key: string, value: object (GoogleCloudAiplatformV1Tensor))

A map of string to tensor.

tensorVal

string (bytes format)

Serialized raw tensor content.

uint64Val[]

string (uint64 format)

UINT64

uintVal[]

integer (uint32 format)

UINT8 UINT16 UINT32
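The row-major flattening rule above can be illustrated with a small 2x3 tensor (values are arbitrary). Only the value field matching dtype is populated, and int64-format shape entries are carried as JSON strings.

```python
# Sketch: a 2x3 FLOAT tensor flattened in row-major order into floatVal.
matrix = [[1.0, 2.0, 3.0],
          [4.0, 5.0, 6.0]]
tensor = {
    "dtype": "FLOAT",
    "shape": ["2", "3"],                              # int64 -> JSON strings
    "floatVal": [v for row in matrix for v in row],   # row-major flatten
}
```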

GoogleCloudAiplatformV1Tensorboard

Tensorboard is a physical database that stores users' training metrics. A default Tensorboard is provided in each region of a Google Cloud project. If needed, users can also create extra Tensorboards in their projects.
Fields
blobStoragePathPrefix

string

Output only. Consumer project Cloud Storage path prefix used to store blob data, which can either be a bucket or directory. Does not end with a '/'.

createTime

string (Timestamp format)

Output only. Timestamp when this Tensorboard was created.

description

string

Description of this Tensorboard.

displayName

string

Required. User provided name of this Tensorboard.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key spec for a Tensorboard. If set, this Tensorboard and all sub-resources of this Tensorboard will be secured by this key.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

isDefault

boolean

Used to indicate if the TensorBoard instance is the default one. Each project & region can have at most one default TensorBoard instance. Creation of a default TensorBoard instance and updating an existing TensorBoard instance to be default will mark all other TensorBoard instances (if any) as non default.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your Tensorboards. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Tensorboard (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

name

string

Output only. Name of the Tensorboard. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

runCount

integer (int32 format)

Output only. The number of Runs stored in this Tensorboard.

updateTime

string (Timestamp format)

Output only. Timestamp when this Tensorboard was last updated.
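
A minimal Tensorboard create body using only the client-settable fields above (output-only fields such as name, createTime, and runCount are omitted). The label-length check mirrors the documented 64-codepoint limit; the helper itself is illustrative, not a client API:

```python
def make_tensorboard_body(display_name, description="", labels=None):
    """Build a create-request body for a Tensorboard resource (sketch)."""
    labels = labels or {}
    for k, v in labels.items():
        # Label keys and values can be no longer than 64 Unicode codepoints.
        assert len(k) <= 64 and len(v) <= 64, "labels limited to 64 codepoints"
    body = {"displayName": display_name}  # displayName is the only required field
    if description:
        body["description"] = description
    if labels:
        body["labels"] = labels
    return body

tb = make_tensorboard_body("my-experiments", labels={"team": "ml"})
```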

GoogleCloudAiplatformV1TensorboardBlob

One blob (e.g., image, graph) viewable on a blob metric plot.
Fields
data

string (bytes format)

Optional. The bytes of the blob are not present unless returned by the ReadTensorboardBlobData endpoint.

id

string

Output only. A URI safe key uniquely identifying a blob. Can be used to locate the blob stored in the Cloud Storage bucket of the consumer project.

GoogleCloudAiplatformV1TensorboardBlobSequence

One point viewable on a blob metric plot; mostly a wrapper message to work around the fact that repeated fields can't be used directly within oneof fields.
Fields
values[]

object (GoogleCloudAiplatformV1TensorboardBlob)

List of blobs contained within the sequence.

GoogleCloudAiplatformV1TensorboardExperiment

A TensorboardExperiment is a group of TensorboardRuns that are typically the results of a training job run, in a Tensorboard.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this TensorboardExperiment was created.

description

string

Description of this TensorboardExperiment.

displayName

string

User provided name of this TensorboardExperiment.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your TensorboardExperiments. Label keys and values cannot be longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one TensorboardExperiment (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with aiplatform.googleapis.com/ and are immutable.

name

string

Output only. Name of the TensorboardExperiment. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}

source

string

Immutable. Source of the TensorboardExperiment. Example: a custom training job.

updateTime

string (Timestamp format)

Output only. Timestamp when this TensorboardExperiment was last updated.

GoogleCloudAiplatformV1TensorboardRun

TensorboardRun maps to a specific execution of a training job with a given set of hyperparameter values, model definition, dataset, etc.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this TensorboardRun was created.

description

string

Description of this TensorboardRun.

displayName

string

Required. User provided name of this TensorboardRun. This value must be unique among all TensorboardRuns belonging to the same parent TensorboardExperiment.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize your TensorboardRuns. This field will be used to filter and visualize Runs in the Tensorboard UI. For example, a Vertex AI training job can set a label aiplatform.googleapis.com/training_job_id=xxxxx to all the runs created within that job. An end user can set a label experiment_id=xxxxx for all the runs produced in a Jupyter notebook. These runs can be grouped by a label value and visualized together in the Tensorboard UI. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one TensorboardRun (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

name

string

Output only. Name of the TensorboardRun. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}

updateTime

string (Timestamp format)

Output only. Timestamp when this TensorboardRun was last updated.

GoogleCloudAiplatformV1TensorboardTensor

One point viewable on a tensor metric plot.
Fields
value

string (bytes format)

Required. Serialized form of https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/tensor.proto

versionNumber

integer (int32 format)

Optional. Version number of TensorProto used to serialize value.

GoogleCloudAiplatformV1TensorboardTimeSeries

TensorboardTimeSeries maps to time series produced in training runs.
Fields
createTime

string (Timestamp format)

Output only. Timestamp when this TensorboardTimeSeries was created.

description

string

Description of this TensorboardTimeSeries.

displayName

string

Required. User provided name of this TensorboardTimeSeries. This value should be unique among all TensorboardTimeSeries resources belonging to the same TensorboardRun resource (parent resource).

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

metadata

object (GoogleCloudAiplatformV1TensorboardTimeSeriesMetadata)

Output only. Scalar, Tensor, or Blob metadata for this TensorboardTimeSeries.

name

string

Output only. Name of the TensorboardTimeSeries.

pluginData

string (bytes format)

Data of the current plugin, with the size limited to 65KB.

pluginName

string

Immutable. Name of the plugin this time series pertains to, such as Scalar, Tensor, or Blob.

updateTime

string (Timestamp format)

Output only. Timestamp when this TensorboardTimeSeries was last updated.

valueType

enum

Required. Immutable. Type of TensorboardTimeSeries value.

Enum type. Can be one of the following:
VALUE_TYPE_UNSPECIFIED The value type is unspecified.
SCALAR Used for TensorboardTimeSeries that is a list of scalars. E.g. accuracy of a model over epochs/time.
TENSOR Used for TensorboardTimeSeries that is a list of tensors. E.g. histograms of weights of layer in a model over epoch/time.
BLOB_SEQUENCE Used for TensorboardTimeSeries that is a list of blob sequences. E.g. set of sample images with labels over epochs/time.

GoogleCloudAiplatformV1TensorboardTimeSeriesMetadata

Describes metadata for a TensorboardTimeSeries.
Fields
maxBlobSequenceLength

string (int64 format)

Output only. The largest blob sequence length (number of blobs) of all data points in this time series, if its ValueType is BLOB_SEQUENCE.

maxStep

string (int64 format)

Output only. Max step index of all data points within a TensorboardTimeSeries.

maxWallTime

string (Timestamp format)

Output only. Max wall clock timestamp of all data points within a TensorboardTimeSeries.

GoogleCloudAiplatformV1ThresholdConfig

The config for feature monitoring threshold.
Fields
value

number (double format)

Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
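
A tiny predicate encoding the rule above (illustrative, not a client API): an alert fires only when a non-zero threshold is configured and the computed distribution distance exceeds it; a feature with no threshold set is never alerted on.

```python
def should_alert(distance, threshold_config):
    """distance: the computed distribution distance for one feature
    (L-infinity for categorical, Jensen-Shannon for numerical features).
    threshold_config: a ThresholdConfig dict with an optional "value"."""
    value = threshold_config.get("value", 0.0)
    # A zero/unset threshold means the feature is not monitored.
    return value > 0 and distance > value

fired = should_alert(0.42, {"value": 0.3})   # monitored, distance above threshold
quiet = should_alert(0.42, {})               # unmonitored feature: never alerts
```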

GoogleCloudAiplatformV1TimeSeriesData

All the data stored in a TensorboardTimeSeries.
Fields
tensorboardTimeSeriesId

string

Required. The ID of the TensorboardTimeSeries, which will become the final component of the TensorboardTimeSeries' resource name

valueType

enum

Required. Immutable. The value type of this time series. All the values in this time series data must match this value type.

Enum type. Can be one of the following:
VALUE_TYPE_UNSPECIFIED The value type is unspecified.
SCALAR Used for TensorboardTimeSeries that is a list of scalars. E.g. accuracy of a model over epochs/time.
TENSOR Used for TensorboardTimeSeries that is a list of tensors. E.g. histograms of weights of layer in a model over epoch/time.
BLOB_SEQUENCE Used for TensorboardTimeSeries that is a list of blob sequences. E.g. set of sample images with labels over epochs/time.
values[]

object (GoogleCloudAiplatformV1TimeSeriesDataPoint)

Required. Data points in this time series.

GoogleCloudAiplatformV1TimeSeriesDataPoint

A TensorboardTimeSeries data point.
Fields
blobs

object (GoogleCloudAiplatformV1TensorboardBlobSequence)

A blob sequence value.

scalar

object (GoogleCloudAiplatformV1Scalar)

A scalar value.

step

string (int64 format)

Step index of this data point within the run.

tensor

object (GoogleCloudAiplatformV1TensorboardTensor)

A tensor value.

wallTime

string (Timestamp format)

Wall clock timestamp when this data point is generated by the end user.
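
Illustrative construction of a scalar TimeSeriesData payload from the fields above. The nested GoogleCloudAiplatformV1Scalar is assumed to carry a single "value" field (its fields are not listed here); step and wallTime follow the documented int64/Timestamp string formats:

```python
import time

def scalar_point(step, value):
    """Build one TimeSeriesDataPoint carrying a scalar value (sketch)."""
    return {
        "step": str(step),  # int64-format: decimal string
        "wallTime": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "scalar": {"value": value},  # assumed Scalar shape
    }

series = {
    "tensorboardTimeSeriesId": "loss",
    "valueType": "SCALAR",  # must match the type of every point in values[]
    "values": [scalar_point(s, 1.0 / (s + 1)) for s in range(3)],
}
```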

GoogleCloudAiplatformV1TimestampSplit

Assigns input data to training, validation, and test sets based on provided timestamps. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set. Supported only for tabular Datasets.
Fields
key

string

Required. The key is a name of one of the Dataset's data columns. The values of the key (the values in the column) must be in RFC 3339 date-time format, where time-offset = "Z" (e.g. 1985-04-12T23:20:50.52Z). If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline.

testFraction

number (double format)

The fraction of the input data that is to be used to evaluate the Model.

trainingFraction

number (double format)

The fraction of the input data that is to be used to train the Model.

validationFraction

number (double format)

The fraction of the input data that is to be used to validate the Model.
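
Putting the four fields together, a TimestampSplit sketch: the key column holds RFC 3339 date-times with a "Z" offset (the column name here is made up), and the three fractions should together cover the input data.

```python
split = {
    "key": "event_time",        # hypothetical timestamp column in the Dataset
    "trainingFraction": 0.8,
    "validationFraction": 0.1,
    "testFraction": 0.1,
}
total = (split["trainingFraction"]
         + split["validationFraction"]
         + split["testFraction"])
```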

GoogleCloudAiplatformV1TokensInfo

Tokens info with a list of tokens and the corresponding list of token ids.
Fields
tokenIds[]

string (int64 format)

A list of token ids from the input.

tokens[]

string (bytes format)

A list of tokens from the input.

GoogleCloudAiplatformV1Tool

Tool details that the model may use to generate a response. A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g., FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
Fields
functionDeclarations[]

object (GoogleCloudAiplatformV1FunctionDeclaration)

Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided.

retrieval

object (GoogleCloudAiplatformV1Retrieval)

Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
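
A Tool carrying a single function declaration, as described above. The declaration's parameters follow the usual OpenAPI-style schema shape; the get_weather function and its fields are a made-up example, not part of the API:

```python
tool = {
    "functionDeclarations": [
        {
            "name": "get_weather",  # hypothetical function
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ]
}
# The reference caps a Tool at 64 function declarations.
assert len(tool["functionDeclarations"]) <= 64
```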

GoogleCloudAiplatformV1TrainingConfig

CMLE training config. For every active learning labeling iteration, system will train a machine learning model on CMLE. The trained model will be used by data sampling algorithm to select DataItems.
Fields
timeoutTrainingMilliHours

string (int64 format)

The timeout for the CMLE training job, expressed in milli-hours; i.e., a value of 1,000 in this field means 1 hour.
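
The milli-hour encoding is easy to get wrong; a one-line conversion (helper name is illustrative) from wall-clock hours to the int64-format string this field expects:

```python
def hours_to_milli_hours(hours):
    """Convert hours to the milli-hour string encoding (1,000 == 1 hour)."""
    return str(int(hours * 1000))  # int64 format: decimal string

config = {"timeoutTrainingMilliHours": hours_to_milli_hours(2.5)}
```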

GoogleCloudAiplatformV1TrainingPipeline

The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from Vertex AI's Dataset which becomes the training input, upload the Model to Vertex AI, and evaluate the Model.
Fields
createTime

string (Timestamp format)

Output only. Time when the TrainingPipeline was created.

displayName

string

Required. The user-defined name of this TrainingPipeline.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key spec for a TrainingPipeline. If set, this TrainingPipeline will be secured by this key. Note: Model trained by this TrainingPipeline is also secured by this key if model_to_upload is not set separately.

endTime

string (Timestamp format)

Output only. Time when the TrainingPipeline entered any of the following states: PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED, PIPELINE_STATE_CANCELLED.

error

object (GoogleRpcStatus)

Output only. Only populated when the pipeline's state is PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED.

inputDataConfig

object (GoogleCloudAiplatformV1InputDataConfig)

Specifies Vertex AI owned input data that may be used for training the Model. The TrainingPipeline's training_task_definition should make clear whether this config is used and if there are any special requirements on how it should be filled. If nothing about this config is mentioned in the training_task_definition, then it should be assumed that the TrainingPipeline does not depend on this configuration.

labels

map (key: string, value: string)

The labels with user-defined metadata to organize TrainingPipelines. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

modelId

string

Optional. The ID to use for the uploaded Model, which will become the final component of the model resource name. This value may be up to 63 characters, and valid characters are [a-z0-9_-]. The first character cannot be a number or hyphen.

modelToUpload

object (GoogleCloudAiplatformV1Model)

Describes the Model that may be uploaded (via ModelService.UploadModel) by this TrainingPipeline. The TrainingPipeline's training_task_definition should make clear whether this Model description should be populated, and if there are any special requirements regarding how it should be filled. If nothing is mentioned in the training_task_definition, then it should be assumed that this field should not be filled and the training task either uploads the Model without a need of this information, or that training task does not support uploading a Model as part of the pipeline. When the Pipeline's state becomes PIPELINE_STATE_SUCCEEDED and the trained Model had been uploaded into Vertex AI, then the model_to_upload's resource name is populated. The Model is always uploaded into the Project and Location in which this pipeline is.

name

string

Output only. Resource name of the TrainingPipeline.

parentModel

string

Optional. When this field is specified, the model_to_upload will not be uploaded as a new model; instead, it will become a new version of this parent_model.

startTime

string (Timestamp format)

Output only. Time when the TrainingPipeline for the first time entered the PIPELINE_STATE_RUNNING state.

state

enum

Output only. The detailed state of the pipeline.

Enum type. Can be one of the following:
PIPELINE_STATE_UNSPECIFIED The pipeline state is unspecified.
PIPELINE_STATE_QUEUED The pipeline has been created or resumed, and processing has not yet begun.
PIPELINE_STATE_PENDING The service is preparing to run the pipeline.
PIPELINE_STATE_RUNNING The pipeline is in progress.
PIPELINE_STATE_SUCCEEDED The pipeline completed successfully.
PIPELINE_STATE_FAILED The pipeline failed.
PIPELINE_STATE_CANCELLING The pipeline is being cancelled. From this state, the pipeline may only go to either PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED.
PIPELINE_STATE_CANCELLED The pipeline has been cancelled.
PIPELINE_STATE_PAUSED The pipeline has been stopped, and can be resumed.
trainingTaskDefinition

string

Required. A Google Cloud Storage path to the YAML file that defines the training task which is responsible for producing the model artifact, and may also include additional auxiliary work. The definition files that can be used here are found in gs://google-cloud-aiplatform/schema/trainingjob/definition/. Note: The URI given on output will be immutable and probably different (including the URI scheme) from the one given on input. The output URI will point to a location where the user only has read access.

trainingTaskInputs

any

Required. The training task's parameter(s), as specified in the training_task_definition's inputs.

trainingTaskMetadata

any

Output only. The metadata information as specified in the training_task_definition's metadata. This metadata is an auxiliary runtime and final information about the training task. While the pipeline is running this information is populated only at a best effort basis. Only present if the pipeline's training_task_definition contains metadata object.

updateTime

string (Timestamp format)

Output only. Time when the TrainingPipeline was most recently updated.
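
The endTime description above names the pipeline's terminal states; a tiny predicate encoding that list, useful when polling a pipeline (illustrative, not a client API):

```python
# Terminal states per the endTime description of TrainingPipeline.
TERMINAL_PIPELINE_STATES = {
    "PIPELINE_STATE_SUCCEEDED",
    "PIPELINE_STATE_FAILED",
    "PIPELINE_STATE_CANCELLED",
}

def pipeline_finished(state):
    """True once the pipeline has reached a terminal state."""
    return state in TERMINAL_PIPELINE_STATES

done = pipeline_finished("PIPELINE_STATE_SUCCEEDED")
running = pipeline_finished("PIPELINE_STATE_RUNNING")
```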

GoogleCloudAiplatformV1Trial

A message representing a Trial. A Trial contains a unique set of Parameters that has been or will be evaluated, along with the objective metrics obtained by running the Trial.
Fields
clientId

string

Output only. The identifier of the client that originally requested this Trial. Each client is identified by a unique client_id. When a client asks for a suggestion, Vertex AI Vizier will assign it a Trial. The client should evaluate the Trial, complete it, and report back to Vertex AI Vizier. If a suggestion is requested again by the same client_id before the Trial is completed, the same Trial will be returned. Multiple clients with different client_ids can ask for suggestions simultaneously; each of them will get its own Trial.

customJob

string

Output only. The CustomJob name linked to the Trial. It's set for a HyperparameterTuningJob's Trial.

endTime

string (Timestamp format)

Output only. Time when the Trial's status changed to SUCCEEDED or INFEASIBLE.

finalMeasurement

object (GoogleCloudAiplatformV1Measurement)

Output only. The final measurement containing the objective value.

id

string

Output only. The identifier of the Trial assigned by the service.

infeasibleReason

string

Output only. A human readable string describing why the Trial is infeasible. This is set only if Trial state is INFEASIBLE.

measurements[]

object (GoogleCloudAiplatformV1Measurement)

Output only. A list of measurements that are strictly lexicographically ordered by their induced tuples (steps, elapsed_duration). These are used for early stopping computations.

name

string

Output only. Resource name of the Trial assigned by the service.

parameters[]

object (GoogleCloudAiplatformV1TrialParameter)

Output only. The parameters of the Trial.

startTime

string (Timestamp format)

Output only. Time when the Trial was started.

state

enum

Output only. The detailed state of the Trial.

Enum type. Can be one of the following:
STATE_UNSPECIFIED The Trial state is unspecified.
REQUESTED Indicates that a specific Trial has been requested, but it has not yet been suggested by the service.
ACTIVE Indicates that the Trial has been suggested.
STOPPING Indicates that the Trial should stop according to the service.
SUCCEEDED Indicates that the Trial is completed successfully.
INFEASIBLE Indicates that the Trial should not be attempted again. The service will set a Trial to INFEASIBLE when it's done but missing the final_measurement.
webAccessUris

map (key: string, value: string)

Output only. URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a HyperparameterTuningJob and the job's trial_job_spec.enable_web_access field is true. The keys are names of each node used for the trial; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
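
The clientId description above outlines the Vizier loop: ask for a suggestion, evaluate the Trial, report back; the same Trial is returned to a client_id until it is completed. A schematic of that contract with a stubbed service (StubVizier and its suggest/complete methods are stand-ins, not real client calls):

```python
class StubVizier:
    """Stand-in mirroring the clientId contract from the reference."""
    def __init__(self):
        self.counter = 0
        self.open = {}       # client_id -> its currently assigned Trial
        self.completed = []  # (trial_id, metric) pairs reported back

    def suggest(self, client_id):
        # Same Trial is returned until the client completes it.
        if client_id not in self.open:
            self.counter += 1
            self.open[client_id] = {"id": str(self.counter),
                                    "parameters": {"lr": 0.1 / self.counter}}
        return self.open[client_id]

    def complete(self, trial_id, metric):
        self.completed.append((trial_id, metric))
        self.open = {c: t for c, t in self.open.items() if t["id"] != trial_id}

svc = StubVizier()
t1 = svc.suggest("worker-0")
t1_again = svc.suggest("worker-0")   # not completed yet: same Trial
svc.complete(t1["id"], metric=0.9)
t2 = svc.suggest("worker-0")         # new Trial after completion
```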

GoogleCloudAiplatformV1TrialContext

Fields
description

string

A human-readable field which can store a description of this context. This will become part of the resulting Trial's description field.

parameters[]

object (GoogleCloudAiplatformV1TrialParameter)

If/when a Trial is generated or selected from this Context, its Parameters will match any parameters specified here. (I.e. if this context specifies parameter name:'a' int_value:3, then a resulting Trial will have int_value:3 for its parameter named 'a'.) Note that we first attempt to match existing REQUESTED Trials with contexts, and if there are no matches, we generate suggestions in the subspace defined by the parameters specified here. NOTE: a Context without any Parameters matches the entire feasible search space.

GoogleCloudAiplatformV1TrialParameter

A message representing a parameter to be tuned.
Fields
parameterId

string

Output only. The ID of the parameter. The parameter should be defined in StudySpec's Parameters.

value

any

Output only. The value of the parameter. number_value will be set if a parameter defined in StudySpec is in type 'INTEGER', 'DOUBLE' or 'DISCRETE'. string_value will be set if a parameter defined in StudySpec is in type 'CATEGORICAL'.

GoogleCloudAiplatformV1TunedModel

The Model Registry Model and Online Prediction Endpoint associated with this TuningJob.
Fields
endpoint

string

Output only. A resource name of an Endpoint. Format: projects/{project}/locations/{location}/endpoints/{endpoint}.

model

string

Output only. The resource name of the TunedModel. Format: projects/{project}/locations/{location}/models/{model}.

GoogleCloudAiplatformV1TuningDataStats

The tuning data statistics for the TuningJob.
Fields
supervisedTuningDataStats

object (GoogleCloudAiplatformV1SupervisedTuningDataStats)

The SFT Tuning data stats.

GoogleCloudAiplatformV1TuningJob

Represents a TuningJob that runs with Google owned models.
Fields
baseModel

string

The base model that is being tuned, e.g., "gemini-1.0-pro-002".

createTime

string (Timestamp format)

Output only. Time when the TuningJob was created.

description

string

Optional. The description of the TuningJob.

encryptionSpec

object (GoogleCloudAiplatformV1EncryptionSpec)

Customer-managed encryption key options for a TuningJob. If this is set, then all resources created by the TuningJob will be encrypted with the provided encryption key.

endTime

string (Timestamp format)

Output only. Time when the TuningJob entered any of the following JobStates: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED, JOB_STATE_EXPIRED.

error

object (GoogleRpcStatus)

Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

experiment

string

Output only. The Experiment associated with this TuningJob.

labels

map (key: string, value: string)

Optional. The labels with user-defined metadata to organize TuningJob and generated resources such as Model and Endpoint. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

name

string

Output only. Identifier. Resource name of a TuningJob. Format: projects/{project}/locations/{location}/tuningJobs/{tuning_job}

startTime

string (Timestamp format)

Output only. Time when the TuningJob for the first time entered the JOB_STATE_RUNNING state.

state

enum

Output only. The detailed state of the job.

Enum type. Can be one of the following:
JOB_STATE_UNSPECIFIED The job state is unspecified.
JOB_STATE_QUEUED The job has been just created or resumed and processing has not yet begun.
JOB_STATE_PENDING The service is preparing to run the job.
JOB_STATE_RUNNING The job is in progress.
JOB_STATE_SUCCEEDED The job completed successfully.
JOB_STATE_FAILED The job failed.
JOB_STATE_CANCELLING The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED, JOB_STATE_FAILED or JOB_STATE_CANCELLED.
JOB_STATE_CANCELLED The job has been cancelled.
JOB_STATE_PAUSED The job has been stopped, and can be resumed.
JOB_STATE_EXPIRED The job has expired.
JOB_STATE_UPDATING The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state.
JOB_STATE_PARTIALLY_SUCCEEDED The job is partially succeeded, some results may be missing due to errors.
supervisedTuningSpec

object (GoogleCloudAiplatformV1SupervisedTuningSpec)

Tuning Spec for Supervised Fine Tuning.

tunedModel

object (GoogleCloudAiplatformV1TunedModel)

Output only. The tuned model resources associated with this TuningJob.

tunedModelDisplayName

string

Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters.

tuningDataStats

object (GoogleCloudAiplatformV1TuningDataStats)

Output only. The tuning data statistics associated with this TuningJob.

updateTime

string (Timestamp format)

Output only. Time when the TuningJob was most recently updated.
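
A minimal TuningJob create body using the client-settable fields above. The trainingDatasetUri inside supervisedTuningSpec is an assumption (the reference lists the spec object but not its fields), as is the bucket path:

```python
job = {
    "baseModel": "gemini-1.0-pro-002",          # example from the reference
    "tunedModelDisplayName": "my-tuned-model",  # up to 128 UTF-8 characters
    "supervisedTuningSpec": {
        # hypothetical field name and path, for illustration only
        "trainingDatasetUri": "gs://my-bucket/train.jsonl",
    },
}
# Output-only fields (name, state, tunedModel, ...) are never sent by the client.
assert "state" not in job
```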

GoogleCloudAiplatformV1UndeployIndexOperationMetadata

Runtime operation information for IndexEndpointService.UndeployIndex.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1UndeployIndexRequest

Request message for IndexEndpointService.UndeployIndex.
Fields
deployedIndexId

string

Required. The ID of the DeployedIndex to be undeployed from the IndexEndpoint.

GoogleCloudAiplatformV1UndeployModelOperationMetadata

Runtime operation information for EndpointService.UndeployModel.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1UndeployModelRequest

Request message for EndpointService.UndeployModel.
Fields
deployedModelId

string

Required. The ID of the DeployedModel to be undeployed from the Endpoint.

trafficSplit

map (key: string, value: integer (int32 format))

If this field is provided, then the Endpoint's traffic_split will be overwritten with it. If the last DeployedModel is being undeployed from the Endpoint, the Endpoint.traffic_split will always end up empty when this call returns. A DeployedModel will be successfully undeployed only if it doesn't have any traffic assigned to it when this method executes, or if this field unassigns any traffic from it.
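
Because the model being removed must carry no traffic, a typical trafficSplit for this request reassigns its share to the surviving DeployedModels. A sketch (the deployed-model IDs are made up; the reassignment policy, giving the freed share to one survivor, is one arbitrary choice):

```python
def reassign_traffic(traffic_split, undeploy_id):
    """Build the trafficSplit to send with UndeployModel: drop the model
    being undeployed and hand its percentage to a remaining model."""
    freed = traffic_split.pop(undeploy_id, 0)
    remaining = list(traffic_split)
    if remaining and freed:
        traffic_split[remaining[0]] += freed
    return traffic_split

new_split = reassign_traffic({"model-a": 60, "model-b": 40}, "model-a")
```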

GoogleCloudAiplatformV1UnmanagedContainerModel

Contains model information necessary to perform batch prediction without requiring a full model import.
Fields
artifactUri

string

The path to the directory containing the Model artifact and any of its supporting files.

containerSpec

object (GoogleCloudAiplatformV1ModelContainerSpec)

Input only. The specification of the container that is to be used when deploying this Model.

predictSchemata

object (GoogleCloudAiplatformV1PredictSchemata)

Contains the schemata used in Model's predictions and explanations

GoogleCloudAiplatformV1UpdateDeploymentResourcePoolOperationMetadata

Runtime operation information for UpdateDeploymentResourcePool method.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1UpdateExplanationDatasetOperationMetadata

Runtime operation information for ModelService.UpdateExplanationDataset.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The common part of the operation metadata.

GoogleCloudAiplatformV1UpdateExplanationDatasetRequest

Request message for ModelService.UpdateExplanationDataset.
Fields
examples

object (GoogleCloudAiplatformV1Examples)

The example config containing the location of the dataset.

GoogleCloudAiplatformV1UpdateFeatureGroupOperationMetadata

Details of operations that perform update FeatureGroup.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for FeatureGroup.

GoogleCloudAiplatformV1UpdateFeatureOnlineStoreOperationMetadata

Details of operations that perform update FeatureOnlineStore.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for FeatureOnlineStore.

GoogleCloudAiplatformV1UpdateFeatureOperationMetadata

Details of operations that perform update Feature.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Feature Update.

GoogleCloudAiplatformV1UpdateFeatureViewOperationMetadata

Details of operations that perform update FeatureView.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for FeatureView Update.

GoogleCloudAiplatformV1UpdateFeaturestoreOperationMetadata

Details of operations that perform update Featurestore.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Featurestore.

GoogleCloudAiplatformV1UpdateIndexOperationMetadata

Runtime operation information for IndexService.UpdateIndex.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

nearestNeighborSearchOperationMetadata

object (GoogleCloudAiplatformV1NearestNeighborSearchOperationMetadata)

The operation metadata with regard to Matching Engine Index operation.

GoogleCloudAiplatformV1UpdateModelDeploymentMonitoringJobOperationMetadata

Runtime operation information for JobService.UpdateModelDeploymentMonitoringJob.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

GoogleCloudAiplatformV1UpdatePersistentResourceOperationMetadata

Details of operations that perform update PersistentResource.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for PersistentResource.

progressMessage

string

Progress message for the update LRO.

GoogleCloudAiplatformV1UpdateSpecialistPoolOperationMetadata

Runtime operation metadata for SpecialistPoolService.UpdateSpecialistPool.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

specialistPool

string

Output only. The name of the SpecialistPool to which the specialists are being added. Format: projects/{project_id}/locations/{location_id}/specialistPools/{specialist_pool}

GoogleCloudAiplatformV1UpdateTensorboardOperationMetadata

Details of operations that update a Tensorboard.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

Operation metadata for Tensorboard.

GoogleCloudAiplatformV1UpgradeNotebookRuntimeOperationMetadata

Metadata information for NotebookService.UpgradeNotebookRuntime.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The operation generic information.

progressMessage

string

A human-readable message that shows the intermediate progress details of NotebookRuntime.

GoogleCloudAiplatformV1UploadModelOperationMetadata

Details of ModelService.UploadModel operation.
Fields
genericMetadata

object (GoogleCloudAiplatformV1GenericOperationMetadata)

The common part of the operation metadata.

GoogleCloudAiplatformV1UploadModelRequest

Request message for ModelService.UploadModel.
Fields
model

object (GoogleCloudAiplatformV1Model)

Required. The Model to create.

modelId

string

Optional. The ID to use for the uploaded Model, which will become the final component of the model resource name. This value may be up to 63 characters, and valid characters are [a-z0-9_-]. The first character cannot be a number or hyphen.

parentModel

string

Optional. The resource name of the model into which to upload the version. Only specify this field when uploading a new version.

serviceAccount

string

Optional. The user-provided custom service account to use for the model upload. If empty, the Vertex AI Service Agent is used to access resources needed to upload the model. This account must belong to the target project where the model is uploaded (i.e., the project specified in the parent field of this request) and must have the necessary read permissions (to Google Cloud Storage, Artifact Registry, etc.).
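As a sketch of how the fields above fit together, an UploadModel request body can be assembled as a plain dict. The helper, display name, and image URI are hypothetical; only the field names and the modelId character rule come from the reference.

```python
import re

# Sketch of an UploadModelRequest body as a plain dict. The display name,
# image URI, and this helper itself are hypothetical; the modelId rule
# follows the field description above.
def build_upload_model_request(display_name, image_uri, model_id=None):
    body = {
        "model": {
            "displayName": display_name,
            "containerSpec": {"imageUri": image_uri},
        }
    }
    if model_id is not None:
        # Up to 63 chars of [a-z0-9_-]; the first char may not be a digit or hyphen.
        if not re.fullmatch(r"[a-z_][a-z0-9_-]{0,62}", model_id):
            raise ValueError(f"invalid modelId: {model_id!r}")
        body["modelId"] = model_id
    return body

req = build_upload_model_request(
    "my-model", "gcr.io/my-project/serve:latest", "my_model-1")
```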

GoogleCloudAiplatformV1UploadModelResponse

Response message of ModelService.UploadModel operation.
Fields
model

string

The name of the uploaded Model resource. Format: projects/{project}/locations/{location}/models/{model}

modelVersionId

string

Output only. The version ID of the model that is uploaded.

GoogleCloudAiplatformV1UpsertDatapointsRequest

Request message for IndexService.UpsertDatapoints.
Fields
datapoints[]

object (GoogleCloudAiplatformV1IndexDatapoint)

A list of datapoints to be created/updated.

updateMask

string (FieldMask format)

Optional. Update mask is used to specify the fields to be overwritten in the datapoints by the update. The fields specified in the update_mask are relative to each IndexDatapoint inside datapoints, not the full request. Updatable fields: * Use all_restricts to update both restricts and numeric_restricts.
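A hypothetical request body illustrating the two fields above; the datapoint ID and vector values are made up, and only the field names follow the reference.

```python
import json

# Hypothetical UpsertDatapoints request body; the datapoint ID and vector
# are made up, and the field names follow the reference above.
upsert_request = {
    "datapoints": [
        {"datapointId": "dp-001", "featureVector": [0.1, 0.2, 0.3]},
    ],
    # Per the updateMask note, "all_restricts" updates both restricts
    # and numeric_restricts.
    "updateMask": "all_restricts",
}

payload = json.dumps(upsert_request)
```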

GoogleCloudAiplatformV1UserActionReference

References an API call. It contains more information about long-running operations and Jobs that are triggered by the API call.
Fields
dataLabelingJob

string

For API calls that start a LabelingJob. Resource name of the LabelingJob. Format: projects/{project}/locations/{location}/dataLabelingJobs/{data_labeling_job}

method

string

The method name of the API RPC call. For example, "/google.cloud.aiplatform.{apiVersion}.DatasetService.CreateDataset"

operation

string

For API calls that return a long running operation. Resource name of the long running operation. Format: projects/{project}/locations/{location}/operations/{operation}

GoogleCloudAiplatformV1Value

Value is the value of the field.
Fields
doubleValue

number (double format)

A double value.

intValue

string (int64 format)

An integer value.

stringValue

string

A string value.

GoogleCloudAiplatformV1VertexAISearch

Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation
Fields
datastore

string

Required. Fully qualified resource ID of the Vertex AI Search datastore. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}

GoogleCloudAiplatformV1VideoMetadata

Metadata describing the input video content.
Fields
endOffset

string (Duration format)

Optional. The end offset of the video.

startOffset

string (Duration format)

Optional. The start offset of the video.

GoogleCloudAiplatformV1WorkerPoolSpec

Represents the spec of a worker pool in a job.
Fields
containerSpec

object (GoogleCloudAiplatformV1ContainerSpec)

The custom container task.

diskSpec

object (GoogleCloudAiplatformV1DiskSpec)

Disk spec.

machineSpec

object (GoogleCloudAiplatformV1MachineSpec)

Optional. Immutable. The specification of a single machine.

nfsMounts[]

object (GoogleCloudAiplatformV1NfsMount)

Optional. List of NFS mount specs.

pythonPackageSpec

object (GoogleCloudAiplatformV1PythonPackageSpec)

The Python packaged task.

replicaCount

string (int64 format)

Optional. The number of worker replicas to use for this worker pool.
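A worker pool spec combining the fields above might look as follows; the machine type, image URI, disk values, and args are illustrative placeholders, not recommendations.

```python
# Hypothetical WorkerPoolSpec for a single-replica custom-container task;
# machine type, image URI, and disk values are illustrative placeholders.
worker_pool_spec = {
    "machineSpec": {"machineType": "n1-standard-4"},
    "replicaCount": "1",  # int64-format fields travel as strings in JSON
    "diskSpec": {"bootDiskType": "pd-ssd", "bootDiskSizeGb": 100},
    "containerSpec": {
        "imageUri": "gcr.io/my-project/trainer:latest",
        "args": ["--epochs=10"],
    },
}
```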

GoogleCloudAiplatformV1WriteFeatureValuesPayload

Contains Feature values to be written for a specific entity.
Fields
entityId

string

Required. The ID of the entity.

featureValues

map (key: string, value: object (GoogleCloudAiplatformV1FeatureValue))

Required. Feature values to be written, mapping from Feature ID to value. Up to 100,000 feature_values entries may be written across all payloads. The feature generation time, aligned by days, must be no older than five years (1825 days) and no later than one year (366 days) in the future.

GoogleCloudAiplatformV1WriteFeatureValuesRequest

Request message for FeaturestoreOnlineServingService.WriteFeatureValues.
Fields
payloads[]

object (GoogleCloudAiplatformV1WriteFeatureValuesPayload)

Required. The entities to be written. Up to 100,000 feature values can be written across all payloads.
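A sketch of one payload under the shape described above; the entity ID, feature names, and value types are illustrative assumptions.

```python
# Hypothetical WriteFeatureValues request body for one entity; entity ID,
# feature names, and value types are illustrative assumptions.
write_request = {
    "payloads": [
        {
            "entityId": "user_123",
            "featureValues": {
                # int64-format values are encoded as strings in JSON
                "age": {"int64Value": "42"},
                "country": {"stringValue": "JP"},
            },
        },
    ],
}
```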

GoogleCloudAiplatformV1WriteTensorboardExperimentDataRequest

Request message for TensorboardService.WriteTensorboardExperimentData.
Fields
writeRunDataRequests[]

object (GoogleCloudAiplatformV1WriteTensorboardRunDataRequest)

Required. Requests containing per-run TensorboardTimeSeries data to write.

GoogleCloudAiplatformV1WriteTensorboardRunDataRequest

Request message for TensorboardService.WriteTensorboardRunData.
Fields
tensorboardRun

string

Required. The resource name of the TensorboardRun to write data to. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}

timeSeriesData[]

object (GoogleCloudAiplatformV1TimeSeriesData)

Required. The TensorboardTimeSeries data to write. Values within a time series are indexed by their step value. Repeated writes to the same step overwrite the existing value for that step. The upper limit of data points per write request is 5,000.

GoogleCloudAiplatformV1XraiAttribution

An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models.
Fields
blurBaselineConfig

object (GoogleCloudAiplatformV1BlurBaselineConfig)

Config for XRAI with blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383

smoothGradConfig

object (GoogleCloudAiplatformV1SmoothGradConfig)

Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf

stepCount

integer (int32 format)

Required. The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range. The valid range is [1, 100], inclusive.

GoogleCloudLocationListLocationsResponse

The response message for Locations.ListLocations.
Fields
locations[]

object (GoogleCloudLocationLocation)

A list of locations that matches the specified filter in the request.

nextPageToken

string

The standard List next-page token.

GoogleCloudLocationLocation

A resource that represents a Google Cloud location.
Fields
displayName

string

The friendly name for this location, typically a nearby city name. For example, "Tokyo".

labels

map (key: string, value: string)

Cross-service attributes for the location. For example {"cloud.googleapis.com/region": "us-east1"}

locationId

string

The canonical id for this location. For example: "us-east1".

metadata

map (key: string, value: any)

Service-specific metadata. For example the available capacity at the given location.

name

string

Resource name for the location, which may vary between implementations. For example: "projects/example-project/locations/us-east1"

GoogleIamV1Binding

Associates members, or principals, with a role.
Fields
condition

object (GoogleTypeExpr)

The condition that is associated with this binding. If the condition evaluates to true, then this binding applies to the current request. If the condition evaluates to false, then this binding does not apply to the current request. However, a different role binding might grant the same role to one or more of the principals in this binding. To learn which resources support conditions in their IAM policies, see the IAM documentation.

members[]

string

Specifies the principals requesting access for a Google Cloud resource. members can have the following values:
* allUsers: A special identifier that represents anyone who is on the internet; with or without a Google account.
* allAuthenticatedUsers: A special identifier that represents anyone who is authenticated with a Google account or a service account. Does not include identities that come from external identity providers (IdPs) through identity federation.
* user:{emailid}: An email address that represents a specific Google account. For example, alice@example.com.
* serviceAccount:{emailid}: An email address that represents a Google service account. For example, my-other-app@appspot.gserviceaccount.com.
* serviceAccount:{projectid}.svc.id.goog[{namespace}/{kubernetes-sa}]: An identifier for a Kubernetes service account. For example, my-project.svc.id.goog[my-namespace/my-kubernetes-sa].
* group:{emailid}: An email address that represents a Google group. For example, admins@example.com.
* domain:{domain}: The G Suite domain (primary) that represents all the users of that domain. For example, google.com or example.com.
* principal://iam.googleapis.com/locations/global/workforcePools/{pool_id}/subject/{subject_attribute_value}: A single identity in a workforce identity pool.
* principalSet://iam.googleapis.com/locations/global/workforcePools/{pool_id}/group/{group_id}: All workforce identities in a group.
* principalSet://iam.googleapis.com/locations/global/workforcePools/{pool_id}/attribute.{attribute_name}/{attribute_value}: All workforce identities with a specific attribute value.
* principalSet://iam.googleapis.com/locations/global/workforcePools/{pool_id}/*: All identities in a workforce identity pool.
* principal://iam.googleapis.com/projects/{project_number}/locations/global/workloadIdentityPools/{pool_id}/subject/{subject_attribute_value}: A single identity in a workload identity pool.
* principalSet://iam.googleapis.com/projects/{project_number}/locations/global/workloadIdentityPools/{pool_id}/group/{group_id}: A workload identity pool group.
* principalSet://iam.googleapis.com/projects/{project_number}/locations/global/workloadIdentityPools/{pool_id}/attribute.{attribute_name}/{attribute_value}: All identities in a workload identity pool with a certain attribute.
* principalSet://iam.googleapis.com/projects/{project_number}/locations/global/workloadIdentityPools/{pool_id}/*: All identities in a workload identity pool.
* deleted:user:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a user that has been recently deleted. For example, alice@example.com?uid=123456789012345678901. If the user is recovered, this value reverts to user:{emailid} and the recovered user retains the role in the binding.
* deleted:serviceAccount:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a service account that has been recently deleted. For example, my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901. If the service account is undeleted, this value reverts to serviceAccount:{emailid} and the undeleted service account retains the role in the binding.
* deleted:group:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a Google group that has been recently deleted. For example, admins@example.com?uid=123456789012345678901. If the group is recovered, this value reverts to group:{emailid} and the recovered group retains the role in the binding.
* deleted:principal://iam.googleapis.com/locations/global/workforcePools/{pool_id}/subject/{subject_attribute_value}: Deleted single identity in a workforce identity pool. For example, deleted:principal://iam.googleapis.com/locations/global/workforcePools/my-pool-id/subject/my-subject-attribute-value.

role

string

Role that is assigned to the list of members, or principals. For example, roles/viewer, roles/editor, or roles/owner. For an overview of IAM roles and permissions, see the IAM documentation; it also includes a list of the available predefined roles.

GoogleIamV1Policy

An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. A Policy is a collection of bindings. A binding binds one or more members, or principals, to a single role. Principals can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role. For some types of Google Cloud resources, a binding can also specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both. To learn which resources support conditions in their IAM policies, see the IAM documentation.

JSON example:

{
  "bindings": [
    {
      "role": "roles/resourcemanager.organizationAdmin",
      "members": [
        "user:mike@example.com",
        "group:admins@example.com",
        "domain:google.com",
        "serviceAccount:my-project-id@appspot.gserviceaccount.com"
      ]
    },
    {
      "role": "roles/resourcemanager.organizationViewer",
      "members": ["user:eve@example.com"],
      "condition": {
        "title": "expirable access",
        "description": "Does not grant access after Sep 2020",
        "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')"
      }
    }
  ],
  "etag": "BwWWja0YfJA=",
  "version": 3
}

YAML example:

bindings:
- members:
  - user:mike@example.com
  - group:admins@example.com
  - domain:google.com
  - serviceAccount:my-project-id@appspot.gserviceaccount.com
  role: roles/resourcemanager.organizationAdmin
- members:
  - user:eve@example.com
  role: roles/resourcemanager.organizationViewer
  condition:
    title: expirable access
    description: Does not grant access after Sep 2020
    expression: request.time < timestamp('2020-10-01T00:00:00.000Z')
etag: BwWWja0YfJA=
version: 3

For a description of IAM and its features, see the IAM documentation.
Fields
bindings[]

object (GoogleIamV1Binding)

Associates a list of members, or principals, with a role. Optionally, may specify a condition that determines how and when the bindings are applied. Each of the bindings must contain at least one principal. The bindings in a Policy can refer to up to 1,500 principals; up to 250 of these principals can be Google groups. Each occurrence of a principal counts towards these limits. For example, if the bindings grant 50 different roles to user:alice@example.com, and not to any other principal, then you can add another 1,450 principals to the bindings in the Policy.

etag

string (bytes format)

etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy. Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.

version

integer (int32 format)

Specifies the format of the policy. Valid values are 0, 1, and 3. Requests that specify an invalid value are rejected. Any operation that affects conditional role bindings must specify version 3. This requirement applies to the following operations: * Getting a policy that includes a conditional role binding * Adding a conditional role binding to a policy * Changing a conditional role binding in a policy * Removing any role binding, with or without a condition, from a policy that includes conditions Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost. If a policy does not include any conditions, operations on that policy may specify any valid version or leave the field unset. To learn which resources support conditions in their IAM policies, see the IAM documentation.
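The etag-based read-modify-write cycle described above can be sketched as follows; get_policy and set_policy are hypothetical stand-ins for the getIamPolicy and setIamPolicy calls, and the role and member values are illustrative.

```python
# Sketch of the etag read-modify-write cycle described above. get_policy and
# set_policy are hypothetical stand-ins for getIamPolicy / setIamPolicy calls;
# the role and member below are illustrative.
def add_binding(get_policy, set_policy, role, member):
    policy = get_policy()  # read: the response carries the current etag
    bindings = policy.setdefault("bindings", [])
    for binding in bindings:
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            break
    else:
        bindings.append({"role": role, "members": [member]})
    # write: sending the same etag back lets the service reject the update
    # if another writer changed the policy in the meantime
    return set_policy({"policy": policy})

store = {"etag": "BwWWja0YfJA=", "bindings": []}
updated = add_binding(
    get_policy=lambda: {"etag": store["etag"],
                        "bindings": [dict(b) for b in store["bindings"]]},
    set_policy=lambda req: req["policy"],
    role="roles/viewer",
    member="user:alice@example.com",
)
```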

GoogleIamV1SetIamPolicyRequest

Request message for SetIamPolicy method.
Fields
policy

object (GoogleIamV1Policy)

REQUIRED: The complete policy to be applied to the resource. The size of the policy is limited to a few tens of kilobytes. An empty policy is a valid policy, but certain Google Cloud services (such as Projects) might reject it.

GoogleIamV1TestIamPermissionsResponse

Response message for TestIamPermissions method.
Fields
permissions[]

string

A subset of TestPermissionsRequest.permissions that the caller is allowed to use.

GoogleLongrunningListOperationsResponse

The response message for Operations.ListOperations.
Fields
nextPageToken

string

The standard List next-page token.

operations[]

object (GoogleLongrunningOperation)

A list of operations that matches the specified filter in the request.

GoogleLongrunningOperation

This resource represents a long-running operation that is the result of a network API call.
Fields
done

boolean

If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.

error

object (GoogleRpcStatus)

The error result of the operation in case of failure or cancellation.

metadata

map (key: string, value: any)

Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.

name

string

The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}.

response

map (key: string, value: any)

The normal, successful response of the operation. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse.
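A generic polling loop over this Operation shape might look like the following sketch; fetch is a hypothetical callable that re-reads the operation resource, and the simulated states below are made up.

```python
import time

# Generic polling over the Operation shape above; fetch is a hypothetical
# callable that re-reads the operation resource and returns it as a dict.
def wait_for_operation(fetch, interval=0.0, timeout=10.0):
    deadline = time.monotonic() + timeout
    while True:
        op = fetch()
        if op.get("done"):
            if "error" in op:  # when done, either error or response is set
                raise RuntimeError(f"operation failed: {op['error']}")
            return op.get("response", {})
        if time.monotonic() > deadline:
            raise TimeoutError(op.get("name", "<unnamed operation>"))
        time.sleep(interval)

# Simulated lifecycle: one in-progress poll, then completion with a response.
states = iter([
    {"name": "operations/123", "done": False},
    {"name": "operations/123", "done": True,
     "response": {"model": "projects/p/locations/l/models/m"}},
])
result = wait_for_operation(lambda: next(states))
```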

GoogleRpcStatus

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
Fields
code

integer (int32 format)

The status code, which should be an enum value of google.rpc.Code.

details[]

object

A list of messages that carry the error details. There is a common set of message types for APIs to use.

message

string

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

GoogleTypeColor

Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to and from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of java.awt.Color in Java; it can also be trivially provided to UIColor's +colorWithRed:green:blue:alpha method in iOS; and, with just a little work, it can be easily formatted into a CSS rgba() string in JavaScript. This reference page doesn't have information about the absolute color space that should be used to interpret the RGB value—for example, sRGB, Adobe RGB, DCI-P3, and BT.2020. By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5.

Example (Java):

import com.google.type.Color;

// ...
public static java.awt.Color fromProto(Color protocolor) {
  float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f;
  return new java.awt.Color(
      protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha);
}

public static Color toProto(java.awt.Color color) {
  float red = (float) color.getRed();
  float green = (float) color.getGreen();
  float blue = (float) color.getBlue();
  float denominator = 255.0f;
  Color.Builder resultBuilder = Color
      .newBuilder()
      .setRed(red / denominator)
      .setGreen(green / denominator)
      .setBlue(blue / denominator);
  int alpha = color.getAlpha();
  if (alpha != 255) {
    resultBuilder.setAlpha(
        FloatValue
            .newBuilder()
            .setValue(((float) alpha) / denominator)
            .build());
  }
  return resultBuilder.build();
}
// ...

Example (iOS / Obj-C):

// ...
static UIColor* fromProto(Color* protocolor) {
  float red = [protocolor red];
  float green = [protocolor green];
  float blue = [protocolor blue];
  FloatValue* alpha_wrapper = [protocolor alpha];
  float alpha = 1.0;
  if (alpha_wrapper != nil) {
    alpha = [alpha_wrapper value];
  }
  return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}

static Color* toProto(UIColor* color) {
  CGFloat red, green, blue, alpha;
  if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
    return nil;
  }
  Color* result = [[Color alloc] init];
  [result setRed:red];
  [result setGreen:green];
  [result setBlue:blue];
  if (alpha <= 0.9999) {
    [result setAlpha:floatWrapperWithValue(alpha)];
  }
  [result autorelease];
  return result;
}
// ...

Example (JavaScript):

// ...
var protoToCssColor = function(rgb_color) {
  var redFrac = rgb_color.red || 0.0;
  var greenFrac = rgb_color.green || 0.0;
  var blueFrac = rgb_color.blue || 0.0;
  var red = Math.floor(redFrac * 255);
  var green = Math.floor(greenFrac * 255);
  var blue = Math.floor(blueFrac * 255);
  if (!('alpha' in rgb_color)) {
    return rgbToCssColor(red, green, blue);
  }
  var alphaFrac = rgb_color.alpha.value || 0.0;
  var rgbParams = [red, green, blue].join(',');
  return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
};

var rgbToCssColor = function(red, green, blue) {
  var rgbNumber = new Number((red << 16) | (green << 8) | blue);
  var hexString = rgbNumber.toString(16);
  var missingZeros = 6 - hexString.length;
  var resultBuilder = ['#'];
  for (var i = 0; i < missingZeros; i++) {
    resultBuilder.push('0');
  }
  resultBuilder.push(hexString);
  return resultBuilder.join('');
};
// ...
Fields
alpha

number (float format)

The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: pixel color = alpha * (this color) + (1.0 - alpha) * (background color) This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is rendered as a solid color (as if the alpha value had been explicitly given a value of 1.0).

blue

number (float format)

The amount of blue in the color as a value in the interval [0, 1].

green

number (float format)

The amount of green in the color as a value in the interval [0, 1].

red

number (float format)

The amount of red in the color as a value in the interval [0, 1].

GoogleTypeDate

Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values. * A month and day, with a zero year (for example, an anniversary). * A year on its own, with a zero month and a zero day. * A year and month, with a zero day (for example, a credit card expiration date). Related types: * google.type.TimeOfDay * google.type.DateTime * google.protobuf.Timestamp
Fields
day

integer (int32 format)

Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.

month

integer (int32 format)

Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.

year

integer (int32 format)

Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
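The partial-date rules above can be sketched as a validity check; this helper is hypothetical, not part of any API, and encodes the allowed partial forms listed in the type description.

```python
import calendar

# Hypothetical validity check mirroring the partial-date rules above: zero
# components mark the date as partial, and the allowed partial forms are
# month+day (zero year), year alone, and year+month.
def is_valid_google_date(year, month, day):
    if not (0 <= year <= 9999 and 0 <= month <= 12 and 0 <= day <= 31):
        return False
    if month == 0 and day != 0:
        return False  # a day is only meaningful together with a month
    if day != 0:
        # With a zero year (e.g. an anniversary), allow Feb 29 by checking
        # against a leap year.
        ref_year = year if year != 0 else 2000
        return day <= calendar.monthrange(ref_year, month)[1]
    return True
```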

GoogleTypeExpr

Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
Fields
description

string

Optional. Description of the expression. This is longer text that describes the expression, e.g. when it is hovered over in a UI.

expression

string

Textual representation of an expression in Common Expression Language syntax.

location

string

Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.

title

string

Optional. Title for the expression, i.e. a short string describing its purpose. This can be used, for example, in UIs that allow users to enter the expression.

GoogleTypeInterval

Represents a time interval, encoded as a Timestamp start (inclusive) and a Timestamp end (exclusive). The start must be less than or equal to the end. When the start equals the end, the interval is empty (matches no time). When both start and end are unspecified, the interval matches any time.
Fields
endTime

string (Timestamp format)

Optional. Exclusive end of the interval. If specified, a Timestamp matching this interval will have to be before the end.

startTime

string (Timestamp format)

Optional. Inclusive start of the interval. If specified, a Timestamp matching this interval will have to be the same or after the start.
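The inclusive-start / exclusive-end semantics above can be sketched as a containment check; the helper and the sample timestamps are hypothetical.

```python
from datetime import datetime, timezone

# Hypothetical containment check mirroring the semantics above: start is
# inclusive, end is exclusive, and an unset bound matches any time.
def interval_contains(start, end, t):
    if start is not None and t < start:
        return False
    if end is not None and t >= end:
        return False
    return True

start = datetime(2024, 1, 1, tzinfo=timezone.utc)
end = datetime(2024, 2, 1, tzinfo=timezone.utc)
```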

GoogleTypeMoney

Represents an amount of money with its currency type.
Fields
currencyCode

string

The three-letter currency code defined in ISO 4217.

nanos

integer (int32 format)

Number of nano (10^-9) units of the amount. The value must be between -999,999,999 and +999,999,999 inclusive. If units is positive, nanos must be positive or zero. If units is zero, nanos can be positive, zero, or negative. If units is negative, nanos must be negative or zero. For example $-1.75 is represented as units=-1 and nanos=-750,000,000.

units

string (int64 format)

The whole units of the amount. For example if currencyCode is "USD", then 1 unit is one US dollar.
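The units/nanos sign rules above, including the $-1.75 example from the nanos description, can be sketched with small conversion helpers; these are hypothetical, not part of any client library.

```python
from decimal import Decimal

# Hypothetical helpers converting between a decimal amount and the
# units/nanos pair above; note how the signs of units and nanos agree.
def to_money(amount, currency="USD"):
    d = Decimal(str(amount))
    units = int(d)  # int() truncates toward zero, keeping the sign rule
    nanos = int((d - units) * 1_000_000_000)
    return {"currencyCode": currency, "units": str(units), "nanos": nanos}

def from_money(money):
    return Decimal(money["units"]) + Decimal(money["nanos"]) / 1_000_000_000

# The doc's example: $-1.75 is units=-1, nanos=-750,000,000.
m = to_money(-1.75)
```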