Package google.cloud.aiplatform.v1beta1

DatasetService

The service that manages Vertex AI Datasets and their child resources.

CreateDataset

rpc CreateDataset(CreateDatasetRequest) returns (Operation)

Creates a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.datasets.create

For more information, see the IAM documentation.
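
Most mutating RPCs in this package, including CreateDataset, return a long-running Operation rather than the resource itself. The following is a minimal sketch of that call pattern using the auto-generated Python client (google-cloud-aiplatform); the project, region, display name, and schema URI are placeholder assumptions, not values from this reference:

    from google.cloud import aiplatform_v1beta1 as aip

    # Vertex AI uses regional endpoints; us-central1 is an example region.
    client = aip.DatasetServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    dataset = aip.Dataset(
        display_name="my-dataset",  # placeholder
        # One of the published Vertex AI dataset metadata schemas.
        metadata_schema_uri=(
            "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
        ),
        metadata={},  # schema-specific metadata; left empty in this sketch
    )

    # The RPC returns a long-running Operation; result() blocks until done.
    operation = client.create_dataset(
        parent="projects/my-project/locations/us-central1", dataset=dataset
    )
    print(operation.result().name)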

CreateDatasetVersion

rpc CreateDatasetVersion(CreateDatasetVersionRequest) returns (Operation)

Creates a version from a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.datasetVersions.create

For more information, see the IAM documentation.

DeleteDataset

rpc DeleteDataset(DeleteDatasetRequest) returns (Operation)

Deletes a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.datasets.delete

For more information, see the IAM documentation.

DeleteDatasetVersion

rpc DeleteDatasetVersion(DeleteDatasetVersionRequest) returns (Operation)

Deletes a Dataset version.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.datasetVersions.delete

For more information, see the IAM documentation.

DeleteSavedQuery

rpc DeleteSavedQuery(DeleteSavedQueryRequest) returns (Operation)

Deletes a SavedQuery.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.datasets.delete

For more information, see the IAM documentation.

ExportData

rpc ExportData(ExportDataRequest) returns (Operation)

Exports data from a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.datasets.export

For more information, see the IAM documentation.

GetAnnotationSpec

rpc GetAnnotationSpec(GetAnnotationSpecRequest) returns (AnnotationSpec)

Gets an AnnotationSpec.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.annotationSpecs.get

For more information, see the IAM documentation.

GetDataset

rpc GetDataset(GetDatasetRequest) returns (Dataset)

Gets a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.datasets.get

For more information, see the IAM documentation.

GetDatasetVersion

rpc GetDatasetVersion(GetDatasetVersionRequest) returns (DatasetVersion)

Gets a Dataset version.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.datasetVersions.get

For more information, see the IAM documentation.

ImportData

rpc ImportData(ImportDataRequest) returns (Operation)

Imports data into a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.datasets.import

For more information, see the IAM documentation.

ListAnnotations

rpc ListAnnotations(ListAnnotationsRequest) returns (ListAnnotationsResponse)

Lists Annotations that belong to a DataItem. This RPC is only available in InternalDatasetService. It is only used for exporting conversation data to CCAI Insights.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.annotations.list

For more information, see the IAM documentation.

ListDataItems

rpc ListDataItems(ListDataItemsRequest) returns (ListDataItemsResponse)

Lists DataItems in a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.dataItems.list

For more information, see the IAM documentation.

ListDatasetVersions

rpc ListDatasetVersions(ListDatasetVersionsRequest) returns (ListDatasetVersionsResponse)

Lists DatasetVersions in a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.datasetVersions.list

For more information, see the IAM documentation.

ListDatasets

rpc ListDatasets(ListDatasetsRequest) returns (ListDatasetsResponse)

Lists Datasets in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.datasets.list

For more information, see the IAM documentation.
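
The List RPCs in this package are paginated. In the Python client, the call returns a pager that fetches successive pages transparently; a short sketch (the project and region are placeholders):

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.DatasetServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    # The pager iterates across all pages of results transparently.
    for dataset in client.list_datasets(
        parent="projects/my-project/locations/us-central1"
    ):
        print(dataset.name, dataset.display_name)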

ListSavedQueries

rpc ListSavedQueries(ListSavedQueriesRequest) returns (ListSavedQueriesResponse)

Lists SavedQueries in a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.datasets.get

For more information, see the IAM documentation.

RestoreDatasetVersion

rpc RestoreDatasetVersion(RestoreDatasetVersionRequest) returns (Operation)

Restores a dataset version.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.datasetVersions.restore

For more information, see the IAM documentation.

SearchDataItems

rpc SearchDataItems(SearchDataItemsRequest) returns (SearchDataItemsResponse)

Searches DataItems in a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the dataset resource:

  • aiplatform.dataItems.list

For more information, see the IAM documentation.

UpdateDataset

rpc UpdateDataset(UpdateDatasetRequest) returns (Dataset)

Updates a Dataset.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.datasets.update

For more information, see the IAM documentation.

UpdateDatasetVersion

rpc UpdateDatasetVersion(UpdateDatasetVersionRequest) returns (DatasetVersion)

Updates a DatasetVersion.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeploymentResourcePoolService

A service that manages the DeploymentResourcePool resource.

CreateDeploymentResourcePool

rpc CreateDeploymentResourcePool(CreateDeploymentResourcePoolRequest) returns (Operation)

Creates a DeploymentResourcePool.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.deploymentResourcePools.create

For more information, see the IAM documentation.
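
A DeploymentResourcePool describes machine resources that several DeployedModels can share. A sketch of creating one with the Python client; the machine type, replica counts, and IDs are placeholder assumptions:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.DeploymentResourcePoolServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    pool = aip.DeploymentResourcePool(
        dedicated_resources=aip.DedicatedResources(
            machine_spec=aip.MachineSpec(machine_type="n1-standard-4"),
            min_replica_count=1,
            max_replica_count=2,
        )
    )

    operation = client.create_deployment_resource_pool(
        parent="projects/my-project/locations/us-central1",
        deployment_resource_pool=pool,
        deployment_resource_pool_id="my-pool",  # placeholder ID
    )
    print(operation.result().name)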

DeleteDeploymentResourcePool

rpc DeleteDeploymentResourcePool(DeleteDeploymentResourcePoolRequest) returns (Operation)

Deletes a DeploymentResourcePool.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.deploymentResourcePools.delete

For more information, see the IAM documentation.

GetDeploymentResourcePool

rpc GetDeploymentResourcePool(GetDeploymentResourcePoolRequest) returns (DeploymentResourcePool)

Gets a DeploymentResourcePool.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.deploymentResourcePools.get

For more information, see the IAM documentation.

ListDeploymentResourcePools

rpc ListDeploymentResourcePools(ListDeploymentResourcePoolsRequest) returns (ListDeploymentResourcePoolsResponse)

Lists DeploymentResourcePools in a location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.deploymentResourcePools.list

For more information, see the IAM documentation.

QueryDeployedModels

rpc QueryDeployedModels(QueryDeployedModelsRequest) returns (QueryDeployedModelsResponse)

Lists DeployedModels that have been deployed on this DeploymentResourcePool.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the deploymentResourcePool resource:

  • aiplatform.deploymentResourcePools.queryDeployedModels

For more information, see the IAM documentation.

UpdateDeploymentResourcePool

rpc UpdateDeploymentResourcePool(UpdateDeploymentResourcePoolRequest) returns (Operation)

Updates a DeploymentResourcePool.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.deploymentResourcePools.update

For more information, see the IAM documentation.

EndpointService

A service for managing Vertex AI's Endpoints.

CreateEndpoint

rpc CreateEndpoint(CreateEndpointRequest) returns (Operation)

Creates an Endpoint.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.endpoints.create

For more information, see the IAM documentation.

DeleteEndpoint

rpc DeleteEndpoint(DeleteEndpointRequest) returns (Operation)

Deletes an Endpoint.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.endpoints.delete

For more information, see the IAM documentation.

DeployModel

rpc DeployModel(DeployModelRequest) returns (Operation)

Deploys a Model into this Endpoint, creating a DeployedModel within it.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.deploy

For more information, see the IAM documentation.
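
A sketch of the deploy call, assuming a Model has already been uploaded; the resource names and machine settings are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.EndpointServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    deployed_model = aip.DeployedModel(
        model="projects/my-project/locations/us-central1/models/123",  # placeholder
        display_name="my-deployment",
        dedicated_resources=aip.DedicatedResources(
            machine_spec=aip.MachineSpec(machine_type="n1-standard-4"),
            min_replica_count=1,
        ),
    )

    operation = client.deploy_model(
        endpoint="projects/my-project/locations/us-central1/endpoints/456",
        deployed_model=deployed_model,
        # The key "0" refers to the DeployedModel created by this request.
        traffic_split={"0": 100},
    )
    print(operation.result())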

GetEndpoint

rpc GetEndpoint(GetEndpointRequest) returns (Endpoint)

Gets an Endpoint.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.endpoints.get

For more information, see the IAM documentation.

ListEndpoints

rpc ListEndpoints(ListEndpointsRequest) returns (ListEndpointsResponse)

Lists Endpoints in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.endpoints.list

For more information, see the IAM documentation.

MutateDeployedModel

rpc MutateDeployedModel(MutateDeployedModelRequest) returns (Operation)

Updates an existing deployed model. Updatable fields include min_replica_count, max_replica_count, autoscaling_metric_specs, disable_container_logging (v1 only), and enable_container_logging (v1beta1 only).

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.deploy

For more information, see the IAM documentation.
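
A sketch of mutating the replica counts of an existing DeployedModel; the deployed model ID and the exact update-mask paths are assumptions inferred from the field list above:

    from google.cloud import aiplatform_v1beta1 as aip
    from google.protobuf import field_mask_pb2

    client = aip.EndpointServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    # Identify the DeployedModel by its ID and supply only the new values.
    deployed_model = aip.DeployedModel(
        id="1234567890",  # placeholder DeployedModel ID
        dedicated_resources=aip.DedicatedResources(
            min_replica_count=2,
            max_replica_count=5,
        ),
    )

    operation = client.mutate_deployed_model(
        endpoint="projects/my-project/locations/us-central1/endpoints/456",
        deployed_model=deployed_model,
        update_mask=field_mask_pb2.FieldMask(
            paths=[
                "dedicated_resources.min_replica_count",
                "dedicated_resources.max_replica_count",
            ]
        ),
    )
    print(operation.result())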

UndeployModel

rpc UndeployModel(UndeployModelRequest) returns (Operation)

Undeploys a Model from an Endpoint, removing a DeployedModel from it, and freeing all resources it's using.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.undeploy

For more information, see the IAM documentation.

UpdateEndpoint

rpc UpdateEndpoint(UpdateEndpointRequest) returns (Endpoint)

Updates an Endpoint.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.endpoints.update

For more information, see the IAM documentation.

UpdateEndpointLongRunning

rpc UpdateEndpointLongRunning(UpdateEndpointLongRunningRequest) returns (Operation)

Updates an Endpoint with a long running operation.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.endpoints.update

For more information, see the IAM documentation.

EvaluationService

Vertex AI Online Evaluation Service.

EvaluateInstances

rpc EvaluateInstances(EvaluateInstancesRequest) returns (EvaluateInstancesResponse)

Evaluates instances based on a given metric.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the location resource:

  • aiplatform.locations.evaluateInstances

For more information, see the IAM documentation.
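
A sketch of a synchronous evaluation request using the exact-match metric; the request shape follows the v1beta1 message names, and the instance values are invented:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.EvaluationServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    response = client.evaluate_instances(
        request=aip.EvaluateInstancesRequest(
            location="projects/my-project/locations/us-central1",
            exact_match_input=aip.ExactMatchInput(
                metric_spec=aip.ExactMatchSpec(),
                instances=[
                    aip.ExactMatchInstance(prediction="4", reference="4"),
                ],
            ),
        )
    )
    print(response)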

ExtensionExecutionService

A service for Extension execution.

ExecuteExtension

rpc ExecuteExtension(ExecuteExtensionRequest) returns (ExecuteExtensionResponse)

Executes the request against a given extension.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.extensions.execute

For more information, see the IAM documentation.
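
A sketch of executing one operation of a registered extension; the extension name, operation ID, and parameters are placeholders (the operation ID must match one declared in the extension's manifest):

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.ExtensionExecutionServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    response = client.execute_extension(
        request=aip.ExecuteExtensionRequest(
            name="projects/my-project/locations/us-central1/extensions/789",
            operation_id="search",                # placeholder operation ID
            operation_params={"query": "hello"},  # placeholder parameters
        )
    )
    print(response.content)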

QueryExtension

rpc QueryExtension(QueryExtensionRequest) returns (QueryExtensionResponse)

Queries an extension with a default controller.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.extensions.execute

For more information, see the IAM documentation.

ExtensionRegistryService

A service for managing Vertex AI's Extension registry.

DeleteExtension

rpc DeleteExtension(DeleteExtensionRequest) returns (Operation)

Deletes an Extension.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.extensions.delete

For more information, see the IAM documentation.

GetExtension

rpc GetExtension(GetExtensionRequest) returns (Extension)

Gets an Extension.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.extensions.get

For more information, see the IAM documentation.

ImportExtension

rpc ImportExtension(ImportExtensionRequest) returns (Operation)

Imports an Extension.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.extensions.import

For more information, see the IAM documentation.
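
A hedged sketch of registering an extension from an OpenAPI spec in Cloud Storage; the bucket, names, and auth choice are assumptions, not values from this reference:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.ExtensionRegistryServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    extension = aip.Extension(
        display_name="my-extension",  # placeholder
        manifest=aip.ExtensionManifest(
            name="my_extension",
            description="Calls my API.",
            api_spec=aip.ExtensionManifest.ApiSpec(
                open_api_gcs_uri="gs://my-bucket/openapi.yaml"  # placeholder
            ),
            auth_config=aip.AuthConfig(
                auth_type=aip.AuthType.GOOGLE_SERVICE_ACCOUNT_AUTH,
                # An empty config falls back to the default service agent.
                google_service_account_config=(
                    aip.AuthConfig.GoogleServiceAccountConfig()
                ),
            ),
        ),
    )

    operation = client.import_extension(
        parent="projects/my-project/locations/us-central1", extension=extension
    )
    print(operation.result().name)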

ListExtensions

rpc ListExtensions(ListExtensionsRequest) returns (ListExtensionsResponse)

Lists Extensions in a location.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.extensions.list

For more information, see the IAM documentation.

UpdateExtension

rpc UpdateExtension(UpdateExtensionRequest) returns (Extension)

Updates an Extension.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.extensions.update

For more information, see the IAM documentation.

FeatureOnlineStoreAdminService

The service that handles CRUD and List operations for FeatureOnlineStore resources.

CreateFeatureOnlineStore

rpc CreateFeatureOnlineStore(CreateFeatureOnlineStoreRequest) returns (Operation)

Creates a new FeatureOnlineStore in a given project and location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.featureOnlineStores.create

For more information, see the IAM documentation.
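
A sketch of creating an Optimized online store (a Bigtable-backed store would set the bigtable field instead); the project and store ID are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeatureOnlineStoreAdminServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    store = aip.FeatureOnlineStore(optimized=aip.FeatureOnlineStore.Optimized())

    operation = client.create_feature_online_store(
        parent="projects/my-project/locations/us-central1",
        feature_online_store=store,
        feature_online_store_id="my_store",  # placeholder ID
    )
    print(operation.result().name)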

CreateFeatureView

rpc CreateFeatureView(CreateFeatureViewRequest) returns (Operation)

Creates a new FeatureView in a given FeatureOnlineStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.featureViews.create

For more information, see the IAM documentation.

DeleteFeatureOnlineStore

rpc DeleteFeatureOnlineStore(DeleteFeatureOnlineStoreRequest) returns (Operation)

Deletes a single FeatureOnlineStore. The FeatureOnlineStore must not contain any FeatureViews.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featureOnlineStores.delete

For more information, see the IAM documentation.

DeleteFeatureView

rpc DeleteFeatureView(DeleteFeatureViewRequest) returns (Operation)

Deletes a single FeatureView.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featureViews.delete

For more information, see the IAM documentation.

GetFeatureOnlineStore

rpc GetFeatureOnlineStore(GetFeatureOnlineStoreRequest) returns (FeatureOnlineStore)

Gets details of a single FeatureOnlineStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featureOnlineStores.get

For more information, see the IAM documentation.

GetFeatureView

rpc GetFeatureView(GetFeatureViewRequest) returns (FeatureView)

Gets details of a single FeatureView.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featureViews.get

For more information, see the IAM documentation.

GetFeatureViewSync

rpc GetFeatureViewSync(GetFeatureViewSyncRequest) returns (FeatureViewSync)

Gets details of a single FeatureViewSync.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featureViewSyncs.get

For more information, see the IAM documentation.

ListFeatureOnlineStores

rpc ListFeatureOnlineStores(ListFeatureOnlineStoresRequest) returns (ListFeatureOnlineStoresResponse)

Lists FeatureOnlineStores in a given project and location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.featureOnlineStores.list

For more information, see the IAM documentation.

ListFeatureViewSyncs

rpc ListFeatureViewSyncs(ListFeatureViewSyncsRequest) returns (ListFeatureViewSyncsResponse)

Lists FeatureViewSyncs in a given FeatureView.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.featureViewSyncs.list

For more information, see the IAM documentation.

ListFeatureViews

rpc ListFeatureViews(ListFeatureViewsRequest) returns (ListFeatureViewsResponse)

Lists FeatureViews in a given FeatureOnlineStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.featureViews.list

For more information, see the IAM documentation.

SyncFeatureView

rpc SyncFeatureView(SyncFeatureViewRequest) returns (SyncFeatureViewResponse)

Triggers on-demand sync for the FeatureView.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the featureView resource:

  • aiplatform.featureViews.sync

For more information, see the IAM documentation.

UpdateFeatureOnlineStore

rpc UpdateFeatureOnlineStore(UpdateFeatureOnlineStoreRequest) returns (Operation)

Updates the parameters of a single FeatureOnlineStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featureOnlineStores.update

For more information, see the IAM documentation.

UpdateFeatureView

rpc UpdateFeatureView(UpdateFeatureViewRequest) returns (Operation)

Updates the parameters of a single FeatureView.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featureViews.update

For more information, see the IAM documentation.

FeatureOnlineStoreService

A service for fetching feature values from the online store.

FetchFeatureValues

rpc FetchFeatureValues(FetchFeatureValuesRequest) returns (FetchFeatureValuesResponse)

Fetches feature values under a FeatureView.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the featureView resource:

  • aiplatform.featureViews.fetchFeatureValues

For more information, see the IAM documentation.
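
A sketch of a point lookup by entity key; the FeatureView resource name and key are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeatureOnlineStoreServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    response = client.fetch_feature_values(
        request=aip.FetchFeatureValuesRequest(
            feature_view=(
                "projects/my-project/locations/us-central1/"
                "featureOnlineStores/my_store/featureViews/my_view"
            ),
            data_key=aip.FeatureViewDataKey(key="entity_1"),  # placeholder key
            data_format=aip.FeatureViewDataFormat.KEY_VALUE,
        )
    )
    print(response)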

SearchNearestEntities

rpc SearchNearestEntities(SearchNearestEntitiesRequest) returns (SearchNearestEntitiesResponse)

Searches the nearest entities under a FeatureView. Search only works for an indexable FeatureView; if a FeatureView isn't indexable, an INVALID_ARGUMENT error is returned.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the featureView resource:

  • aiplatform.featureViews.searchNearestEntities

For more information, see the IAM documentation.

StreamingFetchFeatureValues

rpc StreamingFetchFeatureValues(StreamingFetchFeatureValuesRequest) returns (StreamingFetchFeatureValuesResponse)

Bidirectional streaming RPC to fetch feature values under a FeatureView. Requests may not have a one-to-one mapping to responses and responses may be returned out-of-order to reduce latency.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the featureView resource:

  • aiplatform.featureViews.fetchFeatureValues

For more information, see the IAM documentation.
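
Because this is a bidirectional streaming RPC, the Python client takes an iterator of requests and yields responses as they arrive; as noted above, responses need not map one-to-one onto requests. A sketch with placeholder names:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeatureOnlineStoreServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    view = (
        "projects/my-project/locations/us-central1/"
        "featureOnlineStores/my_store/featureViews/my_view"
    )

    def requests():
        # Each request may carry several data keys.
        yield aip.StreamingFetchFeatureValuesRequest(
            feature_view=view,
            data_keys=[
                aip.FeatureViewDataKey(key="entity_1"),
                aip.FeatureViewDataKey(key="entity_2"),
            ],
        )

    for response in client.streaming_fetch_feature_values(requests=requests()):
        print(response)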

FeatureRegistryService

The service that handles CRUD and List operations for FeatureRegistry resources.

BatchCreateFeatures

rpc BatchCreateFeatures(BatchCreateFeaturesRequest) returns (Operation)

Creates a batch of Features in a given FeatureGroup.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.features.create

For more information, see the IAM documentation.

CreateFeature

rpc CreateFeature(CreateFeatureRequest) returns (Operation)

Creates a new Feature in a given FeatureGroup.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.features.create

For more information, see the IAM documentation.

CreateFeatureGroup

rpc CreateFeatureGroup(CreateFeatureGroupRequest) returns (Operation)

Creates a new FeatureGroup in a given project and location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.featureGroups.create

For more information, see the IAM documentation.
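
A sketch of creating a FeatureGroup backed by a BigQuery table; the table URI and column names are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeatureRegistryServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    group = aip.FeatureGroup(
        big_query=aip.FeatureGroup.BigQuery(
            big_query_source=aip.BigQuerySource(
                input_uri="bq://my-project.my_dataset.my_table"  # placeholder
            ),
            entity_id_columns=["entity_id"],
        )
    )

    operation = client.create_feature_group(
        parent="projects/my-project/locations/us-central1",
        feature_group=group,
        feature_group_id="my_group",  # placeholder ID
    )
    print(operation.result().name)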

CreateFeatureMonitor

rpc CreateFeatureMonitor(CreateFeatureMonitorRequest) returns (Operation)

Creates a new FeatureMonitor in a given project, location and FeatureGroup.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateFeatureMonitorJob

rpc CreateFeatureMonitorJob(CreateFeatureMonitorJobRequest) returns (FeatureMonitorJob)

Creates a new feature monitor job.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteFeature

rpc DeleteFeature(DeleteFeatureRequest) returns (Operation)

Deletes a single Feature.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.features.delete

For more information, see the IAM documentation.

DeleteFeatureGroup

rpc DeleteFeatureGroup(DeleteFeatureGroupRequest) returns (Operation)

Deletes a single FeatureGroup.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featureGroups.delete

For more information, see the IAM documentation.

DeleteFeatureMonitor

rpc DeleteFeatureMonitor(DeleteFeatureMonitorRequest) returns (Operation)

Deletes a single FeatureMonitor.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetFeature

rpc GetFeature(GetFeatureRequest) returns (Feature)

Gets details of a single Feature.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.features.get

For more information, see the IAM documentation.

GetFeatureGroup

rpc GetFeatureGroup(GetFeatureGroupRequest) returns (FeatureGroup)

Gets details of a single FeatureGroup.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featureGroups.get

For more information, see the IAM documentation.

GetFeatureMonitor

rpc GetFeatureMonitor(GetFeatureMonitorRequest) returns (FeatureMonitor)

Gets details of a single FeatureMonitor.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetFeatureMonitorJob

rpc GetFeatureMonitorJob(GetFeatureMonitorJobRequest) returns (FeatureMonitorJob)

Gets a feature monitor job.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListFeatureGroups

rpc ListFeatureGroups(ListFeatureGroupsRequest) returns (ListFeatureGroupsResponse)

Lists FeatureGroups in a given project and location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.featureGroups.list

For more information, see the IAM documentation.

ListFeatureMonitorJobs

rpc ListFeatureMonitorJobs(ListFeatureMonitorJobsRequest) returns (ListFeatureMonitorJobsResponse)

Lists feature monitor jobs.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListFeatureMonitors

rpc ListFeatureMonitors(ListFeatureMonitorsRequest) returns (ListFeatureMonitorsResponse)

Lists FeatureMonitors in a given project, location and FeatureGroup.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListFeatures

rpc ListFeatures(ListFeaturesRequest) returns (ListFeaturesResponse)

Lists Features in a given FeatureGroup.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.features.list

For more information, see the IAM documentation.

UpdateFeature

rpc UpdateFeature(UpdateFeatureRequest) returns (Operation)

Updates the parameters of a single Feature.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.features.update

For more information, see the IAM documentation.

UpdateFeatureGroup

rpc UpdateFeatureGroup(UpdateFeatureGroupRequest) returns (Operation)

Updates the parameters of a single FeatureGroup.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featureGroups.update

For more information, see the IAM documentation.

FeaturestoreOnlineServingService

A service for serving online feature values.

ReadFeatureValues

rpc ReadFeatureValues(ReadFeatureValuesRequest) returns (ReadFeatureValuesResponse)

Reads Feature values of a specific entity of an EntityType. For reading feature values of multiple entities of an EntityType, please use StreamingReadFeatureValues.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the entityType resource:

  • aiplatform.entityTypes.readFeatureValues

For more information, see the IAM documentation.
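
A sketch of a single-entity read, selecting features by ID; the resource names, entity ID, and feature IDs are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeaturestoreOnlineServingServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    response = client.read_feature_values(
        request=aip.ReadFeatureValuesRequest(
            entity_type=(
                "projects/my-project/locations/us-central1/"
                "featurestores/my_store/entityTypes/users"
            ),
            entity_id="user_1",  # placeholder entity
            feature_selector=aip.FeatureSelector(
                id_matcher=aip.IdMatcher(ids=["age", "country"])
            ),
        )
    )
    print(response)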

StreamingReadFeatureValues

rpc StreamingReadFeatureValues(StreamingReadFeatureValuesRequest) returns (ReadFeatureValuesResponse)

Reads Feature values for multiple entities. Depending on their size, data for different entities may be broken up across multiple responses.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the entityType resource:

  • aiplatform.entityTypes.streamingReadFeatureValues

For more information, see the IAM documentation.

WriteFeatureValues

rpc WriteFeatureValues(WriteFeatureValuesRequest) returns (WriteFeatureValuesResponse)

Writes Feature values of one or more entities of an EntityType.

The Feature values are merged into existing entities, if any. The Feature values to be written must have a timestamp within the online storage retention period.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the entityType resource:

  • aiplatform.entityTypes.writeFeatureValues

For more information, see the IAM documentation.
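
A sketch of writing feature values for one entity; the entity, features, and values are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeaturestoreOnlineServingServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    payload = aip.WriteFeatureValuesPayload(
        entity_id="user_1",  # placeholder entity
        feature_values={
            "age": aip.FeatureValue(int64_value=42),
            "country": aip.FeatureValue(string_value="DE"),
        },
    )

    response = client.write_feature_values(
        entity_type=(
            "projects/my-project/locations/us-central1/"
            "featurestores/my_store/entityTypes/users"
        ),
        payloads=[payload],
    )
    print(response)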

FeaturestoreService

The service that handles CRUD and List operations for Featurestore resources.

BatchCreateFeatures

rpc BatchCreateFeatures(BatchCreateFeaturesRequest) returns (Operation)

Creates a batch of Features in a given EntityType.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.features.create

For more information, see the IAM documentation.

BatchReadFeatureValues

rpc BatchReadFeatureValues(BatchReadFeatureValuesRequest) returns (Operation)

Batch reads Feature values from a Featurestore.

This API enables batch reading Feature values, where each read instance in the batch may read Feature values of entities from one or more EntityTypes. Point-in-time correctness is guaranteed for Feature values of each read instance as of each instance's read timestamp.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the featurestore resource:

  • aiplatform.featurestores.batchReadFeatureValues

For more information, see the IAM documentation.

CreateEntityType

rpc CreateEntityType(CreateEntityTypeRequest) returns (Operation)

Creates a new EntityType in a given Featurestore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.entityTypes.create

For more information, see the IAM documentation.

CreateFeature

rpc CreateFeature(CreateFeatureRequest) returns (Operation)

Creates a new Feature in a given EntityType.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.features.create

For more information, see the IAM documentation.

CreateFeaturestore

rpc CreateFeaturestore(CreateFeaturestoreRequest) returns (Operation)

Creates a new Featurestore in a given project and location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.featurestores.create

For more information, see the IAM documentation.
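
A sketch of creating a Featurestore with a fixed-size online serving cluster; the node count and store ID are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeaturestoreServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    featurestore = aip.Featurestore(
        online_serving_config=aip.Featurestore.OnlineServingConfig(
            fixed_node_count=1
        )
    )

    operation = client.create_featurestore(
        parent="projects/my-project/locations/us-central1",
        featurestore=featurestore,
        featurestore_id="my_store",  # placeholder ID
    )
    print(operation.result().name)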

DeleteEntityType

rpc DeleteEntityType(DeleteEntityTypeRequest) returns (Operation)

Deletes a single EntityType. The EntityType must not have any Features or force must be set to true for the request to succeed.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.entityTypes.delete

For more information, see the IAM documentation.

DeleteFeature

rpc DeleteFeature(DeleteFeatureRequest) returns (Operation)

Deletes a single Feature.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.features.delete

For more information, see the IAM documentation.

DeleteFeatureValues

rpc DeleteFeatureValues(DeleteFeatureValuesRequest) returns (Operation)

Deletes Feature values from a Featurestore.

The progress of the deletion is tracked by the returned operation. The deleted feature values are guaranteed to be invisible to subsequent read operations after the operation is marked as successfully done.

If a delete feature values operation fails, the feature values returned from reads and exports may be inconsistent. If consistency is required, the caller must retry the same delete request again and wait till the new operation returned is marked as successfully done.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the entityType resource:

  • aiplatform.entityTypes.deleteFeatureValues

For more information, see the IAM documentation.

DeleteFeaturestore

rpc DeleteFeaturestore(DeleteFeaturestoreRequest) returns (Operation)

Deletes a single Featurestore. The Featurestore must not contain any EntityTypes or force must be set to true for the request to succeed.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featurestores.delete

For more information, see the IAM documentation.

ExportFeatureValues

rpc ExportFeatureValues(ExportFeatureValuesRequest) returns (Operation)

Exports Feature values from all the entities of a target EntityType.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the entityType resource:

  • aiplatform.entityTypes.exportFeatureValues

For more information, see the IAM documentation.

GetEntityType

rpc GetEntityType(GetEntityTypeRequest) returns (EntityType)

Gets details of a single EntityType.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.entityTypes.get

For more information, see the IAM documentation.

GetFeature

rpc GetFeature(GetFeatureRequest) returns (Feature)

Gets details of a single Feature.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.features.get

For more information, see the IAM documentation.

GetFeaturestore

rpc GetFeaturestore(GetFeaturestoreRequest) returns (Featurestore)

Gets details of a single Featurestore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featurestores.get

For more information, see the IAM documentation.

ImportFeatureValues

rpc ImportFeatureValues(ImportFeatureValuesRequest) returns (Operation)

Imports Feature values into the Featurestore from a source storage.

The progress of the import is tracked by the returned operation. The imported features are guaranteed to be visible to subsequent read operations after the operation is marked as successfully done.

If an import operation fails, the Feature values returned from reads and exports may be inconsistent. If consistency is required, the caller must retry the same import request again and wait till the new operation returned is marked as successfully done.

There are also scenarios where the caller can cause inconsistency.

  • Source data for import contains multiple distinct Feature values for the same entity ID and timestamp.
  • Source is modified during an import. This includes adding, updating, or removing source data and/or metadata. Examples of updating metadata include but are not limited to changing storage location, storage class, or retention policy.
  • The online serving cluster is under-provisioned.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the entityType resource:

  • aiplatform.entityTypes.importFeatureValues

For more information, see the IAM documentation.
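
A sketch of importing values from BigQuery for two features of an EntityType; the table, columns, and feature IDs are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeaturestoreServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    request = aip.ImportFeatureValuesRequest(
        entity_type=(
            "projects/my-project/locations/us-central1/"
            "featurestores/my_store/entityTypes/users"
        ),
        bigquery_source=aip.BigQuerySource(
            input_uri="bq://my-project.my_dataset.user_features"  # placeholder
        ),
        entity_id_field="entity_id",
        feature_time_field="feature_timestamp",
        feature_specs=[
            aip.ImportFeatureValuesRequest.FeatureSpec(id="age"),
            aip.ImportFeatureValuesRequest.FeatureSpec(id="country"),
        ],
    )

    operation = client.import_feature_values(request=request)
    print(operation.result())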

ListEntityTypes

rpc ListEntityTypes(ListEntityTypesRequest) returns (ListEntityTypesResponse)

Lists EntityTypes in a given Featurestore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.entityTypes.list

For more information, see the IAM documentation.

ListFeatures

rpc ListFeatures(ListFeaturesRequest) returns (ListFeaturesResponse)

Lists Features in a given EntityType.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.features.list

For more information, see the IAM documentation.

ListFeaturestores

rpc ListFeaturestores(ListFeaturestoresRequest) returns (ListFeaturestoresResponse)

Lists Featurestores in a given project and location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.featurestores.list

For more information, see the IAM documentation.

SearchFeatures

rpc SearchFeatures(SearchFeaturesRequest) returns (SearchFeaturesResponse)

Searches Features matching a query in a given project.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the location resource:

  • aiplatform.features.list

For more information, see the IAM documentation.

UpdateEntityType

rpc UpdateEntityType(UpdateEntityTypeRequest) returns (EntityType)

Updates the parameters of a single EntityType.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.entityTypes.update

For more information, see the IAM documentation.

UpdateFeature

rpc UpdateFeature(UpdateFeatureRequest) returns (Feature)

Updates the parameters of a single Feature.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.features.update

For more information, see the IAM documentation.

UpdateFeaturestore

rpc UpdateFeaturestore(UpdateFeaturestoreRequest) returns (Operation)

Updates the parameters of a single Featurestore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.featurestores.update

For more information, see the IAM documentation.

GenAiCacheService

Service for managing Vertex AI's CachedContent resource.

CreateCachedContent

rpc CreateCachedContent(CreateCachedContentRequest) returns (CachedContent)

Creates cached content. This call initializes the cached content in data storage, and users pay for the cached data storage.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.cachedContents.create

For more information, see the IAM documentation.
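
A sketch of caching a long prompt prefix against a publisher model for one hour; the model version and content are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip
    from google.protobuf import duration_pb2

    client = aip.GenAiCacheServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    cached_content = aip.CachedContent(
        model=(
            "projects/my-project/locations/us-central1/"
            "publishers/google/models/gemini-1.5-pro-002"  # placeholder model
        ),
        contents=[
            aip.Content(
                role="user",
                parts=[aip.Part(text="A long document to cache...")],
            )
        ],
        ttl=duration_pb2.Duration(seconds=3600),  # cache for one hour
    )

    response = client.create_cached_content(
        parent="projects/my-project/locations/us-central1",
        cached_content=cached_content,
    )
    print(response.name)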

DeleteCachedContent

rpc DeleteCachedContent(DeleteCachedContentRequest) returns (Empty)

Deletes cached content.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.cachedContents.delete

For more information, see the IAM documentation.

GetCachedContent

rpc GetCachedContent(GetCachedContentRequest) returns (CachedContent)

Gets cached content configurations.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.cachedContents.get

For more information, see the IAM documentation.

ListCachedContents

rpc ListCachedContents(ListCachedContentsRequest) returns (ListCachedContentsResponse)

Lists cached contents in a project.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.cachedContents.list

For more information, see the IAM documentation.

UpdateCachedContent

rpc UpdateCachedContent(UpdateCachedContentRequest) returns (CachedContent)

Updates cached content configurations.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.cachedContents.update

For more information, see the IAM documentation.

GenAiTuningService

A service for creating and managing GenAI Tuning Jobs.

CancelTuningJob

rpc CancelTuningJob(CancelTuningJobRequest) returns (Empty)

Cancels a TuningJob. Starts asynchronous cancellation on the TuningJob. The server makes a best effort to cancel the job, but success is not guaranteed. Clients can use GenAiTuningService.GetTuningJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. On successful cancellation, the TuningJob is not deleted; instead it becomes a job with a TuningJob.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and TuningJob.state is set to CANCELLED.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tuningJobs.cancel

For more information, see the IAM documentation.

CreateTuningJob

rpc CreateTuningJob(CreateTuningJobRequest) returns (TuningJob)

Creates a TuningJob. A newly created TuningJob is immediately scheduled to run.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tuningJobs.create

For more information, see the IAM documentation.
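
A sketch of starting a supervised tuning job; the base model and dataset URI are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.GenAiTuningServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    tuning_job = aip.TuningJob(
        base_model="gemini-1.5-pro-002",  # placeholder base model
        supervised_tuning_spec=aip.SupervisedTuningSpec(
            training_dataset_uri="gs://my-bucket/train.jsonl"  # placeholder
        ),
        tuned_model_display_name="my-tuned-model",
    )

    # Unlike most mutating RPCs here, this returns the TuningJob directly;
    # the job starts running as soon as it is created.
    job = client.create_tuning_job(
        parent="projects/my-project/locations/us-central1",
        tuning_job=tuning_job,
    )
    print(job.name, job.state)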

GetTuningJob

rpc GetTuningJob(GetTuningJobRequest) returns (TuningJob)

Gets a TuningJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tuningJobs.get

For more information, see the IAM documentation.

ListTuningJobs

rpc ListTuningJobs(ListTuningJobsRequest) returns (ListTuningJobsResponse)

Lists TuningJobs in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tuningJobs.list

For more information, see the IAM documentation.

RebaseTunedModel

rpc RebaseTunedModel(RebaseTunedModelRequest) returns (Operation)

Rebases a TunedModel.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tuningJobs.create

For more information, see the IAM documentation.

IndexEndpointService

A service for managing Vertex AI's IndexEndpoints.

CreateIndexEndpoint

rpc CreateIndexEndpoint(CreateIndexEndpointRequest) returns (Operation)

Creates an IndexEndpoint.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.indexEndpoints.create

For more information, see the IAM documentation.

DeleteIndexEndpoint

rpc DeleteIndexEndpoint(DeleteIndexEndpointRequest) returns (Operation)

Deletes an IndexEndpoint.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.indexEndpoints.delete

For more information, see the IAM documentation.

DeployIndex

rpc DeployIndex(DeployIndexRequest) returns (Operation)

Deploys an Index into this IndexEndpoint, creating a DeployedIndex within it. Only non-empty Indexes can be deployed.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the indexEndpoint resource:

  • aiplatform.indexEndpoints.deploy

For more information, see the IAM documentation.
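
A sketch of deploying an existing (non-empty) Index with autoscaling replicas; the IDs and resource names are placeholders:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.IndexEndpointServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    deployed_index = aip.DeployedIndex(
        id="my_deployed_index",  # placeholder ID, unique within the endpoint
        index="projects/my-project/locations/us-central1/indexes/123",
        automatic_resources=aip.AutomaticResources(
            min_replica_count=1, max_replica_count=2
        ),
    )

    operation = client.deploy_index(
        index_endpoint=(
            "projects/my-project/locations/us-central1/indexEndpoints/456"
        ),
        deployed_index=deployed_index,
    )
    print(operation.result())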

GetIndexEndpoint

rpc GetIndexEndpoint(GetIndexEndpointRequest) returns (IndexEndpoint)

Gets an IndexEndpoint.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.indexEndpoints.get

For more information, see the IAM documentation.

ListIndexEndpoints

rpc ListIndexEndpoints(ListIndexEndpointsRequest) returns (ListIndexEndpointsResponse)

Lists IndexEndpoints in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.indexEndpoints.list

For more information, see the IAM documentation.

MutateDeployedIndex

rpc MutateDeployedIndex(MutateDeployedIndexRequest) returns (Operation)

Updates an existing DeployedIndex under an IndexEndpoint.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the indexEndpoint resource:

  • aiplatform.indexEndpoints.deploy

For more information, see the IAM documentation.

UndeployIndex

rpc UndeployIndex(UndeployIndexRequest) returns (Operation)

Undeploys an Index from an IndexEndpoint, removing a DeployedIndex from it, and freeing all resources it's using.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the indexEndpoint resource:

  • aiplatform.indexEndpoints.undeploy

For more information, see the IAM documentation.

UpdateIndexEndpoint

rpc UpdateIndexEndpoint(UpdateIndexEndpointRequest) returns (IndexEndpoint)

Updates an IndexEndpoint.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.indexEndpoints.update

For more information, see the IAM documentation.

IndexService

A service for creating and managing Vertex AI's Index resources.

CreateIndex

rpc CreateIndex(CreateIndexRequest) returns (Operation)

Creates an Index.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.indexes.create

For more information, see the IAM documentation.

DeleteIndex

rpc DeleteIndex(DeleteIndexRequest) returns (Operation)

Deletes an Index. An Index can only be deleted when all its DeployedIndexes have been undeployed.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.indexes.delete

For more information, see the IAM documentation.

GetIndex

rpc GetIndex(GetIndexRequest) returns (Index)

Gets an Index.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.indexes.get

For more information, see the IAM documentation.

ListIndexes

rpc ListIndexes(ListIndexesRequest) returns (ListIndexesResponse)

Lists Indexes in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.indexes.list

For more information, see the IAM documentation.

RemoveDatapoints

rpc RemoveDatapoints(RemoveDatapointsRequest) returns (RemoveDatapointsResponse)

Removes Datapoints from an Index.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the index resource:

  • aiplatform.indexes.update

For more information, see the IAM documentation.

UpdateIndex

rpc UpdateIndex(UpdateIndexRequest) returns (Operation)

Updates an Index.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.indexes.update

For more information, see the IAM documentation.

UpsertDatapoints

rpc UpsertDatapoints(UpsertDatapointsRequest) returns (UpsertDatapointsResponse)

Adds or updates Datapoints in an Index.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the index resource:

  • aiplatform.indexes.update

For more information, see the IAM documentation.
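
A minimal sketch of pushing datapoints into an Index with the v1beta1 Python client, assuming an Index configured for streaming updates; the datapoint IDs and vector values are placeholders:

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.IndexServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  index_name = "projects/my-project/locations/us-central1/indexes/789"
  client.upsert_datapoints(
      request=aiplatform_v1beta1.UpsertDatapointsRequest(
          index=index_name,
          datapoints=[
              aiplatform_v1beta1.IndexDatapoint(
                  datapoint_id="dp-1",
                  feature_vector=[0.1, 0.2, 0.3],
              )
          ],
      )
  )
  # Removal mirrors the upsert call, keyed by datapoint ID.
  client.remove_datapoints(
      request=aiplatform_v1beta1.RemoveDatapointsRequest(
          index=index_name, datapoint_ids=["dp-1"]
      )
  )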

JobService

A service for creating and managing Vertex AI's jobs.

CancelBatchPredictionJob

rpc CancelBatchPredictionJob(CancelBatchPredictionJobRequest) returns (Empty)

Cancels a BatchPredictionJob.

Starts asynchronous cancellation on the BatchPredictionJob. The server makes a best effort to cancel the job, but success is not guaranteed. Clients can use JobService.GetBatchPredictionJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. On a successful cancellation, the BatchPredictionJob is not deleted; instead its BatchPredictionJob.state is set to CANCELLED. Any files the job has already output are not deleted.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.batchPredictionJobs.cancel

For more information, see the IAM documentation.

CancelCustomJob

rpc CancelCustomJob(CancelCustomJobRequest) returns (Empty)

Cancels a CustomJob. Starts asynchronous cancellation on the CustomJob. The server makes a best effort to cancel the job, but success is not guaranteed. Clients can use JobService.GetCustomJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. On successful cancellation, the CustomJob is not deleted; instead it becomes a job with a CustomJob.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and CustomJob.state is set to CANCELLED.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.customJobs.cancel

For more information, see the IAM documentation.
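
Because cancellation is asynchronous, a caller typically polls the job until it reaches a terminal state. A hedged sketch with the v1beta1 Python client; resource names are placeholders:

  import time

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.JobServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  name = "projects/my-project/locations/us-central1/customJobs/123"
  client.cancel_custom_job(name=name)
  while True:
      job = client.get_custom_job(name=name)
      if job.state in (
          aiplatform_v1beta1.JobState.JOB_STATE_CANCELLED,
          aiplatform_v1beta1.JobState.JOB_STATE_SUCCEEDED,
          aiplatform_v1beta1.JobState.JOB_STATE_FAILED,
      ):
          break
      time.sleep(10)  # the job may still finish despite the cancellation request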

CancelHyperparameterTuningJob

rpc CancelHyperparameterTuningJob(CancelHyperparameterTuningJobRequest) returns (Empty)

Cancels a HyperparameterTuningJob. Starts asynchronous cancellation on the HyperparameterTuningJob. The server makes a best effort to cancel the job, but success is not guaranteed. Clients can use JobService.GetHyperparameterTuningJob or other methods to check whether the cancellation succeeded or whether the job completed despite cancellation. On successful cancellation, the HyperparameterTuningJob is not deleted; instead it becomes a job with a HyperparameterTuningJob.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and HyperparameterTuningJob.state is set to CANCELLED.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.hyperparameterTuningJobs.cancel

For more information, see the IAM documentation.

CreateBatchPredictionJob

rpc CreateBatchPredictionJob(CreateBatchPredictionJobRequest) returns (BatchPredictionJob)

Creates a BatchPredictionJob. Once created, a BatchPredictionJob immediately attempts to start.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.batchPredictionJobs.create

For more information, see the IAM documentation.

CreateCustomJob

rpc CreateCustomJob(CreateCustomJobRequest) returns (CustomJob)

Creates a CustomJob. Once created, a CustomJob immediately attempts to run.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.customJobs.create

For more information, see the IAM documentation.
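
A sketch of the request shape, assuming a single-replica job running a user-supplied training container; the machine type and image URI are placeholders:

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.JobServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  job = client.create_custom_job(
      parent="projects/my-project/locations/us-central1",
      custom_job=aiplatform_v1beta1.CustomJob(
          display_name="example-training-job",
          job_spec=aiplatform_v1beta1.CustomJobSpec(
              worker_pool_specs=[
                  aiplatform_v1beta1.WorkerPoolSpec(
                      machine_spec=aiplatform_v1beta1.MachineSpec(
                          machine_type="n1-standard-4"
                      ),
                      replica_count=1,
                      container_spec=aiplatform_v1beta1.ContainerSpec(
                          image_uri="gcr.io/my-project/trainer:latest"
                      ),
                  )
              ]
          ),
      ),
  )
  print(job.name, job.state)  # the job starts running asynchronously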

CreateHyperparameterTuningJob

rpc CreateHyperparameterTuningJob(CreateHyperparameterTuningJobRequest) returns (HyperparameterTuningJob)

Creates a HyperparameterTuningJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.hyperparameterTuningJobs.create

For more information, see the IAM documentation.

CreateModelDeploymentMonitoringJob

rpc CreateModelDeploymentMonitoringJob(CreateModelDeploymentMonitoringJobRequest) returns (ModelDeploymentMonitoringJob)

Creates a ModelDeploymentMonitoringJob. The job runs periodically on the configured interval.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelDeploymentMonitoringJobs.create

For more information, see the IAM documentation.

DeleteBatchPredictionJob

rpc DeleteBatchPredictionJob(DeleteBatchPredictionJobRequest) returns (Operation)

Deletes a BatchPredictionJob. Can only be called on jobs that already finished.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.batchPredictionJobs.delete

For more information, see the IAM documentation.

DeleteCustomJob

rpc DeleteCustomJob(DeleteCustomJobRequest) returns (Operation)

Deletes a CustomJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.customJobs.delete

For more information, see the IAM documentation.

DeleteHyperparameterTuningJob

rpc DeleteHyperparameterTuningJob(DeleteHyperparameterTuningJobRequest) returns (Operation)

Deletes a HyperparameterTuningJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.hyperparameterTuningJobs.delete

For more information, see the IAM documentation.

DeleteModelDeploymentMonitoringJob

rpc DeleteModelDeploymentMonitoringJob(DeleteModelDeploymentMonitoringJobRequest) returns (Operation)

Deletes a ModelDeploymentMonitoringJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelDeploymentMonitoringJobs.delete

For more information, see the IAM documentation.

GetBatchPredictionJob

rpc GetBatchPredictionJob(GetBatchPredictionJobRequest) returns (BatchPredictionJob)

Gets a BatchPredictionJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.batchPredictionJobs.get

For more information, see the IAM documentation.

GetCustomJob

rpc GetCustomJob(GetCustomJobRequest) returns (CustomJob)

Gets a CustomJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.customJobs.get

For more information, see the IAM documentation.

GetHyperparameterTuningJob

rpc GetHyperparameterTuningJob(GetHyperparameterTuningJobRequest) returns (HyperparameterTuningJob)

Gets a HyperparameterTuningJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.hyperparameterTuningJobs.get

For more information, see the IAM documentation.

GetModelDeploymentMonitoringJob

rpc GetModelDeploymentMonitoringJob(GetModelDeploymentMonitoringJobRequest) returns (ModelDeploymentMonitoringJob)

Gets a ModelDeploymentMonitoringJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelDeploymentMonitoringJobs.get

For more information, see the IAM documentation.

ListBatchPredictionJobs

rpc ListBatchPredictionJobs(ListBatchPredictionJobsRequest) returns (ListBatchPredictionJobsResponse)

Lists BatchPredictionJobs in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.batchPredictionJobs.list

For more information, see the IAM documentation.

ListCustomJobs

rpc ListCustomJobs(ListCustomJobsRequest) returns (ListCustomJobsResponse)

Lists CustomJobs in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.customJobs.list

For more information, see the IAM documentation.

ListHyperparameterTuningJobs

rpc ListHyperparameterTuningJobs(ListHyperparameterTuningJobsRequest) returns (ListHyperparameterTuningJobsResponse)

Lists HyperparameterTuningJobs in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.hyperparameterTuningJobs.list

For more information, see the IAM documentation.

ListModelDeploymentMonitoringJobs

rpc ListModelDeploymentMonitoringJobs(ListModelDeploymentMonitoringJobsRequest) returns (ListModelDeploymentMonitoringJobsResponse)

Lists ModelDeploymentMonitoringJobs in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelDeploymentMonitoringJobs.list

For more information, see the IAM documentation.

PauseModelDeploymentMonitoringJob

rpc PauseModelDeploymentMonitoringJob(PauseModelDeploymentMonitoringJobRequest) returns (Empty)

Pauses a ModelDeploymentMonitoringJob. If the job is running, the server makes a best effort to cancel it, and sets ModelDeploymentMonitoringJob.state to 'PAUSED'.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelDeploymentMonitoringJobs.pause

For more information, see the IAM documentation.

ResumeModelDeploymentMonitoringJob

rpc ResumeModelDeploymentMonitoringJob(ResumeModelDeploymentMonitoringJobRequest) returns (Empty)

Resumes a paused ModelDeploymentMonitoringJob. It resumes running at the next scheduled time. A deleted ModelDeploymentMonitoringJob can't be resumed.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelDeploymentMonitoringJobs.resume

For more information, see the IAM documentation.

SearchModelDeploymentMonitoringStatsAnomalies

rpc SearchModelDeploymentMonitoringStatsAnomalies(SearchModelDeploymentMonitoringStatsAnomaliesRequest) returns (SearchModelDeploymentMonitoringStatsAnomaliesResponse)

Searches Model Monitoring Statistics generated within a given time window.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the modelDeploymentMonitoringJob resource:

  • aiplatform.modelDeploymentMonitoringJobs.searchStatsAnomalies

For more information, see the IAM documentation.

UpdateModelDeploymentMonitoringJob

rpc UpdateModelDeploymentMonitoringJob(UpdateModelDeploymentMonitoringJobRequest) returns (Operation)

Updates a ModelDeploymentMonitoringJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelDeploymentMonitoringJobs.update

For more information, see the IAM documentation.

LlmUtilityService

Service for LLM related utility functions.

ComputeTokens

rpc ComputeTokens(ComputeTokensRequest) returns (ComputeTokensResponse)

Returns a list of tokens based on the input text.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.
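
A hedged sketch of the call, assuming a deployed endpoint and the v1beta1 Python client; instances are protobuf Values, so a plain dict is converted first. The endpoint name and instance shape are placeholders:

  from google.cloud import aiplatform_v1beta1
  from google.protobuf import json_format, struct_pb2

  client = aiplatform_v1beta1.LlmUtilityServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  instance = json_format.ParseDict({"prompt": "Hello"}, struct_pb2.Value())
  response = client.compute_tokens(
      request=aiplatform_v1beta1.ComputeTokensRequest(
          endpoint="projects/my-project/locations/us-central1/endpoints/123",
          instances=[instance],
      )
  )
  for info in response.tokens_info:
      print(info.tokens, info.token_ids)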

MatchService

MatchService is a Google-managed service for efficient vector similarity search at scale.

MetadataService

Service for reading and writing metadata entries.

AddContextArtifactsAndExecutions

rpc AddContextArtifactsAndExecutions(AddContextArtifactsAndExecutionsRequest) returns (AddContextArtifactsAndExecutionsResponse)

Adds a set of Artifacts and Executions to a Context. If any of the Artifacts or Executions have already been added to a Context, they are simply skipped.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the context resource:

  • aiplatform.contexts.addContextArtifactsAndExecutions

For more information, see the IAM documentation.

AddContextChildren

rpc AddContextChildren(AddContextChildrenRequest) returns (AddContextChildrenResponse)

Adds a set of Contexts as children to a parent Context. If any of the child Contexts have already been added to the parent Context, they are simply skipped. If this call would create a cycle or cause any Context to have more than 10 parents, the request will fail with an INVALID_ARGUMENT error.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the context resource:

  • aiplatform.contexts.addContextChildren

For more information, see the IAM documentation.

AddExecutionEvents

rpc AddExecutionEvents(AddExecutionEventsRequest) returns (AddExecutionEventsResponse)

Adds Events to the specified Execution. An Event indicates whether an Artifact was used as an input or output for an Execution. If an Event already exists between the Execution and the Artifact, the Event is skipped.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the execution resource:

  • aiplatform.executions.addExecutionEvents

For more information, see the IAM documentation.

CreateArtifact

rpc CreateArtifact(CreateArtifactRequest) returns (Artifact)

Creates an Artifact associated with a MetadataStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.artifacts.create

For more information, see the IAM documentation.
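
A minimal sketch, assuming the default MetadataStore; the artifact ID, display name, and URI are placeholders:

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.MetadataServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  artifact = client.create_artifact(
      request=aiplatform_v1beta1.CreateArtifactRequest(
          parent="projects/my-project/locations/us-central1/metadataStores/default",
          artifact_id="my-dataset-artifact",
          artifact=aiplatform_v1beta1.Artifact(
              display_name="training data",
              uri="gs://my-bucket/data.csv",
          ),
      )
  )
  print(artifact.name)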

CreateContext

rpc CreateContext(CreateContextRequest) returns (Context)

Creates a Context associated with a MetadataStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.contexts.create

For more information, see the IAM documentation.

CreateExecution

rpc CreateExecution(CreateExecutionRequest) returns (Execution)

Creates an Execution associated with a MetadataStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.executions.create

For more information, see the IAM documentation.

CreateMetadataSchema

rpc CreateMetadataSchema(CreateMetadataSchemaRequest) returns (MetadataSchema)

Creates a MetadataSchema.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.metadataSchemas.create

For more information, see the IAM documentation.

CreateMetadataStore

rpc CreateMetadataStore(CreateMetadataStoreRequest) returns (Operation)

Initializes a MetadataStore, including allocation of resources.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.metadataStores.create

For more information, see the IAM documentation.

DeleteArtifact

rpc DeleteArtifact(DeleteArtifactRequest) returns (Operation)

Deletes an Artifact.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.artifacts.delete

For more information, see the IAM documentation.

DeleteContext

rpc DeleteContext(DeleteContextRequest) returns (Operation)

Deletes a stored Context.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.contexts.delete

For more information, see the IAM documentation.

DeleteExecution

rpc DeleteExecution(DeleteExecutionRequest) returns (Operation)

Deletes an Execution.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.executions.delete

For more information, see the IAM documentation.

DeleteMetadataStore

rpc DeleteMetadataStore(DeleteMetadataStoreRequest) returns (Operation)

Deletes a single MetadataStore and all its child resources (Artifacts, Executions, and Contexts).

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.metadataStores.delete

For more information, see the IAM documentation.

GetArtifact

rpc GetArtifact(GetArtifactRequest) returns (Artifact)

Retrieves a specific Artifact.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.artifacts.get

For more information, see the IAM documentation.

GetContext

rpc GetContext(GetContextRequest) returns (Context)

Retrieves a specific Context.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.contexts.get

For more information, see the IAM documentation.

GetExecution

rpc GetExecution(GetExecutionRequest) returns (Execution)

Retrieves a specific Execution.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.executions.get

For more information, see the IAM documentation.

GetMetadataSchema

rpc GetMetadataSchema(GetMetadataSchemaRequest) returns (MetadataSchema)

Retrieves a specific MetadataSchema.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.metadataSchemas.get

For more information, see the IAM documentation.

GetMetadataStore

rpc GetMetadataStore(GetMetadataStoreRequest) returns (MetadataStore)

Retrieves a specific MetadataStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.metadataStores.get

For more information, see the IAM documentation.

ListArtifacts

rpc ListArtifacts(ListArtifactsRequest) returns (ListArtifactsResponse)

Lists Artifacts in the MetadataStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.artifacts.list

For more information, see the IAM documentation.

ListContexts

rpc ListContexts(ListContextsRequest) returns (ListContextsResponse)

Lists Contexts on the MetadataStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.contexts.list

For more information, see the IAM documentation.

ListExecutions

rpc ListExecutions(ListExecutionsRequest) returns (ListExecutionsResponse)

Lists Executions in the MetadataStore.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.executions.list

For more information, see the IAM documentation.

ListMetadataSchemas

rpc ListMetadataSchemas(ListMetadataSchemasRequest) returns (ListMetadataSchemasResponse)

Lists MetadataSchemas.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.metadataSchemas.list

For more information, see the IAM documentation.

ListMetadataStores

rpc ListMetadataStores(ListMetadataStoresRequest) returns (ListMetadataStoresResponse)

Lists MetadataStores for a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.metadataStores.list

For more information, see the IAM documentation.

PurgeArtifacts

rpc PurgeArtifacts(PurgeArtifactsRequest) returns (Operation)

Purges Artifacts.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.artifacts.delete

For more information, see the IAM documentation.

PurgeContexts

rpc PurgeContexts(PurgeContextsRequest) returns (Operation)

Purges Contexts.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.contexts.delete

For more information, see the IAM documentation.

PurgeExecutions

rpc PurgeExecutions(PurgeExecutionsRequest) returns (Operation)

Purges Executions.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.executions.delete

For more information, see the IAM documentation.

QueryArtifactLineageSubgraph

rpc QueryArtifactLineageSubgraph(QueryArtifactLineageSubgraphRequest) returns (LineageSubgraph)

Retrieves lineage of an Artifact represented through Artifacts and Executions connected by Event edges and returned as a LineageSubgraph.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the artifact resource:

  • aiplatform.artifacts.get

For more information, see the IAM documentation.
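
A sketch of a lineage query, assuming the request's max_hops field bounds the traversal; the artifact resource name is a placeholder:

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.MetadataServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  subgraph = client.query_artifact_lineage_subgraph(
      request=aiplatform_v1beta1.QueryArtifactLineageSubgraphRequest(
          artifact=(
              "projects/my-project/locations/us-central1/"
              "metadataStores/default/artifacts/my-dataset-artifact"
          ),
          max_hops=2,  # limits how many hops of the graph are returned
      )
  )
  print(len(subgraph.artifacts), len(subgraph.executions), len(subgraph.events))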

QueryContextLineageSubgraph

rpc QueryContextLineageSubgraph(QueryContextLineageSubgraphRequest) returns (LineageSubgraph)

Retrieves Artifacts and Executions within the specified Context, connected by Event edges and returned as a LineageSubgraph.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the context resource:

  • aiplatform.contexts.queryContextLineageSubgraph

For more information, see the IAM documentation.

QueryExecutionInputsAndOutputs

rpc QueryExecutionInputsAndOutputs(QueryExecutionInputsAndOutputsRequest) returns (LineageSubgraph)

Obtains the set of input and output Artifacts for this Execution, in the form of LineageSubgraph that also contains the Execution and connecting Events.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the execution resource:

  • aiplatform.executions.queryExecutionInputsAndOutputs

For more information, see the IAM documentation.

RemoveContextChildren

rpc RemoveContextChildren(RemoveContextChildrenRequest) returns (RemoveContextChildrenResponse)

Removes a set of child Contexts from a parent Context. If any of the child Contexts were NOT added to the parent Context, they are simply skipped.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the context resource:

  • aiplatform.contexts.addContextChildren

For more information, see the IAM documentation.

UpdateArtifact

rpc UpdateArtifact(UpdateArtifactRequest) returns (Artifact)

Updates a stored Artifact.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.artifacts.update

For more information, see the IAM documentation.

UpdateContext

rpc UpdateContext(UpdateContextRequest) returns (Context)

Updates a stored Context.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.contexts.update

For more information, see the IAM documentation.

UpdateExecution

rpc UpdateExecution(UpdateExecutionRequest) returns (Execution)

Updates a stored Execution.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.executions.update

For more information, see the IAM documentation.

MigrationService

A service that migrates resources from automl.googleapis.com, datalabeling.googleapis.com and ml.googleapis.com to Vertex AI.

BatchMigrateResources

rpc BatchMigrateResources(BatchMigrateResourcesRequest) returns (Operation)

Batch migrates resources from ml.googleapis.com, automl.googleapis.com, and datalabeling.googleapis.com to Vertex AI.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.migratableResources.migrate

For more information, see the IAM documentation.

SearchMigratableResources

rpc SearchMigratableResources(SearchMigratableResourcesRequest) returns (SearchMigratableResourcesResponse)

Searches all of the resources in automl.googleapis.com, datalabeling.googleapis.com, and ml.googleapis.com that can be migrated to the given Vertex AI Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.migratableResources.search

For more information, see the IAM documentation.
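
The response is paginated; the generated Python client exposes it as an iterable pager. A hedged sketch, with a placeholder parent:

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.MigrationServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  pager = client.search_migratable_resources(
      parent="projects/my-project/locations/us-central1"
  )
  for resource in pager:  # pagination is handled transparently
      print(resource)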

ModelGardenService

The interface of Model Garden Service.

GetPublisherModel

rpc GetPublisherModel(GetPublisherModelRequest) returns (PublisherModel)

Gets a Model Garden publisher model.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListPublisherModels

rpc ListPublisherModels(ListPublisherModelsRequest) returns (ListPublisherModelsResponse)

Lists publisher models in Model Garden.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
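
A minimal sketch of fetching one publisher model; the name follows the publishers/{publisher}/models/{model} pattern, and the specific model ID here is a placeholder:

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.ModelGardenServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  model = client.get_publisher_model(
      name="publishers/google/models/some-model-id"  # placeholder model ID
  )
  print(model.name)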

ModelMonitoringService

A service for creating and managing Vertex AI Model monitoring. This includes both ModelMonitor resources and ModelMonitoringJob resources.

CreateModelMonitor

rpc CreateModelMonitor(CreateModelMonitorRequest) returns (Operation)

Creates a ModelMonitor.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelMonitors.create

For more information, see the IAM documentation.

CreateModelMonitoringJob

rpc CreateModelMonitoringJob(CreateModelMonitoringJobRequest) returns (ModelMonitoringJob)

Creates a ModelMonitoringJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelMonitoringJobs.create

For more information, see the IAM documentation.

DeleteModelMonitor

rpc DeleteModelMonitor(DeleteModelMonitorRequest) returns (Operation)

Deletes a ModelMonitor.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelMonitors.delete

For more information, see the IAM documentation.

DeleteModelMonitoringJob

rpc DeleteModelMonitoringJob(DeleteModelMonitoringJobRequest) returns (Operation)

Deletes a ModelMonitoringJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelMonitoringJobs.delete

For more information, see the IAM documentation.

GetModelMonitor

rpc GetModelMonitor(GetModelMonitorRequest) returns (ModelMonitor)

Gets a ModelMonitor.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelMonitors.get

For more information, see the IAM documentation.

GetModelMonitoringJob

rpc GetModelMonitoringJob(GetModelMonitoringJobRequest) returns (ModelMonitoringJob)

Gets a ModelMonitoringJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelMonitoringJobs.get

For more information, see the IAM documentation.

ListModelMonitoringJobs

rpc ListModelMonitoringJobs(ListModelMonitoringJobsRequest) returns (ListModelMonitoringJobsResponse)

Lists ModelMonitoringJobs. Callers may read across multiple ModelMonitors, as per AIP-159, by using '-' (the hyphen or dash character) as a wildcard in place of the modelMonitor ID in the parent. Format: projects/{project_id}/locations/{location}/modelMonitors/-/modelMonitoringJobs

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelMonitoringJobs.list

For more information, see the IAM documentation.
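
A sketch of the wildcard read described above, with placeholder project and location:

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.ModelMonitoringServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  # '-' in place of the modelMonitor ID reads across all ModelMonitors.
  for job in client.list_model_monitoring_jobs(
      parent="projects/my-project/locations/us-central1/modelMonitors/-"
  ):
      print(job.name, job.state)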

ListModelMonitors

rpc ListModelMonitors(ListModelMonitorsRequest) returns (ListModelMonitorsResponse)

Lists ModelMonitors in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelMonitors.list

For more information, see the IAM documentation.

SearchModelMonitoringAlerts

rpc SearchModelMonitoringAlerts(SearchModelMonitoringAlertsRequest) returns (SearchModelMonitoringAlertsResponse)

Returns the Model Monitoring alerts.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the modelMonitor resource:

  • aiplatform.modelMonitors.searchModelMonitoringAlerts

For more information, see the IAM documentation.

SearchModelMonitoringStats

rpc SearchModelMonitoringStats(SearchModelMonitoringStatsRequest) returns (SearchModelMonitoringStatsResponse)

Searches Model Monitoring Stats generated within a given time window.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the modelMonitor resource:

  • aiplatform.modelMonitors.searchModelMonitoringStats

For more information, see the IAM documentation.

UpdateModelMonitor

rpc UpdateModelMonitor(UpdateModelMonitorRequest) returns (Operation)

Updates a ModelMonitor.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelMonitors.update

For more information, see the IAM documentation.

ModelService

A service for managing Vertex AI's machine learning Models.

BatchImportEvaluatedAnnotations

rpc BatchImportEvaluatedAnnotations(BatchImportEvaluatedAnnotationsRequest) returns (BatchImportEvaluatedAnnotationsResponse)

Imports a list of externally generated EvaluatedAnnotations.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelEvaluationSlices.import

For more information, see the IAM documentation.

BatchImportModelEvaluationSlices

rpc BatchImportModelEvaluationSlices(BatchImportModelEvaluationSlicesRequest) returns (BatchImportModelEvaluationSlicesResponse)

Imports a list of externally generated ModelEvaluationSlices.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelEvaluationSlices.import

For more information, see the IAM documentation.

CopyModel

rpc CopyModel(CopyModelRequest) returns (Operation)

Copies an already existing Vertex AI Model into the specified Location. The source Model must exist in the same Project. When copying custom Models, users are responsible for ensuring that the Model.metadata content is region-agnostic and that any resources (e.g., files) it depends on remain accessible.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.models.upload

For more information, see the IAM documentation.

DeleteModel

rpc DeleteModel(DeleteModelRequest) returns (Operation)

Deletes a Model.

A model cannot be deleted if any Endpoint resource has a DeployedModel based on the model in its deployed_models field.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.models.delete

For more information, see the IAM documentation.

DeleteModelVersion

rpc DeleteModelVersion(DeleteModelVersionRequest) returns (Operation)

Deletes a Model version.

A Model version can only be deleted if there are no DeployedModels created from it. Deleting the only version of a Model is not allowed; use DeleteModel to delete the Model instead.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.models.delete

For more information, see the IAM documentation.

ExportModel

rpc ExportModel(ExportModelRequest) returns (Operation)

Exports a trained, exportable Model to a location specified by the user. A Model is considered to be exportable if it has at least one supported export format.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.models.export

For more information, see the IAM documentation.

GetModel

rpc GetModel(GetModelRequest) returns (Model)

Gets a Model.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.models.get

For more information, see the IAM documentation.

GetModelEvaluation

rpc GetModelEvaluation(GetModelEvaluationRequest) returns (ModelEvaluation)

Gets a ModelEvaluation.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelEvaluations.get

For more information, see the IAM documentation.

GetModelEvaluationSlice

rpc GetModelEvaluationSlice(GetModelEvaluationSliceRequest) returns (ModelEvaluationSlice)

Gets a ModelEvaluationSlice.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.modelEvaluationSlices.get

For more information, see the IAM documentation.

ImportModelEvaluation

rpc ImportModelEvaluation(ImportModelEvaluationRequest) returns (ModelEvaluation)

Imports an externally generated ModelEvaluation.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelEvaluations.import

For more information, see the IAM documentation.

ListModelEvaluationSlices

rpc ListModelEvaluationSlices(ListModelEvaluationSlicesRequest) returns (ListModelEvaluationSlicesResponse)

Lists ModelEvaluationSlices in a ModelEvaluation.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelEvaluationSlices.list

For more information, see the IAM documentation.

ListModelEvaluations

rpc ListModelEvaluations(ListModelEvaluationsRequest) returns (ListModelEvaluationsResponse)

Lists ModelEvaluations in a Model.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.modelEvaluations.list

For more information, see the IAM documentation.

ListModelVersions

rpc ListModelVersions(ListModelVersionsRequest) returns (ListModelVersionsResponse)

Lists versions of the specified model.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.models.get

For more information, see the IAM documentation.

ListModels

rpc ListModels(ListModelsRequest) returns (ListModelsResponse)

Lists Models in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.models.list

For more information, see the IAM documentation.

MergeVersionAliases

rpc MergeVersionAliases(MergeVersionAliasesRequest) returns (Model)

Merges a set of aliases for a Model version.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.models.update

For more information, see the IAM documentation.
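
A hedged sketch; the @1 suffix selects the version, and, per the request semantics, an alias prefixed with '-' is removed rather than added. Names and aliases are placeholders:

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.ModelServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  model = client.merge_version_aliases(
      name="projects/my-project/locations/us-central1/models/my-model@1",
      version_aliases=["prod", "-staging"],  # add "prod", remove "staging"
  )
  print(model.version_aliases)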

UpdateExplanationDataset

rpc UpdateExplanationDataset(UpdateExplanationDatasetRequest) returns (Operation)

Incrementally updates the dataset used for an examples model.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the model resource:

  • aiplatform.models.update

For more information, see the IAM documentation.

UpdateModel

rpc UpdateModel(UpdateModelRequest) returns (Model)

Updates a Model.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.models.update

For more information, see the IAM documentation.

UploadModel

rpc UploadModel(UploadModelRequest) returns (Operation)

Uploads a Model artifact into Vertex AI.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.models.upload

For more information, see the IAM documentation.
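
A sketch of uploading a custom container model; the RPC returns a long-running Operation whose result carries the new Model's resource name. The image URI and names are placeholders:

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.ModelServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  operation = client.upload_model(
      request=aiplatform_v1beta1.UploadModelRequest(
          parent="projects/my-project/locations/us-central1",
          model=aiplatform_v1beta1.Model(
              display_name="example-model",
              container_spec=aiplatform_v1beta1.ModelContainerSpec(
                  image_uri="gcr.io/my-project/serving:latest"
              ),
          ),
      )
  )
  response = operation.result()  # UploadModelResponse
  print(response.model)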

NotebookService

The interface for Vertex Notebook service (a.k.a. Colab on Workbench).

AssignNotebookRuntime

rpc AssignNotebookRuntime(AssignNotebookRuntimeRequest) returns (Operation)

Assigns a NotebookRuntime to a user for a particular Notebook file. This method either returns an existing assignment or generates a new one.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.notebookRuntimes.assign

For more information, see the IAM documentation.

CreateNotebookExecutionJob

rpc CreateNotebookExecutionJob(CreateNotebookExecutionJobRequest) returns (Operation)

Creates a NotebookExecutionJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.notebookExecutionJobs.create

For more information, see the IAM documentation.

CreateNotebookRuntimeTemplate

rpc CreateNotebookRuntimeTemplate(CreateNotebookRuntimeTemplateRequest) returns (Operation)

Creates a NotebookRuntimeTemplate.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.notebookRuntimeTemplates.create

For more information, see the IAM documentation.
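
A minimal sketch, assuming a template that only pins the machine shape; field coverage here is deliberately small, and the names are placeholders:

  from google.cloud import aiplatform_v1beta1

  client = aiplatform_v1beta1.NotebookServiceClient(
      client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
  )
  operation = client.create_notebook_runtime_template(
      request=aiplatform_v1beta1.CreateNotebookRuntimeTemplateRequest(
          parent="projects/my-project/locations/us-central1",
          notebook_runtime_template_id="small-runtime",
          notebook_runtime_template=aiplatform_v1beta1.NotebookRuntimeTemplate(
              display_name="small runtime",
              machine_spec=aiplatform_v1beta1.MachineSpec(
                  machine_type="e2-standard-4"
              ),
          ),
      )
  )
  template = operation.result()  # waits for the LRO to complete
  print(template.name)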

DeleteNotebookExecutionJob

rpc DeleteNotebookExecutionJob(DeleteNotebookExecutionJobRequest) returns (Operation)

Deletes a NotebookExecutionJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.notebookExecutionJobs.delete

For more information, see the IAM documentation.

DeleteNotebookRuntime

rpc DeleteNotebookRuntime(DeleteNotebookRuntimeRequest) returns (Operation)

Deletes a NotebookRuntime.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.notebookRuntimes.delete

For more information, see the IAM documentation.

DeleteNotebookRuntimeTemplate

rpc DeleteNotebookRuntimeTemplate(DeleteNotebookRuntimeTemplateRequest) returns (Operation)

Deletes a NotebookRuntimeTemplate.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.notebookRuntimeTemplates.delete

For more information, see the IAM documentation.

GetNotebookExecutionJob

rpc GetNotebookExecutionJob(GetNotebookExecutionJobRequest) returns (NotebookExecutionJob)

Gets a NotebookExecutionJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.notebookExecutionJobs.get

For more information, see the IAM documentation.

GetNotebookRuntime

rpc GetNotebookRuntime(GetNotebookRuntimeRequest) returns (NotebookRuntime)

Gets a NotebookRuntime.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.notebookRuntimes.get

For more information, see the IAM documentation.

GetNotebookRuntimeTemplate

rpc GetNotebookRuntimeTemplate(GetNotebookRuntimeTemplateRequest) returns (NotebookRuntimeTemplate)

Gets a NotebookRuntimeTemplate.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.notebookRuntimeTemplates.get

For more information, see the IAM documentation.

ListNotebookExecutionJobs

rpc ListNotebookExecutionJobs(ListNotebookExecutionJobsRequest) returns (ListNotebookExecutionJobsResponse)

Lists NotebookExecutionJobs in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.notebookExecutionJobs.list

For more information, see the IAM documentation.

ListNotebookRuntimeTemplates

rpc ListNotebookRuntimeTemplates(ListNotebookRuntimeTemplatesRequest) returns (ListNotebookRuntimeTemplatesResponse)

Lists NotebookRuntimeTemplates in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.notebookRuntimeTemplates.list

For more information, see the IAM documentation.

ListNotebookRuntimes

rpc ListNotebookRuntimes(ListNotebookRuntimesRequest) returns (ListNotebookRuntimesResponse)

Lists NotebookRuntimes in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.notebookRuntimes.list

For more information, see the IAM documentation.

StartNotebookRuntime

rpc StartNotebookRuntime(StartNotebookRuntimeRequest) returns (Operation)

Starts a NotebookRuntime.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.notebookRuntimes.start

For more information, see the IAM documentation.

StopNotebookRuntime

rpc StopNotebookRuntime(StopNotebookRuntimeRequest) returns (Operation)

Stops a NotebookRuntime.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateNotebookRuntimeTemplate

rpc UpdateNotebookRuntimeTemplate(UpdateNotebookRuntimeTemplateRequest) returns (NotebookRuntimeTemplate)

Updates a NotebookRuntimeTemplate.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.notebookRuntimeTemplates.update

For more information, see the IAM documentation.

UpgradeNotebookRuntime

rpc UpgradeNotebookRuntime(UpgradeNotebookRuntimeRequest) returns (Operation)

Upgrades a NotebookRuntime.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.notebookRuntimes.upgrade

For more information, see the IAM documentation.

PersistentResourceService

A service for managing Vertex AI's machine learning PersistentResource.

CreatePersistentResource

rpc CreatePersistentResource(CreatePersistentResourceRequest) returns (Operation)

Creates a PersistentResource.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.persistentResources.create

For more information, see the IAM documentation.

DeletePersistentResource

rpc DeletePersistentResource(DeletePersistentResourceRequest) returns (Operation)

Deletes a PersistentResource.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.persistentResources.delete

For more information, see the IAM documentation.

GetPersistentResource

rpc GetPersistentResource(GetPersistentResourceRequest) returns (PersistentResource)

Gets a PersistentResource.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.persistentResources.get

For more information, see the IAM documentation.

ListPersistentResources

rpc ListPersistentResources(ListPersistentResourcesRequest) returns (ListPersistentResourcesResponse)

Lists PersistentResources in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.persistentResources.list

For more information, see the IAM documentation.

RebootPersistentResource

rpc RebootPersistentResource(RebootPersistentResourceRequest) returns (Operation)

Reboots a PersistentResource.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdatePersistentResource

rpc UpdatePersistentResource(UpdatePersistentResourceRequest) returns (Operation)

Updates a PersistentResource.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

PipelineService

A service for creating and managing Vertex AI's pipelines. This includes both TrainingPipeline resources (used for AutoML and custom training) and PipelineJob resources (used for Vertex AI Pipelines).

BatchCancelPipelineJobs

rpc BatchCancelPipelineJobs(BatchCancelPipelineJobsRequest) returns (Operation)

Batch cancels PipelineJobs. First, the server checks whether all of the jobs are in non-terminal states and skips any jobs that are already terminated. If the operation fails, none of the pipeline jobs are cancelled. The server periodically polls the states of all the pipeline jobs to check the cancellation status. This operation returns an LRO.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.pipelineJobs.cancel

For more information, see the IAM documentation.

BatchDeletePipelineJobs

rpc BatchDeletePipelineJobs(BatchDeletePipelineJobsRequest) returns (Operation)

Batch deletes PipelineJobs. The operation is atomic: if it fails, none of the PipelineJobs are deleted; if it succeeds, all of the PipelineJobs are deleted.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.pipelineJobs.delete

For more information, see the IAM documentation.

CancelPipelineJob

rpc CancelPipelineJob(CancelPipelineJobRequest) returns (Empty)

Cancels a PipelineJob. Starts asynchronous cancellation on the PipelineJob. The server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use PipelineService.GetPipelineJob or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. On successful cancellation, the PipelineJob is not deleted; instead it becomes a pipeline with a PipelineJob.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and PipelineJob.state is set to CANCELLED.
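
By way of illustration, a minimal sketch using the generated Python client (google.cloud.aiplatform_v1beta1); the project, location, and job ID in the resource name are placeholders:

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.PipelineServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    # Placeholder resource name; substitute your project, location, and job ID.
    job_name = "projects/my-project/locations/us-central1/pipelineJobs/my-job"

    # Returns Empty and only *starts* best-effort cancellation on the server.
    client.cancel_pipeline_job(name=job_name)

    # Poll the job to see whether cancellation succeeded or the pipeline
    # completed anyway; on successful cancellation, state becomes CANCELLED.
    job = client.get_pipeline_job(name=job_name)
    print(job.state)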

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.pipelineJobs.cancel

For more information, see the IAM documentation.

CancelTrainingPipeline

rpc CancelTrainingPipeline(CancelTrainingPipelineRequest) returns (Empty)

Cancels a TrainingPipeline. Starts asynchronous cancellation on the TrainingPipeline. The server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use PipelineService.GetTrainingPipeline or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. On successful cancellation, the TrainingPipeline is not deleted; instead it becomes a pipeline with a TrainingPipeline.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and TrainingPipeline.state is set to CANCELLED.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.trainingPipelines.cancel

For more information, see the IAM documentation.

CreatePipelineJob

rpc CreatePipelineJob(CreatePipelineJobRequest) returns (PipelineJob)

Creates a PipelineJob. A PipelineJob will run immediately when created.
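
As a hedged sketch with the generated Python client: the parent, template_uri, and bucket paths below are illustrative placeholders, and the compiled pipeline spec is assumed to already exist in Cloud Storage.

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.PipelineServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    parent = "projects/my-project/locations/us-central1"  # placeholder

    # Minimal job referencing a compiled pipeline spec in Cloud Storage.
    pipeline_job = aiplatform_v1beta1.PipelineJob(
        display_name="example-run",
        template_uri="gs://my-bucket/pipeline.json",
        runtime_config=aiplatform_v1beta1.PipelineJob.RuntimeConfig(
            gcs_output_directory="gs://my-bucket/output"
        ),
    )

    # The returned PipelineJob starts running immediately.
    job = client.create_pipeline_job(parent=parent, pipeline_job=pipeline_job)
    print(job.name, job.state)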

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.pipelineJobs.create

For more information, see the IAM documentation.

CreateTrainingPipeline

rpc CreateTrainingPipeline(CreateTrainingPipelineRequest) returns (TrainingPipeline)

Creates a TrainingPipeline. The server attempts to run a TrainingPipeline immediately after it is created.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.trainingPipelines.create

For more information, see the IAM documentation.

DeletePipelineJob

rpc DeletePipelineJob(DeletePipelineJobRequest) returns (Operation)

Deletes a PipelineJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.pipelineJobs.delete

For more information, see the IAM documentation.

DeleteTrainingPipeline

rpc DeleteTrainingPipeline(DeleteTrainingPipelineRequest) returns (Operation)

Deletes a TrainingPipeline.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.trainingPipelines.delete

For more information, see the IAM documentation.

GetPipelineJob

rpc GetPipelineJob(GetPipelineJobRequest) returns (PipelineJob)

Gets a PipelineJob.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.pipelineJobs.get

For more information, see the IAM documentation.

GetTrainingPipeline

rpc GetTrainingPipeline(GetTrainingPipelineRequest) returns (TrainingPipeline)

Gets a TrainingPipeline.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.trainingPipelines.get

For more information, see the IAM documentation.

ListPipelineJobs

rpc ListPipelineJobs(ListPipelineJobsRequest) returns (ListPipelineJobsResponse)

Lists PipelineJobs in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.pipelineJobs.list

For more information, see the IAM documentation.

ListTrainingPipelines

rpc ListTrainingPipelines(ListTrainingPipelinesRequest) returns (ListTrainingPipelinesResponse)

Lists TrainingPipelines in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.trainingPipelines.list

For more information, see the IAM documentation.

PredictionService

A service for online predictions and explanations.

ChatCompletions

rpc ChatCompletions(ChatCompletionsRequest) returns (HttpBody)

Exposes an OpenAI-compatible endpoint for chat completions.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

CountTokens

rpc CountTokens(CountTokensRequest) returns (CountTokensResponse)

Perform a token count for the given input.
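
A minimal sketch with the generated Python client, passing a request dict; the publisher model name is a placeholder, and the example assumes the Gemini-style contents field of CountTokensRequest:

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.PredictionServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    # Placeholder publisher model resource name.
    model = (
        "projects/my-project/locations/us-central1"
        "/publishers/google/models/gemini-1.5-flash"
    )

    response = client.count_tokens(
        request={
            "endpoint": model,
            "model": model,
            "contents": [{"role": "user", "parts": [{"text": "Hello, world"}]}],
        }
    )
    print(response.total_tokens, response.total_billable_characters)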

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

DirectPredict

rpc DirectPredict(DirectPredictRequest) returns (DirectPredictResponse)

Perform a unary online prediction request to a gRPC model server for Vertex first-party products and frameworks.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

DirectRawPredict

rpc DirectRawPredict(DirectRawPredictRequest) returns (DirectRawPredictResponse)

Perform a unary online prediction request to a gRPC model server for custom containers.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

Explain

rpc Explain(ExplainRequest) returns (ExplainResponse)

Perform an online explanation.

If deployed_model_id is specified, the corresponding DeployedModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.explain

For more information, see the IAM documentation.

GenerateContent

rpc GenerateContent(GenerateContentRequest) returns (GenerateContentResponse)

Generate content with multimodal inputs.
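
For example, a minimal sketch with the generated Python client; the publisher model resource name is a placeholder, and the `model` field also accepts a tuned-model endpoint:

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.PredictionServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    # Placeholder publisher model resource name.
    model = (
        "projects/my-project/locations/us-central1"
        "/publishers/google/models/gemini-1.5-flash"
    )

    response = client.generate_content(
        model=model,
        contents=[{"role": "user", "parts": [{"text": "Explain what an LRO is."}]}],
    )
    print(response.candidates[0].content.parts[0].text)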

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the model resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

Predict

rpc Predict(PredictRequest) returns (PredictResponse)

Perform an online prediction.
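
A minimal sketch with the generated Python client; the endpoint name is a placeholder, and the instance payload is illustrative since its schema depends entirely on the deployed model:

    from google.cloud import aiplatform_v1beta1
    from google.protobuf import json_format, struct_pb2

    client = aiplatform_v1beta1.PredictionServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"  # placeholder

    # Instances are schemaless protobuf Values; this feature dict is illustrative.
    instance = json_format.ParseDict(
        {"feature_a": 1.0, "feature_b": "x"}, struct_pb2.Value()
    )

    response = client.predict(endpoint=endpoint, instances=[instance])
    for prediction in response.predictions:
        print(prediction)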

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

RawPredict

rpc RawPredict(RawPredictRequest) returns (HttpBody)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this prediction.

  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's DeployedModel that served this prediction.
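
A sketch using the generated Python client with a google.api.HttpBody payload; the endpoint name and JSON body are placeholders. Note that the headers listed above are HTTP response headers, so they are visible when calling the REST endpoint rather than on the returned HttpBody message itself:

    from google.api import httpbody_pb2
    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.PredictionServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"  # placeholder

    # The body is passed through verbatim to the model server; this JSON shape
    # is whatever your container expects (illustrative payload).
    body = httpbody_pb2.HttpBody(
        content_type="application/json",
        data=b'{"instances": [[1.0, 2.0, 3.0]]}',
    )

    response = client.raw_predict(endpoint=endpoint, http_body=body)
    print(response.content_type)
    print(response.data)  # raw bytes returned by the model server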

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

ServerStreamingPredict

rpc ServerStreamingPredict(StreamingPredictRequest) returns (StreamingPredictResponse)

Perform a server-side streaming online prediction request for Vertex LLM streaming.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

StreamDirectPredict

rpc StreamDirectPredict(StreamDirectPredictRequest) returns (StreamDirectPredictResponse)

Perform a streaming online prediction request to a gRPC model server for Vertex first-party products and frameworks.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

StreamDirectRawPredict

rpc StreamDirectRawPredict(StreamDirectRawPredictRequest) returns (StreamDirectRawPredictResponse)

Perform a streaming online prediction request to a gRPC model server for custom containers.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

StreamGenerateContent

rpc StreamGenerateContent(GenerateContentRequest) returns (GenerateContentResponse)

Generate content with multimodal inputs with streaming support.
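
A minimal streaming sketch with the generated Python client (placeholder model name); the server returns an iterable of partial GenerateContentResponse messages:

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.PredictionServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    # Placeholder publisher model resource name.
    model = (
        "projects/my-project/locations/us-central1"
        "/publishers/google/models/gemini-1.5-flash"
    )

    stream = client.stream_generate_content(
        model=model,
        contents=[{"role": "user", "parts": [{"text": "Write a haiku about gRPC."}]}],
    )
    for chunk in stream:
        # Each chunk carries an incremental slice of the generated text.
        print(chunk.candidates[0].content.parts[0].text, end="")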

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the model resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

StreamRawPredict

rpc StreamRawPredict(StreamRawPredictRequest) returns (HttpBody)

Perform a streaming online prediction with an arbitrary HTTP payload.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

StreamingPredict

rpc StreamingPredict(StreamingPredictRequest) returns (StreamingPredictResponse)

Perform a streaming online prediction request for Vertex first-party products and frameworks.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

StreamingRawPredict

rpc StreamingRawPredict(StreamingRawPredictRequest) returns (StreamingRawPredictResponse)

Perform a streaming online prediction request through gRPC.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/cloud-platform.read-only
  • https://www.googleapis.com/auth/cloud-vertex-ai.firstparty.predict

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.predict

For more information, see the IAM documentation.

ReasoningEngineExecutionService

A service for executing queries on Reasoning Engine.

QueryReasoningEngine

rpc QueryReasoningEngine(QueryReasoningEngineRequest) returns (QueryReasoningEngineResponse)

Queries using a reasoning engine.
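
Assuming the generated Python client exposes this RPC as query_reasoning_engine, a minimal sketch; the resource name is a placeholder, and the keys of `input` must match the arguments of the deployed engine's query method (the "query" key here is illustrative):

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.ReasoningEngineExecutionServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    name = "projects/my-project/locations/us-central1/reasoningEngines/1234567890"  # placeholder

    # `input` is a free-form Struct forwarded to the deployed engine.
    response = client.query_reasoning_engine(
        request={"name": name, "input": {"query": "What is Vizier?"}}
    )
    print(response.output)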

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.reasoningEngines.query

For more information, see the IAM documentation.

StreamQueryReasoningEngine

rpc StreamQueryReasoningEngine(StreamQueryReasoningEngineRequest) returns (HttpBody)

Streams queries using a reasoning engine.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.reasoningEngines.query

For more information, see the IAM documentation.

ReasoningEngineService

A service for managing Vertex AI's Reasoning Engines.

CreateReasoningEngine

rpc CreateReasoningEngine(CreateReasoningEngineRequest) returns (Operation)

Creates a reasoning engine.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.reasoningEngines.create

For more information, see the IAM documentation.

DeleteReasoningEngine

rpc DeleteReasoningEngine(DeleteReasoningEngineRequest) returns (Operation)

Deletes a reasoning engine.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.reasoningEngines.delete

For more information, see the IAM documentation.

GetReasoningEngine

rpc GetReasoningEngine(GetReasoningEngineRequest) returns (ReasoningEngine)

Gets a reasoning engine.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.reasoningEngines.get

For more information, see the IAM documentation.

ListReasoningEngines

rpc ListReasoningEngines(ListReasoningEnginesRequest) returns (ListReasoningEnginesResponse)

Lists reasoning engines in a location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.reasoningEngines.list

For more information, see the IAM documentation.

UpdateReasoningEngine

rpc UpdateReasoningEngine(UpdateReasoningEngineRequest) returns (Operation)

Updates a reasoning engine.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.reasoningEngines.update

For more information, see the IAM documentation.

ScheduleService

A service for creating and managing Vertex AI's Schedule resources, which periodically launch scheduled runs to make API calls.

CreateSchedule

rpc CreateSchedule(CreateScheduleRequest) returns (Schedule)

Creates a Schedule.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permissions on the parent resource:

  • aiplatform.pipelineJobs.create
  • aiplatform.schedules.create

For more information, see the IAM documentation.

DeleteSchedule

rpc DeleteSchedule(DeleteScheduleRequest) returns (Operation)

Deletes a Schedule.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.schedules.delete

For more information, see the IAM documentation.

GetSchedule

rpc GetSchedule(GetScheduleRequest) returns (Schedule)

Gets a Schedule.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.schedules.get

For more information, see the IAM documentation.

ListSchedules

rpc ListSchedules(ListSchedulesRequest) returns (ListSchedulesResponse)

Lists Schedules in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.schedules.list

For more information, see the IAM documentation.

PauseSchedule

rpc PauseSchedule(PauseScheduleRequest) returns (Empty)

Pauses a Schedule. Sets Schedule.state to 'PAUSED'. While the Schedule is paused, no new runs are created; runs that were already created are NOT paused or canceled.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.schedules.update

For more information, see the IAM documentation.

ResumeSchedule

rpc ResumeSchedule(ResumeScheduleRequest) returns (Empty)

Resumes a paused Schedule to start scheduling new runs. Sets Schedule.state to 'ACTIVE'. Only a paused Schedule can be resumed.

When the Schedule is resumed, new runs are scheduled starting from the next execution time after the current time, based on the time_specification in the Schedule. If Schedule.catch_up is set to true, all missed runs are scheduled for backfill first.
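
A minimal sketch of the pause/resume flow with the generated Python client; the resource name is a placeholder, and this assumes resume_schedule exposes the request's catch_up field as a keyword argument:

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.ScheduleServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    name = "projects/my-project/locations/us-central1/schedules/1234567890"  # placeholder

    # Pause: state becomes PAUSED; runs already created keep going.
    client.pause_schedule(name=name)

    # Resume with catch_up=True: runs missed while paused are backfilled
    # before regular scheduling continues.
    client.resume_schedule(name=name, catch_up=True)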

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.schedules.update

For more information, see the IAM documentation.

UpdateSchedule

rpc UpdateSchedule(UpdateScheduleRequest) returns (Schedule)

Updates an active or paused Schedule.

When the Schedule is updated, new runs are scheduled starting from the updated next execution time after the update time, based on the time_specification in the updated Schedule. All unstarted runs before the update time are skipped, while runs that were already created are NOT paused or canceled.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.schedules.update

For more information, see the IAM documentation.

SpecialistPoolService

A service for creating and managing Customer SpecialistPools. When customers start Data Labeling jobs, they can reuse or create Specialist Pools to bring their own Specialists to label the data. Customers can add or remove Managers for the Specialist Pool in the Cloud console; Managers then receive email notifications to manage Specialists and tasks in the CrowdCompute console.

CreateSpecialistPool

rpc CreateSpecialistPool(CreateSpecialistPoolRequest) returns (Operation)

Creates a SpecialistPool.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.specialistPools.create

For more information, see the IAM documentation.

DeleteSpecialistPool

rpc DeleteSpecialistPool(DeleteSpecialistPoolRequest) returns (Operation)

Deletes a SpecialistPool as well as all Specialists in the pool.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.specialistPools.delete

For more information, see the IAM documentation.

GetSpecialistPool

rpc GetSpecialistPool(GetSpecialistPoolRequest) returns (SpecialistPool)

Gets a SpecialistPool.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.specialistPools.get

For more information, see the IAM documentation.

ListSpecialistPools

rpc ListSpecialistPools(ListSpecialistPoolsRequest) returns (ListSpecialistPoolsResponse)

Lists SpecialistPools in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.specialistPools.list

For more information, see the IAM documentation.

UpdateSpecialistPool

rpc UpdateSpecialistPool(UpdateSpecialistPoolRequest) returns (Operation)

Updates a SpecialistPool.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.specialistPools.update

For more information, see the IAM documentation.

TensorboardService

A service for managing Vertex AI's TensorBoard resources, including Tensorboards, TensorboardExperiments, TensorboardRuns, and TensorboardTimeSeries.

BatchCreateTensorboardRuns

rpc BatchCreateTensorboardRuns(BatchCreateTensorboardRunsRequest) returns (BatchCreateTensorboardRunsResponse)

Batch create TensorboardRuns.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tensorboardRuns.batchCreate

For more information, see the IAM documentation.

BatchCreateTensorboardTimeSeries

rpc BatchCreateTensorboardTimeSeries(BatchCreateTensorboardTimeSeriesRequest) returns (BatchCreateTensorboardTimeSeriesResponse)

Batch create TensorboardTimeSeries that belong to a TensorboardExperiment.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tensorboardTimeSeries.batchCreate

For more information, see the IAM documentation.

BatchReadTensorboardTimeSeriesData

rpc BatchReadTensorboardTimeSeriesData(BatchReadTensorboardTimeSeriesDataRequest) returns (BatchReadTensorboardTimeSeriesDataResponse)

Reads multiple TensorboardTimeSeries' data. The data point limit is 1,000 for scalars and 100 for tensors and blob references. If the number of data points stored is less than the limit, all data is returned; otherwise, that number of data points is randomly selected from the time series and returned.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the tensorboard resource:

  • aiplatform.tensorboardTimeSeries.batchRead

For more information, see the IAM documentation.

CreateTensorboard

rpc CreateTensorboard(CreateTensorboardRequest) returns (Operation)

Creates a Tensorboard.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tensorboards.create

For more information, see the IAM documentation.

CreateTensorboardExperiment

rpc CreateTensorboardExperiment(CreateTensorboardExperimentRequest) returns (TensorboardExperiment)

Creates a TensorboardExperiment.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tensorboardExperiments.create

For more information, see the IAM documentation.

CreateTensorboardRun

rpc CreateTensorboardRun(CreateTensorboardRunRequest) returns (TensorboardRun)

Creates a TensorboardRun.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tensorboardRuns.create

For more information, see the IAM documentation.

CreateTensorboardTimeSeries

rpc CreateTensorboardTimeSeries(CreateTensorboardTimeSeriesRequest) returns (TensorboardTimeSeries)

Creates a TensorboardTimeSeries.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tensorboardTimeSeries.create

For more information, see the IAM documentation.

DeleteTensorboard

rpc DeleteTensorboard(DeleteTensorboardRequest) returns (Operation)

Deletes a Tensorboard.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboards.delete

For more information, see the IAM documentation.

DeleteTensorboardExperiment

rpc DeleteTensorboardExperiment(DeleteTensorboardExperimentRequest) returns (Operation)

Deletes a TensorboardExperiment.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboardExperiments.delete

For more information, see the IAM documentation.

DeleteTensorboardRun

rpc DeleteTensorboardRun(DeleteTensorboardRunRequest) returns (Operation)

Deletes a TensorboardRun.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboardRuns.delete

For more information, see the IAM documentation.

DeleteTensorboardTimeSeries

rpc DeleteTensorboardTimeSeries(DeleteTensorboardTimeSeriesRequest) returns (Operation)

Deletes a TensorboardTimeSeries.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboardTimeSeries.delete

For more information, see the IAM documentation.

ExportTensorboardTimeSeriesData

rpc ExportTensorboardTimeSeriesData(ExportTensorboardTimeSeriesDataRequest) returns (ExportTensorboardTimeSeriesDataResponse)

Exports a TensorboardTimeSeries' data. Data is returned in paginated responses.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the tensorboardTimeSeries resource:

  • aiplatform.tensorboardTimeSeries.read

For more information, see the IAM documentation.

GetTensorboard

rpc GetTensorboard(GetTensorboardRequest) returns (Tensorboard)

Gets a Tensorboard.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboards.get

For more information, see the IAM documentation.

GetTensorboardExperiment

rpc GetTensorboardExperiment(GetTensorboardExperimentRequest) returns (TensorboardExperiment)

Gets a TensorboardExperiment.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboardExperiments.get

For more information, see the IAM documentation.

GetTensorboardRun

rpc GetTensorboardRun(GetTensorboardRunRequest) returns (TensorboardRun)

Gets a TensorboardRun.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboardRuns.get

For more information, see the IAM documentation.

GetTensorboardTimeSeries

rpc GetTensorboardTimeSeries(GetTensorboardTimeSeriesRequest) returns (TensorboardTimeSeries)

Gets a TensorboardTimeSeries.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboardTimeSeries.get

For more information, see the IAM documentation.

ListTensorboardExperiments

rpc ListTensorboardExperiments(ListTensorboardExperimentsRequest) returns (ListTensorboardExperimentsResponse)

Lists TensorboardExperiments in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tensorboardExperiments.list

For more information, see the IAM documentation.

ListTensorboardRuns

rpc ListTensorboardRuns(ListTensorboardRunsRequest) returns (ListTensorboardRunsResponse)

Lists TensorboardRuns in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tensorboardRuns.list

For more information, see the IAM documentation.

ListTensorboardTimeSeries

rpc ListTensorboardTimeSeries(ListTensorboardTimeSeriesRequest) returns (ListTensorboardTimeSeriesResponse)

Lists TensorboardTimeSeries in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tensorboardTimeSeries.list

For more information, see the IAM documentation.

ListTensorboards

rpc ListTensorboards(ListTensorboardsRequest) returns (ListTensorboardsResponse)

Lists Tensorboards in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.tensorboards.list

For more information, see the IAM documentation.

ReadTensorboardBlobData

rpc ReadTensorboardBlobData(ReadTensorboardBlobDataRequest) returns (ReadTensorboardBlobDataResponse)

Gets bytes of TensorboardBlobs. This allows reading blob data stored in the consumer project's Cloud Storage bucket without users having to obtain Cloud Storage access permission.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the timeSeries resource:

  • aiplatform.tensorboardTimeSeries.read

For more information, see the IAM documentation.

ReadTensorboardSize

rpc ReadTensorboardSize(ReadTensorboardSizeRequest) returns (ReadTensorboardSizeResponse)

Returns the storage size for a given TensorBoard instance.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ReadTensorboardTimeSeriesData

rpc ReadTensorboardTimeSeriesData(ReadTensorboardTimeSeriesDataRequest) returns (ReadTensorboardTimeSeriesDataResponse)

Reads a TensorboardTimeSeries' data. By default, if the number of data points stored is less than 1,000, all data is returned; otherwise, 1,000 data points are randomly selected from the time series and returned. This value can be changed through max_data_points, which cannot be greater than 10,000.
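
A minimal sketch with the generated Python client, raising the sampling cap via a request dict; the fully qualified time series name is a placeholder:

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.TensorboardServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    # Placeholder fully qualified TensorboardTimeSeries name.
    time_series = (
        "projects/my-project/locations/us-central1/tensorboards/111"
        "/experiments/exp1/runs/run1/timeSeries/222"
    )

    # Raise the sampling cap from the default 1,000 (maximum 10,000).
    response = client.read_tensorboard_time_series_data(
        request={"tensorboard_time_series": time_series, "max_data_points": 5000}
    )
    for point in response.time_series_data.values:
        print(point.step, point.scalar.value)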

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the tensorboardTimeSeries resource:

  • aiplatform.tensorboardTimeSeries.read

For more information, see the IAM documentation.

ReadTensorboardUsage

rpc ReadTensorboardUsage(ReadTensorboardUsageRequest) returns (ReadTensorboardUsageResponse)

Returns a list of monthly active users for a given TensorBoard instance.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateTensorboard

rpc UpdateTensorboard(UpdateTensorboardRequest) returns (Operation)

Updates a Tensorboard.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboards.update

For more information, see the IAM documentation.

UpdateTensorboardExperiment

rpc UpdateTensorboardExperiment(UpdateTensorboardExperimentRequest) returns (TensorboardExperiment)

Updates a TensorboardExperiment.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboardExperiments.update

For more information, see the IAM documentation.

UpdateTensorboardRun

rpc UpdateTensorboardRun(UpdateTensorboardRunRequest) returns (TensorboardRun)

Updates a TensorboardRun.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboardRuns.update

For more information, see the IAM documentation.

UpdateTensorboardTimeSeries

rpc UpdateTensorboardTimeSeries(UpdateTensorboardTimeSeriesRequest) returns (TensorboardTimeSeries)

Updates a TensorboardTimeSeries.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.tensorboardTimeSeries.update

For more information, see the IAM documentation.

WriteTensorboardExperimentData

rpc WriteTensorboardExperimentData(WriteTensorboardExperimentDataRequest) returns (WriteTensorboardExperimentDataResponse)

Writes time series data points of multiple TensorboardTimeSeries in multiple TensorboardRuns. If any data fails to be ingested, an error is returned.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the tensorboardExperiment resource:

  • aiplatform.tensorboardExperiments.write

For more information, see the IAM documentation.

WriteTensorboardRunData

rpc WriteTensorboardRunData(WriteTensorboardRunDataRequest) returns (WriteTensorboardRunDataResponse)

Writes time series data points into multiple TensorboardTimeSeries under a TensorboardRun. If any data fails to be ingested, an error is returned.
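
A minimal sketch with the generated Python client, writing scalar points for one existing time series; the run name and time series ID are placeholders:

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.TensorboardServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )

    # Placeholder fully qualified TensorboardRun name.
    run = (
        "projects/my-project/locations/us-central1/tensorboards/111"
        "/experiments/exp1/runs/run1"
    )

    # One scalar point per step for an existing time series (placeholder ID).
    data = aiplatform_v1beta1.TimeSeriesData(
        tensorboard_time_series_id="222",
        value_type=aiplatform_v1beta1.TensorboardTimeSeries.ValueType.SCALAR,
        values=[
            aiplatform_v1beta1.TimeSeriesDataPoint(
                step=1, scalar=aiplatform_v1beta1.Scalar(value=0.85)
            ),
            aiplatform_v1beta1.TimeSeriesDataPoint(
                step=2, scalar=aiplatform_v1beta1.Scalar(value=0.91)
            ),
        ],
    )

    client.write_tensorboard_run_data(tensorboard_run=run, time_series_data=[data])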

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the tensorboardRun resource:

  • aiplatform.tensorboardRuns.write

For more information, see the IAM documentation.

VertexRagDataService

A service for managing user data for RAG.

CreateRagCorpus

rpc CreateRagCorpus(CreateRagCorpusRequest) returns (Operation)

Creates a RagCorpus.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteRagCorpus

rpc DeleteRagCorpus(DeleteRagCorpusRequest) returns (Operation)

Deletes a RagCorpus.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteRagFile

rpc DeleteRagFile(DeleteRagFileRequest) returns (Operation)

Deletes a RagFile.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetRagCorpus

rpc GetRagCorpus(GetRagCorpusRequest) returns (RagCorpus)

Gets a RagCorpus.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetRagFile

rpc GetRagFile(GetRagFileRequest) returns (RagFile)

Gets a RagFile.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ImportRagFiles

rpc ImportRagFiles(ImportRagFilesRequest) returns (Operation)

Imports files from Google Cloud Storage or Google Drive into a RagCorpus.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListRagCorpora

rpc ListRagCorpora(ListRagCorporaRequest) returns (ListRagCorporaResponse)

Lists RagCorpora in a Location.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListRagFiles

rpc ListRagFiles(ListRagFilesRequest) returns (ListRagFilesResponse)

Lists RagFiles in a RagCorpus.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateRagCorpus

rpc UpdateRagCorpus(UpdateRagCorpusRequest) returns (Operation)

Updates a RagCorpus.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

VertexRagService

A service for retrieving relevant contexts.

AugmentPrompt

rpc AugmentPrompt(AugmentPromptRequest) returns (AugmentPromptResponse)

Given an input prompt, returns an augmented prompt from the Vertex RAG store to guide the LLM toward generating grounded responses.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CorroborateContent

rpc CorroborateContent(CorroborateContentRequest) returns (CorroborateContentResponse)

Given input text, returns a score that evaluates its factuality. It also extracts claims from the text and returns supporting facts for them.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

RetrieveContexts

rpc RetrieveContexts(RetrieveContextsRequest) returns (RetrieveContextsResponse)

Retrieves relevant contexts for a query.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

VizierService

Vertex AI Vizier API.

Vertex AI Vizier is a service to solve blackbox optimization problems, such as tuning machine learning hyperparameters and searching over deep learning architectures.

AddTrialMeasurement

rpc AddTrialMeasurement(AddTrialMeasurementRequest) returns (Trial)

Adds a measurement of the objective metrics to a Trial. This measurement is assumed to have been taken before the Trial is complete.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the trialName resource:

  • aiplatform.trials.update

For more information, see the IAM documentation.

CheckTrialEarlyStoppingState

rpc CheckTrialEarlyStoppingState(CheckTrialEarlyStoppingStateRequest) returns (Operation)

Checks whether a Trial should stop or not. Returns a long-running operation. When the operation is successful, it will contain a CheckTrialEarlyStoppingStateResponse.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the trialName resource:

  • aiplatform.trials.get

For more information, see the IAM documentation.

CompleteTrial

rpc CompleteTrial(CompleteTrialRequest) returns (Trial)

Marks a Trial as complete.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.trials.update

For more information, see the IAM documentation.

CreateStudy

rpc CreateStudy(CreateStudyRequest) returns (Study)

Creates a Study. A resource name will be generated after creation of the Study.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.studies.create

For more information, see the IAM documentation.

CreateTrial

rpc CreateTrial(CreateTrialRequest) returns (Trial)

Adds a user provided Trial to a Study.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.trials.create

For more information, see the IAM documentation.

DeleteStudy

rpc DeleteStudy(DeleteStudyRequest) returns (Empty)

Deletes a Study.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.studies.delete

For more information, see the IAM documentation.

DeleteTrial

rpc DeleteTrial(DeleteTrialRequest) returns (Empty)

Deletes a Trial.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.trials.delete

For more information, see the IAM documentation.

GetStudy

rpc GetStudy(GetStudyRequest) returns (Study)

Gets a Study by name.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.studies.get

For more information, see the IAM documentation.

GetTrial

rpc GetTrial(GetTrialRequest) returns (Trial)

Gets a Trial.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.trials.get

For more information, see the IAM documentation.

ListOptimalTrials

rpc ListOptimalTrials(ListOptimalTrialsRequest) returns (ListOptimalTrialsResponse)

Lists the pareto-optimal Trials for a multi-objective Study, or the optimal Trials for a single-objective Study. For the definition of pareto-optimality, see https://en.wikipedia.org/wiki/Pareto_efficiency.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.trials.list

For more information, see the IAM documentation.

ListStudies

rpc ListStudies(ListStudiesRequest) returns (ListStudiesResponse)

Lists all the studies in a region for an associated project.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.studies.list

For more information, see the IAM documentation.

ListTrials

rpc ListTrials(ListTrialsRequest) returns (ListTrialsResponse)

Lists the Trials associated with a Study.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.trials.list

For more information, see the IAM documentation.

LookupStudy

rpc LookupStudy(LookupStudyRequest) returns (Study)

Looks up a Study using the user-defined display_name field instead of the fully qualified resource name.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.studies.list

For more information, see the IAM documentation.

StopTrial

rpc StopTrial(StopTrialRequest) returns (Trial)

Stops a Trial.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the name resource:

  • aiplatform.trials.update

For more information, see the IAM documentation.

SuggestTrials

rpc SuggestTrials(SuggestTrialsRequest) returns (Operation)

Adds one or more Trials to a Study, with parameter values suggested by Vertex AI Vizier. Returns a long-running operation associated with the generation of Trial suggestions. When this long-running operation succeeds, it will contain a SuggestTrialsResponse.

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the parent resource:

  • aiplatform.trials.create

For more information, see the IAM documentation.
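
Taken together, the Vizier RPCs above form a simple optimization loop: create a Study, ask the service to suggest Trials, evaluate them, and report the results back. Below is a minimal sketch of that loop, assuming the GAPIC client from the google-cloud-aiplatform Python package; the project, location, metric, and parameter names are placeholders.

```python
# Minimal Vizier loop sketch (assumptions: google-cloud-aiplatform GAPIC
# client; "my-project" / "us-central1" are placeholders).
from google.cloud import aiplatform_v1beta1

client = aiplatform_v1beta1.VizierServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
parent = "projects/my-project/locations/us-central1"

# CreateStudy: the resource name is generated by the service after creation.
study = client.create_study(
    parent=parent,
    study=aiplatform_v1beta1.Study(
        display_name="my-study",
        study_spec=aiplatform_v1beta1.StudySpec(
            metrics=[
                aiplatform_v1beta1.StudySpec.MetricSpec(
                    metric_id="accuracy",
                    goal=aiplatform_v1beta1.StudySpec.MetricSpec.GoalType.MAXIMIZE,
                )
            ],
            parameters=[
                aiplatform_v1beta1.StudySpec.ParameterSpec(
                    parameter_id="learning_rate",
                    double_value_spec=aiplatform_v1beta1.StudySpec.ParameterSpec.DoubleValueSpec(
                        min_value=1e-4, max_value=1e-1
                    ),
                )
            ],
        ),
    ),
)

# SuggestTrials returns a long-running operation; its result is a
# SuggestTrialsResponse carrying the suggested Trials.
operation = client.suggest_trials(
    request={"parent": study.name, "suggestion_count": 1, "client_id": "worker-0"}
)
trials = operation.result().trials
print(trials[0].name)
```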

AcceleratorType

Represents a hardware accelerator type.

Enums
ACCELERATOR_TYPE_UNSPECIFIED Unspecified accelerator type, which means no accelerator.
NVIDIA_TESLA_K80 Deprecated: Nvidia Tesla K80 GPU has reached end of support; see https://cloud.google.com/compute/docs/eol/k80-eol.

NVIDIA_TESLA_P100 Nvidia Tesla P100 GPU.
NVIDIA_TESLA_V100 Nvidia Tesla V100 GPU.
NVIDIA_TESLA_P4 Nvidia Tesla P4 GPU.
NVIDIA_TESLA_T4 Nvidia Tesla T4 GPU.
NVIDIA_TESLA_A100 Nvidia Tesla A100 GPU.
NVIDIA_A100_80GB Nvidia A100 80GB GPU.
NVIDIA_L4 Nvidia L4 GPU.
NVIDIA_H100_80GB Nvidia H100 80GB GPU.
NVIDIA_H100_MEGA_80GB Nvidia H100 Mega 80GB GPU.
TPU_V2 TPU v2.
TPU_V3 TPU v3.
TPU_V4_POD TPU v4.
TPU_V5_LITEPOD TPU v5.

AddContextArtifactsAndExecutionsRequest

Request message for MetadataService.AddContextArtifactsAndExecutions.

Fields
context

string

Required. The resource name of the Context that the Artifacts and Executions belong to. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}

artifacts[]

string

The resource names of the Artifacts to attribute to the Context.

Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}

executions[]

string

The resource names of the Executions to associate with the Context.

Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}
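
A hedged sketch of calling this RPC through the Python GAPIC client; the MetadataServiceClient method and its flattened parameters are assumed from the request fields above, and all resource names are placeholders.

```python
# Attribute existing Artifacts and Executions to a Context (sketch).
from google.cloud import aiplatform_v1beta1

md_client = aiplatform_v1beta1.MetadataServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
store = "projects/my-project/locations/us-central1/metadataStores/default"
md_client.add_context_artifacts_and_executions(
    context=f"{store}/contexts/my-context",
    artifacts=[f"{store}/artifacts/my-artifact"],
    executions=[f"{store}/executions/my-execution"],
)
```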

AddContextArtifactsAndExecutionsResponse

This type has no fields.

Response message for MetadataService.AddContextArtifactsAndExecutions.

AddContextChildrenRequest

Request message for MetadataService.AddContextChildren.

Fields
context

string

Required. The resource name of the parent Context.

Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}

child_contexts[]

string

The resource names of the child Contexts.

AddContextChildrenResponse

This type has no fields.

Response message for MetadataService.AddContextChildren.

AddExecutionEventsRequest

Request message for MetadataService.AddExecutionEvents.

Fields
execution

string

Required. The resource name of the Execution that the Events connect Artifacts with. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}

events[]

Event

The Events to create and add.

AddExecutionEventsResponse

This type has no fields.

Response message for MetadataService.AddExecutionEvents.

AddTrialMeasurementRequest

Request message for VizierService.AddTrialMeasurement.

Fields
trial_name

string

Required. The name of the trial to add measurement. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}

measurement

Measurement

Required. The measurement to be added to a Trial.
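
Continuing the Vizier loop sketched earlier under SuggestTrials, a worker would use this message to report an intermediate result and then mark the Trial complete; client and trials carry over from that sketch, and the metric value is illustrative.

```python
# Report one measurement for a suggested Trial, then complete it (sketch).
measurement = aiplatform_v1beta1.Measurement(
    step_count=100,
    metrics=[
        aiplatform_v1beta1.Measurement.Metric(metric_id="accuracy", value=0.91)
    ],
)
client.add_trial_measurement(
    request={"trial_name": trials[0].name, "measurement": measurement}
)
client.complete_trial(request={"name": trials[0].name})
```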

Annotation

Used to assign a specific AnnotationSpec to a particular area of a DataItem, or to the whole DataItem.

Fields
name

string

Output only. Resource name of the Annotation.

payload_schema_uri

string

Required. Google Cloud Storage URI that points to a YAML file describing the payload. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/; note that the chosen schema must be consistent with the parent Dataset's metadata.

payload

Value

Required. The schema of the payload can be found in payload_schema.

create_time

Timestamp

Output only. Timestamp when this Annotation was created.

update_time

Timestamp

Output only. Timestamp when this Annotation was last updated.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

annotation_source

UserActionReference

Output only. The source of the Annotation.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize your Annotations.

Label keys and values can be no longer than 64 characters (Unicode codepoints), and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. No more than 64 user labels can be associated with one Annotation (system labels are excluded).

See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. The following system labels exist for each Annotation:

  • "aiplatform.googleapis.com/annotation_set_name": optional, name of the UI's annotation set this Annotation belongs to. If not set, the Annotation is not visible in the UI.

  • "aiplatform.googleapis.com/payload_schema": output only, its value is the payload_schema's title.

AnnotationSpec

Identifies a concept with which DataItems may be annotated.

Fields
name

string

Output only. Resource name of the AnnotationSpec.

display_name

string

Required. The user-defined name of the AnnotationSpec. The name can be up to 128 characters long and can consist of any UTF-8 characters.

create_time

Timestamp

Output only. Timestamp when this AnnotationSpec was created.

update_time

Timestamp

Output only. Timestamp when AnnotationSpec was last updated.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

ApiAuth

The generic, reusable API auth config.

Fields
Union field auth_config. The auth config. auth_config can be only one of the following:
api_key_config

ApiKeyConfig

The API secret.

ApiKeyConfig

The API secret.

Fields
api_key_secret_version

string

Required. The Secret Manager secret version resource name that stores the API key, e.g. projects/{project}/secrets/{secret}/versions/{version}

Artifact

Instance of a general artifact.

Fields
name

string

Output only. The resource name of the Artifact.

display_name

string

User-provided display name of the Artifact. May be up to 128 Unicode characters.

uri

string

The uniform resource identifier of the artifact file. May be empty if there is no actual artifact file.

etag

string

An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

The labels with user-defined metadata to organize your Artifacts.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Artifact (System labels are excluded).

create_time

Timestamp

Output only. Timestamp when this Artifact was created.

update_time

Timestamp

Output only. Timestamp when this Artifact was last updated.

state

State

The state of this Artifact. This is a property of the Artifact, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines), and the system does not prescribe or check the validity of state transitions.

schema_title

string

The title of the schema describing the metadata.

The schema title and version are expected to have been registered in earlier Create Schema calls, and together they serve as a unique identifier for schemas within the local metadata store.

schema_version

string

The version of the schema in schema_name to use.

The schema title and version are expected to have been registered in earlier Create Schema calls, and together they serve as a unique identifier for schemas within the local metadata store.

metadata

Struct

Properties of the Artifact. Leading and trailing spaces in top-level metadata keys will be trimmed. The size of this field should not exceed 200KB.

description

string

Description of the Artifact.

State

Describes the state of the Artifact.

Enums
STATE_UNSPECIFIED Unspecified state for the Artifact.
PENDING A state used by systems like Vertex AI Pipelines to indicate that the underlying data item represented by this Artifact is being created.
LIVE A state indicating that the Artifact should exist, unless something external to the system deletes it.

ArtifactTypeSchema

The definition of an artifact type in MLMD.

Fields
schema_version

string

The schema version of the artifact. If the value is not set, it defaults to the latest version in the system.

Union field kind.

kind can be only one of the following:

schema_title

string

The name of the type. The format of the title must be: <namespace>.<title>. Examples: aiplatform.Model, acme.CustomModel. When this field is set, the type must be pre-registered in the MLMD store.

schema_uri
(deprecated)

string

Points to a YAML file stored on Cloud Storage describing the format. Deprecated. Use PipelineArtifactTypeSchema.schema_title or PipelineArtifactTypeSchema.instance_schema instead.

instance_schema

string

Contains a raw YAML string, describing the format of the properties of the type.

AssignNotebookRuntimeOperationMetadata

Metadata information for NotebookService.AssignNotebookRuntime.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

progress_message

string

A human-readable message that shows the intermediate progress details of NotebookRuntime.

AssignNotebookRuntimeRequest

Request message for NotebookService.AssignNotebookRuntime.

Fields
parent

string

Required. The resource name of the Location to get the NotebookRuntime assignment. Format: projects/{project}/locations/{location}

notebook_runtime_template

string

Required. The resource name of the NotebookRuntimeTemplate based on which a NotebookRuntime will be assigned (reuse or create a new one).

notebook_runtime

NotebookRuntime

Required. Provide runtime specific information (e.g. runtime owner, notebook id) used for NotebookRuntime assignment.

notebook_runtime_id

string

Optional. User specified ID for the notebook runtime.

Attribution

Attribution that explains a particular prediction output.

Fields
baseline_output_value

double

Output only. Model predicted output if the input instance is constructed from the baselines of all the features defined in ExplanationMetadata.inputs. The field name of the output is determined by the key in ExplanationMetadata.outputs.

If the Model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by output_index.

If there are multiple baselines, their output values are averaged.

instance_output_value

double

Output only. Model predicted output on the corresponding [explanation instance][ExplainRequest.instances]. The field name of the output is determined by the key in ExplanationMetadata.outputs.

If the Model predicted output has multiple dimensions, this is the value in the output located by output_index.

feature_attributions

Value

Output only. Attributions of each explained feature. Features are extracted from the prediction instances according to explanation metadata for inputs.

The value is a struct, whose keys are the name of the feature. The values are how much the feature in the instance contributed to the predicted result.

The format of the value is determined by the feature's input format:

  • If the feature is a scalar value, the attribution value is a floating number.

  • If the feature is an array of scalar values, the attribution value is an array.

  • If the feature is a struct, the attribution value is a struct. The keys in the attribution value struct are the same as the keys in the feature struct. The formats of the values in the attribution struct are determined by the formats of the values in the feature struct.

The ExplanationMetadata.feature_attributions_schema_uri field, pointed to by the ExplanationSpec field of the Endpoint.deployed_models object, points to the schema file that describes the features and their attribution values (if it is populated).

output_index[]

int32

Output only. The index that locates the explained prediction output.

If the prediction output is a scalar value, output_index is not populated. If the prediction output has multiple dimensions, the length of the output_index list is the same as the number of dimensions of the output. The i-th element in output_index is the element index of the i-th dimension of the output vector. Indices start from 0.

output_display_name

string

Output only. The display name of the output identified by output_index. For example, the predicted class name by a multi-classification Model.

This field is populated if and only if the Model predicts display names as a separate field along with the explained output. The predicted display name must have the same shape as the explained output, and can be located using output_index.

approximation_error

double

Output only. Error of feature_attributions caused by approximation used in the explanation method. Lower value means more precise attributions.

See this introduction for more information.

output_name

string

Output only. Name of the explain output. Specified as the key in ExplanationMetadata.outputs.

AugmentPromptRequest

Request message for AugmentPrompt.

Fields
parent

string

Required. The resource name of the Location from which to augment the prompt. The caller must have permission to make a call in the project. Format: projects/{project}/locations/{location}.

contents[]

Content

Optional. Input content to augment; only text format is supported for now.

model

Model

Optional. Metadata of the backend deployed model.

Union field data_source. The data source for retrieving contexts. data_source can be only one of the following:
vertex_rag_store

VertexRagStore

Optional. Retrieves contexts from the Vertex RagStore.

Model

Metadata of the backend deployed model.

Fields
model

string

Optional. The model that the user will send the augmented prompt for content generation.

model_version

string

Optional. The model version of the backend deployed model.

AugmentPromptResponse

Response message for AugmentPrompt.

Fields
augmented_prompt[]

Content

Augmented prompt; only text format is supported for now.

facts[]

Fact

Retrieved facts from RAG data sources.

AuthConfig

Auth configuration to run the extension.

Fields
auth_type

AuthType

Type of auth scheme.

Union field auth_config.

auth_config can be only one of the following:

api_key_config

ApiKeyConfig

Config for API key auth.

http_basic_auth_config

HttpBasicAuthConfig

Config for HTTP Basic auth.

google_service_account_config

GoogleServiceAccountConfig

Config for Google Service Account auth.

oauth_config

OauthConfig

Config for user oauth.

oidc_config

OidcConfig

Config for user OIDC auth.

ApiKeyConfig

Config for authentication with API key.

Fields
name

string

Required. The parameter name of the API key. For example, if the API request is "https://example.com/act?api_key=<API KEY>", then "api_key" would be the parameter name.

api_key_secret

string

Optional. The name of the SecretManager secret version resource storing the API key. Format: projects/{project}/secrets/{secret}/versions/{version}

http_element_location

HttpElementLocation

Required. The location of the API key.
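
Putting these fields together, an AuthConfig for API-key auth might be built as follows. This is a sketch only; it assumes the v1beta1 extension types (AuthConfig, AuthType, HttpElementLocation) are exported by the google-cloud-aiplatform package, and the secret path is a placeholder.

```python
# Illustrative API-key AuthConfig for extension execution (sketch).
from google.cloud import aiplatform_v1beta1

auth_config = aiplatform_v1beta1.AuthConfig(
    auth_type=aiplatform_v1beta1.AuthType.API_KEY_AUTH,
    api_key_config=aiplatform_v1beta1.AuthConfig.ApiKeyConfig(
        name="api_key",  # query parameter that carries the key
        api_key_secret="projects/my-project/secrets/my-secret/versions/1",
        http_element_location=aiplatform_v1beta1.HttpElementLocation.HTTP_IN_QUERY,
    ),
)
```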

GoogleServiceAccountConfig

Config for Google Service Account Authentication.

Fields
service_account

string

Optional. The service account that the extension execution service runs as.

HttpBasicAuthConfig

Config for HTTP Basic Authentication.

Fields
credential_secret

string

Required. The name of the SecretManager secret version resource storing the base64-encoded credentials. Format: projects/{project}/secrets/{secret}/versions/{version}

OauthConfig

Config for user oauth.

Fields

Union field oauth_config.

oauth_config can be only one of the following:

access_token

string

Access token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.

service_account

string

The service account used to generate access tokens for executing the Extension.

OidcConfig

Config for user OIDC auth.

Fields

Union field oidc_config.

oidc_config can be only one of the following:

id_token

string

OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.

service_account

string

The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc).

AuthType

Type of Auth.

Enums
AUTH_TYPE_UNSPECIFIED
NO_AUTH No Auth.
API_KEY_AUTH API Key Auth.
HTTP_BASIC_AUTH HTTP Basic Auth.
GOOGLE_SERVICE_ACCOUNT_AUTH Google Service Account Auth.
OAUTH OAuth auth.
OIDC_AUTH OpenID Connect (OIDC) Auth.

AutomaticResources

A description of resources that are, to a large degree, decided by Vertex AI and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines.

Fields
min_replica_count

int32

Immutable. The minimum number of replicas that this DeployedModel will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas, up to max_replica_count; as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.

max_replica_count

int32

Immutable. The maximum number of replicas this DeployedModel may be deployed on when traffic against it increases. If the requested value is too large, the deployment will error; but if deployment succeeds, then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica count.

AutoscalingMetricSpec

The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.

Fields
metric_name

string

Required. The resource metric name. Supported metrics:

  • For Online Prediction:
  • aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle
  • aiplatform.googleapis.com/prediction/online/cpu/utilization
target

int32

The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
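
For instance, a spec targeting 70% GPU duty cycle could be constructed like this (a sketch; the message path assumes the google-cloud-aiplatform package):

```python
# Autoscale on accelerator duty cycle, targeting 70% utilization (sketch).
from google.cloud import aiplatform_v1beta1

metric_spec = aiplatform_v1beta1.AutoscalingMetricSpec(
    metric_name="aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle",
    target=70,  # percent; the service defaults to 60 when unset
)
```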

AvroSource

The storage details for Avro input content.

Fields
gcs_source

GcsSource

Required. Google Cloud Storage location.

BatchCancelPipelineJobsOperationMetadata

Runtime operation information for PipelineService.BatchCancelPipelineJobs.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

BatchCancelPipelineJobsRequest

Request message for PipelineService.BatchCancelPipelineJobs.

Fields
parent

string

Required. The name of the PipelineJobs' parent resource. Format: projects/{project}/locations/{location}

names[]

string

Required. The names of the PipelineJobs to cancel. A maximum of 32 PipelineJobs can be cancelled in a batch. Format: projects/{project}/locations/{location}/pipelineJobs/{pipelineJob}

BatchCancelPipelineJobsResponse

Response message for PipelineService.BatchCancelPipelineJobs.

Fields
pipeline_jobs[]

PipelineJob

PipelineJobs cancelled.

BatchCreateFeaturesOperationMetadata

Details of operations that perform batch create Features.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Feature.

BatchCreateFeaturesRequest

Request message for FeaturestoreService.BatchCreateFeatures. Request message for FeatureRegistryService.BatchCreateFeatures.

Fields
parent

string

Required. The resource name of the EntityType/FeatureGroup to create the batch of Features under. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type} projects/{project}/locations/{location}/featureGroups/{feature_group}

requests[]

CreateFeatureRequest

Required. The request message specifying the Features to create. All Features must be created under the same parent EntityType / FeatureGroup. The parent field in each child request message can be omitted. If parent is set in a child request, then the value must match the parent value in this request message.

BatchCreateFeaturesResponse

Response message for FeaturestoreService.BatchCreateFeatures.

Fields
features[]

Feature

The Features created.

BatchCreateTensorboardRunsRequest

Request message for TensorboardService.BatchCreateTensorboardRuns.

Fields
parent

string

Required. The resource name of the TensorboardExperiment to create the TensorboardRuns in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment} The parent field in the CreateTensorboardRunRequest messages must match this field.

requests[]

CreateTensorboardRunRequest

Required. The request message specifying the TensorboardRuns to create. A maximum of 1000 TensorboardRuns can be created in a batch.

BatchCreateTensorboardRunsResponse

Response message for TensorboardService.BatchCreateTensorboardRuns.

Fields
tensorboard_runs[]

TensorboardRun

The created TensorboardRuns.

BatchCreateTensorboardTimeSeriesRequest

Request message for TensorboardService.BatchCreateTensorboardTimeSeries.

Fields
parent

string

Required. The resource name of the TensorboardExperiment to create the TensorboardTimeSeries in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment} The TensorboardRuns referenced by the parent fields in the CreateTensorboardTimeSeriesRequest messages must be sub resources of this TensorboardExperiment.

requests[]

CreateTensorboardTimeSeriesRequest

Required. The request message specifying the TensorboardTimeSeries to create. A maximum of 1000 TensorboardTimeSeries can be created in a batch.

BatchCreateTensorboardTimeSeriesResponse

Response message for TensorboardService.BatchCreateTensorboardTimeSeries.

Fields
tensorboard_time_series[]

TensorboardTimeSeries

The created TensorboardTimeSeries.

BatchDedicatedResources

A description of resources that are used for performing batch operations, are dedicated to a Model, and need manual configuration.

Fields
machine_spec

MachineSpec

Required. Immutable. The specification of a single machine.

starting_replica_count

int32

Immutable. The number of machine replicas used at the start of the batch operation. If not set, Vertex AI decides starting number, not greater than max_replica_count

max_replica_count

int32

Immutable. The maximum number of machine replicas the batch operation may be scaled to. The default value is 10.

BatchDeletePipelineJobsRequest

Request message for PipelineService.BatchDeletePipelineJobs.

Fields
parent

string

Required. The name of the PipelineJobs' parent resource. Format: projects/{project}/locations/{location}

names[]

string

Required. The names of the PipelineJobs to delete. A maximum of 32 PipelineJobs can be deleted in a batch. Format: projects/{project}/locations/{location}/pipelineJobs/{pipelineJob}

BatchDeletePipelineJobsResponse

Response message for PipelineService.BatchDeletePipelineJobs.

Fields
pipeline_jobs[]

PipelineJob

PipelineJobs deleted.

BatchImportEvaluatedAnnotationsRequest

Request message for ModelService.BatchImportEvaluatedAnnotations

Fields
parent

string

Required. The name of the parent ModelEvaluationSlice resource. Format: projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}/slices/{slice}

evaluated_annotations[]

EvaluatedAnnotation

Required. Evaluated annotations resource to be imported.

BatchImportEvaluatedAnnotationsResponse

Response message for ModelService.BatchImportEvaluatedAnnotations

Fields
imported_evaluated_annotations_count

int32

Output only. Number of EvaluatedAnnotations imported.

BatchImportModelEvaluationSlicesRequest

Request message for ModelService.BatchImportModelEvaluationSlices

Fields
parent

string

Required. The name of the parent ModelEvaluation resource. Format: projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}

model_evaluation_slices[]

ModelEvaluationSlice

Required. Model evaluation slice resource to be imported.

BatchImportModelEvaluationSlicesResponse

Response message for ModelService.BatchImportModelEvaluationSlices

Fields
imported_model_evaluation_slices[]

string

Output only. List of imported ModelEvaluationSlice.name.

BatchMigrateResourcesOperationMetadata

Runtime operation information for MigrationService.BatchMigrateResources.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

partial_results[]

PartialResult

Partial results that reflect the latest migration operation progress.

PartialResult

Represents a partial result in batch migration operation for one MigrateResourceRequest.

Fields
request

MigrateResourceRequest

It's the same as the value in BatchMigrateResourcesRequest.migrate_resource_requests.

Union field result. If the resource's migration is ongoing, none of the result will be set. If the resource's migration is finished, either error or one of the migrated resource names will be filled. result can be only one of the following:
error

Status

The error result of the migration request in case of failure.

model

string

Migrated model resource name.

dataset

string

Migrated dataset resource name.

BatchMigrateResourcesRequest

Request message for MigrationService.BatchMigrateResources.

Fields
parent

string

Required. The location that the migrated resources will live in. Format: projects/{project}/locations/{location}

migrate_resource_requests[]

MigrateResourceRequest

Required. The request messages specifying the resources to migrate. They must be in the same location as the destination. Up to 50 resources can be migrated in one batch.

BatchMigrateResourcesResponse

Response message for MigrationService.BatchMigrateResources.

Fields
migrate_resource_responses[]

MigrateResourceResponse

Successfully migrated resources.

BatchPredictionJob

A job that uses a Model to produce predictions on multiple input instances. If predictions for significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.

Fields
name

string

Output only. Resource name of the BatchPredictionJob.

display_name

string

Required. The user-defined name of this BatchPredictionJob.

model

string

The name of the Model resource that produces the predictions via this job; it must share the same ancestor Location. Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set.

The model resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.

The model resource could also be a publisher model. Example: publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/publishers/{publisher}/models/{model}

model_version_id

string

Output only. The version ID of the Model that produces the predictions via this job.

unmanaged_container_model

UnmanagedContainerModel

Contains model information necessary to perform batch prediction without requiring uploading to model registry. Exactly one of model and unmanaged_container_model must be set.

input_config

InputConfig

Required. Input configuration of the instances on which predictions are performed. The schema of any single instance may be specified via the Model's PredictSchemata's instance_schema_uri.

instance_config

InstanceConfig

Configuration for how to convert batch prediction input instances to the prediction instances that are sent to the Model.

model_parameters

Value

The parameters that govern the predictions. The schema of the parameters may be specified via the Model's PredictSchemata's parameters_schema_uri.

output_config

OutputConfig

Required. The Configuration specifying where output predictions should be written. The schema of any single prediction may be specified as a concatenation of Model's PredictSchemata's instance_schema_uri and prediction_schema_uri.

dedicated_resources

BatchDedicatedResources

The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES, this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided.

service_account

string

The service account that the DeployedModel's container runs as. If not specified, a system generated one will be used, which has minimal permissions and the custom container, if used, may not have enough permission to access other Google Cloud resources.

Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

manual_batch_tuning_parameters

ManualBatchTuningParameters

Immutable. Parameters configuring the batch behavior. Currently only applicable when dedicated_resources are used (in other cases Vertex AI does the tuning itself).

generate_explanation

bool

Generate explanation with the batch prediction results.

When set to true, the batch prediction output changes based on the predictions_format field of the BatchPredictionJob.output_config object:

  • bigquery: output includes a column named explanation. The value is a struct that conforms to the Explanation object.
  • jsonl: The JSON objects on each line include an additional entry keyed explanation. The value of the entry is a JSON object that conforms to the Explanation object.
  • csv: Generating explanations for CSV format is not supported.

If this field is set to true, either the Model.explanation_spec or explanation_spec must be populated.

explanation_spec

ExplanationSpec

Explanation configuration for this BatchPredictionJob. Can be specified only if generate_explanation is set to true.

This value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of the explanation_spec object is not populated, the corresponding field of the Model.explanation_spec object is inherited.

output_info

OutputInfo

Output only. Information further describing the output of this job.

state

JobState

Output only. The detailed state of the job.

error

Status

Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

partial_failures[]

Status

Output only. Partial failures encountered. For example, single files that can't be read. This field never exceeds 20 entries. Status details fields contain standard Google Cloud error details.

resources_consumed

ResourcesConsumed

Output only. Information about resources that have been consumed by this job. Provided in real time on a best-effort basis, as well as a final value once the job completes.

Note: This field currently may not be populated for batch predictions that use AutoML Models.

completion_stats

CompletionStats

Output only. Statistics on completed and failed prediction instances.

create_time

Timestamp

Output only. Time when the BatchPredictionJob was created.

start_time

Timestamp

Output only. Time when the BatchPredictionJob for the first time entered the JOB_STATE_RUNNING state.

end_time

Timestamp

Output only. Time when the BatchPredictionJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

update_time

Timestamp

Output only. Time when the BatchPredictionJob was most recently updated.

labels

map<string, string>

The labels with user-defined metadata to organize BatchPredictionJobs.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

encryption_spec

EncryptionSpec

Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key.

model_monitoring_config

ModelMonitoringConfig

Model monitoring config will be used to analyze model behavior, based on the input and output of the batch prediction job, as well as the provided training dataset.

model_monitoring_stats_anomalies[]

ModelMonitoringStatsAnomalies

Get batch prediction job monitoring statistics.

model_monitoring_status

Status

Output only. The running status of the model monitoring pipeline.

disable_container_logging

bool

For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging by default. Please note that the logs incur cost, which is subject to Cloud Logging pricing.

Users can disable container logging by setting this flag to true.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

InputConfig

Configures the input to BatchPredictionJob. See Model.supported_input_storage_formats for Model's supported input formats, and how instances should be expressed via any of them.

Fields
instances_format

string

Required. The format in which instances are given, must be one of the Model's supported_input_storage_formats.

Union field source. Required. The source of the input. source can be only one of the following:
gcs_source

GcsSource

The Cloud Storage location for the input instances.

bigquery_source

BigQuerySource

The BigQuery location of the input table. The schema of the table should be in the format described by the given context OpenAPI Schema, if one is provided. The table may contain additional columns that are not described by the schema, and they will be ignored.

InstanceConfig

Configuration defining how to transform batch prediction input instances to the instances that the Model accepts.

Fields
instance_type

string

The format of the instance that the Model accepts. Vertex AI will convert compatible batch prediction input instance formats to the specified format.

Supported values are:

  • object: Each input is converted to JSON object format.

    • For bigquery, each row is converted to an object.
    • For jsonl, each line of the JSONL input must be an object.
    • Does not apply to csv, file-list, tf-record, or tf-record-gzip.
  • array: Each input is converted to JSON array format.

    • For bigquery, each row is converted to an array. The order of columns is determined by the BigQuery column order, unless included_fields is populated. included_fields must be populated for specifying field orders.
    • For jsonl, if each line of the JSONL input is an object, included_fields must be populated for specifying field orders.
    • Does not apply to csv, file-list, tf-record, or tf-record-gzip.

If not specified, Vertex AI converts the batch prediction input as follows:

  • For bigquery and csv, the behavior is the same as array. The order of columns is the same as defined in the file or table, unless included_fields is populated.
  • For jsonl, the prediction instance format is determined by each line of the input.
  • For tf-record/tf-record-gzip, each record will be converted to an object in the format of {"b64": <value>}, where <value> is the Base64-encoded string of the content of the record.
  • For file-list, each file in the list will be converted to an object in the format of {"b64": <value>}, where <value> is the Base64-encoded string of the content of the file.
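
As a concrete illustration of the conversions above (plain Python; the column names and values are made up):

```python
# One BigQuery row with columns (sepal_length, sepal_width) = (5.1, 3.5).

# instance_type == "object": the row becomes a JSON object keyed by column.
as_object = {"sepal_length": 5.1, "sepal_width": 3.5}

# instance_type == "array" with included_fields = ["sepal_width", "sepal_length"]:
# values appear in the order given by included_fields.
as_array = [3.5, 5.1]
```
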
key_field

string

The name of the field that is considered as a key.

The values identified by the key field are not included in the transformed instances that are sent to the Model. This is similar to specifying the name of this field in excluded_fields. In addition, the batch prediction output will not include the instances. Instead, the output will only include the value of the key field, in a field named key in the output:

  • For the jsonl output format, the output will have a key field instead of the instance field.
  • For the csv/bigquery output formats, the output will have a key column instead of the instance feature columns.

The input must be JSONL with objects on each line, CSV, BigQuery, or TfRecord.

included_fields[]

string

Fields that will be included in the prediction instance that is sent to the Model.

If instance_type is array, the order of field names in included_fields also determines the order of the values in the array.

When included_fields is populated, excluded_fields must be empty.

The input must be JSONL with objects on each line, BigQuery, or TfRecord.

excluded_fields[]

string

Fields that will be excluded from the prediction instance that is sent to the Model.

Excluded fields will be attached to the batch prediction output if key_field is not specified.

When excluded_fields is populated, included_fields must be empty.

The input must be JSONL with objects on each line, BigQuery, or TfRecord.

OutputConfig

Configures the output of BatchPredictionJob. See Model.supported_output_storage_formats for supported output formats, and how predictions are expressed via any of them.

Fields
predictions_format

string

Required. The format in which Vertex AI gives the predictions, must be one of the Model's supported_output_storage_formats.

Union field destination. Required. The destination of the output. destination can be only one of the following:
gcs_destination

GcsDestination

The Cloud Storage location of the directory where the output is to be written to. In the given directory a new directory is created. Its name is prediction-<model-display-name>-<job-create-time>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format.

Inside of it, files predictions_0001.<extension>, predictions_0002.<extension>, ..., predictions_N.<extension> are created, where <extension> depends on the chosen predictions_format and N depends on the total number of successfully predicted instances. If the Model has both instance and prediction schemata defined, then each such file contains predictions as per the predictions_format.

If prediction for any instance failed (partially or completely), then additional errors_0001.<extension>, errors_0002.<extension>, ..., errors_N.<extension> files are created (N depends on the total number of failed predictions). These files contain the failed instances, as per their schema, followed by an additional error field which as value has google.rpc.Status containing only code and message fields.

bigquery_destination

BigQueryDestination

The BigQuery project or dataset location where the output is to be written to. If project is provided, a new dataset is created with the name prediction_<model-display-name>_<job-create-time>, where <model-display-name> is made BigQuery-dataset-name compatible (for example, most special characters become underscores) and the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format.

In the dataset two tables will be created, predictions and errors. If the Model has both instance and prediction schemata defined, then the tables have columns as follows: the predictions table contains instances for which the prediction succeeded; it has columns as per a concatenation of the Model's instance and prediction schemata. The errors table contains rows for which the prediction failed; it has instance columns, as per the instance schema, followed by a single "errors" column, which as values has google.rpc.Status represented as a STRUCT, containing only code and message.
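
A minimal end-to-end sketch tying the input and output configs together, assuming the google-cloud-aiplatform GAPIC client; the model and bucket names are placeholders.

```python
# Create a batch prediction job reading JSONL from GCS and writing JSONL back.
from google.cloud import aiplatform_v1beta1

job_client = aiplatform_v1beta1.JobServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
job = aiplatform_v1beta1.BatchPredictionJob(
    display_name="my-batch-job",
    model="projects/my-project/locations/us-central1/models/123",
    input_config=aiplatform_v1beta1.BatchPredictionJob.InputConfig(
        instances_format="jsonl",
        gcs_source=aiplatform_v1beta1.GcsSource(
            uris=["gs://my-bucket/instances.jsonl"]
        ),
    ),
    output_config=aiplatform_v1beta1.BatchPredictionJob.OutputConfig(
        predictions_format="jsonl",
        gcs_destination=aiplatform_v1beta1.GcsDestination(
            output_uri_prefix="gs://my-bucket/output/"
        ),
    ),
)
job = job_client.create_batch_prediction_job(
    parent="projects/my-project/locations/us-central1", batch_prediction_job=job
)
print(job.name, job.state)
```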

OutputInfo

Further describes this job's output. Supplements output_config.

Fields
bigquery_output_table

string

Output only. The name of the BigQuery table created, in predictions_<timestamp> format, into which the prediction output is written. Can be used by UI to generate the BigQuery output path, for example.

Union field output_location. The output location into which prediction output is written. output_location can be only one of the following:
gcs_output_directory

string

Output only. The full path of the Cloud Storage directory created, into which the prediction output is written.

bigquery_output_dataset

string

Output only. The path of the BigQuery dataset created, in bq://projectId.bqDatasetId format, into which the prediction output is written.

BatchReadFeatureValuesOperationMetadata

Details of operations that batch reads Feature values.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Featurestore batch read Features values.

BatchReadFeatureValuesRequest

Request message for FeaturestoreService.BatchReadFeatureValues.

Fields
featurestore

string

Required. The resource name of the Featurestore from which to query Feature values. Format: projects/{project}/locations/{location}/featurestores/{featurestore}

destination

FeatureValueDestination

Required. Specifies output location and format.

pass_through_fields[]

PassThroughField

When not empty, the specified fields in the *_read_instances source will be joined as-is in the output, in addition to those fields from the Featurestore Entity.

For BigQuery source, the type of the pass-through values will be automatically inferred. For CSV source, the pass-through values will be passed as opaque bytes.

entity_type_specs[]

EntityTypeSpec

Required. Specifies EntityType grouping Features to read values of and settings.

start_time

Timestamp

Optional. Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision.

Union field read_option.

read_option can be only one of the following:

csv_read_instances

CsvSource

Each read instance consists of exactly one read timestamp and one or more entity IDs identifying entities of the corresponding EntityTypes whose Features are requested.

Each output instance contains Feature values of requested entities concatenated together as of the read time.

An example read instance may be foo_entity_id, bar_entity_id, 2020-01-01T10:00:00.123Z.

An example output instance may be foo_entity_id, bar_entity_id, 2020-01-01T10:00:00.123Z, foo_entity_feature1_value, bar_entity_feature2_value.

Timestamp in each read instance must be millisecond-aligned.

csv_read_instances are read instances stored in a plain-text CSV file. The header should be: [ENTITY_TYPE_ID1], [ENTITY_TYPE_ID2], ..., timestamp

The columns can be in any order.

Values in the timestamp column must use the RFC 3339 format, e.g. 2012-07-30T10:43:17.123Z.

bigquery_read_instances

BigQuerySource

Similar to csv_read_instances, but from BigQuery source.

EntityTypeSpec

Selects Features of an EntityType to read values of and specifies read settings.

Fields
entity_type_id

string

Required. ID of the EntityType to select Features. The EntityType id is the entity_type_id specified during EntityType creation.

feature_selector

FeatureSelector

Required. Selectors choosing which Feature values to read from the EntityType.

settings[]

DestinationFeatureSetting

Per-Feature settings for the batch read.

PassThroughField

Describe pass-through fields in read_instance source.

Fields
field_name

string

Required. The name of the field in the CSV header or the name of the column in BigQuery table. The naming restriction is the same as Feature.name.

BatchReadFeatureValuesResponse

This type has no fields.

Response message for FeaturestoreService.BatchReadFeatureValues.

BatchReadTensorboardTimeSeriesDataRequest

Request message for TensorboardService.BatchReadTensorboardTimeSeriesData.

Fields
tensorboard

string

Required. The resource name of the Tensorboard containing TensorboardTimeSeries to read data from. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}. The TensorboardTimeSeries referenced by time_series must be sub resources of this Tensorboard.

time_series[]

string

Required. The resource names of the TensorboardTimeSeries to read data from. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}

BatchReadTensorboardTimeSeriesDataResponse

Response message for TensorboardService.BatchReadTensorboardTimeSeriesData.

Fields
time_series_data[]

TimeSeriesData

The returned time series data.

BigQueryDestination

The BigQuery location for the output content.

Fields
output_uri

string

Required. BigQuery URI to a project or table, up to 2000 characters long.

When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist.

Accepted forms:

  • BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.

BigQuerySource

The BigQuery location for the input content.

Fields
input_uri

string

Required. BigQuery URI to a table, up to 2000 characters long. Accepted forms:

  • BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.

BleuInput

Input for bleu metric.

Fields
metric_spec

BleuSpec

Required. Spec for bleu score metric.

instances[]

BleuInstance

Required. Repeated bleu instances.

BleuInstance

Spec for bleu instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Required. Ground truth used to compare against the prediction.

BleuMetricValue

Bleu metric value for an instance.

Fields
score

float

Output only. Bleu score.

BleuResults

Results for bleu metric.

Fields
bleu_metric_values[]

BleuMetricValue

Output only. Bleu metric values.

BleuSpec

Spec for bleu score metric. Calculates the precision of n-grams in the prediction compared to the reference, and returns a score ranging from 0 to 1.

Fields
use_effective_order

bool

Optional. Whether to use_effective_order to compute bleu score.
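
The Bleu messages above are used with the v1beta1 EvaluationService. A hedged sketch follows; the client and the EvaluateInstancesRequest shape are assumed from these message definitions, and project/location values are placeholders.

```python
# Compute BLEU for one prediction/reference pair via EvaluateInstances.
from google.cloud import aiplatform_v1beta1

eval_client = aiplatform_v1beta1.EvaluationServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
request = aiplatform_v1beta1.EvaluateInstancesRequest(
    location="projects/my-project/locations/us-central1",
    bleu_input=aiplatform_v1beta1.BleuInput(
        metric_spec=aiplatform_v1beta1.BleuSpec(use_effective_order=True),
        instances=[
            aiplatform_v1beta1.BleuInstance(
                prediction="the cat sat on the mat",
                reference="a cat sat on the mat",
            )
        ],
    ),
)
response = eval_client.evaluate_instances(request=request)
print(response.bleu_results.bleu_metric_values[0].score)  # between 0 and 1
```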

Blob

Content blob.

Fields
mime_type

string

Required. The IANA standard MIME type of the source data.

data

bytes

Required. Raw bytes.

BlurBaselineConfig

Config for blur baseline.

When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383

Fields
max_blur_sigma

float

The standard deviation of the blur kernel for the blurred baseline. The same blurring parameter is used for both the height and the width dimension. If not set, the method defaults to the zero (i.e. black for images) baseline.

BoolArray

A list of boolean values.

Fields
values[]

bool

A list of bool values.

CachedContent

A resource used in LLM queries for users to explicitly specify what to cache and how to cache it.

Fields
name

string

Immutable. Identifier. The server-generated resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}

display_name

string

Optional. Immutable. The user-generated meaningful display name of the cached content.

model

string

Immutable. The name of the publisher model to use for cached content. Format: projects/{project}/locations/{location}/publishers/{publisher}/models/{model}

system_instruction

Content

Optional. Input only. Immutable. Developer-set system instruction. Currently, text only.

contents[]

Content

Optional. Input only. Immutable. The content to cache.

tools[]

Tool

Optional. Input only. Immutable. A list of Tools the model may use to generate the next response.

tool_config

ToolConfig

Optional. Input only. Immutable. Tool config. This config is shared for all tools.

create_time

Timestamp

Output only. Creation time of the cache entry.

update_time

Timestamp

Output only. When the cache entry was last updated in UTC time.

usage_metadata

UsageMetadata

Output only. Metadata on the usage of the cached content.

Union field expiration. Expiration time of the cached content. expiration can be only one of the following:
expire_time

Timestamp

Timestamp of when this resource is considered expired. This is always provided on output, regardless of what was sent on input.

ttl

Duration

Input only. The TTL for this resource. The expiration time is computed: now + TTL.
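
A hedged sketch of creating a cache entry with a TTL, assuming the v1beta1 GenAiCacheService GAPIC client is available in the google-cloud-aiplatform package; the model and project values are placeholders.

```python
# Create a CachedContent entry that expires one hour from now.
import datetime
from google.cloud import aiplatform_v1beta1

cache_client = aiplatform_v1beta1.GenAiCacheServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
cached = cache_client.create_cached_content(
    parent="projects/my-project/locations/us-central1",
    cached_content=aiplatform_v1beta1.CachedContent(
        model="projects/my-project/locations/us-central1/publishers/google/models/gemini-1.5-pro-002",
        contents=[
            aiplatform_v1beta1.Content(
                role="user",
                parts=[aiplatform_v1beta1.Part(text="<large text to cache>")],
            )
        ],
        ttl=datetime.timedelta(hours=1),  # expiration = now + TTL
    ),
)
print(cached.name, cached.expire_time)  # expire_time is always set on output
```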

UsageMetadata

Metadata on the usage of the cached content.

Fields
total_token_count

int32

Total number of tokens that the cached content consumes.

text_count

int32

Number of text characters.

image_count

int32

Number of images.

video_duration_seconds

int32

Duration of video in seconds.

audio_duration_seconds

int32

Duration of audio in seconds.

CancelBatchPredictionJobRequest

Request message for JobService.CancelBatchPredictionJob.

Fields
name

string

Required. The name of the BatchPredictionJob to cancel. Format: projects/{project}/locations/{location}/batchPredictionJobs/{batch_prediction_job}

CancelCustomJobRequest

Request message for JobService.CancelCustomJob.

Fields
name

string

Required. The name of the CustomJob to cancel. Format: projects/{project}/locations/{location}/customJobs/{custom_job}

CancelHyperparameterTuningJobRequest

Request message for JobService.CancelHyperparameterTuningJob.

Fields
name

string

Required. The name of the HyperparameterTuningJob to cancel. Format: projects/{project}/locations/{location}/hyperparameterTuningJobs/{hyperparameter_tuning_job}

CancelPipelineJobRequest

Request message for PipelineService.CancelPipelineJob.

Fields
name

string

Required. The name of the PipelineJob to cancel. Format: projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}

CancelTrainingPipelineRequest

Request message for PipelineService.CancelTrainingPipeline.

Fields
name

string

Required. The name of the TrainingPipeline to cancel. Format: projects/{project}/locations/{location}/trainingPipelines/{training_pipeline}

CancelTuningJobRequest

Request message for GenAiTuningService.CancelTuningJob.

Fields
name

string

Required. The name of the TuningJob to cancel. Format: projects/{project}/locations/{location}/tuningJobs/{tuning_job}

Candidate

A response candidate generated from the model.

Fields
index

int32

Output only. Index of the candidate.

content

Content

Output only. Content parts of the candidate.

avg_logprobs

double

Output only. Average log probability score of the candidate.

logprobs_result

LogprobsResult

Output only. Log-likelihood scores for the response tokens and top tokens

finish_reason

FinishReason

Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.

safety_ratings[]

SafetyRating

Output only. List of ratings for the safety of a response candidate.

There is at most one rating per category.

citation_metadata

CitationMetadata

Output only. Source attribution of the generated content.

grounding_metadata

GroundingMetadata

Output only. Metadata specifies sources used to ground generated content.

finish_message

string

Output only. Describes in more detail the reason the model stopped generating tokens. This is only filled when finish_reason is set.

FinishReason

The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.

Enums
FINISH_REASON_UNSPECIFIED The finish reason is unspecified.
STOP Token generation reached a natural stopping point or a configured stop sequence.
MAX_TOKENS Token generation reached the configured maximum output tokens.
SAFETY Token generation stopped because the content potentially contains safety violations. NOTE: When streaming, content is empty if content filters block the output.
RECITATION The token generation stopped because of potential recitation.
OTHER All other reasons that stopped the token generation.
BLOCKLIST Token generation stopped because the content contains forbidden terms.
PROHIBITED_CONTENT Token generation stopped for potentially containing prohibited content.
SPII Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII).
MALFORMED_FUNCTION_CALL The function call generated by the model is invalid.

ChatCompletionsRequest

Request message for PredictionService.ChatCompletions.

Fields
endpoint

string

Required. The name of the endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

http_body

HttpBody

Optional. The prediction input. Supports HTTP headers and arbitrary data payload.

CheckTrialEarlyStoppingStateMetatdata

This message will be placed in the metadata field of a google.longrunning.Operation associated with a CheckTrialEarlyStoppingState request.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for suggesting Trials.

study

string

The name of the Study that the Trial belongs to.

trial

string

The Trial name.

CheckTrialEarlyStoppingStateRequest

Request message for VizierService.CheckTrialEarlyStoppingState.

Fields
trial_name

string

Required. The Trial's name. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}

CheckTrialEarlyStoppingStateResponse

Response message for VizierService.CheckTrialEarlyStoppingState.

Fields
should_stop

bool

True if the Trial should stop.

Citation

Source attributions for content.

Fields
start_index

int32

Output only. Start index into the content.

end_index

int32

Output only. End index into the content.

uri

string

Output only. URL reference of the attribution.

title

string

Output only. Title of the attribution.

license

string

Output only. License of the attribution.

publication_date

Date

Output only. Publication date of the attribution.

CitationMetadata

A collection of source attributions for a piece of content.

Fields
citations[]

Citation

Output only. List of citations.

Claim

A claim extracted from the input text, together with the facts that support it.

Fields
fact_indexes[]

int32

Indexes of the facts supporting this claim.

start_index

int32

Index in the input text where the claim starts (inclusive).

end_index

int32

Index in the input text where the claim ends (exclusive).

score

float

Confidence score of this corroboration.

ClientConnectionConfig

Configurations (e.g., inference timeout) that are applied to your endpoints.

Fields
inference_timeout

Duration

Customizable online prediction request timeout.

CodeExecutionResult

Result of executing the ExecutableCode.

Always follows a part containing the ExecutableCode.

Fields
outcome

Outcome

Required. Outcome of the code execution.

output

string

Optional. Contains stdout when code execution is successful; otherwise, contains stderr or another description.

Outcome

Enumeration of possible outcomes of the code execution.

Enums
OUTCOME_UNSPECIFIED Unspecified status. This value should not be used.
OUTCOME_OK Code execution completed successfully.
OUTCOME_FAILED Code execution finished but with a failure. stderr should contain the reason.
OUTCOME_DEADLINE_EXCEEDED Code execution ran for too long, and was cancelled. There may or may not be a partial output present.

CoherenceInput

Input for coherence metric.

Fields
metric_spec

CoherenceSpec

Required. Spec for coherence score metric.

instance

CoherenceInstance

Required. Coherence instance.

CoherenceInstance

Spec for coherence instance.

Fields
prediction

string

Required. Output of the evaluated model.

CoherenceResult

Spec for coherence result.

Fields
explanation

string

Output only. Explanation for coherence score.

score

float

Output only. Coherence score.

confidence

float

Output only. Confidence for coherence score.

CoherenceSpec

Spec for coherence score metric.

Fields
version

int32

Optional. Which version to use for evaluation.

CometInput

Input for Comet metric.

Fields
metric_spec

CometSpec

Required. Spec for the Comet metric.

instance

CometInstance

Required. Comet instance.

CometInstance

Spec for Comet instance. The fields used for evaluation depend on the Comet version.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Optional. Ground truth used to compare against the prediction.

source

string

Optional. Source text in original language.

CometResult

Spec for Comet result: the Comet score for the given instance, calculated using the version specified in the spec.

Fields
score

float

Output only. Comet score. Range depends on version.

CometSpec

Spec for Comet metric.

Fields
source_language

string

Optional. Source language in BCP-47 format.

target_language

string

Optional. Target language in BCP-47 format. Covers both prediction and reference.

version

CometVersion

Required. Which version to use for evaluation.

CometVersion

Comet version options.

Enums
COMET_VERSION_UNSPECIFIED Comet version unspecified.
COMET_22_SRC_REF Comet 22 for translation + source + reference (source-reference-combined).

CompleteTrialRequest

Request message for VizierService.CompleteTrial.

Fields
name

string

Required. The Trial's name. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}

final_measurement

Measurement

Optional. If provided, it will be used as the completed Trial's final_measurement; otherwise, the service will auto-select a previously reported measurement as the final_measurement.

trial_infeasible

bool

Optional. True if the Trial cannot be run with the given parameters; in that case, final_measurement will be ignored.

infeasible_reason

string

Optional. A human-readable reason why the trial was infeasible. This should only be provided if trial_infeasible is true.
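
A minimal sketch of calling CompleteTrial with the GAPIC-generated Python client; the project, study, and trial names are placeholders, and final_measurement is omitted so the service auto-selects one.

```python
# Sketch: complete a Vizier Trial, letting the service pick the
# final measurement (final_measurement is deliberately omitted).
from google.cloud import aiplatform_v1beta1

client = aiplatform_v1beta1.VizierServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
request = aiplatform_v1beta1.CompleteTrialRequest(
    name="projects/my-project/locations/us-central1/studies/123/trials/4",
)
trial = client.complete_trial(request=request)
print(trial.state)
```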

CompletionStats

Success and error statistics of processing multiple entities (for example, DataItems or structured data rows) in batch.

Fields
successful_count

int64

Output only. The number of entities that had been processed successfully.

failed_count

int64

Output only. The number of entities for which any error was encountered.

incomplete_count

int64

Output only. When enough errors are encountered, a job, pipeline, or operation may fail as a whole. This is the number of entities for which processing had not finished (in either a successful or failed state). Set to -1 if the number is unknown (for example, the operation failed before the total entity count could be collected).

successful_forecast_point_count

int64

Output only. The number of successful forecast points generated by the forecasting model. This is used only by forecasting batch prediction.

ComputeTokensRequest

Request message for ComputeTokens RPC call.

Fields
endpoint

string

Required. The name of the Endpoint requested to get lists of tokens and token ids.

instances[]

Value

Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.

model

string

Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*

contents[]

Content

Optional. Input content.

ComputeTokensResponse

Response message for ComputeTokens RPC call.

Fields
tokens_info[]

TokensInfo

Lists of token info from the input. A ComputeTokensRequest can contain multiple instances, each with a prompt, so the response returns a corresponding list of token info per instance.
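
As a hedged illustration, the request can be issued with the GAPIC-generated Python client. Which service client exposes compute_tokens (for example, an LLM-utility client) depends on the installed library version, so treat the client name below as an assumption; the endpoint path is a placeholder.

```python
# Assumption: compute_tokens is exposed on LlmUtilityServiceClient in
# your installed google-cloud-aiplatform version -- verify before use.
from google.cloud import aiplatform_v1beta1

client = aiplatform_v1beta1.LlmUtilityServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
request = aiplatform_v1beta1.ComputeTokensRequest(
    endpoint="projects/my-project/locations/us-central1/publishers/google/models/gemini-1.5-flash",  # placeholder
    contents=[
        aiplatform_v1beta1.Content(
            role="user",
            parts=[aiplatform_v1beta1.Part(text="Hello, tokens!")],
        )
    ],
)
response = client.compute_tokens(request=request)
for info in response.tokens_info:
    print(list(info.token_ids))
```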

ContainerRegistryDestination

The Container Registry location for the container image.

Fields
output_uri

string

Required. Container Registry URI of a container image. Only Google Container Registry and Artifact Registry are supported now. Accepted forms:

  • Google Container Registry path. For example: gcr.io/projectId/imageName:tag.

  • Artifact Registry path. For example: us-central1-docker.pkg.dev/projectId/repoName/imageName:tag.

If a tag is not specified, "latest" will be used as the default tag.

ContainerSpec

The spec of a Container.

Fields
image_uri

string

Required. The URI of a container image in the Container Registry that is to be run on each worker replica.

command[]

string

The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.

args[]

string

The arguments to be passed when starting the container.

env[]

EnvVar

Environment variables to be passed to the container. Maximum limit is 100.

Content

The base structured datatype containing multi-part content of a message.

A Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.

Fields
role

string

Optional. The producer of the content. Must be either 'user' or 'model'.

Useful to set for multi-turn conversations, otherwise can be left blank or unset.

parts[]

Part

Required. Ordered Parts that constitute a single message. Parts may have different IANA MIME types.
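
A brief illustration of assembling a two-turn conversation from Content messages, assuming the proto types exposed by the Python client; the text is placeholder.

```python
# Sketch: alternate 'user' and 'model' roles for multi-turn input.
from google.cloud import aiplatform_v1beta1

history = [
    aiplatform_v1beta1.Content(
        role="user",
        parts=[aiplatform_v1beta1.Part(text="What is a Dataset?")],
    ),
    aiplatform_v1beta1.Content(
        role="model",
        parts=[aiplatform_v1beta1.Part(text="A collection of DataItems and Annotations.")],
    ),
]
```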

Context

Instance of a general context.

Fields
name

string

Immutable. The resource name of the Context.

display_name

string

User provided display name of the Context. May be up to 128 Unicode characters.

etag

string

An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

The labels with user-defined metadata to organize your Contexts.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Context (System labels are excluded).

create_time

Timestamp

Output only. Timestamp when this Context was created.

update_time

Timestamp

Output only. Timestamp when this Context was last updated.

parent_contexts[]

string

Output only. A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.

schema_title

string

The title of the schema describing the metadata.

The schema title and version are expected to have been registered in earlier Create Schema calls, and together they serve as the unique identifier of the schema within the local metadata store.

schema_version

string

The version of the schema in schema_name to use.

The schema title and version are expected to have been registered in earlier Create Schema calls, and together they serve as the unique identifier of the schema within the local metadata store.

metadata

Struct

Properties of the Context. Leading and trailing spaces of top-level metadata keys will be trimmed. The size of this field should not exceed 200KB.

description

string

Description of the Context.

CopyModelOperationMetadata

Details of ModelService.CopyModel operation.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

CopyModelRequest

Request message for ModelService.CopyModel.

Fields
parent

string

Required. The resource name of the Location into which to copy the Model. Format: projects/{project}/locations/{location}

source_model

string

Required. The resource name of the Model to copy. That Model must be in the same Project. Format: projects/{project}/locations/{location}/models/{model}

encryption_spec

EncryptionSpec

Customer-managed encryption key options. If this is set, then the Model copy will be encrypted with the provided encryption key.

Union field destination_model. If both fields are unset, a new Model will be created with a generated ID. destination_model can be only one of the following:
model_id

string

Optional. Copy source_model into a new Model with this ID. The ID will become the final component of the model resource name.

This value may be up to 63 characters, and valid characters are [a-z0-9_-]. The first character cannot be a number or hyphen.

parent_model

string

Optional. Specify this field to copy source_model into this existing Model as a new version. Format: projects/{project}/locations/{location}/models/{model}
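
To make the union concrete, here is a hedged sketch of the two mutually exclusive destination_model choices; all resource names and IDs are placeholders.

```python
# Sketch: exactly one of model_id / parent_model may be set.
from google.cloud import aiplatform_v1beta1

# Variant 1: copy into a brand-new Model with a caller-chosen ID.
as_new_model = aiplatform_v1beta1.CopyModelRequest(
    parent="projects/my-project/locations/us-central1",
    source_model="projects/my-project/locations/europe-west4/models/123",
    model_id="my-copied-model",
)

# Variant 2: copy as a new version of an existing Model.
as_new_version = aiplatform_v1beta1.CopyModelRequest(
    parent="projects/my-project/locations/us-central1",
    source_model="projects/my-project/locations/europe-west4/models/123",
    parent_model="projects/my-project/locations/us-central1/models/456",
)
```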

CopyModelResponse

Response message of ModelService.CopyModel operation.

Fields
model

string

The name of the copied Model resource. Format: projects/{project}/locations/{location}/models/{model}

model_version_id

string

Output only. The version ID of the model that is copied.

CorpusStatus

RagCorpus status.

Fields
state

State

Output only. RagCorpus life state.

error_status

string

Output only. Set only when the state field is ERROR.

State

RagCorpus life state.

Enums
UNKNOWN This state is not supposed to happen.
INITIALIZED RagCorpus resource entry is initialized, but hasn't done validation.
ACTIVE RagCorpus is provisioned successfully and is ready to serve.
ERROR RagCorpus is in a problematic situation. See the error_status field for details.

CorroborateContentRequest

Request message for CorroborateContent.

Fields
parent

string

Required. The resource name of the Location from which to corroborate text. The user must have permission to make calls in the project. Format: projects/{project}/locations/{location}.

facts[]

Fact

Optional. Facts used to generate the text can also be used to corroborate the text.

parameters

Parameters

Optional. Parameters that can be set to override default settings per request.

content

Content

Optional. The input content to corroborate. Only text is supported for now.

Parameters

Parameters that can be overridden per request.

Fields
citation_threshold

double

Optional. Only return claims with a citation score larger than the threshold.

CorroborateContentResponse

Response message for CorroborateContent.

Fields
claims[]

Claim

Claims that are extracted from the input content and facts that support the claims.

corroboration_score

float

Confidence score of the corroborated content. The value is in [0,1], where 1 indicates the highest confidence.
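
As a hedged sketch, the request message can be constructed as below. This only builds the message; which service client exposes the CorroborateContent call in your installed library should be verified. Names and the threshold are placeholders.

```python
# Builds a CorroborateContentRequest; issuing the call is left out on
# purpose because the owning service client may vary by library version.
from google.cloud import aiplatform_v1beta1

request = aiplatform_v1beta1.CorroborateContentRequest(
    parent="projects/my-project/locations/us-central1",
    content=aiplatform_v1beta1.Content(
        parts=[aiplatform_v1beta1.Part(text="The Eiffel Tower is in Paris.")]
    ),
    parameters=aiplatform_v1beta1.CorroborateContentRequest.Parameters(
        citation_threshold=0.6,  # only claims scoring above this are returned
    ),
)
```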

CountTokensRequest

Request message for PredictionService.CountTokens.

Fields
endpoint

string

Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

model

string

Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*

instances[]

Value

Optional. The instances that are the input to the token-counting call. The schema is identical to the prediction schema of the underlying model.

contents[]

Content

Optional. Input content.

tools[]

Tool

Optional. A list of Tools the model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model.

system_instruction

Content

Optional. The user-provided system instructions for the model. Note: only text should be used in parts, and the content in each part will be in a separate paragraph.

generation_config

GenerationConfig

Optional. Generation config that the model will use to generate the response.

CountTokensResponse

Response message for PredictionService.CountTokens.

Fields
total_tokens

int32

The total number of tokens counted across all instances from the request.

total_billable_characters

int32

The total number of billable characters counted across all instances from the request.
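
A minimal sketch of a CountTokens round trip with the GAPIC-generated Python client; the publisher-model path passed as endpoint is a placeholder.

```python
# Sketch: count tokens for Content input against a publisher model.
from google.cloud import aiplatform_v1beta1

client = aiplatform_v1beta1.PredictionServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
request = aiplatform_v1beta1.CountTokensRequest(
    endpoint="projects/my-project/locations/us-central1/publishers/google/models/gemini-1.5-flash",  # placeholder
    contents=[
        aiplatform_v1beta1.Content(
            role="user",
            parts=[aiplatform_v1beta1.Part(text="Count the tokens in this prompt.")],
        )
    ],
)
response = client.count_tokens(request=request)
print(response.total_tokens, response.total_billable_characters)
```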

CreateArtifactRequest

Request message for MetadataService.CreateArtifact.

Fields
parent

string

Required. The resource name of the MetadataStore where the Artifact should be created. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

artifact

Artifact

Required. The Artifact to create.

artifact_id

string

The {artifact} portion of the resource name with the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact} If not provided, the Artifact's ID will be a UUID generated by the service. Must be 4-128 characters in length. Valid characters are /[a-z][0-9]-/. Must be unique across all Artifacts in the parent MetadataStore. (Otherwise the request will fail with ALREADY_EXISTS, or PERMISSION_DENIED if the caller can't view the preexisting Artifact.)

CreateBatchPredictionJobRequest

Request message for JobService.CreateBatchPredictionJob.

Fields
parent

string

Required. The resource name of the Location to create the BatchPredictionJob in. Format: projects/{project}/locations/{location}

batch_prediction_job

BatchPredictionJob

Required. The BatchPredictionJob to create.

CreateCachedContentRequest

Request message for GenAiCacheService.CreateCachedContent.

Fields
parent

string

Required. The parent resource where the cached content will be created.

cached_content

CachedContent

Required. The cached content to create.

CreateContextRequest

Request message for MetadataService.CreateContext.

Fields
parent

string

Required. The resource name of the MetadataStore where the Context should be created. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

context

Context

Required. The Context to create.

context_id

string

The {context} portion of the resource name with the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}. If not provided, the Context's ID will be a UUID generated by the service. Must be 4-128 characters in length. Valid characters are /[a-z][0-9]-/. Must be unique across all Contexts in the parent MetadataStore. (Otherwise the request will fail with ALREADY_EXISTS, or PERMISSION_DENIED if the caller can't view the preexisting Context.)

CreateCustomJobRequest

Request message for JobService.CreateCustomJob.

Fields
parent

string

Required. The resource name of the Location to create the CustomJob in. Format: projects/{project}/locations/{location}

custom_job

CustomJob

Required. The CustomJob to create.

CreateDatasetOperationMetadata

Runtime operation information for DatasetService.CreateDataset.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

CreateDatasetRequest

Request message for DatasetService.CreateDataset.

Fields
parent

string

Required. The resource name of the Location to create the Dataset in. Format: projects/{project}/locations/{location}

dataset

Dataset

Required. The Dataset to create.
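
Because CreateDataset returns a long-running Operation, the usual client pattern is create-then-wait. A hedged sketch with the GAPIC-generated Python client follows; the project and display name are placeholders (image_1.0.0.yaml is one of the published metadata schemas).

```python
# Sketch: create a Dataset and block until the LRO finishes.
from google.cloud import aiplatform_v1beta1
from google.protobuf import struct_pb2

client = aiplatform_v1beta1.DatasetServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
dataset = aiplatform_v1beta1.Dataset(
    display_name="my-dataset",
    metadata_schema_uri="gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml",
    metadata=struct_pb2.Value(struct_value=struct_pb2.Struct()),  # empty metadata
)
operation = client.create_dataset(
    parent="projects/my-project/locations/us-central1",
    dataset=dataset,
)
created = operation.result()  # waits for the long-running operation
print(created.name)
```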

CreateDatasetVersionOperationMetadata

Runtime operation information for DatasetService.CreateDatasetVersion.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

CreateDatasetVersionRequest

Request message for DatasetService.CreateDatasetVersion.

Fields
parent

string

Required. The name of the Dataset resource. Format: projects/{project}/locations/{location}/datasets/{dataset}

dataset_version

DatasetVersion

Required. The version to be created. The same CMEK policies as the original Dataset will be applied to the dataset version, so there is no need to specify the encryption spec here.

CreateDeploymentResourcePoolOperationMetadata

Runtime operation information for CreateDeploymentResourcePool method.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

CreateDeploymentResourcePoolRequest

Request message for CreateDeploymentResourcePool method.

Fields
parent

string

Required. The parent location resource where this DeploymentResourcePool will be created. Format: projects/{project}/locations/{location}

deployment_resource_pool

DeploymentResourcePool

Required. The DeploymentResourcePool to create.

deployment_resource_pool_id

string

Required. The ID to use for the DeploymentResourcePool, which will become the final component of the DeploymentResourcePool's resource name.

The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.

CreateEndpointOperationMetadata

Runtime operation information for EndpointService.CreateEndpoint.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

CreateEndpointRequest

Request message for EndpointService.CreateEndpoint.

Fields
parent

string

Required. The resource name of the Location to create the Endpoint in. Format: projects/{project}/locations/{location}

endpoint

Endpoint

Required. The Endpoint to create.

endpoint_id

string

Immutable. The ID to use for the endpoint, which will become the final component of the endpoint resource name. If not provided, Vertex AI will generate a value for this ID.

If the first character is a letter, this value may be up to 63 characters, and valid characters are [a-z0-9-]. The last character must be a letter or number.

If the first character is a number, this value may be up to 9 characters, and valid characters are [0-9] with no leading zeros.

When using HTTP/JSON, this field is populated based on a query string argument, such as ?endpoint_id=12345. This is the fallback for fields that are not included in either the URI or the body.

CreateEntityTypeOperationMetadata

Details of operations that perform create EntityType.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for EntityType.

CreateEntityTypeRequest

Request message for FeaturestoreService.CreateEntityType.

Fields
parent

string

Required. The resource name of the Featurestore to create EntityTypes. Format: projects/{project}/locations/{location}/featurestores/{featurestore}

entity_type

EntityType

The EntityType to create.

entity_type_id

string

Required. The ID to use for the EntityType, which will become the final component of the EntityType's resource name.

This value may be up to 60 characters, and valid characters are [a-z0-9_]. The first character cannot be a number.

The value must be unique within a featurestore.

CreateExecutionRequest

Request message for MetadataService.CreateExecution.

Fields
parent

string

Required. The resource name of the MetadataStore where the Execution should be created. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

execution

Execution

Required. The Execution to create.

execution_id

string

The {execution} portion of the resource name with the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution} If not provided, the Execution's ID will be a UUID generated by the service. Must be 4-128 characters in length. Valid characters are /[a-z][0-9]-/. Must be unique across all Executions in the parent MetadataStore. (Otherwise the request will fail with ALREADY_EXISTS, or PERMISSION_DENIED if the caller can't view the preexisting Execution.)

CreateExtensionControllerOperationMetadata

Details of ExtensionControllerService.CreateExtensionController operation.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

CreateFeatureGroupOperationMetadata

Details of operations that perform create FeatureGroup.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for FeatureGroup.

CreateFeatureGroupRequest

Request message for FeatureRegistryService.CreateFeatureGroup.

Fields
parent

string

Required. The resource name of the Location to create FeatureGroups. Format: projects/{project}/locations/{location}

feature_group

FeatureGroup

Required. The FeatureGroup to create.

feature_group_id

string

Required. The ID to use for this FeatureGroup, which will become the final component of the FeatureGroup's resource name.

This value may be up to 128 characters, and valid characters are [a-z0-9_]. The first character cannot be a number.

The value must be unique within the project and location.

CreateFeatureMonitorJobRequest

Request message for FeatureRegistryService.CreateFeatureMonitorJob.

Fields
parent

string

Required. The resource name of FeatureMonitor to create FeatureMonitorJob. Format: projects/{project}/locations/{location}/featureGroups/{feature_group}/featureMonitors/{feature_monitor}

feature_monitor_job

FeatureMonitorJob

Required. The Monitor to create.

feature_monitor_job_id

int64

Optional. Output only. System-generated ID for feature monitor job.

CreateFeatureMonitorOperationMetadata

Details of operations that perform create FeatureMonitor.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Feature.

CreateFeatureMonitorRequest

Request message for FeatureRegistryService.CreateFeatureMonitor.

Fields
parent

string

Required. The resource name of FeatureGroup to create FeatureMonitor. Format: projects/{project}/locations/{location}/featureGroups/{featuregroup}

feature_monitor

FeatureMonitor

Required. The Monitor to create.

feature_monitor_id

string

Required. The ID to use for this FeatureMonitor, which will become the final component of the FeatureMonitor's resource name.

This value may be up to 60 characters, and valid characters are [a-z0-9_]. The first character cannot be a number.

The value must be unique within the FeatureGroup.

CreateFeatureOnlineStoreOperationMetadata

Details of operations that perform create FeatureOnlineStore.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for FeatureOnlineStore.

CreateFeatureOnlineStoreRequest

Request message for FeatureOnlineStoreAdminService.CreateFeatureOnlineStore.

Fields
parent

string

Required. The resource name of the Location to create FeatureOnlineStores. Format: projects/{project}/locations/{location}

feature_online_store

FeatureOnlineStore

Required. The FeatureOnlineStore to create.

feature_online_store_id

string

Required. The ID to use for this FeatureOnlineStore, which will become the final component of the FeatureOnlineStore's resource name.

This value may be up to 60 characters, and valid characters are [a-z0-9_]. The first character cannot be a number.

The value must be unique within the project and location.

CreateFeatureOperationMetadata

Details of operations that perform create Feature.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Feature.

CreateFeatureRequest

Request message for FeaturestoreService.CreateFeature. Request message for FeatureRegistryService.CreateFeature.

Fields
parent

string

Required. The resource name of the EntityType or FeatureGroup to create a Feature. Format for entity_type as parent: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type} Format for feature_group as parent: projects/{project}/locations/{location}/featureGroups/{feature_group}

feature

Feature

Required. The Feature to create.

feature_id

string

Required. The ID to use for the Feature, which will become the final component of the Feature's resource name.

This value may be up to 128 characters, and valid characters are [a-z0-9_]. The first character cannot be a number.

The value must be unique within an EntityType/FeatureGroup.

CreateFeatureViewOperationMetadata

Details of operations that perform create FeatureView.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for FeatureView Create.

CreateFeatureViewRequest

Request message for FeatureOnlineStoreAdminService.CreateFeatureView.

Fields
parent

string

Required. The resource name of the FeatureOnlineStore to create FeatureViews. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}

feature_view

FeatureView

Required. The FeatureView to create.

feature_view_id

string

Required. The ID to use for the FeatureView, which will become the final component of the FeatureView's resource name.

This value may be up to 60 characters, and valid characters are [a-z0-9_]. The first character cannot be a number.

The value must be unique within a FeatureOnlineStore.

run_sync_immediately

bool

Immutable. If set to true, one on-demand sync will be run immediately, regardless of whether FeatureView.sync_config is configured.

CreateFeaturestoreOperationMetadata

Details of operations that perform create Featurestore.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Featurestore.

CreateFeaturestoreRequest

Request message for FeaturestoreService.CreateFeaturestore.

Fields
parent

string

Required. The resource name of the Location to create Featurestores. Format: projects/{project}/locations/{location}

featurestore

Featurestore

Required. The Featurestore to create.

featurestore_id

string

Required. The ID to use for this Featurestore, which will become the final component of the Featurestore's resource name.

This value may be up to 60 characters, and valid characters are [a-z0-9_]. The first character cannot be a number.

The value must be unique within the project and location.

CreateHyperparameterTuningJobRequest

Request message for JobService.CreateHyperparameterTuningJob.

Fields
parent

string

Required. The resource name of the Location to create the HyperparameterTuningJob in. Format: projects/{project}/locations/{location}

hyperparameter_tuning_job

HyperparameterTuningJob

Required. The HyperparameterTuningJob to create.

CreateIndexEndpointOperationMetadata

Runtime operation information for IndexEndpointService.CreateIndexEndpoint.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

CreateIndexEndpointRequest

Request message for IndexEndpointService.CreateIndexEndpoint.

Fields
parent

string

Required. The resource name of the Location to create the IndexEndpoint in. Format: projects/{project}/locations/{location}

index_endpoint

IndexEndpoint

Required. The IndexEndpoint to create.

CreateIndexOperationMetadata

Runtime operation information for IndexService.CreateIndex.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

nearest_neighbor_search_operation_metadata

NearestNeighborSearchOperationMetadata

The operation metadata with regard to Matching Engine Index operation.

CreateIndexRequest

Request message for IndexService.CreateIndex.

Fields
parent

string

Required. The resource name of the Location to create the Index in. Format: projects/{project}/locations/{location}

index

Index

Required. The Index to create.

CreateMetadataSchemaRequest

Request message for MetadataService.CreateMetadataSchema.

Fields
parent

string

Required. The resource name of the MetadataStore where the MetadataSchema should be created. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

metadata_schema

MetadataSchema

Required. The MetadataSchema to create.

metadata_schema_id

string

The {metadata_schema} portion of the resource name with the format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/metadataSchemas/{metadataschema} If not provided, the MetadataSchema's ID will be a UUID generated by the service. Must be 4-128 characters in length. Valid characters are /[a-z][0-9]-/. Must be unique across all MetadataSchemas in the parent Location. (Otherwise the request will fail with ALREADY_EXISTS, or PERMISSION_DENIED if the caller can't view the preexisting MetadataSchema.)

CreateMetadataStoreOperationMetadata

Details of operations that perform MetadataService.CreateMetadataStore.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for creating a MetadataStore.

CreateMetadataStoreRequest

Request message for MetadataService.CreateMetadataStore.

Fields
parent

string

Required. The resource name of the Location where the MetadataStore should be created. Format: projects/{project}/locations/{location}/

metadata_store

MetadataStore

Required. The MetadataStore to create.

metadata_store_id

string

The {metadatastore} portion of the resource name with the format: projects/{project}/locations/{location}/metadataStores/{metadatastore} If not provided, the MetadataStore's ID will be a UUID generated by the service. Must be 4-128 characters in length. Valid characters are /[a-z][0-9]-/. Must be unique across all MetadataStores in the parent Location. (Otherwise the request will fail with ALREADY_EXISTS, or PERMISSION_DENIED if the caller can't view the preexisting MetadataStore.)

CreateModelDeploymentMonitoringJobRequest

Request message for JobService.CreateModelDeploymentMonitoringJob.

Fields
parent

string

Required. The parent of the ModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}

model_deployment_monitoring_job

ModelDeploymentMonitoringJob

Required. The ModelDeploymentMonitoringJob to create.

CreateModelMonitorOperationMetadata

Runtime operation information for ModelMonitoringService.CreateModelMonitor.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

CreateModelMonitorRequest

Request message for ModelMonitoringService.CreateModelMonitor.

Fields
parent

string

Required. The resource name of the Location to create the ModelMonitor in. Format: projects/{project}/locations/{location}

model_monitor

ModelMonitor

Required. The ModelMonitor to create.

model_monitor_id

string

Optional. The ID to use for the Model Monitor, which will become the final component of the model monitor resource name.

The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.

CreateModelMonitoringJobRequest

Request message for ModelMonitoringService.CreateModelMonitoringJob.

Fields
parent

string

Required. The parent of the ModelMonitoringJob. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}

model_monitoring_job

ModelMonitoringJob

Required. The ModelMonitoringJob to create.

model_monitoring_job_id

string

Optional. The ID to use for the Model Monitoring Job, which will become the final component of the model monitoring job resource name.

The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.

CreateNotebookExecutionJobOperationMetadata

Metadata information for NotebookService.CreateNotebookExecutionJob.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

progress_message

string

A human-readable message that shows the intermediate progress details of NotebookRuntime.

CreateNotebookExecutionJobRequest

Request message for NotebookService.CreateNotebookExecutionJob.

Fields
parent

string

Required. The resource name of the Location to create the NotebookExecutionJob. Format: projects/{project}/locations/{location}

notebook_execution_job

NotebookExecutionJob

Required. The NotebookExecutionJob to create.

notebook_execution_job_id

string

Optional. User-specified ID for the NotebookExecutionJob.

CreateNotebookRuntimeTemplateOperationMetadata

Metadata information for NotebookService.CreateNotebookRuntimeTemplate.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

CreateNotebookRuntimeTemplateRequest

Request message for NotebookService.CreateNotebookRuntimeTemplate.

Fields
parent

string

Required. The resource name of the Location to create the NotebookRuntimeTemplate. Format: projects/{project}/locations/{location}

notebook_runtime_template

NotebookRuntimeTemplate

Required. The NotebookRuntimeTemplate to create.

notebook_runtime_template_id

string

Optional. User-specified ID for the notebook runtime template.

CreatePersistentResourceOperationMetadata

Details of operations that perform create PersistentResource.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for PersistentResource.

progress_message

string

Progress message for the create LRO.

CreatePersistentResourceRequest

Request message for PersistentResourceService.CreatePersistentResource.

Fields
parent

string

Required. The resource name of the Location to create the PersistentResource in. Format: projects/{project}/locations/{location}

persistent_resource

PersistentResource

Required. The PersistentResource to create.

persistent_resource_id

string

Required. The ID to use for the PersistentResource, which will become the final component of the PersistentResource's resource name.

The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.

CreatePipelineJobRequest

Request message for PipelineService.CreatePipelineJob.

Fields
parent

string

Required. The resource name of the Location to create the PipelineJob in. Format: projects/{project}/locations/{location}

pipeline_job

PipelineJob

Required. The PipelineJob to create.

pipeline_job_id

string

The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be automatically generated.

This value should be less than 128 characters, and valid characters are /[a-z][0-9]-/.
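
A hedged sketch of creating a PipelineJob from a compiled pipeline spec; the template URI, project, and job ID are placeholders, and the spec itself is assumed to exist.

```python
# Sketch: submit a PipelineJob with an explicit, caller-chosen ID.
from google.cloud import aiplatform_v1beta1

client = aiplatform_v1beta1.PipelineServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
pipeline_job = aiplatform_v1beta1.PipelineJob(
    display_name="my-pipeline-run",
    template_uri="gs://my-bucket/pipeline.json",  # compiled spec (placeholder)
)
job = client.create_pipeline_job(
    parent="projects/my-project/locations/us-central1",
    pipeline_job=pipeline_job,
    pipeline_job_id="my-pipeline-run-0001",  # optional; generated if omitted
)
print(job.state)
```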

CreateRagCorpusOperationMetadata

Runtime operation information for VertexRagDataService.CreateRagCorpus.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

CreateRagCorpusRequest

Request message for VertexRagDataService.CreateRagCorpus.

Fields
parent

string

Required. The resource name of the Location to create the RagCorpus in. Format: projects/{project}/locations/{location}

rag_corpus

RagCorpus

Required. The RagCorpus to create.

CreateReasoningEngineOperationMetadata

Details of ReasoningEngineService.CreateReasoningEngine operation.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

CreateReasoningEngineRequest

Request message for ReasoningEngineService.CreateReasoningEngine.

Fields
parent

string

Required. The resource name of the Location to create the ReasoningEngine in. Format: projects/{project}/locations/{location}

reasoning_engine

ReasoningEngine

Required. The ReasoningEngine to create.

CreateRegistryFeatureOperationMetadata

Details of operations that perform create FeatureGroup.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Feature.

CreateScheduleRequest

Request message for ScheduleService.CreateSchedule.

Fields
parent

string

Required. The resource name of the Location to create the Schedule in. Format: projects/{project}/locations/{location}

schedule

Schedule

Required. The Schedule to create.

CreateSolverOperationMetadata

Runtime operation information for SolverService.CreateSolver.

Fields
generic_metadata

GenericOperationMetadata

The generic operation information.

CreateSpecialistPoolOperationMetadata

Runtime operation information for SpecialistPoolService.CreateSpecialistPool.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

CreateSpecialistPoolRequest

Request message for SpecialistPoolService.CreateSpecialistPool.

Fields
parent

string

Required. The parent Project name for the new SpecialistPool. The form is projects/{project}/locations/{location}.

specialist_pool

SpecialistPool

Required. The SpecialistPool to create.

CreateStudyRequest

Request message for VizierService.CreateStudy.

Fields
parent

string

Required. The resource name of the Location to create the Study in. Format: projects/{project}/locations/{location}

study

Study

Required. The Study configuration used to create the Study.

CreateTensorboardExperimentRequest

Request message for TensorboardService.CreateTensorboardExperiment.

Fields
parent

string

Required. The resource name of the Tensorboard to create the TensorboardExperiment in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

tensorboard_experiment

TensorboardExperiment

The TensorboardExperiment to create.

tensorboard_experiment_id

string

Required. The ID to use for the Tensorboard experiment, which becomes the final component of the Tensorboard experiment's resource name.

This value should be 1-128 characters, and valid characters are /[a-z][0-9]-/.

CreateTensorboardOperationMetadata

Details of operations that perform create Tensorboard.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Tensorboard.

CreateTensorboardRequest

Request message for TensorboardService.CreateTensorboard.

Fields
parent

string

Required. The resource name of the Location to create the Tensorboard in. Format: projects/{project}/locations/{location}

tensorboard

Tensorboard

Required. The Tensorboard to create.

CreateTensorboardRunRequest

Request message for TensorboardService.CreateTensorboardRun.

Fields
parent

string

Required. The resource name of the TensorboardExperiment to create the TensorboardRun in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}

tensorboard_run

TensorboardRun

Required. The TensorboardRun to create.

tensorboard_run_id

string

Required. The ID to use for the Tensorboard run, which becomes the final component of the Tensorboard run's resource name.

This value should be 1-128 characters, and valid characters are /[a-z][0-9]-/.

CreateTensorboardTimeSeriesRequest

Request message for TensorboardService.CreateTensorboardTimeSeries.

Fields
parent

string

Required. The resource name of the TensorboardRun to create the TensorboardTimeSeries in. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}

tensorboard_time_series_id

string

Optional. The user-specified unique ID to use for the TensorboardTimeSeries, which becomes the final component of the TensorboardTimeSeries's resource name. This value should match "[a-z0-9][a-z0-9-]{0,127}".

tensorboard_time_series

TensorboardTimeSeries

Required. The TensorboardTimeSeries to create.

CreateTrainingPipelineRequest

Request message for PipelineService.CreateTrainingPipeline.

Fields
parent

string

Required. The resource name of the Location to create the TrainingPipeline in. Format: projects/{project}/locations/{location}

training_pipeline

TrainingPipeline

Required. The TrainingPipeline to create.

CreateTrialRequest

Request message for VizierService.CreateTrial.

Fields
parent

string

Required. The resource name of the Study to create the Trial in. Format: projects/{project}/locations/{location}/studies/{study}

trial

Trial

Required. The Trial to create.

CreateTuningJobRequest

Request message for GenAiTuningService.CreateTuningJob.

Fields
parent

string

Required. The resource name of the Location to create the TuningJob in. Format: projects/{project}/locations/{location}

tuning_job

TuningJob

Required. The TuningJob to create.

CsvDestination

The storage details for CSV output content.

Fields
gcs_destination

GcsDestination

Required. Google Cloud Storage location.

CsvSource

The storage details for CSV input content.

Fields
gcs_source

GcsSource

Required. Google Cloud Storage location.

CustomJob

Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools, and each worker pool can have its own machine and input spec. A CustomJob will be cleaned up once the job enters a terminal state (failed or succeeded).

Fields
name

string

Output only. Resource name of a CustomJob.

display_name

string

Required. The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

job_spec

CustomJobSpec

Required. Job spec.

state

JobState

Output only. The detailed state of the job.

create_time

Timestamp

Output only. Time when the CustomJob was created.

start_time

Timestamp

Output only. Time when the CustomJob first entered the JOB_STATE_RUNNING state.

end_time

Timestamp

Output only. Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

update_time

Timestamp

Output only. Time when the CustomJob was most recently updated.

error

Status

Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

labels

map<string, string>

The labels with user-defined metadata to organize CustomJobs.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

encryption_spec

EncryptionSpec

Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.

web_access_uris

map<string, string>

Output only. URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true.

The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool.

The values are the URIs for each node's interactive shell.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

CustomJobSpec

Represents the spec of a CustomJob.

Fields
persistent_resource_id

string

Optional. The ID of the PersistentResource in the same Project and Location in which to run the job.

If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.

worker_pool_specs[]

WorkerPoolSpec

Required. The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.

scheduling

Scheduling

Scheduling options for a CustomJob.

service_account

string

Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.

network

string

Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.

To specify this field, you must have already configured VPC Network Peering for Vertex AI.

If this field is left unspecified, the job is not peered with any network.

reserved_ip_ranges[]

string

Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job.

If set, the job will be deployed within the provided IP ranges. Otherwise, the job will be deployed to any IP range under the provided VPC network.

Example: ['vertex-ai-ip-range'].

base_output_directory

GcsDestination

The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For a HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial's id under its parent HyperparameterTuningJob's baseOutputDirectory.

The following Vertex AI environment variables will be passed to containers or Python modules when this field is set (see the sketch after these lists):

For CustomJob:

  • AIP_MODEL_DIR = <base_output_directory>/model/
  • AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/
  • AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/

For CustomJob backing a Trial of HyperparameterTuningJob:

  • AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/
  • AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/
  • AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
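
Inside the training container itself, picking these up needs nothing beyond the standard library; the fallback paths below are illustrative only.

```python
# Sketch: read the directories Vertex AI injects when
# base_output_directory is set (fallbacks are placeholders for local runs).
import os

model_dir = os.environ.get("AIP_MODEL_DIR", "/tmp/model")
checkpoint_dir = os.environ.get("AIP_CHECKPOINT_DIR", "/tmp/checkpoints")
tensorboard_dir = os.environ.get("AIP_TENSORBOARD_LOG_DIR", "/tmp/logs")

print(f"writing model artifacts to {model_dir}")
```
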
protected_artifact_location_id

string

The ID of the location to store protected artifacts, for example, us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations

tensorboard

string

Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

enable_web_access

bool

Optional. Whether you want Vertex AI to enable interactive shell access to training containers.

If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).

enable_dashboard_access

bool

Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container.

If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).

experiment

string

Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}

experiment_run

string

Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}

models[]

string

Optional. The names of the Model resources for which to generate a mapping to artifact URIs. Applicable only to some of the Google-provided custom jobs. Format: projects/{project}/locations/{location}/models/{model}

In order to retrieve a specific version of the model, also provide the version ID or version alias. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden If no version ID or alias is specified, the "default" version will be returned. The "default" version alias is created for the first version of the model, and can be moved to other versions later on. There will be exactly one default version.

DataItem

A piece of data in a Dataset. Could be an image, a video, a document or plain text.

Fields
name

string

Output only. The resource name of the DataItem.

create_time

Timestamp

Output only. Timestamp when this DataItem was created.

update_time

Timestamp

Output only. Timestamp when this DataItem was last updated.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize your DataItems.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one DataItem(System labels are excluded).

See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

payload

Value

Required. The data that the DataItem represents (for example, an image or a text snippet). The schema of the payload is stored in the parent Dataset's metadata schema's dataItemSchemaUri field.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

DataItemView

A container for a single DataItem and Annotations on it.

Fields
data_item

DataItem

The DataItem.

annotations[]

Annotation

The Annotations on the DataItem. If too many Annotations would be returned for the DataItem, this field is truncated per the annotations_limit in the request; if it was truncated, has_truncated_annotations is set to true.

has_truncated_annotations

bool

True if and only if the Annotations field has been truncated. This happens when more Annotations for this DataItem match the request's annotation_filter than annotations_limit allows to be returned. Note that if the Annotations field is not returned due to a field mask, this field will not be set to true no matter how many Annotations there are.

Dataset

A collection of DataItems and Annotations on them.

Fields
name

string

Output only. Identifier. The resource name of the Dataset. Format: projects/{project}/locations/{location}/datasets/{dataset}

display_name

string

Required. The user-defined name of the Dataset. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

The description of the Dataset.

metadata_schema_uri

string

Required. Points to a YAML file stored on Google Cloud Storage describing additional information about the Dataset. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/metadata/.

metadata

Value

Required. Additional information about the Dataset.

data_item_count

int64

Output only. The number of DataItems in this Dataset. Applies only to non-structured Datasets.

create_time

Timestamp

Output only. Timestamp when this Dataset was created.

update_time

Timestamp

Output only. Timestamp when this Dataset was last updated.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

The labels with user-defined metadata to organize your Datasets.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Dataset (System labels are excluded).

See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for each Dataset:

  • "aiplatform.googleapis.com/dataset_metadata_schema": output only, its value is the metadata_schema's title.
saved_queries[]

SavedQuery

All SavedQueries belonging to the Dataset will be returned in List/Get Dataset responses. The annotation_specs field will not be populated except for UI cases, which use only annotation_spec_count. In a CreateDataset request, a SavedQuery is created together with the Dataset if this field is set; at most one SavedQuery can be set in CreateDatasetRequest. The SavedQuery should not contain any AnnotationSpec.

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for a Dataset. If set, this Dataset and all sub-resources of this Dataset will be secured by this key.

metadata_artifact

string

Output only. The resource name of the Artifact that was created in MetadataStore when creating the Dataset. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.

model_reference

string

Optional. Reference to the public base model last used by the dataset. Only set for prompt datasets.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

DatasetDistribution

Distribution computed over a tuning dataset.

Fields
sum

double

Output only. Sum of a given population of values.

min

double

Output only. The minimum of the population values.

max

double

Output only. The maximum of the population values.

mean

double

Output only. The arithmetic mean of the values in the population.

median

double

Output only. The median of the values in the population.

p5

double

Output only. The 5th percentile of the values in the population.

p95

double

Output only. The 95th percentile of the values in the population.

buckets[]

DistributionBucket

Output only. Defines the histogram buckets.

DistributionBucket

Dataset bucket used to create a histogram for the distribution given a population of values.

Fields
count

int64

Output only. Number of values in the bucket.

left

double

Output only. Left bound of the bucket.

right

double

Output only. Right bound of the bucket.

DatasetStats

Statistics computed over a tuning dataset.

Fields
tuning_dataset_example_count

int64

Output only. Number of examples in the tuning dataset.

total_tuning_character_count

int64

Output only. Number of tuning characters in the tuning dataset.

total_billable_character_count

int64

Output only. Number of billable characters in the tuning dataset.

tuning_step_count

int64

Output only. Number of tuning steps for this Tuning Job.

user_input_token_distribution

DatasetDistribution

Output only. Dataset distributions for the user input tokens.

user_message_per_example_distribution

DatasetDistribution

Output only. Dataset distributions for the messages per example.

user_dataset_examples[]

Content

Output only. Sample user messages from the training dataset URI.

user_output_token_distribution

DatasetDistribution

Output only. Dataset distributions for the user output tokens.

DatasetVersion

Describes the dataset version.

Fields
name

string

Output only. Identifier. The resource name of the DatasetVersion.

create_time

Timestamp

Output only. Timestamp when this DatasetVersion was created.

update_time

Timestamp

Output only. Timestamp when this DatasetVersion was last updated.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

big_query_dataset_name

string

Output only. Name of the associated BigQuery dataset.

display_name

string

The user-defined name of the DatasetVersion. The name can be up to 128 characters long and can consist of any UTF-8 characters.

metadata

Value

Required. Output only. Additional information about the DatasetVersion.

model_reference

string

Output only. Reference to the public base model last used by the dataset version. Only set for prompt dataset versions.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

DedicatedResources

A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.

Fields
machine_spec

MachineSpec

Required. Immutable. The specification of a single machine used by the prediction.

min_replica_count

int32

Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1.

If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

max_replica_count

int32

Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default value.

The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).

required_replica_count

int32

Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial model deployment/mutation is desired. If set, the model deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.

autoscaling_metric_specs[]

AutoscalingMetricSpec

Immutable. The metric specifications that override the target value of a resource utilization metric (CPU utilization, accelerator duty cycle, and so on; the target defaults to 60 if not set). At most one entry is allowed per metric.

If machine_spec.accelerator_count is above 0, the autoscaling will be based on both the CPU utilization and the accelerator's duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics.

If machine_spec.accelerator_count is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set.

For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80 (see the sketch after this message).

spot

bool

Optional. If true, schedule the deployment workload on spot VMs.
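
As a minimal sketch (Python, google-cloud-aiplatform; the machine type and replica counts are illustrative), a DedicatedResources message that overrides the CPU utilization target to 80 might be built like this:

    from google.cloud import aiplatform_v1beta1

    dedicated = aiplatform_v1beta1.DedicatedResources(
        machine_spec=aiplatform_v1beta1.MachineSpec(machine_type="n1-standard-4"),
        min_replica_count=1,
        max_replica_count=3,
        autoscaling_metric_specs=[
            # Override the default CPU utilization target (60) with 80.
            aiplatform_v1beta1.AutoscalingMetricSpec(
                metric_name="aiplatform.googleapis.com/prediction/online/cpu/utilization",
                target=80,
            )
        ],
    )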

DeleteArtifactRequest

Request message for MetadataService.DeleteArtifact.

Fields
name

string

Required. The resource name of the Artifact to delete. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}

etag

string

Optional. The etag of the Artifact to delete. If this is provided, it must match the server's etag. Otherwise, the request will fail with a FAILED_PRECONDITION.

DeleteBatchPredictionJobRequest

Request message for JobService.DeleteBatchPredictionJob.

Fields
name

string

Required. The name of the BatchPredictionJob resource to be deleted. Format: projects/{project}/locations/{location}/batchPredictionJobs/{batch_prediction_job}

DeleteCachedContentRequest

Request message for GenAiCacheService.DeleteCachedContent.

Fields
name

string

Required. The resource name referring to the cached content.

DeleteContextRequest

Request message for MetadataService.DeleteContext.

Fields
name

string

Required. The resource name of the Context to delete. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}

force

bool

The force deletion semantics are still undefined. Users should not use this field.

etag

string

Optional. The etag of the Context to delete. If this is provided, it must match the server's etag. Otherwise, the request will fail with a FAILED_PRECONDITION.
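
A minimal sketch of an etag-guarded delete (Python; the resource name is a placeholder). Reading the Context first and echoing its etag back makes the delete fail with FAILED_PRECONDITION if the Context changed in between.

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.MetadataServiceClient()
    name = ("projects/my-project/locations/us-central1/"
            "metadataStores/default/contexts/my-context")

    # Fetch the current etag, then delete only if it still matches.
    context = client.get_context(name=name)
    operation = client.delete_context(
        request=aiplatform_v1beta1.DeleteContextRequest(name=name, etag=context.etag)
    )
    operation.result()  # Blocks until the long-running delete completes.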

DeleteCustomJobRequest

Request message for JobService.DeleteCustomJob.

Fields
name

string

Required. The name of the CustomJob resource to be deleted. Format: projects/{project}/locations/{location}/customJobs/{custom_job}

DeleteDatasetRequest

Request message for DatasetService.DeleteDataset.

Fields
name

string

Required. The resource name of the Dataset to delete. Format: projects/{project}/locations/{location}/datasets/{dataset}

DeleteDatasetVersionRequest

Request message for DatasetService.DeleteDatasetVersion.

Fields
name

string

Required. The resource name of the Dataset version to delete. Format: projects/{project}/locations/{location}/datasets/{dataset}/datasetVersions/{dataset_version}

DeleteDeploymentResourcePoolRequest

Request message for DeleteDeploymentResourcePool method.

Fields
name

string

Required. The name of the DeploymentResourcePool to delete. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

DeleteEndpointRequest

Request message for EndpointService.DeleteEndpoint.

Fields
name

string

Required. The name of the Endpoint resource to be deleted. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

DeleteEntityTypeRequest

Request message for FeaturestoreService.DeleteEntityType.

Fields
name

string

Required. The name of the EntityType to be deleted. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}

force

bool

If set to true, any Features for this EntityType will also be deleted. (Otherwise, the request will only work if the EntityType has no Features.)

DeleteExecutionRequest

Request message for MetadataService.DeleteExecution.

Fields
name

string

Required. The resource name of the Execution to delete. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}

etag

string

Optional. The etag of the Execution to delete. If this is provided, it must match the server's etag. Otherwise, the request will fail with a FAILED_PRECONDITION.

DeleteExtensionRequest

Request message for ExtensionRegistryService.DeleteExtension.

Fields
name

string

Required. The name of the Extension resource to be deleted. Format: projects/{project}/locations/{location}/extensions/{extension}

DeleteFeatureGroupRequest

Request message for FeatureRegistryService.DeleteFeatureGroup.

Fields
name

string

Required. The name of the FeatureGroup to be deleted. Format: projects/{project}/locations/{location}/featureGroups/{feature_group}

force

bool

If set to true, any Features under this FeatureGroup will also be deleted. (Otherwise, the request will only work if the FeatureGroup has no Features.)

DeleteFeatureMonitorRequest

Request message for FeatureRegistryService.DeleteFeatureMonitor.

Fields
name

string

Required. The name of the FeatureMonitor to be deleted. Format: projects/{project}/locations/{location}/featureGroups/{feature_group}/featureMonitors/{feature_monitor}

DeleteFeatureOnlineStoreRequest

Request message for FeatureOnlineStoreAdminService.DeleteFeatureOnlineStore.

Fields
name

string

Required. The name of the FeatureOnlineStore to be deleted. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}

force

bool

If set to true, any FeatureViews and Features for this FeatureOnlineStore will also be deleted. (Otherwise, the request will only work if the FeatureOnlineStore has no FeatureViews.)

DeleteFeatureRequest

Request message for FeaturestoreService.DeleteFeature. Request message for FeatureRegistryService.DeleteFeature.

Fields
name

string

Required. The name of the Feature to be deleted. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}/features/{feature} or projects/{project}/locations/{location}/featureGroups/{feature_group}/features/{feature}

DeleteFeatureValuesOperationMetadata

Details of operations that delete Feature values.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Featurestore delete Features values.

DeleteFeatureValuesRequest

Request message for FeaturestoreService.DeleteFeatureValues.

Fields
entity_type

string

Required. The resource name of the EntityType grouping the Features from which values are being deleted. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}

Union field DeleteOption. Defines options to select feature values to be deleted. DeleteOption can be only one of the following:
select_entity

SelectEntity

Select feature values to be deleted by specifying entities.

select_time_range_and_feature

SelectTimeRangeAndFeature

Select feature values to be deleted by specifying time range and features.

SelectEntity

Message to select entities. If an entity ID is selected, all the feature values corresponding to that entity ID will be deleted, including the entity ID itself.

Fields
entity_id_selector

EntityIdSelector

Required. Selector choosing the entity IDs whose feature values are to be deleted from the EntityType.

SelectTimeRangeAndFeature

Message to select time range and feature. Values of the selected feature generated within an inclusive time range will be deleted. Using this option permanently deletes the feature values from the specified feature IDs within the specified time range. This might include data from the online storage. If you want to retain any deleted historical data in the online storage, you must re-ingest it.

Fields
time_range

Interval

Required. Selects feature values generated within a half-open time range: the lower bound is inclusive and the upper bound is exclusive.

feature_selector

FeatureSelector

Required. Selector choosing which feature values are to be deleted from the EntityType.

skip_online_storage_delete

bool

If set, data will not be deleted from online storage. If the time range is older than all data in online storage, setting this to true makes the deletion have no impact on online serving.
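
Putting these pieces together, a sketch of a time-range delete (Python; the resource name, feature ID, and timestamps are placeholders):

    from google.cloud import aiplatform_v1beta1
    from google.protobuf import timestamp_pb2
    from google.type import interval_pb2

    client = aiplatform_v1beta1.FeaturestoreServiceClient()
    entity_type = ("projects/my-project/locations/us-central1/"
                   "featurestores/my-store/entityTypes/driver")

    request = aiplatform_v1beta1.DeleteFeatureValuesRequest(
        entity_type=entity_type,
        select_time_range_and_feature=aiplatform_v1beta1.DeleteFeatureValuesRequest.SelectTimeRangeAndFeature(
            # Lower bound inclusive, upper bound exclusive.
            time_range=interval_pb2.Interval(
                start_time=timestamp_pb2.Timestamp(seconds=1704067200),
                end_time=timestamp_pb2.Timestamp(seconds=1706745600),
            ),
            feature_selector=aiplatform_v1beta1.FeatureSelector(
                id_matcher=aiplatform_v1beta1.IdMatcher(ids=["trips_count"])
            ),
            skip_online_storage_delete=True,  # Leave online serving untouched.
        ),
    )
    response = client.delete_feature_values(request=request).result()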

DeleteFeatureValuesResponse

Response message for FeaturestoreService.DeleteFeatureValues.

Fields
Union field response. The response depends on which delete option was specified in the request. response can be only one of the following:
select_entity

SelectEntity

Response for a request specifying the entities to delete.

select_time_range_and_feature

SelectTimeRangeAndFeature

Response for a request specifying a time range and features.

SelectEntity

Response message if the request uses the SelectEntity option.

Fields
offline_storage_deleted_entity_row_count

int64

The count of deleted entity rows in the offline storage. Each row corresponds to the combination of an entity ID and a timestamp. One entity ID can have multiple rows in the offline storage.

online_storage_deleted_entity_count

int64

The count of deleted entities in the online storage. Each entity ID corresponds to one entity.

SelectTimeRangeAndFeature

Response message if the request uses the SelectTimeRangeAndFeature option.

Fields
impacted_feature_count

int64

The count of the features or columns impacted. This is the same as the feature count in the request.

offline_storage_modified_entity_row_count

int64

The count of modified entity rows in the offline storage. Each row corresponds to the combination of an entity ID and a timestamp. One entity ID can have multiple rows in the offline storage. Within each row, only the features specified in the request are deleted.

online_storage_modified_entity_count

int64

The count of modified entities in the online storage. Each entity ID corresponds to one entity. Within each entity, only the features specified in the request are deleted.

DeleteFeatureViewRequest

Request message for FeatureOnlineStoreAdminService.DeleteFeatureView.

Fields
name

string

Required. The name of the FeatureView to be deleted. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}

DeleteFeaturestoreRequest

Request message for FeaturestoreService.DeleteFeaturestore.

Fields
name

string

Required. The name of the Featurestore to be deleted. Format: projects/{project}/locations/{location}/featurestores/{featurestore}

force

bool

If set to true, any EntityTypes and Features for this Featurestore will also be deleted. (Otherwise, the request will only work if the Featurestore has no EntityTypes.)

DeleteHyperparameterTuningJobRequest

Request message for JobService.DeleteHyperparameterTuningJob.

Fields
name

string

Required. The name of the HyperparameterTuningJob resource to be deleted. Format: projects/{project}/locations/{location}/hyperparameterTuningJobs/{hyperparameter_tuning_job}

DeleteIndexEndpointRequest

Request message for IndexEndpointService.DeleteIndexEndpoint.

Fields
name

string

Required. The name of the IndexEndpoint resource to be deleted. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}

DeleteIndexRequest

Request message for IndexService.DeleteIndex.

Fields
name

string

Required. The name of the Index resource to be deleted. Format: projects/{project}/locations/{location}/indexes/{index}

DeleteMetadataStoreOperationMetadata

Details of operations that perform MetadataService.DeleteMetadataStore.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for deleting a MetadataStore.

DeleteMetadataStoreRequest

Request message for MetadataService.DeleteMetadataStore.

Fields
name

string

Required. The resource name of the MetadataStore to delete. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

force
(deprecated)

bool

Deprecated: Field is no longer supported.

DeleteModelDeploymentMonitoringJobRequest

Request message for JobService.DeleteModelDeploymentMonitoringJob.

Fields
name

string

Required. The resource name of the model monitoring job to delete. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}

DeleteModelMonitorRequest

Request message for ModelMonitoringService.DeleteModelMonitor.

Fields
name

string

Required. The name of the ModelMonitor resource to be deleted. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}

force

bool

Optional. Force-deletes the ModelMonitor together with its schedules.

DeleteModelMonitoringJobRequest

Request message for ModelMonitoringService.DeleteModelMonitoringJob.

Fields
name

string

Required. The resource name of the model monitoring job to delete. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}/modelMonitoringJobs/{model_monitoring_job}

DeleteModelRequest

Request message for ModelService.DeleteModel.

Fields
name

string

Required. The name of the Model resource to be deleted. Format: projects/{project}/locations/{location}/models/{model}

DeleteModelVersionRequest

Request message for ModelService.DeleteModelVersion.

Fields
name

string

Required. The name of the model version to be deleted, with a version ID explicitly included.

Example: projects/{project}/locations/{location}/models/{model}@1234

DeleteNotebookExecutionJobRequest

Request message for NotebookService.DeleteNotebookExecutionJob.

Fields
name

string

Required. The name of the NotebookExecutionJob resource to be deleted.

DeleteNotebookRuntimeRequest

Request message for NotebookService.DeleteNotebookRuntime.

Fields
name

string

Required. The name of the NotebookRuntime resource to be deleted. If no NotebookRuntime with this name exists, the request fails with a NotFound error rather than a format-validation error.

DeleteNotebookRuntimeTemplateRequest

Request message for NotebookService.DeleteNotebookRuntimeTemplate.

Fields
name

string

Required. The name of the NotebookRuntimeTemplate resource to be deleted. Format: projects/{project}/locations/{location}/notebookRuntimeTemplates/{notebook_runtime_template}

DeleteOperationMetadata

Details of operations that perform deletes of any entities.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

DeletePersistentResourceRequest

Request message for PersistentResourceService.DeletePersistentResource.

Fields
name

string

Required. The name of the PersistentResource to be deleted. Format: projects/{project}/locations/{location}/persistentResources/{persistent_resource}

DeletePipelineJobRequest

Request message for PipelineService.DeletePipelineJob.

Fields
name

string

Required. The name of the PipelineJob resource to be deleted. Format: projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}

DeleteRagCorpusRequest

Request message for VertexRagDataService.DeleteRagCorpus.

Fields
name

string

Required. The name of the RagCorpus resource to be deleted. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}

force

bool

Optional. If set to true, any RagFiles in this RagCorpus will also be deleted. Otherwise, the request will only work if the RagCorpus has no RagFiles.

DeleteRagFileRequest

Request message for VertexRagDataService.DeleteRagFile.

Fields
name

string

Required. The name of the RagFile resource to be deleted. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}/ragFiles/{rag_file}

DeleteReasoningEngineRequest

Request message for ReasoningEngineService.DeleteReasoningEngine.

Fields
name

string

Required. The name of the ReasoningEngine resource to be deleted. Format: projects/{project}/locations/{location}/reasoningEngines/{reasoning_engine}

DeleteSavedQueryRequest

Request message for DatasetService.DeleteSavedQuery.

Fields
name

string

Required. The resource name of the SavedQuery to delete. Format: projects/{project}/locations/{location}/datasets/{dataset}/savedQueries/{saved_query}

DeleteScheduleRequest

Request message for ScheduleService.DeleteSchedule.

Fields
name

string

Required. The name of the Schedule resource to be deleted. Format: projects/{project}/locations/{location}/schedules/{schedule}

DeleteSpecialistPoolRequest

Request message for SpecialistPoolService.DeleteSpecialistPool.

Fields
name

string

Required. The resource name of the SpecialistPool to delete. Format: projects/{project}/locations/{location}/specialistPools/{specialist_pool}

force

bool

If set to true, any specialist managers in this SpecialistPool will also be deleted. (Otherwise, the request will only work if the SpecialistPool has no specialist managers.)

DeleteStudyRequest

Request message for VizierService.DeleteStudy.

Fields
name

string

Required. The name of the Study resource to be deleted. Format: projects/{project}/locations/{location}/studies/{study}

DeleteTensorboardExperimentRequest

Request message for TensorboardService.DeleteTensorboardExperiment.

Fields
name

string

Required. The name of the TensorboardExperiment to be deleted. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}

DeleteTensorboardRequest

Request message for TensorboardService.DeleteTensorboard.

Fields
name

string

Required. The name of the Tensorboard to be deleted. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

DeleteTensorboardRunRequest

Request message for TensorboardService.DeleteTensorboardRun.

Fields
name

string

Required. The name of the TensorboardRun to be deleted. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}

DeleteTensorboardTimeSeriesRequest

Request message for TensorboardService.DeleteTensorboardTimeSeries.

Fields
name

string

Required. The name of the TensorboardTimeSeries to be deleted. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}

DeleteTrainingPipelineRequest

Request message for PipelineService.DeleteTrainingPipeline.

Fields
name

string

Required. The name of the TrainingPipeline resource to be deleted. Format: projects/{project}/locations/{location}/trainingPipelines/{training_pipeline}

DeleteTrialRequest

Request message for VizierService.DeleteTrial.

Fields
name

string

Required. The Trial's name. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}

DeployIndexOperationMetadata

Runtime operation information for IndexEndpointService.DeployIndex.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

deployed_index_id

string

The unique index ID specified by the user.

DeployIndexRequest

Request message for IndexEndpointService.DeployIndex.

Fields
index_endpoint

string

Required. The name of the IndexEndpoint resource into which to deploy an Index. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}

deployed_index

DeployedIndex

Required. The DeployedIndex to be created within the IndexEndpoint.

DeployIndexResponse

Response message for IndexEndpointService.DeployIndex.

Fields
deployed_index

DeployedIndex

The DeployedIndex that has been deployed in the IndexEndpoint.

DeployModelOperationMetadata

Runtime operation information for EndpointService.DeployModel.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

DeployModelRequest

Request message for EndpointService.DeployModel.

Fields
endpoint

string

Required. The name of the Endpoint resource into which to deploy a Model. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

deployed_model

DeployedModel

Required. The DeployedModel to be created within the Endpoint. Note that Endpoint.traffic_split must be updated for the DeployedModel to start receiving traffic, either as part of this call, or via EndpointService.UpdateEndpoint.

traffic_split

map<string, int32>

A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel.

If this field is non-empty, then the Endpoint's traffic_split will be overwritten with it. To refer to the ID of the Model being deployed in this request, use "0"; the actual ID of the new DeployedModel will be filled in its place by this method. The traffic percentage values must add up to 100.

If this field is empty, then the Endpoint's traffic_split is not updated.
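
A sketch of a DeployModel call (Python; the resource names and machine type are placeholders) that routes all traffic to the new DeployedModel via the "0" placeholder key:

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.EndpointServiceClient()
    endpoint = "projects/my-project/locations/us-central1/endpoints/1234567890"

    deployed_model = aiplatform_v1beta1.DeployedModel(
        model="projects/my-project/locations/us-central1/models/9876543210",
        display_name="my-deployment",
        dedicated_resources=aiplatform_v1beta1.DedicatedResources(
            machine_spec=aiplatform_v1beta1.MachineSpec(machine_type="n1-standard-4"),
            min_replica_count=1,
        ),
    )

    # "0" refers to the model being deployed; the service substitutes its real ID.
    operation = client.deploy_model(
        endpoint=endpoint,
        deployed_model=deployed_model,
        traffic_split={"0": 100},
    )
    print(operation.result().deployed_model.id)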

DeployModelResponse

Response message for EndpointService.DeployModel.

Fields
deployed_model

DeployedModel

The DeployedModel that has been deployed in the Endpoint.

DeploySolverOperationMetadata

Runtime operation information for SolverService.DeploySolver.

Fields
generic_metadata

GenericOperationMetadata

The generic operation information.

DeployedIndex

A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.

Fields
id

string

Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.

index

string

Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.

display_name

string

The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.

create_time

Timestamp

Output only. Timestamp when the DeployedIndex was created.

private_endpoints

IndexPrivateEndpoints

Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.

index_sync_time

Timestamp

Output only. The DeployedIndex may depend on various data on its original Index. Additionally when certain changes to the original Index are being done (e.g. when what the Index contains is being changed) the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal or before this sync time are contained in this DeployedIndex.

automatic_resources

AutomaticResources

Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI, and which optionally allow only a modest additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide an SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.

dedicated_resources

DedicatedResources

Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.

Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard.

Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard.

Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32.

n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.

enable_access_logging

bool

Optional. If true, private endpoint's access logs are sent to Cloud Logging.

These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest.

Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.

deployed_index_auth_config

DeployedIndexAuthConfig

Optional. If set, the authentication is enabled for the private endpoint.

reserved_ip_ranges[]

string

Optional. A list of reserved IP ranges under the VPC network that can be used for this DeployedIndex.

If set, we will deploy the index within the provided IP ranges. Otherwise, the index might be deployed to any IP ranges under the provided VPC network.

The value should be the name of the address (see https://cloud.google.com/compute/docs/reference/rest/v1/addresses). Example: ['vertex-ai-ip-range'].

For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.

deployment_group

string

Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, the 'default' deployment group is used.

Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed.

Note: only up to 5 deployment groups (not including 'default') are supported.

psc_automation_configs[]

PSCAutomationConfig

Optional. If set for a PSC-enabled deployed index, a PSC connection will be created automatically after deployment completes, and the endpoint information will be populated in private_endpoints.psc_automated_endpoints.
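
A sketch of deploying an Index with dedicated resources (Python; all resource names, the machine type, and the reserved IP range name are placeholders):

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.IndexEndpointServiceClient()

    deployed_index = aiplatform_v1beta1.DeployedIndex(
        id="my_deployed_index",  # Letters, numbers, underscores; starts with a letter.
        index="projects/my-project/locations/us-central1/indexes/1234567890",
        dedicated_resources=aiplatform_v1beta1.DedicatedResources(
            machine_spec=aiplatform_v1beta1.MachineSpec(machine_type="e2-standard-16"),
            min_replica_count=2,
        ),
        reserved_ip_ranges=["vertex-ai-ip-range"],
    )

    operation = client.deploy_index(
        index_endpoint="projects/my-project/locations/us-central1/indexEndpoints/555",
        deployed_index=deployed_index,
    )
    operation.result()  # Waits for the DeployIndexResponse.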

DeployedIndexAuthConfig

Used to set up the auth on the DeployedIndex's private endpoint.

Fields
auth_provider

AuthProvider

Defines the authentication provider that the DeployedIndex uses.

AuthProvider

Configuration for an authentication provider, including support for JSON Web Token (JWT).

Fields
audiences[]

string

The list of JWT audiences that are allowed access. A JWT containing any of these audiences will be accepted.

allowed_issuers[]

string

A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format:

service-account-name@project-id.iam.gserviceaccount.com
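
For example, a JWT-based auth config might be constructed as follows (Python; the audience and service account email are placeholders):

    from google.cloud import aiplatform_v1beta1

    auth_config = aiplatform_v1beta1.DeployedIndexAuthConfig(
        auth_provider=aiplatform_v1beta1.DeployedIndexAuthConfig.AuthProvider(
            audiences=["my-audience"],
            # Each issuer must be a Google service account email.
            allowed_issuers=["match-caller@my-project.iam.gserviceaccount.com"],
        )
    )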

DeployedIndexRef

Points to a DeployedIndex.

Fields
index_endpoint

string

Immutable. A resource name of the IndexEndpoint.

deployed_index_id

string

Immutable. The ID of the DeployedIndex in the above IndexEndpoint.

display_name

string

Output only. The display name of the DeployedIndex.

DeployedModel

A deployment of a Model. Endpoints contain one or more DeployedModels.

Fields
id

string

Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID.

This value should be 1-10 characters, and valid characters are /[0-9]/.

model

string

Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint.

The resource name may contain a version ID or version alias to specify the version. Examples: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed.

model_version_id

string

Output only. The version ID of the model that is deployed.

display_name

string

The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.

create_time

Timestamp

Output only. Timestamp when the DeployedModel was created.

explanation_spec

ExplanationSpec

Explanation configuration for this DeployedModel.

When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.

disable_explanations

bool

If true, deploys the model without the explanation feature, regardless of whether Model.explanation_spec or explanation_spec is populated.

service_account

string

The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project.

Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.

enable_container_logging

bool

If true, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging.

Only supported for custom-trained Models and AutoML Tabular Models.

enable_access_logging

bool

If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each prediction request.

Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option.

private_endpoints

PrivateEndpoints

Output only. Provides paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if network is configured.

faster_deployment_config

FasterDeploymentConfig

Configuration for faster model deployment.

status

Status

Output only. Runtime status of the deployed model.

system_labels

map<string, string>

System labels to apply to Model Garden deployments. System labels are managed by Google for internal use only.

Union field prediction_resources. The prediction resources (for example, the machine resources) that the DeployedModel uses. The user is billed for the resources (at least their minimal amount) even if the DeployedModel receives no traffic. Not all Models support all resource types. See Model.supported_deployment_resources_types. Required except for Large Model Deploy use cases. prediction_resources can be only one of the following:
dedicated_resources

DedicatedResources

A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.

automatic_resources

AutomaticResources

A description of resources that to a large degree are decided by Vertex AI and require only a modest additional configuration.

shared_resources

string

The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

Status

Runtime status of the deployed model.

Fields
message

string

Output only. The latest deployed model's status message (if any).

last_update_time

Timestamp

Output only. The time at which the status was last updated.

available_replica_count

int32

Output only. The number of available replicas of the deployed model.

DeployedModelRef

Points to a DeployedModel.

Fields
endpoint

string

Immutable. A resource name of an Endpoint.

deployed_model_id

string

Immutable. An ID of a DeployedModel in the above Endpoint.

DeploymentResourcePool

A description of resources that can be shared by multiple DeployedModels, whose underlying specification consists of a DedicatedResources.

Fields
name

string

Immutable. The resource name of the DeploymentResourcePool. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

dedicated_resources

DedicatedResources

Required. The underlying DedicatedResources that the DeploymentResourcePool uses.

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for a DeploymentResourcePool. If set, this DeploymentResourcePool will be secured by this key. Endpoints and the DeploymentResourcePool into which they deploy need to have the same EncryptionSpec.

service_account

string

The service account that the DeploymentResourcePool's container(s) run as. Specify the email address of the service account. If this service account is not specified, the container(s) run as a service account that doesn't have access to the resource project.

Users deploying the Models to this DeploymentResourcePool must have the iam.serviceAccounts.actAs permission on this service account.

disable_container_logging

bool

If the DeploymentResourcePool is deployed with custom-trained Models or AutoML Tabular Models, the container(s) of the DeploymentResourcePool will send stderr and stdout streams to Cloud Logging by default. Note that these logs incur a cost, subject to Cloud Logging pricing.

Users can disable container logging by setting this field to true.

create_time

Timestamp

Output only. Timestamp when this DeploymentResourcePool was created.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.
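
As a sketch (Python; the project, location, machine type, and pool ID are placeholders), a pool can be created through the DeploymentResourcePoolService and then referenced from DeployedModel.shared_resources:

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.DeploymentResourcePoolServiceClient()

    pool = aiplatform_v1beta1.DeploymentResourcePool(
        dedicated_resources=aiplatform_v1beta1.DedicatedResources(
            machine_spec=aiplatform_v1beta1.MachineSpec(machine_type="n1-standard-8"),
            min_replica_count=1,
        )
    )

    operation = client.create_deployment_resource_pool(
        parent="projects/my-project/locations/us-central1",
        deployment_resource_pool=pool,
        deployment_resource_pool_id="shared-pool",
    )
    print(operation.result().name)  # ...deploymentResourcePools/shared-pool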

DestinationFeatureSetting

Fields
feature_id

string

Required. The ID of the Feature to apply the setting to.

destination_field

string

Specify the field name in the export destination. If not specified, Feature ID is used.

DirectPredictRequest

Request message for PredictionService.DirectPredict.

Fields
endpoint

string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

inputs[]

Tensor

The prediction input.

parameters

Tensor

The parameters that govern the prediction.

DirectPredictResponse

Response message for PredictionService.DirectPredict.

Fields
outputs[]

Tensor

The prediction output.

parameters

Tensor

The parameters that govern the prediction.

DirectRawPredictRequest

Request message for PredictionService.DirectRawPredict.

Fields
endpoint

string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

method_name

string

Fully qualified name of the API method being invoked to perform predictions.

Format: /namespace.Service/Method/. Example: /tensorflow.serving.PredictionService/Predict

input

bytes

The prediction input.

DirectRawPredictResponse

Response message for PredictionService.DirectRawPredict.

Fields
output

bytes

The prediction output.
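
A sketch of a DirectRawPredict call (Python; the endpoint name and the serialized request bytes are placeholders, and the method name follows the /namespace.Service/Method format above):

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.PredictionServiceClient()

    response = client.direct_raw_predict(
        request=aiplatform_v1beta1.DirectRawPredictRequest(
            endpoint="projects/my-project/locations/us-central1/endpoints/1234567890",
            method_name="/tensorflow.serving.PredictionService/Predict",
            input=b"...",  # Serialized request for the target method (placeholder).
        )
    )
    print(len(response.output))  # Raw serialized response bytes.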

DirectUploadSource

This type has no fields.

The input content is encapsulated and uploaded in the request.

DiskSpec

Represents the spec of disk options.

Fields
boot_disk_type

string

Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).

boot_disk_size_gb

int32

Size in GB of the boot disk (default is 100 GB).
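
For example (Python; the values shown are illustrative):

    from google.cloud import aiplatform_v1beta1

    disk_spec = aiplatform_v1beta1.DiskSpec(
        boot_disk_type="pd-ssd",   # Or "pd-standard".
        boot_disk_size_gb=200,     # Default is 100 GB when unset.
    )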

DistillationDataStats

Statistics computed for datasets used for distillation.

Fields
training_dataset_stats

DatasetStats

Output only. Statistics computed for the training dataset.

DistillationHyperParameters

Hyperparameters for Distillation.

Fields
adapter_size

AdapterSize

Optional. Adapter size for distillation.

epoch_count

int64

Optional. Number of complete passes the model makes over the entire training dataset during training.

learning_rate_multiplier

double

Optional. Multiplier for adjusting the default learning rate.

DistillationSpec

Tuning Spec for Distillation.

Fields
training_dataset_uri
(deprecated)

string

Deprecated. Cloud Storage path to file containing training dataset for tuning. The dataset must be formatted as a JSONL file.

hyper_parameters

DistillationHyperParameters

Optional. Hyperparameters for Distillation.

student_model
(deprecated)

string

Deprecated: use base_model instead. The student model that is being tuned, e.g., "google/gemma-2b-1.1-it".

pipeline_root_directory
(deprecated)

string

Deprecated. A path in a Cloud Storage bucket, which will be treated as the root output directory of the distillation pipeline. It is used by the system to generate the paths of output artifacts.

Union field teacher_model. The teacher model that is being distilled from, e.g., "gemini-1.5-pro-002". teacher_model can be only one of the following:
base_teacher_model

string

The base teacher model that is being distilled, e.g., "gemini-1.0-pro-002".

tuned_teacher_model_source

string

The resource name of the Tuned teacher model. Format: projects/{project}/locations/{location}/models/{model}.

validation_dataset_uri

string

Optional. Cloud Storage path to file containing validation dataset for tuning. The dataset must be formatted as a JSONL file.

DoubleArray

A list of double values.

Fields
values[]

double

A list of double values.

DynamicRetrievalConfig

Describes the options to customize dynamic retrieval.

Fields
mode

Mode

The mode of the predictor to be used in dynamic retrieval.

dynamic_threshold

float

Optional. The threshold to be used in dynamic retrieval. If not set, a system default value is used.

Mode

The mode of the predictor to be used in dynamic retrieval.

Enums
MODE_UNSPECIFIED Always trigger retrieval.
MODE_DYNAMIC Run retrieval only when the system decides it is necessary.
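
A sketch of wiring this into a Google Search retrieval tool (Python, assuming a google-cloud-aiplatform version that exposes GoogleSearchRetrieval.dynamic_retrieval_config; the threshold value is illustrative):

    from google.cloud import aiplatform_v1beta1

    tool = aiplatform_v1beta1.Tool(
        google_search_retrieval=aiplatform_v1beta1.GoogleSearchRetrieval(
            dynamic_retrieval_config=aiplatform_v1beta1.DynamicRetrievalConfig(
                # Retrieve only when the predictor decides it is necessary.
                mode=aiplatform_v1beta1.DynamicRetrievalConfig.Mode.MODE_DYNAMIC,
                dynamic_threshold=0.7,
            )
        )
    )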

EncryptionSpec

Represents a customer-managed encryption key spec that can be applied to a top-level resource.

Fields
kms_key_name

string

Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
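
For example (Python; the key name is a placeholder and must be in the same region as the resource it protects):

    from google.cloud import aiplatform_v1beta1

    encryption_spec = aiplatform_v1beta1.EncryptionSpec(
        kms_key_name=(
            "projects/my-project/locations/us-central1/"
            "keyRings/my-kr/cryptoKeys/my-key"
        )
    )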

Endpoint

Models are deployed into it, and afterwards the Endpoint is called to obtain predictions and explanations.

Fields
name

string

Output only. The resource name of the Endpoint.

display_name

string

Required. The display name of the Endpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

The description of the Endpoint.

deployed_models[]

DeployedModel

Output only. The models deployed in this Endpoint. To add or remove DeployedModels use EndpointService.DeployModel and EndpointService.UndeployModel respectively.

traffic_split

map<string, int32>

A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel.

If a DeployedModel's ID is not listed in this map, then it receives no traffic.

The traffic percentage values must add up to 100, or the map must be empty if the Endpoint is not to accept any traffic at the moment.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

The labels with user-defined metadata to organize your Endpoints.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

create_time

Timestamp

Output only. Timestamp when this Endpoint was created.

update_time

Timestamp

Output only. Timestamp when this Endpoint was last updated.

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for an Endpoint. If set, this Endpoint and all sub-resources of this Endpoint will be secured by this key.

network

string

Optional. The full name of the Google Compute Engine network to which the Endpoint should be peered.

Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network.

Only one of the fields, network or enable_private_service_connect, can be set.

Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.

enable_private_service_connect
(deprecated)

bool

Deprecated: If true, expose the Endpoint via private service connect.

Only one of the fields, network or enable_private_service_connect, can be set.

private_service_connect_config

PrivateServiceConnectConfig

Optional. Configuration for private service connect.

network and private_service_connect_config are mutually exclusive.

model_deployment_monitoring_job

string

Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by JobService.CreateModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}

predict_request_response_logging_config

PredictRequestResponseLoggingConfig

Configures the request-response logging for online prediction.

dedicated_endpoint_enabled

bool

If true, the endpoint will be exposed through a dedicated DNS (Endpoint.dedicated_endpoint_dns). Your requests to the dedicated DNS will be isolated from other users' traffic and will have better performance and reliability. Note: once you enable the dedicated endpoint, you won't be able to send requests to the shared DNS {region}-aiplatform.googleapis.com. This limitation will be removed soon.

dedicated_endpoint_dns

string

Output only. DNS of the dedicated endpoint. Will only be populated if dedicated_endpoint_enabled is true. Format: https://{endpoint_id}.{region}-{project_number}.prediction.vertexai.goog.

client_connection_config

ClientConnectionConfig

Configurations that are applied to the endpoint for online prediction.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.
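
Tying several of the above fields together, a sketch of creating an Endpoint (Python; the names and labels are placeholders):

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.EndpointServiceClient()

    endpoint = aiplatform_v1beta1.Endpoint(
        display_name="my-endpoint",
        description="Endpoint for online prediction.",
        labels={"team": "ml-serving"},
        dedicated_endpoint_enabled=True,  # Serve via an isolated, dedicated DNS.
    )

    operation = client.create_endpoint(
        parent="projects/my-project/locations/us-central1",
        endpoint=endpoint,
    )
    print(operation.result().name)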

EntityIdSelector

Selector for entity IDs. IDs are obtained from the given source.

Fields
entity_id_field

string

Source column that holds entity IDs. If not provided, entity IDs are extracted from the column named entity_id.

Union field EntityIdsSource. Details about the source data, including the location of the storage and the format. EntityIdsSource can be only one of the following:
csv_source

CsvSource

Source of CSV data.
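
For instance, the entity IDs to delete can be read from a CSV file in Cloud Storage (Python; the bucket path and column name are placeholders), as the counterpart of the time-range example earlier:

    from google.cloud import aiplatform_v1beta1

    select_entity = aiplatform_v1beta1.DeleteFeatureValuesRequest.SelectEntity(
        entity_id_selector=aiplatform_v1beta1.EntityIdSelector(
            csv_source=aiplatform_v1beta1.CsvSource(
                gcs_source=aiplatform_v1beta1.GcsSource(uris=["gs://my-bucket/ids.csv"])
            ),
            entity_id_field="entity_id",  # Column holding the entity IDs.
        )
    )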

EntityType

An entity type is a type of object in a system that needs to be modeled and to have information stored about it. For example, driver is an entity type, and driver0 is an instance of the entity type driver.

Fields
name

string

Immutable. Name of the EntityType. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}

The last part, entity_type, is assigned by the client. The entity_type can be up to 64 characters long, must start with a letter, and can consist only of ASCII Latin letters A-Z and a-z, ASCII digits 0-9, and underscores (_). The value will be unique within a given featurestore.

description

string

Optional. Description of the EntityType.

create_time

Timestamp

Output only. Timestamp when this EntityType was created.

update_time

Timestamp

Output only. Timestamp when this EntityType was most recently updated.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize your EntityTypes.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one EntityType (system labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

monitoring_config

FeaturestoreMonitoringConfig

Optional. The default monitoring configuration for all Features with value type (Feature.ValueType) BOOL, STRING, DOUBLE or INT64 under this EntityType.

If this is populated with [FeaturestoreMonitoringConfig.monitoring_interval] specified, snapshot analysis monitoring is enabled. Otherwise, snapshot analysis monitoring is disabled.

offline_storage_ttl_days

int32

Optional. Config for the data retention policy in offline storage. TTL in days for feature values stored in offline storage. The Feature Store offline storage periodically removes obsolete feature values older than offline_storage_ttl_days since the feature generation time. If unset (or explicitly set to 0), defaults to a TTL of 4000 days.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

EnvVar

Represents an environment variable present in a Container or Python Module.

Fields
name

string

Required. Name of the environment variable. Must be a valid C identifier.

value

string

Required. Variables that reference a $(VAR_NAME) are expanded using previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string is left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
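
For example (Python; the variable names and paths are illustrative):

    from google.cloud import aiplatform_v1beta1

    env = [
        aiplatform_v1beta1.EnvVar(name="BASE_DIR", value="/opt/app"),
        # $(BASE_DIR) expands to /opt/app using the variable defined above.
        aiplatform_v1beta1.EnvVar(name="MODEL_DIR", value="$(BASE_DIR)/model"),
        # $$(HOME) is escaped and stays as the literal $(HOME).
        aiplatform_v1beta1.EnvVar(name="NOTE", value="home is $$(HOME)"),
    ]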

ErrorAnalysisAnnotation

Model error analysis for each annotation.

Fields
attributed_items[]

AttributedItem

Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.

query_type

QueryType

The query type used for finding the attributed items.

outlier_score

double

The outlier score of this annotated item. Usually defined as the min of all distances from attributed items.

outlier_threshold

double

The threshold used to determine if this annotation is an outlier or not.

AttributedItem

Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.

Fields
annotation_resource_name

string

The unique ID for each annotation, used by the frontend to locate the annotation in the database.

distance

double

The distance of this item to the annotation.

QueryType

The query type used for finding the attributed items.

Enums
QUERY_TYPE_UNSPECIFIED Unspecified query type for model error analysis.
ALL_SIMILAR Query similar samples across all classes in the dataset.
SAME_CLASS_SIMILAR Query similar samples from the same class of the input sample.
SAME_CLASS_DISSIMILAR Query dissimilar samples from the same class of the input sample.

EvaluateInstancesRequest

Request message for EvaluationService.EvaluateInstances.

Fields
location

string

Required. The resource name of the Location to evaluate the instances. Format: projects/{project}/locations/{location}

Union field metric_inputs. Instances and specs for evaluation metric_inputs can be only one of the following:
exact_match_input

ExactMatchInput

Auto metric instances. Instances and metric spec for exact match metric.

bleu_input

BleuInput

Instances and metric spec for bleu metric.

rouge_input

RougeInput

Instances and metric spec for rouge metric.

fluency_input

FluencyInput

LLM-based metric instance. General text generation metrics, applicable to other categories. Input for fluency metric.

coherence_input

CoherenceInput

Input for coherence metric.

safety_input

SafetyInput

Input for safety metric.

groundedness_input

GroundednessInput

Input for groundedness metric.

fulfillment_input

FulfillmentInput

Input for fulfillment metric.

summarization_quality_input

SummarizationQualityInput

Input for summarization quality metric.

pairwise_summarization_quality_input

PairwiseSummarizationQualityInput

Input for pairwise summarization quality metric.

summarization_helpfulness_input

SummarizationHelpfulnessInput

Input for summarization helpfulness metric.

summarization_verbosity_input

SummarizationVerbosityInput

Input for summarization verbosity metric.

question_answering_quality_input

QuestionAnsweringQualityInput

Input for question answering quality metric.

pairwise_question_answering_quality_input

PairwiseQuestionAnsweringQualityInput

Input for pairwise question answering quality metric.

question_answering_relevance_input

QuestionAnsweringRelevanceInput

Input for question answering relevance metric.

question_answering_helpfulness_input

QuestionAnsweringHelpfulnessInput

Input for question answering helpfulness metric.

question_answering_correctness_input

QuestionAnsweringCorrectnessInput

Input for question answering correctness metric.

pointwise_metric_input

PointwiseMetricInput

Input for pointwise metric.

pairwise_metric_input

PairwiseMetricInput

Input for pairwise metric.

tool_call_valid_input

ToolCallValidInput

Tool call metric instances. Input for tool call valid metric.

tool_name_match_input

ToolNameMatchInput

Input for tool name match metric.

tool_parameter_key_match_input

ToolParameterKeyMatchInput

Input for tool parameter key match metric.

tool_parameter_kv_match_input

ToolParameterKVMatchInput

Input for tool parameter key value match metric.

comet_input

CometInput

Translation metrics. Input for Comet metric.

trajectory_exact_match_input

TrajectoryExactMatchInput

Input for trajectory exact match metric.

trajectory_in_order_match_input

TrajectoryInOrderMatchInput

Input for trajectory in order match metric.

trajectory_any_order_match_input

TrajectoryAnyOrderMatchInput

Input for trajectory match any order metric.

trajectory_precision_input

TrajectoryPrecisionInput

Input for trajectory precision metric.

trajectory_recall_input

TrajectoryRecallInput

Input for trajectory recall metric.

trajectory_single_tool_use_input

TrajectorySingleToolUseInput

Input for trajectory single tool use metric.

EvaluateInstancesResponse

Response message for EvaluationService.EvaluateInstances.

Fields
Union field evaluation_results. Evaluation results will be served in the same order as presented in EvaluateInstancesRequest.instances. evaluation_results can be only one of the following:
exact_match_results

ExactMatchResults

Auto metric evaluation results. Results for exact match metric.

bleu_results

BleuResults

Results for bleu metric.

rouge_results

RougeResults

Results for rouge metric.

fluency_result

FluencyResult

LLM-based metric evaluation result. General text generation metrics, applicable to other categories. Result for fluency metric.

coherence_result

CoherenceResult

Result for coherence metric.

safety_result

SafetyResult

Result for safety metric.

groundedness_result

GroundednessResult

Result for groundedness metric.

fulfillment_result

FulfillmentResult

Result for fulfillment metric.

summarization_quality_result

SummarizationQualityResult

Summarization only metrics. Result for summarization quality metric.

pairwise_summarization_quality_result

PairwiseSummarizationQualityResult

Result for pairwise summarization quality metric.

summarization_helpfulness_result

SummarizationHelpfulnessResult

Result for summarization helpfulness metric.

summarization_verbosity_result

SummarizationVerbosityResult

Result for summarization verbosity metric.

question_answering_quality_result

QuestionAnsweringQualityResult

Question answering only metrics. Result for question answering quality metric.

pairwise_question_answering_quality_result

PairwiseQuestionAnsweringQualityResult

Result for pairwise question answering quality metric.

question_answering_relevance_result

QuestionAnsweringRelevanceResult

Result for question answering relevance metric.

question_answering_helpfulness_result

QuestionAnsweringHelpfulnessResult

Result for question answering helpfulness metric.

question_answering_correctness_result

QuestionAnsweringCorrectnessResult

Result for question answering correctness metric.

pointwise_metric_result

PointwiseMetricResult

Generic metrics. Result for pointwise metric.

pairwise_metric_result

PairwiseMetricResult

Result for pairwise metric.

tool_call_valid_results

ToolCallValidResults

Tool call metrics. Results for tool call valid metric.

tool_name_match_results

ToolNameMatchResults

Results for tool name match metric.

tool_parameter_key_match_results

ToolParameterKeyMatchResults

Results for tool parameter key match metric.

tool_parameter_kv_match_results

ToolParameterKVMatchResults

Results for tool parameter key value match metric.

comet_result

CometResult

Translation metrics. Result for Comet metric.

trajectory_exact_match_results

TrajectoryExactMatchResults

Result for trajectory exact match metric.

trajectory_in_order_match_results

TrajectoryInOrderMatchResults

Result for trajectory in order match metric.

trajectory_any_order_match_results

TrajectoryAnyOrderMatchResults

Result for trajectory any order match metric.

trajectory_precision_results

TrajectoryPrecisionResults

Result for trajectory precision metric.

trajectory_recall_results

TrajectoryRecallResults

Results for trajectory recall metric.

trajectory_single_tool_use_results

TrajectorySingleToolUseResults

Results for trajectory single tool use metric.

EvaluatedAnnotation

True positive, false positive, or false negative.

EvaluatedAnnotation is only available under ModelEvaluationSlice with slice of annotationSpec dimension.

Fields
type

EvaluatedAnnotationType

Output only. Type of the EvaluatedAnnotation.

predictions[]

Value

Output only. The model predicted annotations.

For true positive, there is one and only one prediction, which matches the only one ground truth annotation in ground_truths.

For false positive, there is one and only one prediction, which doesn't match any ground truth annotation of the corresponding data_item_view_id.

For false negative, there are zero or more predictions which are similar to the only ground truth annotation in ground_truths but not enough for a match.

The schema of the prediction is stored in ModelEvaluation.annotation_schema_uri.

ground_truths[]

Value

Output only. The ground truth Annotations, i.e. the Annotations that exist in the test data the Model is evaluated on.

For true positive, there is one and only one ground truth annotation, which matches the only prediction in predictions.

For false positive, there are zero or more ground truth annotations that are similar to the only prediction in predictions, but not enough for a match.

For false negative, there is one and only one ground truth annotation, which doesn't match any predictions created by the model.

The schema of the ground truth is stored in ModelEvaluation.annotation_schema_uri.

data_item_payload

Value

Output only. The data item payload that the Model predicted this EvaluatedAnnotation on.

evaluated_data_item_view_id

string

Output only. ID of the EvaluatedDataItemView under the same ancestor ModelEvaluation. The EvaluatedDataItemView consists of all ground truths and predictions on data_item_payload.

explanations[]

EvaluatedAnnotationExplanation

Explanations of predictions. Each element of the explanations indicates the explanation for one explanation Method.

The attributions list in the EvaluatedAnnotationExplanation.explanation object corresponds to the predictions list. For example, the second element in the attributions list explains the second element in the predictions list.

error_analysis_annotations[]

ErrorAnalysisAnnotation

Annotations of model error analysis results.

EvaluatedAnnotationType

Describes the type of the EvaluatedAnnotation. The type is determined by whether the Model's prediction matches a ground truth Annotation.

Enums
EVALUATED_ANNOTATION_TYPE_UNSPECIFIED Invalid value.
TRUE_POSITIVE The EvaluatedAnnotation is a true positive. It has a prediction created by the Model and a ground truth Annotation which the prediction matches.
FALSE_POSITIVE The EvaluatedAnnotation is false positive. It has a prediction created by the Model which does not match any ground truth annotation.
FALSE_NEGATIVE The EvaluatedAnnotation is false negative. It has a ground truth annotation which is not matched by any of the model created predictions.
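
For illustration, a minimal Python sketch of how these three cases follow from prediction/ground-truth matching (a hypothetical helper, not the service's matching algorithm):

def evaluated_annotation_type(prediction_matched, centered_on):
    """centered_on: 'prediction' or 'ground_truth' for the unmatched cases."""
    if prediction_matched:
        return "TRUE_POSITIVE"    # one prediction matching one ground truth
    if centered_on == "prediction":
        return "FALSE_POSITIVE"   # a prediction that matches no ground truth
    if centered_on == "ground_truth":
        return "FALSE_NEGATIVE"   # a ground truth matched by no prediction
    return "EVALUATED_ANNOTATION_TYPE_UNSPECIFIED"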

EvaluatedAnnotationExplanation

Explanation result of the prediction produced by the Model.

Fields
explanation_type

string

Explanation type.

For AutoML Image Classification models, possible values are:

  • image-integrated-gradients
  • image-xrai
explanation

Explanation

Explanation attribution response details.

Event

An edge describing the relationship between an Artifact and an Execution in a lineage graph.

Fields
artifact

string

Required. The relative resource name of the Artifact in the Event.

execution

string

Output only. The relative resource name of the Execution in the Event.

event_time

Timestamp

Output only. Time the Event occurred.

type

Type

Required. The type of the Event.

labels

map<string, string>

The labels with user-defined metadata to annotate Events.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Event (System labels are excluded).

See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

Type

Describes whether an Event's Artifact is the Execution's input or output.

Enums
TYPE_UNSPECIFIED Unspecified whether input or output of the Execution.
INPUT An input of the Execution.
OUTPUT An output of the Execution.
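
As a hedged sketch, an Event marking an Artifact as an input of an Execution might be built with the generated Python client (google-cloud-aiplatform); the resource name below is hypothetical, and the mapping form is used so the proto field name type can be passed as-is:

from google.cloud import aiplatform_v1beta1

# An Event marking an Artifact as an INPUT of an Execution.
event = aiplatform_v1beta1.Event(
    {
        "artifact": "projects/my-project/locations/us-central1/metadataStores/default/artifacts/my-artifact",
        "type": aiplatform_v1beta1.Event.Type.INPUT,
    }
)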

ExactMatchInput

Input for exact match metric.

Fields
metric_spec

ExactMatchSpec

Required. Spec for exact match metric.

instances[]

ExactMatchInstance

Required. Repeated exact match instances.

ExactMatchInstance

Spec for exact match instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Required. Ground truth used to compare against the prediction.

ExactMatchMetricValue

Exact match metric value for an instance.

Fields
score

float

Output only. Exact match score.

ExactMatchResults

Results for exact match metric.

Fields
exact_match_metric_values[]

ExactMatchMetricValue

Output only. Exact match metric values.

ExactMatchSpec

This type has no fields.

Spec for exact match metric - returns 1 if prediction and reference exactly match, otherwise 0.
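
For illustration, a minimal Python sketch of this scoring rule (not the service implementation):

# Each instance scores 1.0 if prediction equals reference, else 0.0.
def exact_match_scores(instances):
    """instances: list of (prediction, reference) string pairs."""
    return [1.0 if prediction == reference else 0.0
            for prediction, reference in instances]

print(exact_match_scores([("Paris", "Paris"), ("paris", "Paris")]))
# [1.0, 0.0] -- matching is exact, so case differences count as mismatches.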

Examples

Example-based explainability that returns the nearest neighbors from the provided dataset.

Fields
gcs_source

GcsSource

The Cloud Storage locations that contain the instances to be indexed for approximate nearest neighbor search.

neighbor_count

int32

The number of neighbors to return when querying for examples.

Union field source.

source can be only one of the following:

example_gcs_source

ExampleGcsSource

The Cloud Storage input instances.

Union field config.

config can be only one of the following:

nearest_neighbor_search_config

Value

The full configuration for the generated index, the semantics are the same as metadata and should match NearestNeighborSearchConfig.

presets

Presets

Simplified preset configuration, which automatically sets configuration values based on the desired query speed-precision trade-off and modality.

ExampleGcsSource

The Cloud Storage input instances.

Fields
data_format

DataFormat

The format in which instances are given. If not specified, JSONL format is assumed. Currently only the JSONL format is supported.

gcs_source

GcsSource

The Cloud Storage location for the input instances.

DataFormat

The format of the input example instances.

Enums
DATA_FORMAT_UNSPECIFIED Format unspecified, used when unset.
JSONL Examples are stored in JSONL files.

ExamplesOverride

Overrides for example-based explanations.

Fields
neighbor_count

int32

The number of neighbors to return.

crowding_count

int32

The number of neighbors to return that have the same crowding tag.

restrictions[]

ExamplesRestrictionsNamespace

Restrict the resulting nearest neighbors to respect these constraints.

return_embeddings

bool

If true, return the embeddings instead of neighbors.

data_format

DataFormat

The format of the data being provided with each call.

DataFormat

Data format enum.

Enums
DATA_FORMAT_UNSPECIFIED Unspecified format. Must not be used.
INSTANCES Provided data is a set of model inputs.
EMBEDDINGS Provided data is a set of embeddings.

ExamplesRestrictionsNamespace

Restrictions namespace for example-based explanations overrides.

Fields
namespace_name

string

The namespace name.

allow[]

string

The list of allowed tags.

deny[]

string

The list of deny tags.

ExecutableCode

Code generated by the model that is meant to be executed, and the result returned to the model.

Generated when using the [FunctionDeclaration] tool and [FunctionCallingConfig] mode is set to [Mode.CODE].

Fields
language

Language

Required. Programming language of the code.

code

string

Required. The code to be executed.

Language

Supported programming languages for the generated code.

Enums
LANGUAGE_UNSPECIFIED Unspecified language. This value should not be used.
PYTHON Python >= 3.10, with numpy and simpy available.

ExecuteExtensionRequest

Request message for ExtensionExecutionService.ExecuteExtension.

Fields
name

string

Required. Name (identifier) of the extension; Format: projects/{project}/locations/{location}/extensions/{extension}

operation_id

string

Required. The desired ID of the operation to be executed in this extension as defined in ExtensionOperation.operation_id.

operation_params

Struct

Optional. Request parameters that will be used for executing this operation.

The struct should be in the form of a map with the param name as the key and the actual param value as the value. For example, if this operation requires a param "name" to be set to "abc", you can set this to {"name": "abc"}. (A request sketch follows this field list.)

runtime_auth_config

AuthConfig

Optional. Auth config provided at runtime to override the default value in [Extension.manifest.auth_config][]. The AuthConfig.auth_type should match the value in [Extension.manifest.auth_config][].
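
The request sketch referenced above, assuming the generated Python client (google-cloud-aiplatform); the resource name and operation_id are hypothetical:

from google.cloud import aiplatform_v1beta1
from google.protobuf import struct_pb2

# Build operation_params as a Struct: param name -> actual param value.
params = struct_pb2.Struct()
params.update({"name": "abc"})

request = aiplatform_v1beta1.ExecuteExtensionRequest(
    name="projects/my-project/locations/us-central1/extensions/my-extension",
    operation_id="search",  # must match an ExtensionOperation.operation_id
    operation_params=params,
)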

ExecuteExtensionResponse

Response message for ExtensionExecutionService.ExecuteExtension.

Fields
content

string

Response content from the extension. The content should be conformant to the response.content schema in the extension's manifest/OpenAPI spec.

Execution

Instance of a general execution.

Fields
name

string

Output only. The resource name of the Execution.

display_name

string

User provided display name of the Execution. May be up to 128 Unicode characters.

state

State

The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions.

etag

string

An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

The labels with user-defined metadata to organize your Executions.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Execution (System labels are excluded).

create_time

Timestamp

Output only. Timestamp when this Execution was created.

update_time

Timestamp

Output only. Timestamp when this Execution was last updated.

schema_title

string

The title of the schema describing the metadata.

The schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.

schema_version

string

The version of the schema in schema_title to use.

The schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.

metadata

Struct

Properties of the Execution. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.

description

string

Description of the Execution.

State

Describes the state of the Execution.

Enums
STATE_UNSPECIFIED Unspecified Execution state
NEW The Execution is new
RUNNING The Execution is running
COMPLETE The Execution has finished running
FAILED The Execution has failed
CACHED The Execution completed through Cache hit.
CANCELLED The Execution was cancelled.

ExplainRequest

Request message for PredictionService.Explain.

Fields
endpoint

string

Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances[]

Value

Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request, and when it is exceeded the explanation call errors in case of AutoML Models, or, in case of customer created Models, the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.

parameters

Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.

explanation_spec_override

ExplanationSpecOverride

If specified, overrides the explanation_spec of the DeployedModel. Can be used for explaining prediction results with different configurations, such as: - Explaining top-5 predictions results as opposed to top-1; - Increasing path count or step count of the attribution methods to reduce approximate errors; - Using different baselines for explaining the prediction results.

concurrent_explanation_spec_override

map<string, ExplanationSpecOverride>

Optional. This field is the same as the one above, but supports multiple explanations running in parallel. The key can be any string. Each override will be run against the model, then its explanations will be grouped together.

Note: these explanations are run in addition to the default Explanation in the deployed model.

deployed_model_id

string

If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
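
A hedged end-to-end sketch, assuming the generated Python client (google-cloud-aiplatform); the endpoint name and instance payload are hypothetical:

from google.cloud import aiplatform_v1beta1
from google.protobuf import struct_pb2

client = aiplatform_v1beta1.PredictionServiceClient()

# One instance, encoded as a protobuf Value holding a struct.
instance = struct_pb2.Value()
instance.struct_value.update({"feature_a": 1.0})

response = client.explain(
    endpoint="projects/my-project/locations/us-central1/endpoints/1234567890",
    instances=[instance],
)
for explanation in response.explanations:
    print(explanation.attributions)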

ExplainResponse

Response message for PredictionService.Explain.

Fields
explanations[]

Explanation

The explanations of the Model's PredictResponse.predictions.

It has the same number of elements as instances to be explained.

concurrent_explanations

map<string, ConcurrentExplanation>

This field stores the results of the explanations run in parallel with the default explanation strategy/method.

deployed_model_id

string

ID of the Endpoint's DeployedModel that served this explanation.

predictions[]

Value

The predictions that are the output of the predictions call. Same as PredictResponse.predictions.

ConcurrentExplanation

This message is a wrapper grouping Concurrent Explanations.

Fields
explanations[]

Explanation

The explanations of the Model's PredictResponse.predictions.

It has the same number of elements as instances to be explained.

Explanation

Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.

Fields
attributions[]

Attribution

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for rejection decision and not approval, even though the latter might be the positive class.

If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.

neighbors[]

Neighbor

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.

ExplanationMetadata

Metadata describing the Model's input and output for explanation.

Fields
inputs

map<string, InputMetadata>

Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature.

An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in ExplanationMetadata.inputs. The baseline of the empty feature is chosen by Vertex AI.

For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, featureAttributions are keyed by this key (if not grouped with another feature).

For custom images, the key must match with the key in instance.

outputs

map<string, OutputMetadata>

Required. Map from output names to output metadata.

For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters.

For custom images, keys are the name of the output field in the prediction to be explained.

Currently only one key is allowed.

feature_attributions_schema_uri

string

Points to a YAML file stored on Google Cloud Storage describing the format of the feature attributions. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.

latent_space_source

string

Name of the source to generate embeddings for example based explanations.

InputMetadata

Metadata of the input of a feature.

Fields other than InputMetadata.input_baselines are applicable only for Models that are using Vertex AI-provided images for Tensorflow.

Fields
input_baselines[]

Value

Baseline inputs for this feature.

If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in Attribution.feature_attributions.

For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor.

For custom images, the element of the baselines must be in the same format as the feature's input in the instance[]. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.

input_tensor_name

string

Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow.

encoding

Encoding

Defines how the feature is encoded into the input tensor. Defaults to IDENTITY.

modality

string

Modality of the feature. Valid values are: numeric, image. Defaults to numeric.

feature_value_domain

FeatureValueDomain

The domain details of the input feature value. Like min/max, original mean or standard deviation if normalized.

indices_tensor_name

string

Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.

dense_shape_tensor_name

string

Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor.

index_feature_mapping[]

string

A list of feature names for each index in the input tensor. Required when the input InputMetadata.encoding is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, or INDICATOR.

encoded_tensor_name

string

Encoded tensor is a transformation of the input tensor. Must be provided if choosing Integrated Gradients attribution or XRAI attribution and the input tensor is not differentiable.

An encoded tensor is generated if the input tensor is encoded by a lookup table.

encoded_baselines[]

Value

A list of baselines for the encoded tensor.

The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor.

visualization

Visualization

Visualization configurations for image explanation.

group_name

string

Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. If provided, there will be one single attribution generated in Attribution.feature_attributions, keyed by the group name.

Encoding

Defines how a feature is encoded. Defaults to IDENTITY.

Enums
ENCODING_UNSPECIFIED Default value. This is the same as IDENTITY.
IDENTITY The tensor represents one feature.
BAG_OF_FEATURES

The tensor represents a bag of features where each index maps to a feature. InputMetadata.index_feature_mapping must be provided for this encoding. For example:

input = [27, 6.0, 150]
index_feature_mapping = ["age", "height", "weight"]
BAG_OF_FEATURES_SPARSE

The tensor represents a bag of features where each index maps to a feature. Zero values in the tensor indicate that the feature is non-existent. InputMetadata.index_feature_mapping must be provided for this encoding. For example:

input = [2, 0, 5, 0, 1]
index_feature_mapping = ["a", "b", "c", "d", "e"]
INDICATOR

The tensor is a list of binaries representing whether a feature exists or not (1 indicates existence). InputMetadata.index_feature_mapping must be provided for this encoding. For example:

input = [1, 0, 1, 0, 1]
index_feature_mapping = ["a", "b", "c", "d", "e"]
COMBINED_EMBEDDING

The tensor is encoded into a 1-dimensional array represented by an encoded tensor. InputMetadata.encoded_tensor_name must be provided for this encoding. For example:

input = ["This", "is", "a", "test", "."]
encoded = [0.1, 0.2, 0.3, 0.4, 0.5]
CONCAT_EMBEDDING

Select this encoding when the input tensor is encoded into a 2-dimensional array represented by an encoded tensor. InputMetadata.encoded_tensor_name must be provided for this encoding. The first dimension of the encoded tensor's shape is the same as the input tensor's shape. For example:

input = ["This", "is", "a", "test", "."]
encoded = [[0.1, 0.2, 0.3, 0.4, 0.5],
           [0.2, 0.1, 0.4, 0.3, 0.5],
           [0.5, 0.1, 0.3, 0.5, 0.4],
           [0.5, 0.3, 0.1, 0.2, 0.4],
           [0.4, 0.3, 0.2, 0.5, 0.1]]

FeatureValueDomain

Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which the input feature (with mean = 0 and stddev = 1) was obtained.

Fields
min_value

float

The minimum permissible value for this feature.

max_value

float

The maximum permissible value for this feature.

original_mean

float

If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization.

original_stddev

float

If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization.

Visualization

Visualization configurations for image explanation.

Fields
type

Type

Type of the image visualization. Only applicable to Integrated Gradients attribution. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.

polarity

Polarity

Whether to highlight only pixels with positive contributions, only negative ones, or both. Defaults to POSITIVE.

color_map

ColorMap

The color scheme used for the highlighted areas.

Defaults to PINK_GREEN for Integrated Gradients attribution, which shows positive attributions in green and negative in pink.

Defaults to VIRIDIS for XRAI attribution, which highlights the most influential regions in yellow and the least influential in blue.

clip_percent_upperbound

float

Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.

clip_percent_lowerbound

float

Excludes attributions below the specified percentile from the highlighted areas. Defaults to 62.

overlay_type

OverlayType

How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
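
For illustration, a plain-dict sketch of a Visualization config using the fields above (the values shown are illustrative, mostly the documented defaults):

visualization = {
    "type": "OUTLINES",              # regions of attribution, not per-pixel
    "polarity": "POSITIVE",
    "color_map": "PINK_GREEN",       # positives in green, negatives in pink
    "clip_percent_upperbound": 99.9,
    "clip_percent_lowerbound": 62.0,
    "overlay_type": "GRAYSCALE",     # attributions over a grayscale image
}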

ColorMap

The color scheme used for highlighting areas.

Enums
COLOR_MAP_UNSPECIFIED Should not be used.
PINK_GREEN Positive: green. Negative: pink.
VIRIDIS Viridis color map: A perceptually uniform color mapping which is easier to see by those with colorblindness and progresses from yellow to green to blue. Positive: yellow. Negative: blue.
RED Positive: red. Negative: red.
GREEN Positive: green. Negative: green.
RED_GREEN Positive: green. Negative: red.
PINK_WHITE_GREEN PiYG palette.

OverlayType

How the original image is displayed in the visualization.

Enums
OVERLAY_TYPE_UNSPECIFIED Default value. This is the same as NONE.
NONE No overlay.
ORIGINAL The attributions are shown on top of the original image.
GRAYSCALE The attributions are shown on top of a grayscale version of the original image.
MASK_BLACK The attributions are used as a mask to reveal predictive parts of the image and hide the un-predictive parts.

Polarity

Whether to highlight only pixels with positive contributions, only negative ones, or both. Defaults to POSITIVE.

Enums
POLARITY_UNSPECIFIED Default value. This is the same as POSITIVE.
POSITIVE Highlights the pixels/outlines that were most influential to the model's prediction.
NEGATIVE Setting polarity to negative highlights areas that do not lead to the model's current prediction.
BOTH Shows both positive and negative attributions.

Type

Type of the image visualization. Only applicable to Integrated Gradients attribution.

Enums
TYPE_UNSPECIFIED Should not be used.
PIXELS Shows which pixel contributed to the image prediction.
OUTLINES Shows which region contributed to the image prediction by outlining the region.

OutputMetadata

Metadata of the prediction output to be explained.

Fields
output_tensor_name

string

Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow.

Union field display_name_mapping. Defines how to map Attribution.output_index to Attribution.output_display_name.

If neither of the fields are specified, Attribution.output_display_name will not be populated. display_name_mapping can be only one of the following:

index_display_name_mapping

Value

Static mapping between the index and display name.

Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sorts the outputs by their values.

The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The Attribution.output_display_name is populated by locating in the mapping with Attribution.output_index.

display_name_mapping_key

string

Specify a field name in the prediction to look for the display name.

Use this if the prediction contains the display names for the outputs.

The display names in the prediction must have the same shape as the outputs, so that they can be located by Attribution.output_index for a specific output.

ExplanationMetadataOverride

The ExplanationMetadata entries that can be overridden at online explanation time.

Fields
inputs

map<string, InputMetadataOverride>

Required. Overrides the input metadata of the features. The key is the name of the feature to be overridden. The keys specified here must exist in the input metadata to be overridden. If a feature is not specified here, the corresponding feature's input metadata is not overridden.

InputMetadataOverride

The input metadata entries to be overridden.

Fields
input_baselines[]

Value

Baseline inputs for this feature.

This overrides the input_baseline field of the ExplanationMetadata.InputMetadata object of the corresponding feature's input metadata. If it's not specified, the original baselines are not overridden.

ExplanationParameters

Parameters to configure explaining for Model's predictions.

Fields
top_k

int32

If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs. (A configuration sketch follows this message's field list.)

output_indices

ListValue

If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape of the output it's explaining.

If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs.

Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).

Union field method.

method can be only one of the following:

sampled_shapley_attribution

SampledShapleyAttribution

An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.

integrated_gradients_attribution

IntegratedGradientsAttribution

An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365

xrai_attribution

XraiAttribution

An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825

XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.

examples

Examples

Example-based explanations that return the nearest neighbors from the provided dataset.
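
The configuration sketch referenced under top_k, assuming the generated Python client (google-cloud-aiplatform); the path_count value is illustrative:

from google.cloud import aiplatform_v1beta1

# Request Sampled Shapley attributions for the top 3 predicted outputs.
parameters = aiplatform_v1beta1.ExplanationParameters(
    top_k=3,
    sampled_shapley_attribution=aiplatform_v1beta1.SampledShapleyAttribution(
        path_count=10,  # number of feature permutations to sample
    ),
)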

ExplanationSpec

Specification of Model explanation.

Fields
parameters

ExplanationParameters

Required. Parameters that configure explaining of the Model's predictions.

metadata

ExplanationMetadata

Optional. Metadata describing the Model's input and output for explanation.

ExplanationSpecOverride

The ExplanationSpec entries that can be overridden at online explanation time.

Fields
parameters

ExplanationParameters

The parameters to be overridden. Note that the attribution method cannot be changed. If not specified, no parameter is overridden.

metadata

ExplanationMetadataOverride

The metadata to be overridden. If not specified, no metadata is overridden.

examples_override

ExamplesOverride

The example-based explanations parameter overrides.

ExportDataConfig

Describes what part of the Dataset is to be exported, the destination of the export and how to export.

Fields
annotations_filter

string

An expression for filtering what part of the Dataset is to be exported. Only Annotations that match this filter will be exported. The filter syntax is the same as in ListAnnotations.

Union field destination. The destination of the output. destination can be only one of the following:
gcs_destination

GcsDestination

The Google Cloud Storage location where the output is to be written to. In the given directory a new directory will be created with name: export-data-<dataset-display-name>-<timestamp-of-export-call> where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All export output will be written into that directory. Inside that directory, annotations with the same schema will be grouped into sub directories which are named with the corresponding annotations' schema title. Inside these sub directories, a schema.yaml will be created to describe the output format.

Union field split. Instructions for how the export data should be split between the training, validation and test sets. split can be only one of the following:
fraction_split

ExportFractionSplit

Split based on fractions defining the size of each set.

ExportDataOperationMetadata

Runtime operation information for DatasetService.ExportData.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

gcs_output_directory

string

A Google Cloud Storage directory whose path ends with '/'. The exported data is stored in the directory.

ExportDataRequest

Request message for DatasetService.ExportData.

Fields
name

string

Required. The name of the Dataset resource. Format: projects/{project}/locations/{location}/datasets/{dataset}

export_config

ExportDataConfig

Required. The desired output location.

ExportDataResponse

Response message for DatasetService.ExportData.

Fields
exported_files[]

string

All of the files that are exported in this export operation. For custom code training export, only three (training, validation and test) Cloud Storage paths in wildcard format are populated (for example, gs://.../training-*).

ExportFeatureValuesOperationMetadata

Details of operations that export Feature values.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Featurestore export Feature values.

ExportFeatureValuesRequest

Request message for FeaturestoreService.ExportFeatureValues.

Fields
entity_type

string

Required. The resource name of the EntityType from which to export Feature values. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}

destination

FeatureValueDestination

Required. Specifies destination location and format.

feature_selector

FeatureSelector

Required. Selects Features to export values of.

settings[]

DestinationFeatureSetting

Per-Feature export settings.

Union field mode. Required. The mode in which Feature values are exported. mode can be only one of the following:
snapshot_export

SnapshotExport

Exports the latest Feature values of all entities of the EntityType within a time range.

full_export

FullExport

Exports all historical values of all entities of the EntityType within a time range.

FullExport

Describes exporting all historical Feature values of all entities of the EntityType between [start_time, end_time].

Fields
start_time

Timestamp

Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision.

end_time

Timestamp

Exports Feature values as of this timestamp. If not set, retrieve values as of now. Timestamp, if present, must not have higher than millisecond precision.

SnapshotExport

Describes exporting the latest Feature values of all entities of the EntityType between [start_time, snapshot_time].

Fields
snapshot_time

Timestamp

Exports Feature values as of this timestamp. If not set, retrieve values as of now. Timestamp, if present, must not have higher than millisecond precision.

start_time

Timestamp

Excludes Feature values with feature generation timestamp before this timestamp. If not set, retrieve oldest values kept in Feature Store. Timestamp, if present, must not have higher than millisecond precision.

ExportFeatureValuesResponse

This type has no fields.

Response message for FeaturestoreService.ExportFeatureValues.

ExportFractionSplit

Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction, and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of data is used for training, 10% for validation, and 10% for test.

Fields
training_fraction

double

The fraction of the input data that is to be used to train the Model.

validation_fraction

double

The fraction of the input data that is to be used to validate the Model.

test_fraction

double

The fraction of the input data that is to be used to evaluate the Model.
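
A hedged sketch of an 80/10/10 split, assuming the generated Python client (google-cloud-aiplatform):

from google.cloud import aiplatform_v1beta1

# Fractions may sum to at most 1; any remainder is assigned by Vertex AI.
split = aiplatform_v1beta1.ExportFractionSplit(
    training_fraction=0.8,
    validation_fraction=0.1,
    test_fraction=0.1,
)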

ExportModelOperationMetadata

Details of ModelService.ExportModel operation.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

output_info

OutputInfo

Output only. Information further describing the output of this Model export.

OutputInfo

Further describes the output of the ExportModel. Supplements ExportModelRequest.OutputConfig.

Fields
artifact_output_uri

string

Output only. If the Model artifact is being exported to Google Cloud Storage, this is the full path of the directory created, into which the Model files are written.

image_output_uri

string

Output only. If the Model image is being exported to Google Container Registry or Artifact Registry this is the full path of the image created.

ExportModelRequest

Request message for ModelService.ExportModel.

Fields
name

string

Required. The resource name of the Model to export. The resource name may contain a version ID or version alias to specify the version. If no version is specified, the default version will be exported.

output_config

OutputConfig

Required. The desired output location and configuration.

OutputConfig

Output configuration for the Model export.

Fields
export_format_id

string

The ID of the format in which the Model must be exported. Each Model lists the export formats it supports. If no value is provided here, then the first from the list of the Model's supported formats is used by default.

artifact_destination

GcsDestination

The Cloud Storage location where the Model artifact is to be written to. Under the directory given as the destination a new one with name "model-export-<model-display-name>-<timestamp-of-export-call>", where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format, will be created. Inside, the Model and any of its supporting files will be written. This field should only be set when the exportableContent field of the [Model.supported_export_formats] object contains ARTIFACT.

image_destination

ContainerRegistryDestination

The Google Container Registry or Artifact Registry uri where the Model container image will be copied to. This field should only be set when the exportableContent field of the [Model.supported_export_formats] object contains IMAGE.

ExportModelResponse

This type has no fields.

Response message of ModelService.ExportModel operation.

ExportPublisherModelResponse

Response message for [ModelGardenService.ExportPublisherModel][].

Fields
publisher_model

string

The name of the PublisherModel resource. Format: publishers/{publisher}/models/{publisher_model}@{version_id}

destination_uri

string

The destination uri of the model weights.

ExportTensorboardTimeSeriesDataRequest

Request message for TensorboardService.ExportTensorboardTimeSeriesData.

Fields
tensorboard_time_series

string

Required. The resource name of the TensorboardTimeSeries to export data from. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}

filter

string

Exports the TensorboardTimeSeries' data that match the filter expression.

page_size

int32

The maximum number of data points to return per page. The default page_size is 1000. Values must be between 1 and 10000. Values above 10000 are coerced to 10000.

page_token

string

A page token, received from a previous ExportTensorboardTimeSeriesData call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to ExportTensorboardTimeSeriesData must match the call that provided the page token.

order_by

string

Field to use to sort the TensorboardTimeSeries' data. By default, TensorboardTimeSeries' data is returned in a pseudo random order.

ExportTensorboardTimeSeriesDataResponse

Response message for TensorboardService.ExportTensorboardTimeSeriesData.

Fields
time_series_data_points[]

TimeSeriesDataPoint

The returned time series data points.

next_page_token

string

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.
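
A hedged pagination sketch, assuming the generated Python client (google-cloud-aiplatform); the resource name is hypothetical. The returned pager follows next_page_token automatically:

from google.cloud import aiplatform_v1beta1

client = aiplatform_v1beta1.TensorboardServiceClient()
pager = client.export_tensorboard_time_series_data(
    tensorboard_time_series=(
        "projects/my-project/locations/us-central1/tensorboards/123/"
        "experiments/my-exp/runs/my-run/timeSeries/456"
    ),
)
for data_point in pager:  # iterates across pages via page_token
    print(data_point)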

Extension

Extensions are tools for large language models to access external data, run computations, etc.

Fields
name

string

Identifier. The resource name of the Extension.

display_name

string

Required. The display name of the Extension. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

Optional. The description of the Extension.

create_time

Timestamp

Output only. Timestamp when this Extension was created.

update_time

Timestamp

Output only. Timestamp when this Extension was most recently updated.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

manifest

ExtensionManifest

Required. Manifest of the Extension.

extension_operations[]

ExtensionOperation

Output only. Supported operations.

runtime_config

RuntimeConfig

Optional. Runtime config controlling the runtime behavior of this Extension.

tool_use_examples[]

ToolUseExample

Optional. Examples to illustrate the usage of the extension as a tool.

private_service_connect_config

ExtensionPrivateServiceConnectConfig

Optional. The PrivateServiceConnect config for the extension. If specified, the service endpoints associated with the Extension should be registered with private network access in the provided Service Directory.

If the service contains more than one endpoint with a network, the service will arbitrarily choose one of the endpoints to use for extension execution.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

ExtensionManifest

Manifest spec of an Extension needed for runtime execution.

Fields
name

string

Required. Extension name shown to the LLM. The name can be up to 128 characters long.

description

string

Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. e.g., if the extension is a data store, you can let the LLM know what data it contains.

api_spec

ApiSpec

Required. Immutable. The API specification shown to the LLM.

auth_config

AuthConfig

Required. Immutable. Type of auth supported by this extension.

ApiSpec

The API specification shown to the LLM.

Fields

Union field api_spec.

api_spec can be only one of the following:

open_api_yaml

string

The API spec in OpenAPI standard and YAML format.

open_api_gcs_uri

string

Cloud Storage URI pointing to the OpenAPI spec.

ExtensionOperation

Operation of an extension.

Fields
operation_id

string

Operation ID that uniquely identifies the operation within the extension. See: "Operation Object" in https://swagger.io/specification/.

This field is parsed from the OpenAPI spec. For HTTP extensions, if it does not exist in the spec, we will generate one from the HTTP method and path.

function_declaration

FunctionDeclaration

Output only. Structured representation of a function declaration as defined by the OpenAPI Spec.

ExtensionPrivateServiceConnectConfig

PrivateExtensionConfig configuration for the extension.

Fields
service_directory

string

Required. The Service Directory resource name in which the service endpoints associated to the extension are registered. Format: projects/{project_id}/locations/{location_id}/namespaces/{namespace_id}/services/{service_id}

Fact

The fact used in grounding.

Fields
query

string

Query that is used to retrieve this fact.

title

string

If present, it refers to the title of this fact.

uri

string

If present, this uri links to the source of the fact.

summary

string

If present, the summary/snippet of the fact.

vector_distance
(deprecated)

double

If present, the distance between the query vector and this fact vector.

score

double

If present, the score can be either the distance or the similarity between the query and the fact, depending on the underlying Vector DB and the selected metric type; its range depends on the metric type.

For example, if the metric type is COSINE_DISTANCE, it represents the distance between the query and the fact. The larger the distance, the less relevant the fact is to the query. The range is [0, 2], where 0 means the most relevant and 2 means the least relevant.

FasterDeploymentConfig

Configuration for faster model deployment.

Fields
fast_tryout_enabled

bool

If true, enable fast tryout feature for this deployed model.

Feature

Feature Metadata information. For example, color is a feature that describes an apple.

Fields
name

string

Immutable. Name of the Feature. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}/features/{feature} or projects/{project}/locations/{location}/featureGroups/{feature_group}/features/{feature}

The last part, feature, is assigned by the client. The feature can be up to 64 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscore (_), and ASCII digits 0-9, and must start with a letter. The value will be unique given an entity type.

description

string

Description of the Feature.

value_type

ValueType

Immutable. Only applicable for Vertex AI Feature Store (Legacy). Type of Feature value.

create_time

Timestamp

Output only. Only applicable for Vertex AI Feature Store (Legacy). Timestamp when this Feature was created.

update_time

Timestamp

Output only. Only applicable for Vertex AI Feature Store (Legacy). Timestamp when this Feature was most recently updated.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize your Features.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one Feature (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

monitoring_config
(deprecated)

FeaturestoreMonitoringConfig

Optional. Only applicable for Vertex AI Feature Store (Legacy). Deprecated: the custom monitoring configuration for this Feature. If not set, the monitoring_config defined for the EntityType this Feature belongs to is used. Only Features with type (Feature.ValueType) BOOL, STRING, DOUBLE or INT64 can enable monitoring.

If this is populated with [FeaturestoreMonitoringConfig.disabled][] = true, snapshot analysis monitoring is disabled; if [FeaturestoreMonitoringConfig.monitoring_interval][] is specified, snapshot analysis monitoring is enabled. Otherwise, the snapshot analysis monitoring config is the same as that of the EntityType this Feature belongs to.

disable_monitoring

bool

Optional. Only applicable for Vertex AI Feature Store (Legacy). If not set, use the monitoring_config defined for the EntityType this Feature belongs to. Only Features with type (Feature.ValueType) BOOL, STRING, DOUBLE or INT64 can enable monitoring.

If set to true, all types of data monitoring are disabled despite the config on EntityType.

monitoring_stats[]

FeatureStatsAnomaly

Output only. Only applicable for Vertex AI Feature Store (Legacy). A list of historical SnapshotAnalysis stats requested by user, sorted by FeatureStatsAnomaly.start_time descending.

monitoring_stats_anomalies[]

MonitoringStatsAnomaly

Output only. Only applicable for Vertex AI Feature Store (Legacy). The list of historical stats and anomalies with specified objectives.

feature_stats_and_anomaly[]

FeatureStatsAndAnomaly

Output only. Only applicable for Vertex AI Feature Store. The list of historical stats and anomalies.

version_column_name

string

Only applicable for Vertex AI Feature Store. The name of the BigQuery Table/View column hosting data for this version. If no value is provided, feature_id is used.

point_of_contact

string

Entity responsible for maintaining this feature. Can be comma separated list of email addresses or URIs.

MonitoringStatsAnomaly

A list of historical SnapshotAnalysis or ImportFeaturesAnalysis stats requested by user, sorted by FeatureStatsAnomaly.start_time descending.

Fields
objective

Objective

Output only. The objective for these stats.

feature_stats_anomaly

FeatureStatsAnomaly

Output only. The stats and anomalies generated at a specific timestamp.

Objective

If the objective in the request is both Import Feature Analysis and Snapshot Analysis, this objective could be one of them. Otherwise, this objective should be the same as the objective in the request.

Enums
OBJECTIVE_UNSPECIFIED If it's OBJECTIVE_UNSPECIFIED, monitoring_stats will be empty.
IMPORT_FEATURE_ANALYSIS Stats are generated by Import Feature Analysis.
SNAPSHOT_ANALYSIS Stats are generated by Snapshot Analysis.

ValueType

Only applicable for Vertex AI Feature Store (Legacy). An enum representing the value type of a feature.

Enums
VALUE_TYPE_UNSPECIFIED The value type is unspecified.
BOOL Used for Feature that is a boolean.
BOOL_ARRAY Used for Feature that is a list of boolean.
DOUBLE Used for Feature that is double.
DOUBLE_ARRAY Used for Feature that is a list of double.
INT64 Used for Feature that is INT64.
INT64_ARRAY Used for Feature that is a list of INT64.
STRING Used for Feature that is string.
STRING_ARRAY Used for Feature that is a list of String.
BYTES Used for Feature that is bytes.
STRUCT Used for Feature that is struct.

FeatureGroup

Vertex AI Feature Group.

Fields
name

string

Identifier. Name of the FeatureGroup. Format: projects/{project}/locations/{location}/featureGroups/{featureGroup}

create_time

Timestamp

Output only. Timestamp when this FeatureGroup was created.

update_time

Timestamp

Output only. Timestamp when this FeatureGroup was last updated.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize your FeatureGroup.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureGroup (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

description

string

Optional. Description of the FeatureGroup.

service_agent_type

ServiceAgentType

Optional. Service agent type used during jobs under a FeatureGroup. By default, the Vertex AI Service Agent is used. When using an IAM Policy to isolate this FeatureGroup within a project, a separate service account should be provisioned by setting this field to SERVICE_AGENT_TYPE_FEATURE_GROUP. This will generate a separate service account to access the BigQuery source table.

service_account_email

string

Output only. A Service Account unique to this FeatureGroup. The role bigquery.dataViewer should be granted to this service account to allow Vertex AI Feature Store to access source data while running jobs under this FeatureGroup.

Union field source.

source can be only one of the following:

big_query

BigQuery

Indicates that features for this group come from BigQuery Table/View. By default treats the source as a sparse time series source. The BigQuery source table or view must have at least one entity ID column and a column named feature_timestamp.

BigQuery

Input source type for BigQuery Tables and Views.

Fields
big_query_source

BigQuerySource

Required. Immutable. The BigQuery source URI that points to either a BigQuery Table or View.

entity_id_columns[]

string

Optional. Columns to construct entity_id / row keys. If not provided, defaults to entity_id.

static_data_source

bool

Optional. Set if the data source is not a time-series.

time_series

TimeSeries

Optional. If the source is a time-series source, this can be set to control how downstream sources (e.g., FeatureView) will treat time-series sources. If not set, the source is treated as a time-series source with feature_timestamp as the timestamp column and no scan boundary.

dense

bool

Optional. If set, all feature values will be fetched from a single row per unique entity ID, including nulls. If not set, all rows for each unique entity ID are collapsed into a single row containing any non-null values that are present; if no non-null values are present, null is synced. For example, if the source has schema (entity_id, feature_timestamp, f0, f1) and the rows (e1, 2020-01-01T10:00:00.123Z, 10, 15) and (e1, 2020-02-01T10:00:00.123Z, 20, null): if dense is set, (e1, 20, null) is synced to online stores; if dense is not set, (e1, 20, 15) is synced to online stores.
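
For illustration, a minimal Python sketch of the collapse rule just described (not the service implementation):

# Rows are (entity_id, feature_timestamp, f0, f1), newest first.
rows = [
    ("e1", "2020-02-01T10:00:00.123Z", 20, None),
    ("e1", "2020-01-01T10:00:00.123Z", 10, 15),
]

def synced_value(rows, dense, column):
    if dense:
        return rows[0][column]   # single latest row, nulls included
    for row in rows:             # latest non-null value, else null
        if row[column] is not None:
            return row[column]
    return None

print(synced_value(rows, dense=True, column=3))   # None -> syncs (e1, 20, null)
print(synced_value(rows, dense=False, column=3))  # 15   -> syncs (e1, 20, 15)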

TimeSeries

Fields
timestamp_column

string

Optional. Column hosting timestamp values for a time-series source. Will be used to determine the latest feature_values for each entity. If not provided, a column named feature_timestamp of type TIMESTAMP will be used.

ServiceAgentType

Service agent type used during jobs under a FeatureGroup.

Enums
SERVICE_AGENT_TYPE_UNSPECIFIED By default, the project-level Vertex AI Service Agent is enabled.
SERVICE_AGENT_TYPE_PROJECT Specifies the project-level Vertex AI Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
SERVICE_AGENT_TYPE_FEATURE_GROUP Enable a FeatureGroup service account to be created by Vertex AI and output in the field service_account_email. This service account will be used to read from the source BigQuery table during jobs under a FeatureGroup.

FeatureMonitor

Vertex AI Feature Monitor.

Fields
name

string

Identifier. Name of the FeatureMonitor. Format: projects/{project}/locations/{location}/featureGroups/{featureGroup}/featureMonitors/{featureMonitor}

create_time

Timestamp

Output only. Timestamp when this FeatureMonitor was created.

update_time

Timestamp

Output only. Timestamp when this FeatureMonitor was last updated.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize your FeatureMonitor.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureMonitor (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

description

string

Optional. Description of the FeatureMonitor.

schedule_config

ScheduleConfig

Required. Schedule config for the FeatureMonitor.

feature_selection_config

FeatureSelectionConfig

Required. Feature selection config for the FeatureMonitor.

FeatureMonitorJob

Vertex AI Feature Monitor Job.

Fields
name

string

Identifier. Name of the FeatureMonitorJob. Format: projects/{project}/locations/{location}/featureGroups/{feature_group}/featureMonitors/{feature_monitor}/featureMonitorJobs/{feature_monitor_job}.

create_time

Timestamp

Output only. Timestamp when this FeatureMonitorJob was created. Creation of a FeatureMonitorJob means that the job is pending / waiting for sufficient resources but may not have started running yet.

final_status

Status

Output only. Final status of the FeatureMonitorJob.

job_summary

JobSummary

Output only. Summary from the FeatureMonitorJob.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize your FeatureMonitorJob.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureMonitor (System labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

description

string

Optional. Description of the FeatureMonitorJob.

drift_base_feature_monitor_job_id

int64

Output only. The ID of the FeatureMonitorJob against which the drift is calculated.

drift_base_snapshot_time

Timestamp

Output only. The data snapshot time against which the drift is calculated.

feature_selection_config

FeatureSelectionConfig

Output only. Feature selection config used when creating FeatureMonitorJob.

trigger_type

FeatureMonitorJobTrigger

Output only. Trigger type of the Feature Monitor Job.

FeatureMonitorJobTrigger

Choices of the trigger type.

Enums
FEATURE_MONITOR_JOB_TRIGGER_UNSPECIFIED Trigger type unspecified.
FEATURE_MONITOR_JOB_TRIGGER_PERIODIC Triggered by periodic schedule.
FEATURE_MONITOR_JOB_TRIGGER_ON_DEMAND Triggered on demand by CreateFeatureMonitorJob request.

JobSummary

Summary from the FeatureMonitorJob.

Fields
total_slot_ms

int64

Output only. BigQuery slot milliseconds consumed.

feature_stats_and_anomalies[]

FeatureStatsAndAnomaly

Output only. Features and their stats and anomalies.

FeatureNoiseSigma

Noise sigma by features. Noise sigma represents the standard deviation of the Gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients.

Fields
noise_sigma[]

NoiseSigmaForFeature

Noise sigma per feature. No noise is added to features that are not set.

NoiseSigmaForFeature

Noise sigma for a single feature.

Fields
name

string

The name of the input feature for which noise sigma is provided. The features are defined in explanation metadata inputs.

sigma

float

This represents the standard deviation of the Gaussian kernel that will be used to add noise to the feature prior to computing gradients. Similar to noise_sigma but represents the noise added to the current feature. Defaults to 0.1.

FeatureOnlineStore

Vertex AI Feature Online Store provides a centralized repository for serving ML features and embedding indexes at low latency. The Feature Online Store is a top-level container.

Fields
name

string

Identifier. Name of the FeatureOnlineStore. Format: projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}

create_time

Timestamp

Output only. Timestamp when this FeatureOnlineStore was created.

update_time

Timestamp

Output only. Timestamp when this FeatureOnlineStore was last updated.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize your FeatureOnlineStore.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore (system labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

state

State

Output only. State of the featureOnlineStore.

dedicated_serving_endpoint

DedicatedServingEndpoint

Optional. The dedicated serving endpoint for this FeatureOnlineStore, which is different from the common Vertex service endpoint.

embedding_management
(deprecated)

EmbeddingManagement

Optional. Deprecated: This field is no longer needed; embedding management is automatically enabled when the Optimized storage type is specified.

encryption_spec

EncryptionSpec

Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

Union field storage_type.

storage_type can be only one of the following:

bigtable

Bigtable

Contains settings for the Cloud Bigtable instance that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore.

optimized

Optimized

Contains settings for the Optimized store that will be created to serve featureValues for all FeatureViews under this FeatureOnlineStore. When choosing the Optimized storage type, set PrivateServiceConnectConfig.enable_private_service_connect to use a private endpoint; otherwise a public endpoint is used by default.

Bigtable

Fields
auto_scaling

AutoScaling

Required. Autoscaling config applied to Bigtable Instance.

AutoScaling

Fields
min_node_count

int32

Required. The minimum number of nodes to scale down to. Must be greater than or equal to 1.

max_node_count

int32

Required. The maximum number of nodes to scale up to. Must be greater than or equal to min_node_count, and less than or equal to 10 times min_node_count.

cpu_utilization_target

int32

Optional. A percentage of the cluster's CPU capacity. Can be from 10% to 80%. When a cluster's CPU utilization exceeds the target that you have set, Bigtable immediately adds nodes to the cluster. When CPU utilization is substantially lower than the target, Bigtable removes nodes. If not set, defaults to 50%.
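
As a concrete illustration of how the Bigtable and AutoScaling messages compose, the following is a minimal sketch using the Python GAPIC client (google.cloud.aiplatform_v1beta1); the project, location, and store ID are hypothetical placeholders, not values from this reference.

from google.cloud import aiplatform_v1beta1 as aip

# Bigtable-backed online store that autoscales between 1 and 3 nodes,
# targeting the documented 50% CPU utilization default.
store = aip.FeatureOnlineStore(
    bigtable=aip.FeatureOnlineStore.Bigtable(
        auto_scaling=aip.FeatureOnlineStore.Bigtable.AutoScaling(
            min_node_count=1,
            max_node_count=3,           # must be <= 10x min_node_count
            cpu_utilization_target=50,  # valid range is 10-80
        )
    ),
)

admin = aip.FeatureOnlineStoreAdminServiceClient()
operation = admin.create_feature_online_store(
    request=aip.CreateFeatureOnlineStoreRequest(
        parent="projects/my-project/locations/us-central1",  # hypothetical
        feature_online_store_id="my_store",                  # hypothetical
        feature_online_store=store,
    )
)
print(operation.result().name)  # long-running operation; blocks until done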

DedicatedServingEndpoint

The dedicated serving endpoint for this FeatureOnlineStore. This only needs to be set when the Optimized storage type is chosen; a public endpoint is provisioned by default.

Fields
public_endpoint_domain_name

string

Output only. This field will be populated with the domain name to use for this FeatureOnlineStore.

private_service_connect_config

PrivateServiceConnectConfig

Optional. Private service connect config. The private service connection is currently available only for the Optimized storage type, not for embedding management. If PrivateServiceConnectConfig.enable_private_service_connect is set to true, customers will use a private service connection to send requests; otherwise the connection uses the public endpoint.

service_attachment

string

Output only. The name of the service attachment resource. Populated if private service connect is enabled and after FeatureViewSync is created.

EmbeddingManagement

Deprecated: This sub-message is no longer needed; embedding management is automatically enabled when the Optimized storage type is specified. Contains settings for embedding management.

Fields
enabled

bool

Optional. Immutable. Whether to enable embedding management in this FeatureOnlineStore. It's immutable after creation to ensure the FeatureOnlineStore's availability.

Optimized

Optimized storage type.

This type has no fields.

State

Possible states a featureOnlineStore can have.

Enums
STATE_UNSPECIFIED Default value. This value is unused.
STABLE State when the featureOnlineStore configuration is not being updated and the fields reflect the current configuration of the featureOnlineStore. The featureOnlineStore is usable in this state.
UPDATING The state of the featureOnlineStore configuration when it is being updated. During an update, the fields reflect either the original configuration or the updated configuration of the featureOnlineStore. The featureOnlineStore is still usable in this state.

FeatureSelectionConfig

Feature selection configuration for the FeatureMonitor.

Fields
feature_configs[]

FeatureConfig

Optional. A list of features to be monitored and each feature's drift threshold.

FeatureConfig

Feature configuration.

Fields
feature_id

string

Required. The ID of the feature resource. Final component of the Feature's resource name.

drift_threshold

double

Optional. Drift threshold. If the calculated difference from the baseline data is larger than this threshold, the feature is considered to have drifted. If not present, the threshold defaults to 0.3.

FeatureSelector

Selector for Features of an EntityType.

Fields
id_matcher

IdMatcher

Required. Matches Features based on ID.

FeatureStatsAndAnomaly

Stats and Anomaly generated by FeatureMonitorJobs. Anomaly only includes Drift.

Fields
feature_id

string

Feature Id.

feature_stats

Value

Feature stats, e.g. histogram buckets, in the format of tensorflow.metadata.v0.DatasetFeatureStatistics.

distribution_deviation

double

Deviation from the current stats to the baseline stats. 1. For categorical features, the distribution distance is calculated by L-infinity norm. 2. For numerical features, the distribution distance is calculated by Jensen–Shannon divergence.

drift_detection_threshold

double

This is the threshold used when detecting drift, which is set in FeatureMonitor.FeatureSelectionConfig.FeatureConfig.drift_threshold.

drift_detected

bool

If set to true, indicates that drift was detected when comparing the current stats with the baseline stats.

stats_time

Timestamp

The timestamp at which the snapshot of feature values was taken to generate the stats.

feature_monitor_job_id

int64

The ID of the FeatureMonitorJob that generated this FeatureStatsAndAnomaly.

feature_monitor_id

string

The ID of the FeatureMonitor according to which this FeatureStatsAndAnomaly was generated.

FeatureStatsAndAnomalySpec

Defines how to select FeatureStatsAndAnomaly to be populated in response. If set, retrieves FeatureStatsAndAnomaly generated by FeatureMonitors based on this spec.

Fields
stats_time_range

Interval

Optional. If set, return all stats generated between [start_time, end_time). If latest_stats_count is set, return the most recent count of stats within the stats_time_range.

latest_stats_count

int32

Optional. If set, returns the most recent count of stats. Valid values are in [0, 100]. If stats_time_range is also set, returns the most recent count of stats within the stats_time_range.

FeatureStatsAnomaly

Stats and Anomaly generated at a specific timestamp for a specific Feature. The start_time and end_time are used to define the time range of the dataset that the current stats belong to, e.g. prediction traffic is bucketed into prediction datasets by time window. If the Dataset is not defined by time window, start_time = end_time. The timestamp of the stats and anomalies always refers to end_time. Raw stats and anomalies are stored in stats_uri or anomaly_uri in the tensorflow-defined protos. The field data_stats contains almost identical information to the raw stats, in a Vertex AI defined proto, for the UI to display.

Fields
score

double

Feature importance score, only populated when cross-feature monitoring is enabled. For now only used to represent feature attribution score within range [0, 1] for ModelDeploymentMonitoringObjectiveType.FEATURE_ATTRIBUTION_SKEW and ModelDeploymentMonitoringObjectiveType.FEATURE_ATTRIBUTION_DRIFT.

stats_uri

string

Path of the stats file for current feature values in a Cloud Storage bucket. Format: gs://<bucket>/<feature_name>/stats. Example: gs://monitoring_bucket/feature_name/stats. Stats are stored in binary format with the Protobuf message tensorflow.metadata.v0.FeatureNameStatistics.

anomaly_uri

string

Path of the anomaly file for current feature values in a Cloud Storage bucket. Format: gs://<bucket>/<feature_name>/anomalies. Example: gs://monitoring_bucket/feature_name/anomalies. Anomalies are stored in binary format with the Protobuf message tensorflow.metadata.v0.AnomalyInfo.

distribution_deviation

double

Deviation from the current stats to the baseline stats. 1. For categorical features, the distribution distance is calculated by L-infinity norm. 2. For numerical features, the distribution distance is calculated by Jensen–Shannon divergence.

anomaly_detection_threshold

double

This is the threshold used when detecting anomalies. The threshold can be changed by user, so this one might be different from ThresholdConfig.value.

start_time

Timestamp

The start timestamp of the window where stats were generated. For objectives where the time window doesn't make sense (e.g. Featurestore Snapshot Monitoring), start_time is only used to indicate the monitoring intervals, so it always equals (end_time - monitoring_interval).

end_time

Timestamp

The end timestamp of the window where stats were generated. For objectives where the time window doesn't make sense (e.g. Featurestore Snapshot Monitoring), end_time indicates the timestamp of the data used to generate the stats (e.g. the timestamp at which the snapshot of feature values was taken).

FeatureValue

Value for a feature.

Fields
metadata

Metadata

Metadata of feature value.

Union field value. Value for the feature. value can be only one of the following:
bool_value

bool

Bool type feature value.

double_value

double

Double type feature value.

int64_value

int64

Int64 feature value.

string_value

string

String feature value.

bool_array_value

BoolArray

A list of bool type feature value.

double_array_value

DoubleArray

A list of double type feature value.

int64_array_value

Int64Array

A list of int64 type feature value.

string_array_value

StringArray

A list of string type feature value.

bytes_value

bytes

Bytes feature value.

struct_value

StructValue

A struct type feature value.
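
To show how the value oneof behaves in practice, here is a small sketch using the Python GAPIC types (google.cloud.aiplatform_v1beta1); the example values are hypothetical. Exactly one member of the oneof can be set at a time, and assigning a different member clears the previous one.

from google.cloud import aiplatform_v1beta1 as aip

# Each FeatureValue sets exactly one member of the `value` oneof.
score = aip.FeatureValue(double_value=0.75)
category = aip.FeatureValue(string_value="electronics")
tags = aip.FeatureValue(
    string_array_value=aip.StringArray(values=["new", "sale"])
)

# Assigning a different oneof member clears the previous one.
v = aip.FeatureValue(int64_value=7)
v.bool_value = True  # int64_value is now cleared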

Metadata

Metadata of feature value.

Fields
generate_time

Timestamp

Feature generation timestamp. Typically, it is provided by the user at feature ingestion time. If not, the feature store will use the system timestamp when the data is ingested into the feature store. For streaming ingestion, the time, aligned by days, must be no older than five years (1825 days) and no later than one year (366 days) in the future.

FeatureValueDestination

A destination location for Feature values and format.

Fields

Union field destination.

destination can be only one of the following:

bigquery_destination

BigQueryDestination

Output in BigQuery format. BigQueryDestination.output_uri in FeatureValueDestination.bigquery_destination must refer to a table.

tfrecord_destination

TFRecordDestination

Output in TFRecord format.

Below is the mapping from Feature value types in Featurestore to Feature value types in TFRecord:

Value type in Featurestore     | Value type in TFRecord
DOUBLE, DOUBLE_ARRAY           | FLOAT_LIST
INT64, INT64_ARRAY             | INT64_LIST
STRING, STRING_ARRAY, BYTES    | BYTES_LIST
BOOL, BOOL_ARRAY (true, false) | BYTES_LIST (true -> byte_string("true"), false -> byte_string("false"))

csv_destination

CsvDestination

Output in CSV format. Array Feature value types are not allowed in CSV format.

FeatureValueList

Container for list of values.

Fields
values[]

FeatureValue

A list of feature values. All of them should be the same data type.

FeatureView

FeatureView is a representation of the values that the FeatureOnlineStore will serve based on its syncConfig.

Fields
name

string

Identifier. Name of the FeatureView. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}

create_time

Timestamp

Output only. Timestamp when this FeatureView was created.

update_time

Timestamp

Output only. Timestamp when this FeatureView was last updated.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize your FeatureViews.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureView (system labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

sync_config

SyncConfig

Configures when data is to be synced/updated for this FeatureView. At the end of the sync the latest featureValues for each entityId of this FeatureView are made ready for online serving.

vector_search_config
(deprecated)

VectorSearchConfig

Optional. Deprecated: please use FeatureView.index_config instead.

index_config

IndexConfig

Optional. Configuration for index preparation for vector search. It contains the required configurations to create an index from source data, so that approximate nearest neighbor (a.k.a. ANN) search can be performed during online serving.

optimized_config

OptimizedConfig

Optional. Configuration for FeatureView created under Optimized FeatureOnlineStore.

service_agent_type

ServiceAgentType

Optional. Service agent type used during data sync. By default, the Vertex AI Service Agent is used. When using an IAM Policy to isolate this FeatureView within a project, a separate service account should be provisioned by setting this field to SERVICE_AGENT_TYPE_FEATURE_VIEW. This will generate a separate service account to access the BigQuery source table.

service_account_email

string

Output only. A Service Account unique to this FeatureView. The role bigquery.dataViewer should be granted to this service account to allow Vertex AI Feature Store to sync data to the online store.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

Union field source.

source can be only one of the following:

big_query_source

BigQuerySource

Optional. Configures how data is supposed to be extracted from a BigQuery source to be loaded onto the FeatureOnlineStore.

feature_registry_source

FeatureRegistrySource

Optional. Configures the features from a Feature Registry source that need to be loaded onto the FeatureOnlineStore.

vertex_rag_source

VertexRagSource

Optional. The Vertex RAG Source that the FeatureView is linked to.

BigQuerySource

Fields
uri

string

Required. The BigQuery view URI that will be materialized on each sync trigger based on FeatureView.SyncConfig.

entity_id_columns[]

string

Required. Columns to construct entity_id / row keys.
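
Putting the pieces together, here is a hedged sketch of a FeatureView backed by a BigQuery view and synced on a daily cron (SyncConfig is documented later in this section); the URIs, column names, and IDs are hypothetical.

from google.cloud import aiplatform_v1beta1 as aip

view = aip.FeatureView(
    big_query_source=aip.FeatureView.BigQuerySource(
        uri="bq://my-project.my_dataset.my_view",  # hypothetical BigQuery view
        entity_id_columns=["user_id"],             # hypothetical row-key column
    ),
    sync_config=aip.FeatureView.SyncConfig(
        cron="CRON_TZ=America/New_York 0 1 * * *",  # daily at 01:00 New York time
    ),
)

admin = aip.FeatureOnlineStoreAdminServiceClient()
operation = admin.create_feature_view(
    request=aip.CreateFeatureViewRequest(
        parent=(
            "projects/my-project/locations/us-central1/"
            "featureOnlineStores/my_store"  # hypothetical
        ),
        feature_view_id="my_view",          # hypothetical
        feature_view=view,
    )
)
print(operation.result().name)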

FeatureRegistrySource

A Feature Registry source for features that need to be synced to Online Store.

Fields
feature_groups[]

FeatureGroup

Required. List of features that need to be synced to Online Store.

project_number

int64

Optional. The project number of the parent project of the Feature Groups.

FeatureGroup

Features belonging to a single feature group that will be synced to Online Store.

Fields
feature_group_id

string

Required. Identifier of the feature group.

feature_ids[]

string

Required. Identifiers of features under the feature group.

IndexConfig

Configuration for vector indexing.

Fields
embedding_column

string

Optional. Column of embedding. This column contains the source data used to create the index for vector search. embedding_column must be set when using vector search.

filter_columns[]

string

Optional. Columns of features that are used to filter vector search results.

crowding_column

string

Optional. Column of crowding. This column contains crowding attribute which is a constraint on a neighbor list produced by FeatureOnlineStoreService.SearchNearestEntities to diversify search results. If NearestNeighborQuery.per_crowding_attribute_neighbor_count is set to K in SearchNearestEntitiesRequest, it's guaranteed that no more than K entities of the same crowding attribute are returned in the response.

distance_measure_type

DistanceMeasureType

Optional. The distance measure used in nearest neighbor search.

Union field algorithm_config. The configuration with regard to the algorithms used for efficient search. algorithm_config can be only one of the following:
tree_ah_config

TreeAHConfig

Optional. Configuration options for the tree-AH algorithm (Shallow tree + Asymmetric Hashing). Please refer to this paper for more details: https://arxiv.org/abs/1908.10396

brute_force_config

BruteForceConfig

Optional. Configuration options for using brute force search, which simply implements the standard linear search in the database for each query. It is primarily meant for benchmarking and to generate the ground truth for approximate search.

embedding_dimension

int32

Optional. The number of dimensions of the input embedding.
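
A minimal sketch of an IndexConfig for ANN serving, following the recommendation below to pair DOT_PRODUCT_DISTANCE with unit-normalized embeddings; all column names and the dimension are hypothetical.

from google.cloud import aiplatform_v1beta1 as aip

index_config = aip.FeatureView.IndexConfig(
    embedding_column="embedding",   # hypothetical column of unit-normalized vectors
    embedding_dimension=768,        # hypothetical
    distance_measure_type=(
        aip.FeatureView.IndexConfig.DistanceMeasureType.DOT_PRODUCT_DISTANCE
    ),
    tree_ah_config=aip.FeatureView.IndexConfig.TreeAHConfig(
        leaf_node_embedding_count=1000,  # the documented default
    ),
    filter_columns=["category"],    # hypothetical filterable feature
    crowding_column="merchant_id",  # hypothetical crowding attribute
)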

BruteForceConfig

Configuration options for using brute force search.

This type has no fields.

DistanceMeasureType

The distance measure used in nearest neighbor search.

Enums
DISTANCE_MEASURE_TYPE_UNSPECIFIED Should not be set.
SQUARED_L2_DISTANCE Euclidean (L_2) Distance.
COSINE_DISTANCE

Cosine Distance. Defined as 1 - cosine similarity.

We strongly suggest using DOT_PRODUCT_DISTANCE + UNIT_L2_NORM instead of COSINE distance. Our algorithms have been more optimized for DOT_PRODUCT distance which, when combined with UNIT_L2_NORM, is mathematically equivalent to COSINE distance and results in the same ranking.

DOT_PRODUCT_DISTANCE Dot Product Distance. Defined as a negative of the dot product.

TreeAHConfig

Configuration options for the tree-AH algorithm.

Fields
leaf_node_embedding_count

int64

Optional. Number of embeddings on each leaf node. The default value is 1000 if not set.

OptimizedConfig

Configuration for FeatureViews created in Optimized FeatureOnlineStore.

Fields
automatic_resources

AutomaticResources

Optional. A description of resources that the FeatureView uses, which to a large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2. If max_replica_count is not set, the default value is 6. The max allowed replica count is 1000.

ServiceAgentType

Service agent type used during data sync.

Enums
SERVICE_AGENT_TYPE_UNSPECIFIED By default, the project-level Vertex AI Service Agent is enabled.
SERVICE_AGENT_TYPE_PROJECT Indicates the project-level Vertex AI Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) will be used during sync jobs.
SERVICE_AGENT_TYPE_FEATURE_VIEW Enable a FeatureView service account to be created by Vertex AI and output in the field service_account_email. This service account will be used to read from the source BigQuery table during sync.

SyncConfig

Configuration for Sync. Only one option is set.

Fields
cron

string

Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone for the cron schedule, apply a prefix: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". ${IANA_TIME_ZONE} must be a valid string from the IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *", or "TZ=America/New_York 1 * * * *".

continuous

bool

Optional. If true, syncs the FeatureView in a continuous manner to Online Store.

VectorSearchConfig

Deprecated. Use IndexConfig instead.

Fields
embedding_column

string

Optional. Column of embedding. This column contains the source data used to create the index for vector search. embedding_column must be set when using vector search.

filter_columns[]

string

Optional. Columns of features that are used to filter vector search results.

crowding_column

string

Optional. Column of crowding. This column contains crowding attribute which is a constraint on a neighbor list produced by FeatureOnlineStoreService.SearchNearestEntities to diversify search results. If NearestNeighborQuery.per_crowding_attribute_neighbor_count is set to K in SearchNearestEntitiesRequest, it's guaranteed that no more than K entities of the same crowding attribute are returned in the response.

distance_measure_type

DistanceMeasureType

Optional. The distance measure used in nearest neighbor search.

Union field algorithm_config. The configuration with regard to the algorithms used for efficient search. algorithm_config can be only one of the following:
tree_ah_config

TreeAHConfig

Optional. Configuration options for the tree-AH algorithm (Shallow tree + Asymmetric Hashing). Please refer to this paper for more details: https://arxiv.org/abs/1908.10396

brute_force_config

BruteForceConfig

Optional. Configuration options for using brute force search, which simply implements the standard linear search in the database for each query. It is primarily meant for benchmarking and to generate the ground truth for approximate search.

embedding_dimension

int32

Optional. The number of dimensions of the input embedding.

BruteForceConfig

This type has no fields.

DistanceMeasureType

Enums
DISTANCE_MEASURE_TYPE_UNSPECIFIED Should not be set.
SQUARED_L2_DISTANCE Euclidean (L_2) Distance.
COSINE_DISTANCE

Cosine Distance. Defined as 1 - cosine similarity.

We strongly suggest using DOT_PRODUCT_DISTANCE + UNIT_L2_NORM instead of COSINE distance. Our algorithms have been more optimized for DOT_PRODUCT distance which, when combined with UNIT_L2_NORM, is mathematically equivalent to COSINE distance and results in the same ranking.

DOT_PRODUCT_DISTANCE Dot Product Distance. Defined as a negative of the dot product.

TreeAHConfig

Fields
leaf_node_embedding_count

int64

Optional. Number of embeddings on each leaf node. The default value is 1000 if not set.

VertexRagSource

A Vertex RAG source for features that need to be synced to the Online Store.

Fields
uri

string

Required. The BigQuery view/table URI that will be materialized on each manual sync trigger. The table/view is expected to have at least the following columns and types:

- corpus_id (STRING, NULLABLE/REQUIRED)
- file_id (STRING, NULLABLE/REQUIRED)
- chunk_id (STRING, NULLABLE/REQUIRED)
- chunk_data_type (STRING, NULLABLE/REQUIRED)
- chunk_data (STRING, NULLABLE/REQUIRED)
- embeddings (FLOAT, REPEATED)
- file_original_uri (STRING, NULLABLE/REQUIRED)

rag_corpus_id

int64

Optional. The RAG corpus id corresponding to this FeatureView.

FeatureViewDataFormat

Format of the data in the Feature View.

Enums
FEATURE_VIEW_DATA_FORMAT_UNSPECIFIED Not set. Will be treated as the KeyValue format.
KEY_VALUE Return response data in key-value format.
PROTO_STRUCT Return response data in proto Struct format.

FeatureViewDataKey

Lookup key for a feature view.

Fields

Union field key_oneof.

key_oneof can be only one of the following:

key

string

String key to use for lookup.

composite_key

CompositeKey

The actual Entity ID will be composed from this struct. This should match the way the ID is defined in the FeatureView spec.

CompositeKey

ID that is composed of several parts (columns).

Fields
parts[]

string

Parts to construct the Entity ID. Should match the ID columns defined in the FeatureView, in the same order.
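
A short sketch of both key forms, using the Python GAPIC types; the key values are hypothetical.

from google.cloud import aiplatform_v1beta1 as aip

# Simple string key.
simple = aip.FeatureViewDataKey(key="user_123")  # hypothetical ID

# Composite key: parts must match the FeatureView's ID columns, in order.
composite = aip.FeatureViewDataKey(
    composite_key=aip.FeatureViewDataKey.CompositeKey(
        parts=["user_123", "2024-01-01"]  # hypothetical ID column values
    )
)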

FeatureViewSync

FeatureViewSync is a representation of a sync operation that copies data from the data source to the FeatureView in the Online Store.

Fields
name

string

Identifier. Name of the FeatureViewSync. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}/featureViewSyncs/{feature_view_sync}

create_time

Timestamp

Output only. Time when this FeatureViewSync was created. Creation of a FeatureViewSync means that the job is pending / waiting for sufficient resources but may not have started the actual data transfer yet.

run_time

Interval

Output only. Time when this FeatureViewSync finished.

final_status

Status

Output only. Final status of the FeatureViewSync.

sync_summary

SyncSummary

Output only. Summary of the sync job.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

SyncSummary

Summary from the Sync job. For continuous syncs, the summary is updated periodically. For batch syncs, it gets updated on completion of the sync.

Fields
row_synced

int64

Output only. Total number of rows synced.

total_slot

int64

Output only. BigQuery slot milliseconds consumed for the sync job.

system_watermark_time

Timestamp

Lower bound of the system time watermark for the sync job. This is only set for continuously syncing feature views.

Featurestore

Vertex AI Feature Store provides a centralized repository for organizing, storing, and serving ML features. The Featurestore is a top-level container for your features and their values.

Fields
name

string

Output only. Name of the Featurestore. Format: projects/{project}/locations/{location}/featurestores/{featurestore}

create_time

Timestamp

Output only. Timestamp when this Featurestore was created.

update_time

Timestamp

Output only. Timestamp when this Featurestore was last updated.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize your Featurestore.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one Featurestore (system labels are excluded). System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

online_serving_config

OnlineServingConfig

Optional. Config for online storage resources. This field should not coexist with OnlineStoreReplicationConfig. If both this field and OnlineStoreReplicationConfig are unset, the feature store will not have an online store and cannot be used for online serving.

state

State

Output only. State of the featurestore.

online_storage_ttl_days

int32

Optional. TTL in days for feature values that will be stored in online serving storage. The Feature Store online storage periodically removes obsolete feature values older than online_storage_ttl_days since the feature generation time. Note that online_storage_ttl_days should be less than or equal to offline_storage_ttl_days for each EntityType under a featurestore. If not set, defaults to 4000 days.

encryption_spec

EncryptionSpec

Optional. Customer-managed encryption key spec for data storage. If set, both of the online and offline data storage will be secured by this key.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

OnlineServingConfig

OnlineServingConfig specifies the details for provisioning online serving resources.

Fields
fixed_node_count

int32

The number of nodes for the online store. The number of nodes doesn't scale automatically, but you can manually update the number of nodes. If set to 0, the featurestore will not have an online store and cannot be used for online serving.

scaling

Scaling

Online serving scaling configuration. Only one of fixed_node_count and scaling can be set. Setting one will reset the other.

Scaling

Online serving scaling configuration. If min_node_count and max_node_count are set to the same value, the cluster will be configured with a fixed number of nodes (no autoscaling).

Fields
min_node_count

int32

Required. The minimum number of nodes to scale down to. Must be greater than or equal to 1.

max_node_count

int32

The maximum number of nodes to scale up to. Must be greater than min_node_count, and less than or equal to 10 times min_node_count.

cpu_utilization_target

int32

Optional. The CPU utilization that the Autoscaler should be trying to achieve. This number is on a scale from 0 (no utilization) to 100 (total utilization), and is limited between 10 and 80. When a cluster's CPU utilization exceeds the target that you have set, Bigtable immediately adds nodes to the cluster. When CPU utilization is substantially lower than the target, Bigtable removes nodes. If not set or set to 0, defaults to 50.

State

Possible states a featurestore can have.

Enums
STATE_UNSPECIFIED Default value. This value is unused.
STABLE State when the featurestore configuration is not being updated and the fields reflect the current configuration of the featurestore. The featurestore is usable in this state.
UPDATING The state of the featurestore configuration when it is being updated. During an update, the fields reflect either the original configuration or the updated configuration of the featurestore. For example, online_serving_config.fixed_node_count can take minutes to update. While the update is in progress, the featurestore is in the UPDATING state, and the value of fixed_node_count can be the original value or the updated value, depending on the progress of the operation. Until the update completes, the actual number of nodes can still be the original value of fixed_node_count. The featurestore is still usable in this state.

FeaturestoreMonitoringConfig

Configuration of how features in Featurestore are monitored.

Fields
snapshot_analysis

SnapshotAnalysis

The config for Snapshot Analysis Based Feature Monitoring.

import_features_analysis

ImportFeaturesAnalysis

The config for ImportFeatures Analysis Based Feature Monitoring.

numerical_threshold_config

ThresholdConfig

Threshold for anomaly detection on numerical features. This is shared by all objectives of Featurestore Monitoring for numerical features (i.e. Features with type (Feature.ValueType) DOUBLE or INT64).

categorical_threshold_config

ThresholdConfig

Threshold for anomaly detection on categorical features. This is shared by all types of Featurestore Monitoring for categorical features (i.e. Features with type (Feature.ValueType) BOOL or STRING).

ImportFeaturesAnalysis

Configuration of the Featurestore's ImportFeature Analysis Based Monitoring. This type of analysis generates statistics for values of each Feature imported by every ImportFeatureValues operation.

Fields
state

State

Whether to enable / disable / inherit default behavior for import features analysis.

anomaly_detection_baseline

Baseline

The baseline used to do anomaly detection for the statistics generated by import features analysis.

Baseline

Defines the baseline to do anomaly detection for feature values imported by each ImportFeatureValues operation.

Enums
BASELINE_UNSPECIFIED Should not be used.
LATEST_STATS Choose the later of the statistics generated by either the most recent snapshot analysis or the previous import features analysis. If neither exists, skip anomaly detection and only generate statistics.
MOST_RECENT_SNAPSHOT_STATS Use the statistics generated by the most recent snapshot analysis, if it exists.
PREVIOUS_IMPORT_FEATURES_STATS Use the statistics generated by the previous import features analysis, if it exists.

State

The state defines whether to enable ImportFeature analysis.

Enums
STATE_UNSPECIFIED Should not be used.
DEFAULT The default behavior of whether to enable the monitoring. EntityType-level config: disabled. Feature-level config: inherited from the configuration of EntityType this Feature belongs to.
ENABLED Explicitly enables import features analysis. EntityType-level config: by default enables import features analysis for all Features under it. Feature-level config: enables import features analysis regardless of the EntityType-level config.
DISABLED Explicitly disables import features analysis. EntityType-level config: by default disables import features analysis for all Features under it. Feature-level config: disables import features analysis regardless of the EntityType-level config.

SnapshotAnalysis

Configuration of the Featurestore's Snapshot Analysis Based Monitoring. This type of analysis generates statistics for each Feature based on a snapshot of the latest feature value of each entity every monitoring_interval.

Fields
disabled

bool

The monitoring schedule for snapshot analysis. For EntityType-level config: unset / disabled = true indicates disabled by default for Features under it; otherwise snapshot analysis monitoring is enabled by default with monitoring_interval for Features under it. For Feature-level config: disabled = true indicates disabled regardless of the EntityType-level config; an unset monitoring_interval means following the EntityType-level config; otherwise snapshot analysis monitoring runs with monitoring_interval regardless of the EntityType-level config. Setting this field explicitly disables the snapshot analysis based monitoring.

monitoring_interval
(deprecated)

Duration

Configuration of the snapshot analysis based monitoring pipeline running interval. The value is rolled up to full day. If both monitoring_interval_days and the deprecated monitoring_interval field are set when creating/updating EntityTypes/Features, monitoring_interval_days will be used.

monitoring_interval_days

int32

Configuration of the snapshot analysis based monitoring pipeline running interval. The value indicates number of days.

staleness_days

int32

Customized export features time window for snapshot analysis. The unit is one day. The default value is 3 weeks (21 days). The minimum value is 1 day. The maximum value is 4000 days.

ThresholdConfig

The config for Featurestore Monitoring threshold.

Fields

Union field threshold.

threshold can be only one of the following:

value

double

Specify a threshold value that can trigger the alert. 1. For categorical features, the distribution distance is calculated by L-infinity norm. 2. For numerical features, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.

FetchFeatureValuesRequest

Request message for FeatureOnlineStoreService.FetchFeatureValues. All the features under the requested feature view will be returned.

Fields
feature_view

string

Required. The FeatureView resource name. Format: projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}/featureViews/{featureView}

data_key

FeatureViewDataKey

Optional. The request key to fetch feature values for.

data_format

FeatureViewDataFormat

Optional. Response data format. If not set, FeatureViewDataFormat.KEY_VALUE will be used.

format
(deprecated)

Format

Deprecated. Use FetchFeatureValuesRequest.data_format instead. Specifies the response data format. If not set, the KeyValue format will be used.

Union field entity_id. Entity ID to fetch feature values for. Deprecated. Use FetchFeatureValuesRequest.data_key. entity_id can be only one of the following:
id
(deprecated)

string

Simple ID. The whole string will be used as-is to identify the Entity to fetch feature values for.

Format

Format of the response data.

Enums
FORMAT_UNSPECIFIED Not set. Will be treated as the KeyValue format.
KEY_VALUE Return response data in key-value format.
PROTO_STRUCT Return response data in proto Struct format.

FetchFeatureValuesResponse

Response message for FeatureOnlineStoreService.FetchFeatureValues.

Fields
data_key

FeatureViewDataKey

The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs.

Union field format.

format can be only one of the following:

key_values

FeatureNameValuePairList

Feature values in KeyValue format.

proto_struct

Struct

Feature values in proto Struct format.

FeatureNameValuePairList

Response structure in the format of key (feature name) and (feature) value pair.

Fields
features[]

FeatureNameValuePair

List of feature names and values.

FeatureNameValuePair

Feature name & value pair.

Fields
name

string

Feature short name.

Union field data.

data can be only one of the following:

value

FeatureValue

Feature value.
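
Tying the request and response messages together, here is a hedged sketch of a key-value fetch with the Python GAPIC client; the resource name and key are hypothetical. (For an Optimized store with a dedicated endpoint, the client would additionally need to be pointed at that endpoint's domain via client_options.)

from google.cloud import aiplatform_v1beta1 as aip

client = aip.FeatureOnlineStoreServiceClient()
response = client.fetch_feature_values(
    request=aip.FetchFeatureValuesRequest(
        feature_view=(
            "projects/my-project/locations/us-central1/"
            "featureOnlineStores/my_store/featureViews/my_view"  # hypothetical
        ),
        data_key=aip.FeatureViewDataKey(key="user_123"),          # hypothetical
        data_format=aip.FeatureViewDataFormat.KEY_VALUE,
    )
)
for pair in response.key_values.features:
    print(pair.name, pair.value)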

FileData

URI based data.

Fields
mime_type

string

Required. The IANA standard MIME type of the source data.

file_uri

string

Required. URI.

FileStatus

RagFile status.

Fields
state

State

Output only. RagFile state.

error_status

string

Output only. Set only when the state field is ERROR.

State

RagFile state.

Enums
STATE_UNSPECIFIED RagFile state is unspecified.
ACTIVE RagFile resource has been created and indexed successfully.
ERROR RagFile resource is in a problematic state. See the error_status field for details.

FilterSplit

Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters in this message are to match nothing, they can be set as '-' (the minus sign).

Supported only for unstructured Datasets.

Fields
training_filter

string

Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to train the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.

validation_filter

string

Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to validate the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.

test_filter

string

Required. A filter on DataItems of the Dataset. DataItems that match this filter are used to test the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order.
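
A small sketch constructing a FilterSplit with the Python GAPIC types; the label-based filter expressions shown are hypothetical and would need to follow the DatasetService.ListDataItems filter syntax.

from google.cloud import aiplatform_v1beta1 as aip

split = aip.FilterSplit(
    training_filter='labels.split="train"',       # hypothetical filter expression
    validation_filter='labels.split="validate"',  # hypothetical filter expression
    test_filter="-",  # '-' matches nothing: no test set is carved out
)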

FluencyInput

Input for fluency metric.

Fields
metric_spec

FluencySpec

Required. Spec for fluency score metric.

instance

FluencyInstance

Required. Fluency instance.

FluencyInstance

Spec for fluency instance.

Fields
prediction

string

Required. Output of the evaluated model.

FluencyResult

Spec for fluency result.

Fields
explanation

string

Output only. Explanation for fluency score.

score

float

Output only. Fluency score.

confidence

float

Output only. Confidence for fluency score.

FluencySpec

Spec for fluency score metric.

Fields
version

int32

Optional. Which version to use for evaluation.

FractionSplit

Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction, and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of the data is used for training, 10% for validation, and 10% for testing.

Fields
training_fraction

double

The fraction of the input data that is to be used to train the Model.

validation_fraction

double

The fraction of the input data that is to be used to validate the Model.

test_fraction

double

The fraction of the input data that is to be used to evaluate the Model.
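
For comparison with FilterSplit, here is a one-step sketch that reproduces the documented default 80/10/10 split explicitly.

from google.cloud import aiplatform_v1beta1 as aip

split = aip.FractionSplit(
    training_fraction=0.8,
    validation_fraction=0.1,
    test_fraction=0.1,  # the three fractions must sum to at most 1
)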

FulfillmentInput

Input for fulfillment metric.

Fields
metric_spec

FulfillmentSpec

Required. Spec for fulfillment score metric.

instance

FulfillmentInstance

Required. Fulfillment instance.

FulfillmentInstance

Spec for fulfillment instance.

Fields
prediction

string

Required. Output of the evaluated model.

instruction

string

Required. Inference instruction prompt to compare prediction with.

FulfillmentResult

Spec for fulfillment result.

Fields
explanation

string

Output only. Explanation for fulfillment score.

score

float

Output only. Fulfillment score.

confidence

float

Output only. Confidence for fulfillment score.

FulfillmentSpec

Spec for fulfillment metric.

Fields
version

int32

Optional. Which version to use for evaluation.

FunctionCall

A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values.

Fields
name

string

Required. The name of the function to call. Matches [FunctionDeclaration.name].

args

Struct

Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.

FunctionCallingConfig

Function calling config.

Fields
mode

Mode

Optional. Function calling mode.

allowed_function_names[]

string

Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.

Mode

Function calling mode.

Enums
MODE_UNSPECIFIED Unspecified function calling mode. This value should not be used.
AUTO Default model behavior, model decides to predict either function calls or natural language response.
ANY Model is constrained to always predicting function calls only. If "allowed_function_names" are set, the predicted function calls will be limited to any one of "allowed_function_names", else the predicted function calls will be any one of the provided "function_declarations".
NONE Model will not predict any function calls. Model behavior is same as when not passing any function declarations.

FunctionDeclaration

Structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a Tool by the model and executed by the client.

Fields
name

string

Required. The name of the function to call. Must start with a letter or an underscore. Must contain only characters a-z, A-Z, 0-9, underscores, dots, and dashes, with a maximum length of 64.

description

string

Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.

parameters

Schema

Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. Key (string): the name of the parameter; parameter names are case sensitive. Value (Schema): the Schema defining the type used for the parameter. For a function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain characters a-z, A-Z, 0-9, or underscores, with a maximum length of 64. Example with one required and one optional parameter:

type: OBJECT
properties:
  param1:
    type: STRING
  param2:
    type: INTEGER
required:
  - param1

response

Schema

Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
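
A minimal sketch of a FunctionDeclaration wrapped in a Tool, mirroring the one-required/one-optional-parameter example above; the function itself is hypothetical. Note that in the Python GAPIC types the proto field `type` is exposed as `type_`.

from google.cloud import aiplatform_v1beta1 as aip

get_weather = aip.FunctionDeclaration(
    name="get_weather",  # hypothetical function name
    description="Look up the current weather for a city.",
    parameters=aip.Schema(
        type_=aip.Type.OBJECT,
        properties={
            "city": aip.Schema(type_=aip.Type.STRING),  # required
            "unit": aip.Schema(type_=aip.Type.STRING),  # optional
        },
        required=["city"],
    ),
)

tool = aip.Tool(function_declarations=[get_weather])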

FunctionResponse

The result output from a [FunctionCall], containing a string representing the [FunctionDeclaration.name] and a structured JSON object with any output from the function; it is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction.

Fields
name

string

Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].

response

Struct

Required. The function response in JSON object format. Use "output" key to specify function output and "error" key to specify error details (if any). If "output" and "error" keys are not specified, then whole "response" is treated as function output.

GcsDestination

The Google Cloud Storage location where the output is to be written.

Fields
output_uri_prefix

string

Required. Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.

GcsSource

The Google Cloud Storage location for the input content.

Fields
uris[]

string

Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.

GenerateContentRequest

Request message for [PredictionService.GenerateContent].

Fields
model

string

Required. The fully qualified name of the publisher model or tuned model endpoint to use.

Publisher model format: projects/{project}/locations/{location}/publishers/*/models/*

Tuned model endpoint format: projects/{project}/locations/{location}/endpoints/{endpoint}

contents[]

Content

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.

cached_content

string

Optional. The name of the cached content used as context to serve the prediction. Note: only used in explicit caching, where users can have control over caching (e.g. what content to cache) and enjoy guaranteed cost savings. Format: projects/{project}/locations/{location}/cachedContents/{cachedContent}

tools[]

Tool

Optional. A list of Tools the model may use to generate the next response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model.

tool_config

ToolConfig

Optional. Tool config. This config is shared for all tools provided in the request.

labels

map<string, string>

Optional. The labels with user-defined metadata for the request. It is used for billing and reporting only.

Label keys and values can be no longer than 63 characters (Unicode codepoints) and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter.

safety_settings[]

SafetySetting

Optional. Per request settings for blocking unsafe content. Enforced on GenerateContentResponse.candidates.

generation_config

GenerationConfig

Optional. Generation config.

system_instruction

Content

Optional. The user provided system instructions for the model. Note: only text should be used in parts, and content in each part will be in a separate paragraph.
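
A hedged end-to-end sketch of a single-turn GenerateContent call with the Python GAPIC client; the project and model path are hypothetical, and the regional api_endpoint override is the usual pattern for regional resources.

from google.cloud import aiplatform_v1beta1 as aip

client = aip.PredictionServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
response = client.generate_content(
    request=aip.GenerateContentRequest(
        model=(
            "projects/my-project/locations/us-central1/"
            "publishers/google/models/gemini-1.5-pro-001"  # hypothetical
        ),
        contents=[
            aip.Content(role="user", parts=[aip.Part(text="Say hello.")])
        ],
    )
)
print(response.candidates[0].content.parts[0].text)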

GenerateContentResponse

Response message for [PredictionService.GenerateContent].

Fields
candidates[]

Candidate

Output only. Generated candidates.

model_version

string

Output only. The model version used to generate the response.

prompt_feedback

PromptFeedback

Output only. Content filter results for a prompt sent in the request. Note: Sent only in the first stream chunk. Only happens when no candidates were generated due to content violations.

usage_metadata

UsageMetadata

Usage metadata about the response(s).

PromptFeedback

Content filter results for a prompt sent in the request.

Fields
block_reason

BlockedReason

Output only. Blocked reason.

safety_ratings[]

SafetyRating

Output only. Safety ratings.

block_reason_message

string

Output only. A readable block reason message.

BlockedReason

Blocked reason enumeration.

Enums
BLOCKED_REASON_UNSPECIFIED Unspecified blocked reason.
SAFETY Candidates blocked due to safety.
OTHER Candidates blocked due to other reason.
BLOCKLIST Candidates blocked due to the terms which are included from the terminology blocklist.
PROHIBITED_CONTENT Candidates blocked due to prohibited content.

UsageMetadata

Usage metadata about response(s).

Fields
prompt_token_count

int32

Number of tokens in the request. When cached_content is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.

candidates_token_count

int32

Number of tokens in the response(s).

total_token_count

int32

Total token count for prompt and response candidates.

cached_content_token_count

int32

Output only. Number of tokens in the cached part in the input (the cached content).

GenerateVideoResponse

Generate video response.

Fields
generated_samples[]

string

The Cloud Storage URIs of the generated videos.

rai_media_filtered_reasons[]

string

Returns RAI failure reasons, if any.

rai_media_filtered_count

int32

Returns the number of videos that were filtered out due to RAI policies.

GenerationConfig

Generation config.

Fields
stop_sequences[]

string

Optional. Stop sequences.

response_mime_type

string

Optional. Output response mimetype of the generated candidate text. Supported mimetypes:

- text/plain: (default) Text output.
- application/json: JSON response in the candidates.

The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.

response_modalities[]

Modality

Optional. The modalities of the response.

temperature

float

Optional. Controls the randomness of predictions.

top_p

float

Optional. If specified, nucleus sampling will be used.

top_k

float

Optional. If specified, top-k sampling will be used.

candidate_count

int32

Optional. Number of candidates to generate.

max_output_tokens

int32

Optional. The maximum number of output tokens to generate per message.

response_logprobs

bool

Optional. If true, export the logprobs results in response.

logprobs

int32

Optional. Logit probabilities.

presence_penalty

float

Optional. Positive penalties.

frequency_penalty

float

Optional. Frequency penalties.

seed

int32

Optional. Seed.

response_schema

Schema

Optional. The Schema object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an OpenAPI 3.0 schema object. If set, a compatible response_mime_type must also be set. Compatible mimetypes: application/json: Schema for JSON response.

routing_config

RoutingConfig

Optional. Routing configuration.

audio_timestamp

bool

Optional. If enabled, audio timestamp will be included in the request to the model.

media_resolution

MediaResolution

Optional. If specified, the given media resolution will be used.
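
A short sketch of a GenerationConfig that requests structured JSON output; as noted above, response_schema requires a compatible response_mime_type. The schema shown is hypothetical.

from google.cloud import aiplatform_v1beta1 as aip

config = aip.GenerationConfig(
    temperature=0.2,
    max_output_tokens=256,
    response_mime_type="application/json",  # required when response_schema is set
    response_schema=aip.Schema(
        type_=aip.Type.OBJECT,
        properties={"answer": aip.Schema(type_=aip.Type.STRING)},
        required=["answer"],
    ),
)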

MediaResolution

Media resolution for the input media.

Enums
MEDIA_RESOLUTION_UNSPECIFIED Media resolution has not been set.
MEDIA_RESOLUTION_LOW Media resolution set to low (64 tokens).
MEDIA_RESOLUTION_MEDIUM Media resolution set to medium (256 tokens).
MEDIA_RESOLUTION_HIGH Media resolution set to high (zoomed reframing with 256 tokens).

Modality

The modalities of the response.

Enums
MODALITY_UNSPECIFIED Unspecified modality. Will be processed as text.
TEXT Text modality.
IMAGE Image modality.
AUDIO Audio modality.

RoutingConfig

The configuration for routing the request to a specific model.

Fields
Union field routing_config. Routing mode. routing_config can be only one of the following:
auto_mode

AutoRoutingMode

Automated routing.

manual_mode

ManualRoutingMode

Manual routing.

AutoRoutingMode

When automated routing is specified, the routing will be determined by the pretrained routing model and the customer-provided model routing preference.

Fields
model_routing_preference

ModelRoutingPreference

The model routing preference.

ModelRoutingPreference

The model routing preference.

Enums
UNKNOWN Unspecified model routing preference.
PRIORITIZE_QUALITY Prefer higher quality over low cost.
BALANCED Balanced model routing preference.
PRIORITIZE_COST Prefer lower cost over higher quality.

ManualRoutingMode

When manual routing is set, the specified model will be used directly.

Fields
model_name

string

The model name to use. Only public LLM models are accepted, e.g. 'gemini-1.5-pro-001'.

GenericOperationMetadata

Generic Metadata shared by all operations.

Fields
partial_failures[]

Status

Output only. Partial failures encountered. E.g. single files that couldn't be read. This field should never exceed 20 entries. Status details field will contain standard Google Cloud error details.

create_time

Timestamp

Output only. Time when the operation was created.

update_time

Timestamp

Output only. Time when the operation was updated for the last time. If the operation has finished (successfully or not), this is the finish time.

GenieSource

Contains information about the source of the models generated from Generative AI Studio.

Fields
base_model_uri

string

Required. The public base model URI.

GetAnnotationSpecRequest

Request message for DatasetService.GetAnnotationSpec.

Fields
name

string

Required. The name of the AnnotationSpec resource. Format: projects/{project}/locations/{location}/datasets/{dataset}/annotationSpecs/{annotation_spec}

read_mask

FieldMask

Mask specifying which fields to read.

GetArtifactRequest

Request message for MetadataService.GetArtifact.

Fields
name

string

Required. The resource name of the Artifact to retrieve. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}

GetBatchPredictionJobRequest

Request message for JobService.GetBatchPredictionJob.

Fields
name

string

Required. The name of the BatchPredictionJob resource. Format: projects/{project}/locations/{location}/batchPredictionJobs/{batch_prediction_job}

GetCachedContentRequest

Request message for GenAiCacheService.GetCachedContent.

Fields
name

string

Required. The resource name referring to the cached content.

GetContextRequest

Request message for MetadataService.GetContext.

Fields
name

string

Required. The resource name of the Context to retrieve. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}

GetCustomJobRequest

Request message for JobService.GetCustomJob.

Fields
name

string

Required. The name of the CustomJob resource. Format: projects/{project}/locations/{location}/customJobs/{custom_job}

GetDatasetRequest

Request message for DatasetService.GetDataset.

Fields
name

string

Required. The name of the Dataset resource.

read_mask

FieldMask

Mask specifying which fields to read.

GetDatasetVersionRequest

Request message for DatasetService.GetDatasetVersion.

Fields
name

string

Required. The resource name of the Dataset version to retrieve. Format: projects/{project}/locations/{location}/datasets/{dataset}/datasetVersions/{dataset_version}

read_mask

FieldMask

Mask specifying which fields to read.

GetDeploymentResourcePoolRequest

Request message for GetDeploymentResourcePool method.

Fields
name

string

Required. The name of the DeploymentResourcePool to retrieve. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

GetEndpointRequest

Request message for EndpointService.GetEndpoint.

Fields
name

string

Required. The name of the Endpoint resource. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

GetEntityTypeRequest

Request message for FeaturestoreService.GetEntityType.

Fields
name

string

Required. The name of the EntityType resource. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}

GetExecutionRequest

Request message for MetadataService.GetExecution.

Fields
name

string

Required. The resource name of the Execution to retrieve. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}

GetExtensionRequest

Request message for ExtensionRegistryService.GetExtension.

Fields
name

string

Required. The name of the Extension resource. Format: projects/{project}/locations/{location}/extensions/{extension}

GetFeatureGroupRequest

Request message for FeatureRegistryService.GetFeatureGroup.

Fields
name

string

Required. The name of the FeatureGroup resource.

GetFeatureMonitorJobRequest

Request message for FeatureRegistryService.GetFeatureMonitorJob.

Fields
name

string

Required. The name of the FeatureMonitorJob resource. Format: projects/{project}/locations/{location}/featureGroups/{feature_group}/featureMonitors/{feature_monitor}/featureMonitorJobs/{feature_monitor_job}

GetFeatureMonitorRequest

Request message for FeatureRegistryService.GetFeatureMonitor.

Fields
name

string

Required. The name of the FeatureMonitor resource.

GetFeatureOnlineStoreRequest

Request message for FeatureOnlineStoreAdminService.GetFeatureOnlineStore.

Fields
name

string

Required. The name of the FeatureOnlineStore resource.

GetFeatureRequest

Request message for FeaturestoreService.GetFeature and FeatureRegistryService.GetFeature.

Fields
name

string

Required. The name of the Feature resource. Format for entity_type as parent: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type} Format for feature_group as parent: projects/{project}/locations/{location}/featureGroups/{feature_group}

feature_stats_and_anomaly_spec

FeatureStatsAndAnomalySpec

Optional. Only applicable for Vertex AI Feature Store. If set, retrieves FeatureStatsAndAnomaly generated by FeatureMonitors based on this spec.

GetFeatureViewRequest

Request message for FeatureOnlineStoreAdminService.GetFeatureView.

Fields
name

string

Required. The name of the FeatureView resource. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}

GetFeatureViewSyncRequest

Request message for FeatureOnlineStoreAdminService.GetFeatureViewSync.

Fields
name

string

Required. The name of the FeatureViewSync resource. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}/featureViewSyncs/{feature_view_sync}

GetFeaturestoreRequest

Request message for FeaturestoreService.GetFeaturestore.

Fields
name

string

Required. The name of the Featurestore resource.

GetHyperparameterTuningJobRequest

Request message for JobService.GetHyperparameterTuningJob.

Fields
name

string

Required. The name of the HyperparameterTuningJob resource. Format: projects/{project}/locations/{location}/hyperparameterTuningJobs/{hyperparameter_tuning_job}

GetIndexEndpointRequest

Request message for IndexEndpointService.GetIndexEndpoint

Fields
name

string

Required. The name of the IndexEndpoint resource. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}

GetIndexRequest

Request message for IndexService.GetIndex

Fields
name

string

Required. The name of the Index resource. Format: projects/{project}/locations/{location}/indexes/{index}

GetMetadataSchemaRequest

Request message for MetadataService.GetMetadataSchema.

Fields
name

string

Required. The resource name of the MetadataSchema to retrieve. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/metadataSchemas/{metadataschema}

GetMetadataStoreRequest

Request message for MetadataService.GetMetadataStore.

Fields
name

string

Required. The resource name of the MetadataStore to retrieve. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

GetModelDeploymentMonitoringJobRequest

Request message for JobService.GetModelDeploymentMonitoringJob.

Fields
name

string

Required. The resource name of the ModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}

GetModelEvaluationRequest

Request message for ModelService.GetModelEvaluation.

Fields
name

string

Required. The name of the ModelEvaluation resource. Format: projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}

GetModelEvaluationSliceRequest

Request message for ModelService.GetModelEvaluationSlice.

Fields
name

string

Required. The name of the ModelEvaluationSlice resource. Format: projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}/slices/{slice}

GetModelMonitorRequest

Request message for ModelMonitoringService.GetModelMonitor.

Fields
name

string

Required. The name of the ModelMonitor resource. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}

GetModelMonitoringJobRequest

Request message for ModelMonitoringService.GetModelMonitoringJob.

Fields
name

string

Required. The resource name of the ModelMonitoringJob. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}/modelMonitoringJobs/{model_monitoring_job}

GetModelRequest

Request message for ModelService.GetModel.

Fields
name

string

Required. The name of the Model resource. Format: projects/{project}/locations/{location}/models/{model}

In order to retrieve a specific version of the model, also provide the version ID or version alias. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version ID or alias is specified, the "default" version will be returned. The "default" version alias is created for the first version of the model, and can be moved to other versions later on. There will be exactly one default version.
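
As a sketch of this naming convention (placeholder project and model IDs; the @2 and @golden suffixes follow the format described above):

```python
from google.cloud import aiplatform_v1beta1 as aip

client = aip.ModelServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)

base = "projects/my-project/locations/us-central1/models/456"
default_version = client.get_model(name=base)              # the "default" version
second_version = client.get_model(name=f"{base}@2")        # by version ID
golden_version = client.get_model(name=f"{base}@golden")   # by version alias
```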

GetNotebookExecutionJobRequest

Request message for NotebookService.GetNotebookExecutionJob.

Fields
name

string

Required. The name of the NotebookExecutionJob resource.

view

NotebookExecutionJobView

Optional. The NotebookExecutionJob view. Defaults to BASIC.

GetNotebookRuntimeRequest

Request message for NotebookService.GetNotebookRuntime

Fields
name

string

Required. The name of the NotebookRuntime resource. The service does not validate the resource name format; if no NotebookRuntime with this name exists, a NotFound error is returned.

GetNotebookRuntimeTemplateRequest

Request message for NotebookService.GetNotebookRuntimeTemplate

Fields
name

string

Required. The name of the NotebookRuntimeTemplate resource. Format: projects/{project}/locations/{location}/notebookRuntimeTemplates/{notebook_runtime_template}

GetPersistentResourceRequest

Request message for PersistentResourceService.GetPersistentResource.

Fields
name

string

Required. The name of the PersistentResource resource. Format: projects/{project_id_or_number}/locations/{location_id}/persistentResources/{persistent_resource_id}

GetPipelineJobRequest

Request message for PipelineService.GetPipelineJob.

Fields
name

string

Required. The name of the PipelineJob resource. Format: projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}

GetPublisherModelRequest

Request message for ModelGardenService.GetPublisherModel

Fields
name

string

Required. The name of the PublisherModel resource. Format: publishers/{publisher}/models/{publisher_model}

language_code

string

Optional. The IETF BCP-47 language code representing the language in which the publisher model's text information should be written.

view

PublisherModelView

Optional. PublisherModel view specifying which fields to read.

is_hugging_face_model

bool

Optional. Boolean indicates whether the requested model is a Hugging Face model.

hugging_face_token

string

Optional. Token used to access Hugging Face gated models.
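
A hypothetical sketch of the request (the model name is a placeholder, and PUBLISHER_MODEL_VIEW_BASIC is assumed to be one of the PublisherModelView values):

```python
from google.cloud import aiplatform_v1beta1 as aip

client = aip.ModelGardenServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
request = aip.GetPublisherModelRequest(
    name="publishers/google/models/gemini-1.5-pro",  # placeholder model
    language_code="en",
    view=aip.PublisherModelView.PUBLISHER_MODEL_VIEW_BASIC,
)
model = client.get_publisher_model(request=request)
print(model.name)
```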

GetRagCorpusRequest

Request message for VertexRagDataService.GetRagCorpus

Fields
name

string

Required. The name of the RagCorpus resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}

GetRagFileRequest

Request message for VertexRagDataService.GetRagFile

Fields
name

string

Required. The name of the RagFile resource. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}/ragFiles/{rag_file}

GetReasoningEngineRequest

Request message for ReasoningEngineService.GetReasoningEngine.

Fields
name

string

Required. The name of the ReasoningEngine resource. Format: projects/{project}/locations/{location}/reasoningEngines/{reasoning_engine}

GetScheduleRequest

Request message for ScheduleService.GetSchedule.

Fields
name

string

Required. The name of the Schedule resource. Format: projects/{project}/locations/{location}/schedules/{schedule}

GetSpecialistPoolRequest

Request message for SpecialistPoolService.GetSpecialistPool.

Fields
name

string

Required. The name of the SpecialistPool resource. The form is projects/{project}/locations/{location}/specialistPools/{specialist_pool}.

GetStudyRequest

Request message for VizierService.GetStudy.

Fields
name

string

Required. The name of the Study resource. Format: projects/{project}/locations/{location}/studies/{study}

GetTensorboardExperimentRequest

Request message for TensorboardService.GetTensorboardExperiment.

Fields
name

string

Required. The name of the TensorboardExperiment resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}

GetTensorboardRequest

Request message for TensorboardService.GetTensorboard.

Fields
name

string

Required. The name of the Tensorboard resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

GetTensorboardRunRequest

Request message for TensorboardService.GetTensorboardRun.

Fields
name

string

Required. The name of the TensorboardRun resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}

GetTensorboardTimeSeriesRequest

Request message for TensorboardService.GetTensorboardTimeSeries.

Fields
name

string

Required. The name of the TensorboardTimeSeries resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}

GetTrainingPipelineRequest

Request message for PipelineService.GetTrainingPipeline.

Fields
name

string

Required. The name of the TrainingPipeline resource. Format: projects/{project}/locations/{location}/trainingPipelines/{training_pipeline}

GetTrialRequest

Request message for VizierService.GetTrial.

Fields
name

string

Required. The name of the Trial resource. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}

GetTuningJobRequest

Request message for GenAiTuningService.GetTuningJob.

Fields
name

string

Required. The name of the TuningJob resource. Format: projects/{project}/locations/{location}/tuningJobs/{tuning_job}

GoogleDriveSource

The Google Drive location for the input content.

Fields
resource_ids[]

ResourceId

Required. Google Drive resource IDs.

ResourceId

The type and ID of the Google Drive resource.

Fields
resource_type

ResourceType

Required. The type of the Google Drive resource.

resource_id

string

Required. The ID of the Google Drive resource.

ResourceType

The type of the Google Drive resource.

Enums
RESOURCE_TYPE_UNSPECIFIED Unspecified resource type.
RESOURCE_TYPE_FILE File resource type.
RESOURCE_TYPE_FOLDER Folder resource type.

GoogleSearchRetrieval

Tool to retrieve public web data for grounding, powered by Google.

Fields
dynamic_retrieval_config

DynamicRetrievalConfig

Specifies the dynamic retrieval configuration for the given source.

GroundednessInput

Input for groundedness metric.

Fields
metric_spec

GroundednessSpec

Required. Spec for groundedness metric.

instance

GroundednessInstance

Required. Groundedness instance.

GroundednessInstance

Spec for groundedness instance.

Fields
prediction

string

Required. Output of the evaluated model.

context

string

Required. Background information provided in context used to compare against the prediction.

GroundednessResult

Spec for groundedness result.

Fields
explanation

string

Output only. Explanation for groundedness score.

score

float

Output only. Groundedness score.

confidence

float

Output only. Confidence for groundedness score.

GroundednessSpec

Spec for groundedness metric.

Fields
version

int32

Optional. Which version to use for evaluation.

GroundingChunk

Grounding chunk.

Fields
Union field chunk_type. Chunk type. chunk_type can be only one of the following:
web

Web

Grounding chunk from the web.

retrieved_context

RetrievedContext

Grounding chunk from context retrieved by the retrieval tools.

RetrievedContext

Chunk from context retrieved by the retrieval tools.

Fields
uri

string

URI reference of the attribution.

title

string

Title of the attribution.

text

string

Text of the attribution.

Web

Chunk from the web.

Fields
uri

string

URI reference of the chunk.

title

string

Title of the chunk.

GroundingMetadata

Metadata returned to client when grounding is enabled.

Fields
web_search_queries[]

string

Optional. Web search queries for the follow-up web search.

retrieval_queries[]

string

Optional. Queries executed by the retrieval tools.

grounding_chunks[]

GroundingChunk

List of supporting references retrieved from specified grounding source.

grounding_supports[]

GroundingSupport

Optional. List of grounding support.

search_entry_point

SearchEntryPoint

Optional. Google search entry for the follow-up web searches.

retrieval_metadata

RetrievalMetadata

Optional. Output only. Retrieval metadata.

GroundingSupport

Grounding support.

Fields
grounding_chunk_indices[]

int32

A list of indices (into 'grounding_chunk') specifying the citations associated with the claim. For instance [1,3,4] means that grounding_chunk[1], grounding_chunk[3], grounding_chunk[4] are the retrieved content attributed to the claim.

confidence_scores[]

float

Confidence score of the support references. Ranges from 0 to 1. 1 is the most confident. This list must have the same size as the grounding_chunk_indices.

segment

Segment

Segment of the content this support belongs to.
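
To make the parallel-list pairing concrete, here is a small plain-Python sketch that walks a response's grounding metadata and pairs each supported segment with the chunks it cites (field access follows the shapes described above; `grounding_metadata` is assumed to be an already-received GroundingMetadata message):

```python
def cited_chunks(grounding_metadata):
    """Pair each grounding support with the chunks it cites.

    Yields (segment_text, [(uri, title, confidence), ...]) tuples.
    """
    chunks = grounding_metadata.grounding_chunks
    for support in grounding_metadata.grounding_supports:
        citations = []
        # grounding_chunk_indices and confidence_scores are parallel lists
        # of the same length, per the field descriptions above.
        for idx, score in zip(support.grounding_chunk_indices,
                              support.confidence_scores):
            chunk = chunks[idx]
            # A chunk is either a `web` chunk or a `retrieved_context` chunk.
            source = chunk.web if chunk.web.uri else chunk.retrieved_context
            citations.append((source.uri, source.title, score))
        yield support.segment.text, citations
```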

HarmCategory

Harm categories that will block the content.

Enums
HARM_CATEGORY_UNSPECIFIED The harm category is unspecified.
HARM_CATEGORY_HATE_SPEECH The harm category is hate speech.
HARM_CATEGORY_DANGEROUS_CONTENT The harm category is dangerous content.
HARM_CATEGORY_HARASSMENT The harm category is harassment.
HARM_CATEGORY_SEXUALLY_EXPLICIT The harm category is sexually explicit content.
HARM_CATEGORY_CIVIC_INTEGRITY The harm category is civic integrity.

HttpElementLocation

Enum of location an HTTP element can be.

Enums
HTTP_IN_UNSPECIFIED Unspecified HTTP element location.
HTTP_IN_QUERY Element is in the HTTP request query.
HTTP_IN_HEADER Element is in the HTTP request header.
HTTP_IN_PATH Element is in the HTTP request path.
HTTP_IN_BODY Element is in the HTTP request body.

HyperparameterTuningJob

Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with an identical CustomJob specification.

Fields
name

string

Output only. Resource name of the HyperparameterTuningJob.

display_name

string

Required. The display name of the HyperparameterTuningJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

study_spec

StudySpec

Required. Study configuration of the HyperparameterTuningJob.

max_trial_count

int32

Required. The desired total number of Trials.

parallel_trial_count

int32

Required. The desired number of Trials to run in parallel.

max_failed_trial_count

int32

The number of failed Trials that need to be seen before failing the HyperparameterTuningJob.

If set to 0, Vertex AI decides how many Trials must fail before the whole job fails.

trial_job_spec

CustomJobSpec

Required. The spec of a trial job. The same spec applies to the CustomJobs created in all the trials.

trials[]

Trial

Output only. Trials of the HyperparameterTuningJob.

state

JobState

Output only. The detailed state of the job.

create_time

Timestamp

Output only. Time when the HyperparameterTuningJob was created.

start_time

Timestamp

Output only. Time when the HyperparameterTuningJob first entered the JOB_STATE_RUNNING state.

end_time

Timestamp

Output only. Time when the HyperparameterTuningJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.

update_time

Timestamp

Output only. Time when the HyperparameterTuningJob was most recently updated.

error

Status

Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

labels

map<string, string>

The labels with user-defined metadata to organize HyperparameterTuningJobs.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

encryption_spec

EncryptionSpec

Customer-managed encryption key options for a HyperparameterTuningJob. If this is set, then all resources created by the HyperparameterTuningJob will be encrypted with the provided encryption key.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.
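
As a sketch of how these fields fit together, here is a hypothetical example using the high-level google-cloud-aiplatform Python SDK, which assembles the StudySpec from metric and parameter specs (project, bucket, image URI, and parameter names are placeholders):

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-bucket",
)

# trial_job_spec: the same CustomJob spec is used for every Trial.
custom_job = aiplatform.CustomJob(
    display_name="trial-job",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
    }],
)

hp_job = aiplatform.HyperparameterTuningJob(
    display_name="hp-tuning",
    custom_job=custom_job,
    metric_spec={"accuracy": "maximize"},  # goes into the StudySpec
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-5, max=1e-1, scale="log"),
    },
    max_trial_count=20,        # desired total number of Trials
    parallel_trial_count=4,    # Trials run at once
    max_failed_trial_count=5,  # fail the whole job after this many failed Trials
)
hp_job.run()
```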

IdMatcher

Matcher for Features of an EntityType by Feature ID.

Fields
ids[]

string

Required. The following are accepted as ids:

  • A single-element list containing only *, which selects all Features in the target EntityType, or
  • A list containing only Feature IDs, which selects only Features with those IDs in the target EntityType.

ImportDataConfig

Describes the location from where we import data into a Dataset, together with the labels that will be applied to the DataItems and the Annotations.

Fields
data_item_labels

map<string, string>

Labels that will be applied to newly imported DataItems. If an identical DataItem already exists in the Dataset, these labels are appended to those of the existing one; if a label with the same key was imported before, the old label value is overwritten. If two DataItems are identical in the same import data operation, their labels are combined, and if a key collision happens, one of the values is picked randomly. Two DataItems are considered identical if their content bytes are identical (e.g. image bytes or pdf bytes). These labels are overridden by Annotation labels specified inside the index file referenced by import_schema_uri, e.g. a JSONL file.

annotation_labels

map<string, string>

Labels that will be applied to newly imported Annotations. If two Annotations are identical, one of them will be deduped. Two Annotations are considered identical if their payload, payload_schema_uri and all of their labels are the same. These labels are overridden by Annotation labels specified inside the index file referenced by import_schema_uri, e.g. a JSONL file.

import_schema_uri

string

Required. Points to a YAML file stored on Google Cloud Storage describing the import format. Validation will be done against the schema. The schema is defined as an OpenAPI 3.0.2 Schema Object.

Union field source. The source of the input. source can be only one of the following:
gcs_source

GcsSource

The Google Cloud Storage location for the input content.

ImportDataOperationMetadata

Runtime operation information for DatasetService.ImportData.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

ImportDataRequest

Request message for DatasetService.ImportData.

Fields
name

string

Required. The name of the Dataset resource. Format: projects/{project}/locations/{location}/datasets/{dataset}

import_configs[]

ImportDataConfig

Required. The desired input locations. The contents of all input locations will be imported in one batch.
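
A minimal sketch of the call (bucket, dataset ID, and the chosen schema URI are placeholders; import_data returns a long-running operation):

```python
from google.cloud import aiplatform_v1beta1 as aip

client = aip.DatasetServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
config = aip.ImportDataConfig(
    gcs_source=aip.GcsSource(uris=["gs://my-bucket/import/data.jsonl"]),
    # YAML schema describing the import format (placeholder schema URI).
    import_schema_uri=(
        "gs://google-cloud-aiplatform/schema/dataset/ioformat/"
        "image_classification_single_label_io_format_1.0.0.yaml"
    ),
    data_item_labels={"source": "bulk-import"},
)
operation = client.import_data(
    name="projects/my-project/locations/us-central1/datasets/123",
    import_configs=[config],
)
operation.result()  # block until the long-running import finishes
```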

ImportDataResponse

This type has no fields.

Response message for DatasetService.ImportData.

ImportExtensionOperationMetadata

Details of ExtensionRegistryService.ImportExtension operation.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

ImportExtensionRequest

Request message for ExtensionRegistryService.ImportExtension.

Fields
parent

string

Required. The resource name of the Location to import the Extension in. Format: projects/{project}/locations/{location}

extension

Extension

Required. The Extension to import.

ImportFeatureValuesOperationMetadata

Details of operations that perform import Feature values.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Featurestore import Feature values.

imported_entity_count

int64

Number of entities that have been imported by the operation.

imported_feature_value_count

int64

Number of Feature values that have been imported by the operation.

source_uris[]

string

The source URI from where Feature values are imported.

invalid_row_count

int64

The number of rows in the input source that weren't imported due to either:

  • Not having any featureValues.
  • Having a null entityId.
  • Having a null timestamp.
  • Not being parsable (applicable for CSV sources).

timestamp_outside_retention_rows_count

int64

The number of rows that weren't ingested due to having timestamps outside the retention boundary.

blocking_operation_ids[]

int64

List of ImportFeatureValues operations running under a single EntityType that are blocking this operation.

ImportFeatureValuesRequest

Request message for FeaturestoreService.ImportFeatureValues.

Fields
entity_type

string

Required. The resource name of the EntityType grouping the Features for which values are being imported. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}

entity_id_field

string

Source column that holds entity IDs. If not provided, entity IDs are extracted from the column named entity_id.

feature_specs[]

FeatureSpec

Required. Specifications defining which Feature values to import from the entity. The request fails if no feature_specs are provided, and having multiple feature_specs for one Feature is not allowed.

disable_online_serving

bool

If set, data will not be imported for online serving. This is typically used for backfilling, where Feature generation timestamps are not in the timestamp range needed for online serving.

worker_count

int32

Specifies the number of workers that are used to write data to the Featurestore. Consider the online serving capacity that you require to achieve the desired import throughput without interfering with online serving. The value must be positive, and less than or equal to 100. If not set, defaults to using 1 worker. The low count ensures minimal impact on online serving performance.

disable_ingestion_analysis

bool

If true, the API doesn't start the ingestion analysis pipeline.

Union field source. Details about the source data, including the location of the storage and the format. source can be only one of the following:
avro_source

AvroSource

bigquery_source

BigQuerySource

csv_source

CsvSource

Union field feature_time_source. Source of Feature timestamp for all Feature values of each entity. Timestamps must be millisecond-aligned. feature_time_source can be only one of the following:
feature_time_field

string

Source column that holds the Feature timestamp for all Feature values in each entity.

feature_time

Timestamp

Single Feature timestamp for all entities being imported. The timestamp must not have higher than millisecond precision.

FeatureSpec

Defines the Feature value(s) to import.

Fields
id

string

Required. ID of the Feature to import values of. This Feature must exist in the target EntityType, or the request will fail.

source_field

string

Source column to get the Feature values from. If not set, uses the column with the same name as the Feature ID.
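
A hypothetical sketch tying the request together with a CSV source, a feature_time_field, and two FeatureSpecs (resource names and column names are placeholders):

```python
from google.cloud import aiplatform_v1beta1 as aip

client = aip.FeaturestoreServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
entity_type = (
    "projects/my-project/locations/us-central1/featurestores/my-fs"
    "/entityTypes/users"
)
request = aip.ImportFeatureValuesRequest(
    entity_type=entity_type,
    csv_source=aip.CsvSource(
        gcs_source=aip.GcsSource(uris=["gs://my-bucket/users.csv"])
    ),
    entity_id_field="user_id",        # column holding entity IDs
    feature_time_field="event_time",  # column holding feature timestamps
    feature_specs=[
        aip.ImportFeatureValuesRequest.FeatureSpec(id="age"),
        aip.ImportFeatureValuesRequest.FeatureSpec(
            id="spend", source_field="total_spend"
        ),
    ],
    worker_count=1,
)
response = client.import_feature_values(request=request).result()
print(response.imported_feature_value_count, response.invalid_row_count)
```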

ImportFeatureValuesResponse

Response message for FeaturestoreService.ImportFeatureValues.

Fields
imported_entity_count

int64

Number of entities that have been imported by the operation.

imported_feature_value_count

int64

Number of Feature values that have been imported by the operation.

invalid_row_count

int64

The number of rows in the input source that weren't imported due to either:

  • Not having any featureValues.
  • Having a null entityId.
  • Having a null timestamp.
  • Not being parsable (applicable for CSV sources).

timestamp_outside_retention_rows_count

int64

The number of rows that weren't ingested due to having feature timestamps outside the retention boundary.

ImportModelEvaluationRequest

Request message for ModelService.ImportModelEvaluation

Fields
parent

string

Required. The name of the parent model resource. Format: projects/{project}/locations/{location}/models/{model}

model_evaluation

ModelEvaluation

Required. Model evaluation resource to be imported.

ImportRagFilesConfig

Config for importing RagFiles.

Fields
rag_file_chunking_config
(deprecated)

RagFileChunkingConfig

Specifies the size and overlap of chunks after importing RagFiles.

rag_file_transformation_config

RagFileTransformationConfig

Specifies the transformation config for RagFiles.

rag_file_parsing_config

RagFileParsingConfig

Optional. Specifies the parsing config for RagFiles. RAG will use the default parser if this field is not set.

max_embedding_requests_per_min

int32

Optional. The max number of queries per minute that this job is allowed to make to the embedding model specified on the corpus. This value is specific to this job and not shared across other import jobs. Consult the Quotas page on the project to set an appropriate value here. If unspecified, a default value of 1,000 QPM is used.

Union field import_source. The source of the import. import_source can be only one of the following:
gcs_source

GcsSource

Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats: - gs://bucket_name/my_directory/object_name/my_file.txt - gs://bucket_name/my_directory

google_drive_source

GoogleDriveSource

Google Drive location. Supports importing individual files as well as Google Drive folders.

slack_source

SlackSource

Slack channels with their corresponding access tokens.

jira_source

JiraSource

Jira queries with their corresponding authentication.

share_point_sources

SharePointSources

SharePoint sources.

Union field partial_failure_sink. Optional. If provided, all partial failures are written to the sink. Deprecated. Prefer to use the import_result_sink. partial_failure_sink can be only one of the following:
partial_failure_gcs_sink
(deprecated)

GcsDestination

The Cloud Storage path to write partial failures to. Deprecated. Prefer to use import_result_gcs_sink.

partial_failure_bigquery_sink
(deprecated)

BigQueryDestination

The BigQuery destination to write partial failures to. It should be a bigquery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table. Deprecated. Prefer to use import_result_bq_sink.

ImportRagFilesOperationMetadata

Runtime operation information for VertexRagDataService.ImportRagFiles.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

rag_corpus_id

int64

The resource ID of RagCorpus that this operation is executed on.

import_rag_files_config

ImportRagFilesConfig

Output only. The config that was passed in the ImportRagFilesRequest.

progress_percentage

int32

The progress percentage of the operation. Value is in the range [0, 100]. This percentage is calculated as follows: progress_percentage = 100 * (successes + failures + skips) / total

ImportRagFilesRequest

Request message for VertexRagDataService.ImportRagFiles.

Fields
parent

string

Required. The name of the RagCorpus resource into which to import files. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}

import_rag_files_config

ImportRagFilesConfig

Required. The config for the RagFiles to be synced and imported into the RagCorpus.
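
A minimal sketch of the request with a Cloud Storage source (corpus name, bucket, and the QPM value are placeholders):

```python
from google.cloud import aiplatform_v1beta1 as aip

client = aip.VertexRagDataServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
config = aip.ImportRagFilesConfig(
    gcs_source=aip.GcsSource(uris=["gs://my-bucket/docs/"]),
    max_embedding_requests_per_min=600,  # stay under the project's embedding quota
)
request = aip.ImportRagFilesRequest(
    parent="projects/my-project/locations/us-central1/ragCorpora/789",
    import_rag_files_config=config,
)
response = client.import_rag_files(request=request).result()
print(response.imported_rag_files_count, response.skipped_rag_files_count)
```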

ImportRagFilesResponse

Response message for VertexRagDataService.ImportRagFiles.

Fields
imported_rag_files_count

int64

The number of RagFiles that were imported into the RagCorpus.

failed_rag_files_count

int64

The number of RagFiles that failed to import into the RagCorpus.

skipped_rag_files_count

int64

The number of RagFiles that were skipped while importing into the RagCorpus.

Union field partial_failure_sink. The location into which the partial failures were written. partial_failure_sink can be only one of the following:
partial_failures_gcs_path

string

The Google Cloud Storage path into which the partial failures were written.

partial_failures_bigquery_table

string

The BigQuery table into which the partial failures were written.

Index

A representation of a collection of database items organized in a way that allows approximate nearest neighbor (ANN) search.

Fields
name

string

Output only. The resource name of the Index.

display_name

string

Required. The display name of the Index. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

The description of the Index.

metadata_schema_uri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Index, that is specific to it. Unset if the Index does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access.

metadata

Value

Additional information about the Index; the schema of the metadata can be found in metadata_schema.

deployed_indexes[]

DeployedIndexRef

Output only. The pointers to DeployedIndexes created from this Index. An Index can only be deleted after all its DeployedIndexes have been undeployed.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

The labels with user-defined metadata to organize your Indexes.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

create_time

Timestamp

Output only. Timestamp when this Index was created.

update_time

Timestamp

Output only. Timestamp when this Index was most recently updated. This also includes any update to the contents of the Index. Note that Operations working on this Index may have their Operations.metadata.generic_metadata.update_time a little after the value of this timestamp, yet that does not mean their results are not already reflected in the Index. Result of any successfully completed Operation on the Index is reflected in it.

index_stats

IndexStats

Output only. Stats of the index resource.

index_update_method

IndexUpdateMethod

Immutable. The update method to use with this Index. If not set, BATCH_UPDATE will be used by default.

encryption_spec

EncryptionSpec

Immutable. Customer-managed encryption key spec for an Index. If set, this Index and all sub-resources of this Index will be secured by this key.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

IndexUpdateMethod

The update method of an Index.

Enums
INDEX_UPDATE_METHOD_UNSPECIFIED Should not be used.
BATCH_UPDATE BatchUpdate: user can call UpdateIndex with files on Cloud Storage of Datapoints to update.
STREAM_UPDATE StreamUpdate: user can call UpsertDatapoints/DeleteDatapoints to update the Index and the updates will be applied in corresponding DeployedIndexes in nearly real-time.

IndexDatapoint

A datapoint of Index.

Fields
datapoint_id

string

Required. Unique identifier of the datapoint.

feature_vector[]

float

Required. Feature embedding vector for dense index. An array of numbers with the length of NearestNeighborSearchConfig.dimensions.

sparse_embedding

SparseEmbedding

Optional. Feature embedding vector for sparse index.

restricts[]

Restriction

Optional. List of Restrict of the datapoint, used to perform "restricted searches" where boolean rules are used to filter the subset of the database eligible for matching. This uses categorical tokens. See: https://cloud.google.com/vertex-ai/docs/matching-engine/filtering

numeric_restricts[]

NumericRestriction

Optional. List of Restrict of the datapoint, used to perform "restricted searches" where boolean rules are used to filter the subset of the database eligible for matching. This uses numeric comparisons.

crowding_tag

CrowdingTag

Optional. CrowdingTag of the datapoint, the number of neighbors to return in each crowding can be configured during query.

CrowdingTag

Crowding tag is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute.

Fields
crowding_attribute

string

The attribute value used for crowding. The maximum number of neighbors to return per crowding attribute value (per_crowding_attribute_num_neighbors) is configured per-query. This field is ignored if per_crowding_attribute_num_neighbors is larger than the total number of neighbors to return for a given query.

NumericRestriction

This field allows restricts to be based on numeric comparisons rather than categorical tokens.

Fields
namespace

string

The namespace of this restriction. e.g.: cost.

op

Operator

This MUST be specified for queries and must NOT be specified for datapoints.

Union field Value. The type of Value must be consistent for all datapoints with a given namespace name. This is verified at runtime. Value can be only one of the following:
value_int

int64

Represents 64 bit integer.

value_float

float

Represents 32 bit float.

value_double

double

Represents 64 bit float.

Operator

Which comparison operator to use. Should be specified for queries only; specifying this for a datapoint is an error.

Datapoints for which Operator is true relative to the query's Value field will be allowlisted.

Enums
OPERATOR_UNSPECIFIED Default value of the enum.
LESS Datapoints are eligible iff their value is < the query's.
LESS_EQUAL Datapoints are eligible iff their value is <= the query's.
EQUAL Datapoints are eligible iff their value is == the query's.
GREATER_EQUAL Datapoints are eligible iff their value is >= the query's.
GREATER Datapoints are eligible iff their value is > the query's.
NOT_EQUAL Datapoints are eligible iff their value is != the query's.

Restriction

Restriction of a datapoint, which describes its attributes (tokens) from each of several attribute categories (namespaces).

Fields
namespace

string

The namespace of this restriction. e.g.: color.

allow_list[]

string

The attributes to allow in this namespace. e.g.: 'red'

deny_list[]

string

The attributes to deny in this namespace. e.g.: 'blue'

SparseEmbedding

Feature embedding vector for sparse index. An array of numbers whose values are located in the specified dimensions.

Fields
values[]

float

Required. The list of embedding values of the sparse vector.

dimensions[]

int64

Required. The list of indexes for the embedding values of the sparse vector.
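
The sketch below assembles a datapoint with a categorical restrict, a numeric restrict, and a crowding tag, then streams it into an index via UpsertDatapoints (which applies to STREAM_UPDATE indexes; IDs and vector values are placeholders):

```python
from google.cloud import aiplatform_v1beta1 as aip

client = aip.IndexServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
datapoint = aip.IndexDatapoint(
    datapoint_id="item-001",
    feature_vector=[0.12, -0.53, 0.88],  # dense embedding (placeholder values)
    restricts=[
        aip.IndexDatapoint.Restriction(
            namespace="color", allow_list=["red"], deny_list=["blue"]
        )
    ],
    numeric_restricts=[
        # No `op` here: operators are for queries, not datapoints.
        aip.IndexDatapoint.NumericRestriction(namespace="cost", value_double=9.99)
    ],
    crowding_tag=aip.IndexDatapoint.CrowdingTag(crowding_attribute="brand-a"),
)
client.upsert_datapoints(
    request=aip.UpsertDatapointsRequest(
        index="projects/my-project/locations/us-central1/indexes/42",
        datapoints=[datapoint],
    )
)
```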

IndexEndpoint

An IndexEndpoint is the resource that Indexes are deployed into; an IndexEndpoint can have multiple DeployedIndexes.

Fields
name

string

Output only. The resource name of the IndexEndpoint.

display_name

string

Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

The description of the IndexEndpoint.

deployed_indexes[]

DeployedIndex

Output only. The indexes deployed in this endpoint.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

The labels with user-defined metadata to organize your IndexEndpoints.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

create_time

Timestamp

Output only. Timestamp when this IndexEndpoint was created.

update_time

Timestamp

Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.

network

string

Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered.

Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network.

network and private_service_connect_config are mutually exclusive.

Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in '12345', and {network} is the network name.

enable_private_service_connect
(deprecated)

bool

Optional. Deprecated: If true, expose the IndexEndpoint via private service connect.

Only one of the fields, network or enable_private_service_connect, can be set.

private_service_connect_config

PrivateServiceConnectConfig

Optional. Configuration for private service connect.

network and private_service_connect_config are mutually exclusive.

public_endpoint_enabled

bool

Optional. If true, the deployed index will be accessible through public endpoint.

public_endpoint_domain_name

string

Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.

encryption_spec

EncryptionSpec

Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

IndexPrivateEndpoints

IndexPrivateEndpoints provides paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send requests via private service access, use match_grpc_address. To send requests via private service connect, use service_attachment.

Fields
match_grpc_address

string

Output only. The IP address used to send match gRPC requests.

service_attachment

string

Output only. The name of the service attachment resource. Populated if private service connect is enabled.

psc_automated_endpoints[]

PscAutomatedEndpoints

Output only. PscAutomatedEndpoints is populated if private service connect is enabled and PscAutomatedConfig is set.

IndexStats

Stats of the Index.

Fields
vectors_count

int64

Output only. The number of dense vectors in the Index.

sparse_vectors_count

int64

Output only. The number of sparse vectors in the Index.

shards_count

int32

Output only. The number of shards in the Index.

InputDataConfig

Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.

Fields
dataset_id

string

Required. The ID of the Dataset in the same Project and Location whose data will be used to train the Model. The Dataset must use a schema compatible with the Model being trained, and what is compatible should be described in the used TrainingPipeline's training_task_definition. For tabular Datasets, all their data is exported to training, to pick and choose from.

annotations_filter

string

Applicable only to Datasets that have DataItems and Annotations.

A filter on Annotations of the Dataset. Only Annotations that both match this filter and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on (for the auto-assigned ones, that role is decided by Vertex AI). A filter with the same syntax as the one used in ListAnnotations may be used, but note that here it filters across all Annotations of the Dataset, and not just within a single DataItem.

annotation_schema_uri

string

Applicable only to custom training with Datasets that have DataItems and Annotations.

Cloud Storage URI that points to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/ , note that the chosen schema must be consistent with metadata of the Dataset specified by dataset_id.

Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used in the training, validation, or test role, respectively, depending on the role of the DataItem they are on.

When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.

saved_query_id

string

Only applicable to Datasets that have SavedQueries.

The ID of a SavedQuery (annotation set) under the Dataset specified by dataset_id used for filtering Annotations for training.

Only Annotations that are associated with this SavedQuery are used in training. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both saved_query_id and annotations_filter.

Only one of saved_query_id and annotation_schema_uri should be specified as both of them represent the same thing: problem type.

persist_ml_use_assignment

bool

Whether to persist the ML use assignment to data item system labels.

Union field split. The instructions how the input data should be split between the training, validation and test sets. If no split type is provided, the fraction_split is used by default. split can be only one of the following:
fraction_split

FractionSplit

Split based on fractions defining the size of each set.

filter_split

FilterSplit

Split based on the provided filters for each set.

predefined_split

PredefinedSplit

Supported only for tabular Datasets.

Split based on a predefined key.

timestamp_split

TimestampSplit

Supported only for tabular Datasets.

Split based on the timestamp of the input data pieces.

stratified_split

StratifiedSplit

Supported only for tabular Datasets.

Split based on the distribution of the specified column.

Union field destination. Only applicable to Custom and Hyperparameter Tuning TrainingPipelines.

The destination of the training data to be written to.

Supported destination file formats:

  • For non-tabular data: "jsonl".
  • For tabular data: "csv" and "bigquery".

The following Vertex AI environment variables are passed to containers or python modules of the training task when this field is set:

  • AIP_DATA_FORMAT : Exported data format.
  • AIP_TRAINING_DATA_URI : Sharded exported training data uris.
  • AIP_VALIDATION_DATA_URI : Sharded exported validation data uris.
  • AIP_TEST_DATA_URI : Sharded exported test data uris.

destination can be only one of the following:
gcs_destination

GcsDestination

The Cloud Storage location where the training data is to be written to. In the given directory a new directory is created with name: dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call> where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All training input data is written into that directory.

The Vertex AI environment variables representing Cloud Storage data URIs are represented in the Cloud Storage wildcard format to support sharded data. e.g.: "gs://.../training-*.jsonl"

  • AIP_DATA_FORMAT = "jsonl" for non-tabular data, "csv" for tabular data
  • AIP_TRAINING_DATA_URI = "gcs_destination/dataset---
  • AIP_VALIDATION_DATA_URI = "gcs_destination/dataset---

  • AIP_TEST_DATA_URI = "gcs_destination/dataset---

bigquery_destination

BigQueryDestination

Only applicable to custom training with tabular Dataset with BigQuery source.

The BigQuery project location where the training data is to be written to. In the given project a new dataset is created with name dataset_<dataset-id>_<annotation-type>_<timestamp-of-training-call> where timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data is written into that dataset. In the dataset three tables are created, training, validation and test.

  • AIP_DATA_FORMAT = "bigquery".
  • AIP_TRAINING_DATA_URI = "bigquery_destination.dataset___
  • AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset___

  • AIP_TEST_DATA_URI = "bigquery_destination.dataset___

Int64Array

A list of int64 values.

Fields
values[]

int64

A list of int64 values.

IntegratedGradientsAttribution

An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365

Fields
step_count

int32

Required. The number of steps for approximating the path integral. A good value to start is 50 and gradually increase until the sum to diff property is within the desired error range.

Valid range of its value is [1, 100], inclusively.

smooth_grad_config

SmoothGradConfig

Config for SmoothGrad approximation of gradients.

When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf

blur_baseline_config

BlurBaselineConfig

Config for IG with blur baseline.

When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383

JiraSource

The Jira source for the ImportRagFilesRequest.

Fields
jira_queries[]

JiraQueries

Required. The Jira queries.

JiraQueries

JiraQueries contains the Jira queries and corresponding authentication.

Fields
projects[]

string

A list of Jira projects to import in their entirety.

custom_queries[]

string

A list of custom Jira queries to import. For information about JQL (Jira Query Language), see https://support.atlassian.com/jira-service-management-cloud/docs/use-advanced-search-with-jira-query-language-jql/

email

string

Required. The Jira email address.

server_uri

string

Required. The Jira server URI.

api_key_config

ApiKeyConfig

Required. The SecretManager secret version resource name (e.g. projects/{project}/secrets/{secret}/versions/{version}) storing the Jira API key. See Manage API tokens for your Atlassian account.
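
A hypothetical sketch of assembling a JiraSource for an import config (email, server, project key, JQL, and secret name are placeholders; ApiKeyConfig is assumed here to be the ApiAuth.ApiKeyConfig message with an api_key_secret_version field, per this reference):

```python
from google.cloud import aiplatform_v1beta1 as aip

jira_source = aip.JiraSource(
    jira_queries=[
        aip.JiraSource.JiraQueries(
            projects=["MYPROJ"],                 # import whole projects...
            custom_queries=['status = "Done"'],  # ...and/or JQL queries
            email="user@example.com",
            server_uri="myorg.atlassian.net",
            api_key_config=aip.ApiAuth.ApiKeyConfig(
                api_key_secret_version=(
                    "projects/my-project/secrets/jira-key/versions/1"
                )
            ),
        )
    ]
)
config = aip.ImportRagFilesConfig(jira_source=jira_source)
```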

JobState

Describes the state of a job.

Enums
JOB_STATE_UNSPECIFIED The job state is unspecified.
JOB_STATE_QUEUED The job has just been created or resumed, and processing has not yet begun.
JOB_STATE_PENDING The service is preparing to run the job.
JOB_STATE_RUNNING The job is in progress.
JOB_STATE_SUCCEEDED The job completed successfully.
JOB_STATE_FAILED The job failed.
JOB_STATE_CANCELLING The job is being cancelled. From this state the job may only go to either JOB_STATE_SUCCEEDED, JOB_STATE_FAILED or JOB_STATE_CANCELLED.
JOB_STATE_CANCELLED The job has been cancelled.
JOB_STATE_PAUSED The job has been stopped, and can be resumed.
JOB_STATE_EXPIRED The job has expired.
JOB_STATE_UPDATING The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state.
JOB_STATE_PARTIALLY_SUCCEEDED The job partially succeeded; some results may be missing due to errors.

LargeModelReference

Contains information about the Large Model.

Fields
name

string

Required. The unique name of the large Foundation or pre-built model, such as "chat-bison" or "text-bison"; or a model name with a version ID, such as "chat-bison@001" or "text-bison@005".

LineageSubgraph

A subgraph of the overall lineage graph. Event edges connect Artifact and Execution nodes.

Fields
artifacts[]

Artifact

The Artifact nodes in the subgraph.

executions[]

Execution

The Execution nodes in the subgraph.

events[]

Event

The Event edges between Artifacts and Executions in the subgraph.

ListAnnotationsRequest

Request message for DatasetService.ListAnnotations.

Fields
parent

string

Required. The resource name of the DataItem to list Annotations from. Format: projects/{project}/locations/{location}/datasets/{dataset}/dataItems/{data_item}

filter

string

The standard list filter.

page_size

int32

The standard list page size.

page_token

string

The standard list page token.

read_mask

FieldMask

Mask specifying which fields to read.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.

ListAnnotationsResponse

Response message for DatasetService.ListAnnotations.

Fields
annotations[]

Annotation

A list of Annotations that matches the specified filter in the request.

next_page_token

string

The standard List next-page token.

ListArtifactsRequest

Request message for MetadataService.ListArtifacts.

Fields
parent

string

Required. The MetadataStore whose Artifacts should be listed. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

page_size

int32

The maximum number of Artifacts to return. The service may return fewer. Must be in range 1-1000, inclusive. Defaults to 100.

page_token

string

A page token, received from a previous MetadataService.ListArtifacts call. Provide this to retrieve the subsequent page.

When paginating, all other provided parameters must match the call that provided the page token. (Otherwise the request will fail with INVALID_ARGUMENT error.)

filter

string

Filter specifying the boolean condition for the Artifacts to satisfy in order to be part of the result set. The syntax to define filter query is based on https://google.aip.dev/160. The supported set of filters includes the following:

  • Attribute filtering: For example: display_name = "test". Supported fields include: name, display_name, uri, state, schema_title, create_time, and update_time. Time fields, such as create_time and update_time, require values specified in RFC-3339 format. For example: create_time = "2020-11-19T11:30:00-04:00"
  • Metadata field: To filter on metadata fields use traversal operation as follows: metadata.<field_name>.<type_value>. For example: metadata.field_1.number_value = 10.0 In case the field name contains special characters (such as colon), one can embed it inside double quotes. For example: metadata."field:1".number_value = 10.0
  • Context based filtering: To filter Artifacts based on the contexts to which they belong, use the function operator with the full resource name in_context(<context-name>). For example: in_context("projects/<project_number>/locations/<location>/metadataStores/<metadatastore_name>/contexts/<context-id>")

Each of the above supported filter types can be combined together using logical operators (AND & OR). Maximum nested expression depth allowed is 5.

For example: display_name = "test" AND metadata.field1.bool_value = true.

order_by

string

How the list of messages is ordered. Specify the values to order by and an ordering operation. The default sorting order is ascending. To specify descending order for a field, users append a " desc" suffix; for example: "foo desc, bar". Subfields are specified with a . character, such as foo.bar. see https://google.aip.dev/132#ordering for more details.
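
A minimal sketch combining an attribute filter with ordering; the GAPIC pager follows next_page_token automatically (store and project names are placeholders):

```python
from google.cloud import aiplatform_v1beta1 as aip

client = aip.MetadataServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
request = aip.ListArtifactsRequest(
    parent="projects/my-project/locations/us-central1/metadataStores/default",
    filter='display_name = "test" AND create_time > "2020-11-19T11:30:00-04:00"',
    order_by="create_time desc",
    page_size=100,
)
# The returned pager transparently requests subsequent pages.
for artifact in client.list_artifacts(request=request):
    print(artifact.name, artifact.display_name)
```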

ListArtifactsResponse

Response message for MetadataService.ListArtifacts.

Fields
artifacts[]

Artifact

The Artifacts retrieved from the MetadataStore.

next_page_token

string

A token, which can be sent as ListArtifactsRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages.

ListBatchPredictionJobsRequest

Request message for JobService.ListBatchPredictionJobs.

Fields
parent

string

Required. The resource name of the Location to list the BatchPredictionJobs from. Format: projects/{project}/locations/{location}

filter

string

The standard list filter.

Supported fields:

  • display_name supports =, != comparisons, and : wildcard.
  • model_display_name supports =, != comparisons.
  • state supports =, != comparisons.
  • create_time supports =, !=,<, <=,>, >= comparisons. create_time must be in RFC 3339 format.
  • labels supports general map functions, that is: labels.key=value (key-value equality), labels.key:* (key existence)

Some examples of using the filter are:

  • state="JOB_STATE_SUCCEEDED" AND display_name:"my_job_*"
  • state!="JOB_STATE_FAILED" OR display_name="my_job"
  • NOT display_name="my_job"
  • create_time>"2021-05-18T00:00:00Z"
  • labels.keyA=valueA
  • labels.keyB:*

page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via ListBatchPredictionJobsResponse.next_page_token of the previous JobService.ListBatchPredictionJobs call.

read_mask

FieldMask

Mask specifying which fields to read.
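
A short sketch using one of the filter examples above (parent is a placeholder; the pager handles page_token):

```python
from google.cloud import aiplatform_v1beta1 as aip

client = aip.JobServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
request = aip.ListBatchPredictionJobsRequest(
    parent="projects/my-project/locations/us-central1",
    filter='state="JOB_STATE_SUCCEEDED" AND create_time>"2021-05-18T00:00:00Z"',
)
for job in client.list_batch_prediction_jobs(request=request):
    print(job.name, job.state.name)
```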

ListBatchPredictionJobsResponse

Response message for JobService.ListBatchPredictionJobs

Fields
batch_prediction_jobs[]

BatchPredictionJob

List of BatchPredictionJobs in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListBatchPredictionJobsRequest.page_token to obtain that page.

ListCachedContentsRequest

Request to list CachedContents.

Fields
parent

string

Required. The parent, which owns this collection of cached contents.

page_size

int32

Optional. The maximum number of cached contents to return. The service may return fewer than this value. If unspecified, some default (under maximum) number of items will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.

page_token

string

Optional. A page token, received from a previous ListCachedContents call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to ListCachedContents must match the call that provided the page token.

ListCachedContentsResponse

Response with a list of CachedContents.

Fields
cached_contents[]

CachedContent

List of cached contents.

next_page_token

string

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListContextsRequest

Request message for MetadataService.ListContexts

Fields
parent

string

Required. The MetadataStore whose Contexts should be listed. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

page_size

int32

The maximum number of Contexts to return. The service may return fewer. Must be in range 1-1000, inclusive. Defaults to 100.

page_token

string

A page token, received from a previous MetadataService.ListContexts call. Provide this to retrieve the subsequent page.

When paginating, all other provided parameters must match the call that provided the page token. (Otherwise the request will fail with INVALID_ARGUMENT error.)

filter

string

Filter specifying the boolean condition for the Contexts to satisfy in order to be part of the result set. The syntax to define filter query is based on https://google.aip.dev/160. Following are the supported set of filters:

  • Attribute filtering: For example: display_name = "test". Supported fields include: name, display_name, schema_title, create_time, and update_time. Time fields, such as create_time and update_time, require values specified in RFC-3339 format. For example: create_time = "2020-11-19T11:30:00-04:00".
  • Metadata field: To filter on metadata fields use traversal operation as follows: metadata.<field_name>.<type_value>. For example: metadata.field_1.number_value = 10.0. In case the field name contains special characters (such as colon), one can embed it inside double quotes. For example: metadata."field:1".number_value = 10.0
  • Parent Child filtering: To filter Contexts based on parent-child relationship use the HAS operator as follows:

parent_contexts: "projects/<project_number>/locations/<location>/metadataStores/<metadatastore_name>/contexts/<context_id>"
child_contexts: "projects/<project_number>/locations/<location>/metadataStores/<metadatastore_name>/contexts/<context_id>"

Each of the above supported filters can be combined using logical operators (AND & OR). The maximum nested expression depth allowed is 5.

For example: display_name = "test" AND metadata.field1.bool_value = true.

order_by

string

How the list of messages is ordered. Specify the values to order by and an ordering operation. The default sorting order is ascending. To specify descending order for a field, append a " desc" suffix; for example: "foo desc, bar". Subfields are specified with a . character, such as foo.bar. See https://google.aip.dev/132#ordering for more details.

ListContextsResponse

Response message for MetadataService.ListContexts.

Fields
contexts[]

Context

The Contexts retrieved from the MetadataStore.

next_page_token

string

A token, which can be sent as ListContextsRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages.

ListCustomJobsRequest

Request message for JobService.ListCustomJobs.

Fields
parent

string

Required. The resource name of the Location to list the CustomJobs from. Format: projects/{project}/locations/{location}

filter

string

The standard list filter.

Supported fields:

  • display_name supports =, != comparisons, and : wildcard.
  • state supports =, != comparisons.
  • create_time supports =, !=, <, <=, >, >= comparisons. create_time must be in RFC 3339 format.
  • labels supports general map functions, that is: labels.key=value - key:value equality; labels.key:* - key existence

Some examples of using the filter are:

  • state="JOB_STATE_SUCCEEDED" AND display_name:"my_job_*"
  • state!="JOB_STATE_FAILED" OR display_name="my_job"
  • NOT display_name="my_job"
  • create_time>"2021-05-18T00:00:00Z"
  • labels.keyA=valueA
  • labels.keyB:*
page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via ListCustomJobsResponse.next_page_token of the previous JobService.ListCustomJobs call.

read_mask

FieldMask

Mask specifying which fields to read.

ListCustomJobsResponse

Response message for JobService.ListCustomJobs

Fields
custom_jobs[]

CustomJob

List of CustomJobs in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListCustomJobsRequest.page_token to obtain that page.

ListDataItemsRequest

Request message for DatasetService.ListDataItems.

Fields
parent

string

Required. The resource name of the Dataset to list DataItems from. Format: projects/{project}/locations/{location}/datasets/{dataset}

filter

string

The standard list filter.

page_size

int32

The standard list page size.

page_token

string

The standard list page token.

read_mask

FieldMask

Mask specifying which fields to read.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.

ListDataItemsResponse

Response message for DatasetService.ListDataItems.

Fields
data_items[]

DataItem

A list of DataItems that matches the specified filter in the request.

next_page_token

string

The standard List next-page token.

ListDatasetVersionsRequest

Request message for DatasetService.ListDatasetVersions.

Fields
parent

string

Required. The resource name of the Dataset to list DatasetVersions from. Format: projects/{project}/locations/{location}/datasets/{dataset}

filter

string

Optional. The standard list filter.

page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token.

read_mask

FieldMask

Optional. Mask specifying which fields to read.

order_by

string

Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.

ListDatasetVersionsResponse

Response message for DatasetService.ListDatasetVersions.

Fields
dataset_versions[]

DatasetVersion

A list of DatasetVersions that matches the specified filter in the request.

next_page_token

string

The standard List next-page token.

ListDatasetsRequest

Request message for DatasetService.ListDatasets.

Fields
parent

string

Required. The name of the Dataset's parent resource. Format: projects/{project}/locations/{location}

filter

string

An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

  • display_name: supports = and !=
  • metadata_schema_uri: supports = and !=
  • labels supports general map functions, that is:
    • labels.key=value - key:value equality
    • labels.key:* or labels:key - key existence
    • A key including a space must be quoted. labels."a key".

Some examples (a Python sketch follows this list):

  • displayName="myDisplayName"
  • labels.myKey="myValue"
page_size

int32

The standard list page size.

page_token

string

The standard list page token.

read_mask

FieldMask

Mask specifying which fields to read.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:

  • display_name
  • create_time
  • update_time

ListDatasetsResponse

Response message for DatasetService.ListDatasets.

Fields
datasets[]

Dataset

A list of Datasets that matches the specified filter in the request.

next_page_token

string

The standard List next-page token.

ListDeploymentResourcePoolsRequest

Request message for ListDeploymentResourcePools method.

Fields
parent

string

Required. The parent Location which owns this collection of DeploymentResourcePools. Format: projects/{project}/locations/{location}

page_size

int32

The maximum number of DeploymentResourcePools to return. The service may return fewer than this value.

page_token

string

A page token, received from a previous ListDeploymentResourcePools call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to ListDeploymentResourcePools must match the call that provided the page token.

ListDeploymentResourcePoolsResponse

Response message for ListDeploymentResourcePools method.

Fields
deployment_resource_pools[]

DeploymentResourcePool

The DeploymentResourcePools from the specified location.

next_page_token

string

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListEndpointsRequest

Request message for EndpointService.ListEndpoints.

Fields
parent

string

Required. The resource name of the Location from which to list the Endpoints. Format: projects/{project}/locations/{location}

filter

string

Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

  • endpoint supports = and !=. endpoint represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name.
  • display_name supports = and !=.
  • labels supports general map functions, that is:
    • labels.key=value - key:value equality
    • labels.key:* or labels:key - key existence
    • A key including a space must be quoted. labels."a key".
  • base_model_name only supports =.

Some examples (a Python sketch follows this list):

  • endpoint=1
  • displayName="myDisplayName"
  • labels.myKey="myValue"
  • baseModelName="text-bison"
page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token. Typically obtained via ListEndpointsResponse.next_page_token of the previous EndpointService.ListEndpoints call.

read_mask

FieldMask

Optional. Mask specifying which fields to read.

ListEndpointsResponse

Response message for EndpointService.ListEndpoints.

Fields
endpoints[]

Endpoint

List of Endpoints in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListEndpointsRequest.page_token to obtain that page.

ListEntityTypesRequest

Request message for FeaturestoreService.ListEntityTypes.

Fields
parent

string

Required. The resource name of the Featurestore to list EntityTypes. Format: projects/{project}/locations/{location}/featurestores/{featurestore}

filter

string

Lists the EntityTypes that match the filter expression. The following filters are supported:

  • create_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
  • update_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
  • labels: Supports key-value equality as well as key presence.

Examples:

  • create_time > \"2020-01-31T15:30:00.000000Z\" OR update_time > \"2020-01-31T15:30:00.000000Z\" --> EntityTypes created or updated after 2020-01-31T15:30:00.000000Z.
  • labels.active = yes AND labels.env = prod --> EntityTypes having both (active: yes) and (env: prod) labels.
  • labels.env: * --> Any EntityType which has a label with 'env' as the key.
page_size

int32

The maximum number of EntityTypes to return. The service may return fewer than this value. If unspecified, at most 1000 EntityTypes will be returned. The maximum value is 1000; any value greater than 1000 will be coerced to 1000.

page_token

string

A page token, received from a previous FeaturestoreService.ListEntityTypes call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to FeaturestoreService.ListEntityTypes must match the call that provided the page token.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.

Supported fields:

  • entity_type_id
  • create_time
  • update_time
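A minimal sketch combining the filter and order_by fields described above; generated Python client, with hypothetical project, location, and Featurestore ID.

```python
from google.cloud import aiplatform_v1beta1

PROJECT = "my-project"       # hypothetical
LOCATION = "us-central1"     # hypothetical
FEATURESTORE = "my_store"    # hypothetical Featurestore ID

client = aiplatform_v1beta1.FeaturestoreServiceClient(
    client_options={"api_endpoint": f"{LOCATION}-aiplatform.googleapis.com"}
)

request = aiplatform_v1beta1.ListEntityTypesRequest(
    parent=(
        f"projects/{PROJECT}/locations/{LOCATION}"
        f"/featurestores/{FEATURESTORE}"
    ),
    # RFC 3339 timestamp comparison plus label-key presence.
    filter='create_time > "2020-01-31T15:30:00.000000Z" AND labels.env: *',
    order_by="update_time desc",
)

for entity_type in client.list_entity_types(request=request):
    print(entity_type.name)
```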
read_mask

FieldMask

Mask specifying which fields to read.

ListEntityTypesResponse

Response message for FeaturestoreService.ListEntityTypes.

Fields
entity_types[]

EntityType

The EntityTypes matching the request.

next_page_token

string

A token, which can be sent as ListEntityTypesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListExecutionsRequest

Request message for MetadataService.ListExecutions.

Fields
parent

string

Required. The MetadataStore whose Executions should be listed. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

page_size

int32

The maximum number of Executions to return. The service may return fewer. Must be in range 1-1000, inclusive. Defaults to 100.

page_token

string

A page token, received from a previous MetadataService.ListExecutions call. Provide this to retrieve the subsequent page.

When paginating, all other provided parameters must match the call that provided the page token. (Otherwise the request will fail with an INVALID_ARGUMENT error.)

filter

string

Filter specifying the boolean condition for the Executions to satisfy in order to be part of the result set. The syntax to define filter query is based on https://google.aip.dev/160. Following are the supported set of filters:

  • Attribute filtering: For example: display_name = "test". Supported fields include: name, display_name, state, schema_title, create_time, and update_time. Time fields, such as create_time and update_time, require values specified in RFC-3339 format. For example: create_time = "2020-11-19T11:30:00-04:00".
  • Metadata field: To filter on metadata fields, use the traversal operation metadata.<field_name>.<type_value>. For example: metadata.field_1.number_value = 10.0. If the field name contains special characters (such as a colon), wrap it in double quotes. For example: metadata."field:1".number_value = 10.0
  • Context based filtering: To filter Executions based on the contexts to which they belong use the function operator with the full resource name: in_context(<context-name>). For example: in_context("projects/<project_number>/locations/<location>/metadataStores/<metadatastore_name>/contexts/<context-id>")

Each of the above supported filters can be combined using logical operators (AND & OR). The maximum nested expression depth allowed is 5.

For example: display_name = "test" AND metadata.field1.bool_value = true.

order_by

string

How the list of messages is ordered. Specify the values to order by and an ordering operation. The default sorting order is ascending. To specify descending order for a field, append a " desc" suffix; for example: "foo desc, bar". Subfields are specified with a . character, such as foo.bar. See https://google.aip.dev/132#ordering for more details.

ListExecutionsResponse

Response message for MetadataService.ListExecutions.

Fields
executions[]

Execution

The Executions retrieved from the MetadataStore.

next_page_token

string

A token, which can be sent as ListExecutionsRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages.

ListExtensionsRequest

Request message for ExtensionRegistryService.ListExtensions.

Fields
parent

string

Required. The resource name of the Location to list the Extensions from. Format: projects/{project}/locations/{location}

filter

string

Optional. The standard list filter. Supported fields:

  • display_name
  • create_time
  • update_time

More detail in AIP-160.

page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token.

order_by

string

Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:

  • display_name
  • create_time
  • update_time

Example: display_name, create_time desc.

ListExtensionsResponse

Response message for ExtensionRegistryService.ListExtensions

Fields
extensions[]

Extension

List of Extension in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListExtensionsRequest.page_token to obtain that page.

ListFeatureGroupsRequest

Request message for FeatureRegistryService.ListFeatureGroups.

Fields
parent

string

Required. The resource name of the Location to list FeatureGroups. Format: projects/{project}/locations/{location}

filter

string

Lists the FeatureGroups that match the filter expression. The following fields are supported:

  • create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • update_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • labels: Supports key-value equality and key presence.

Examples:

  • create_time > "2020-01-01" OR update_time > "2020-01-01" FeatureGroups created or updated after 2020-01-01.
  • labels.env = "prod" FeatureGroups with label "env" set to "prod".
page_size

int32

The maximum number of FeatureGroups to return. The service may return fewer than this value. If unspecified, at most 100 FeatureGroups will be returned. The maximum value is 100; any value greater than 100 will be coerced to 100.

page_token

string

A page token, received from a previous FeatureRegistryService.ListFeatureGroups call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to FeatureRegistryService.ListFeatureGroups must match the call that provided the page token.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported Fields:

  • create_time
  • update_time

ListFeatureGroupsResponse

Response message for FeatureRegistryService.ListFeatureGroups.

Fields
feature_groups[]

FeatureGroup

The FeatureGroups matching the request.

next_page_token

string

A token, which can be sent as ListFeatureGroupsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListFeatureMonitorJobsRequest

Request message for FeatureRegistryService.ListFeatureMonitorJobs.

Fields
parent

string

Required. The resource name of the FeatureMonitor to list FeatureMonitorJobs. Format: projects/{project}/locations/{location}/featureGroups/{feature_group}/featureMonitors/{feature_monitor}

filter

string

Optional. Lists the FeatureMonitorJobs that match the filter expression. The following fields are supported:

  • create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.

Examples:

  • create_time > "2020-01-01" FeatureMonitorJobs created after 2020-01-01.
page_size

int32

Optional. The maximum number of FeatureMonitorJobs to return. The service may return fewer than this value. If unspecified, at most 100 FeatureMonitorJobs will be returned. The maximum value is 100; any value greater than 100 will be coerced to 100.

page_token

string

Optional. A page token, received from a previous FeatureRegistryService.ListFeatureMonitorJobs call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to FeatureRegistryService.ListFeatureMonitorJobs must match the call that provided the page token.

order_by

string

Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported Fields:

  • create_time

ListFeatureMonitorJobsResponse

Response message for FeatureRegistryService.ListFeatureMonitorJobs.

Fields
feature_monitor_jobs[]

FeatureMonitorJob

The FeatureMonitorJobs matching the request.

next_page_token

string

A token, which can be sent as ListFeatureMonitorJobsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListFeatureMonitorsRequest

Request message for FeatureRegistryService.ListFeatureMonitors.

Fields
parent

string

Required. The resource name of the FeatureGroup to list FeatureMonitors. Format: projects/{project}/locations/{location}/featureGroups/{featureGroup}

filter

string

Optional. Lists the FeatureMonitors that match the filter expression. The following fields are supported:

  • create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • update_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • labels: Supports key-value equality and key presence.

Examples:

  • create_time > "2020-01-01" OR update_time > "2020-01-01" FeatureMonitors created or updated after 2020-01-01.
  • labels.env = "prod" FeatureGroups with label "env" set to "prod".
page_size

int32

Optional. The maximum number of FeatureMonitors to return. The service may return fewer than this value. If unspecified, at most 100 FeatureMonitors will be returned. The maximum value is 100; any value greater than 100 will be coerced to 100.

page_token

string

Optional. A page token, received from a previous FeatureRegistryService.ListFeatureMonitors call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to FeatureRegistryService.ListFeatureMonitors must match the call that provided the page token.

order_by

string

Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported Fields:

  • create_time
  • update_time

ListFeatureMonitorsResponse

Response message for FeatureRegistryService.ListFeatureMonitors.

Fields
feature_monitors[]

FeatureMonitor

The FeatureMonitors matching the request.

next_page_token

string

A token, which can be sent as ListFeatureMonitorsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListFeatureOnlineStoresRequest

Request message for FeatureOnlineStoreAdminService.ListFeatureOnlineStores.

Fields
parent

string

Required. The resource name of the Location to list FeatureOnlineStores. Format: projects/{project}/locations/{location}

filter

string

Lists the FeatureOnlineStores that match the filter expression. The following fields are supported:

  • create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • update_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • labels: Supports key-value equality and key presence.

Examples:

  • create_time > "2020-01-01" OR update_time > "2020-01-01" FeatureOnlineStores created or updated after 2020-01-01.
  • labels.env = "prod" FeatureOnlineStores with label "env" set to "prod".
page_size

int32

The maximum number of FeatureOnlineStores to return. The service may return fewer than this value. If unspecified, at most 100 FeatureOnlineStores will be returned. The maximum value is 100; any value greater than 100 will be coerced to 100.

page_token

string

A page token, received from a previous FeatureOnlineStoreAdminService.ListFeatureOnlineStores call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to FeatureOnlineStoreAdminService.ListFeatureOnlineStores must match the call that provided the page token.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported Fields:

  • create_time
  • update_time

ListFeatureOnlineStoresResponse

Response message for FeatureOnlineStoreAdminService.ListFeatureOnlineStores.

Fields
feature_online_stores[]

FeatureOnlineStore

The FeatureOnlineStores matching the request.

next_page_token

string

A token, which can be sent as ListFeatureOnlineStoresRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListFeatureViewSyncsRequest

Request message for FeatureOnlineStoreAdminService.ListFeatureViewSyncs.

Fields
parent

string

Required. The resource name of the FeatureView to list FeatureViewSyncs. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}

filter

string

Lists the FeatureViewSyncs that match the filter expression. The following filters are supported:

  • create_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.

Examples:

  • create_time > \"2020-01-31T15:30:00.000000Z\" --> FeatureViewSyncs created after 2020-01-31T15:30:00.000000Z.
page_size

int32

The maximum number of FeatureViewSyncs to return. The service may return fewer than this value. If unspecified, at most 1000 FeatureViewSyncs will be returned. The maximum value is 1000; any value greater than 1000 will be coerced to 1000.

page_token

string

A page token, received from a previous FeatureOnlineStoreAdminService.ListFeatureViewSyncs call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to FeatureOnlineStoreAdminService.ListFeatureViewSyncs must match the call that provided the page token.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.

Supported fields:

  • create_time

ListFeatureViewSyncsResponse

Response message for FeatureOnlineStoreAdminService.ListFeatureViewSyncs.

Fields
feature_view_syncs[]

FeatureViewSync

The FeatureViewSyncs matching the request.

next_page_token

string

A token, which can be sent as ListFeatureViewSyncsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListFeatureViewsRequest

Request message for FeatureOnlineStoreAdminService.ListFeatureViews.

Fields
parent

string

Required. The resource name of the FeatureOnlineStore to list FeatureViews. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}

filter

string

Lists the FeatureViews that match the filter expression. The following filters are supported:

  • create_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
  • update_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
  • labels: Supports key-value equality as well as key presence.

Examples (a Python sketch follows this list):

  • create_time > \"2020-01-31T15:30:00.000000Z\" OR update_time > \"2020-01-31T15:30:00.000000Z\" --> FeatureViews created or updated after 2020-01-31T15:30:00.000000Z.
  • labels.active = yes AND labels.env = prod --> FeatureViews having both (active: yes) and (env: prod) labels.
  • labels.env: * --> Any FeatureView which has a label with 'env' as the key.
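A minimal sketch; generated Python client, with hypothetical project, location, and FeatureOnlineStore ID.

```python
from google.cloud import aiplatform_v1beta1

PROJECT = "my-project"      # hypothetical
LOCATION = "us-central1"    # hypothetical
ONLINE_STORE = "my_store"   # hypothetical FeatureOnlineStore ID

client = aiplatform_v1beta1.FeatureOnlineStoreAdminServiceClient(
    client_options={"api_endpoint": f"{LOCATION}-aiplatform.googleapis.com"}
)

request = aiplatform_v1beta1.ListFeatureViewsRequest(
    parent=(
        f"projects/{PROJECT}/locations/{LOCATION}"
        f"/featureOnlineStores/{ONLINE_STORE}"
    ),
    filter="labels.env: *",      # any FeatureView carrying an "env" label
    order_by="feature_view_id",
)

for view in client.list_feature_views(request=request):
    print(view.name)
```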
page_size

int32

The maximum number of FeatureViews to return. The service may return fewer than this value. If unspecified, at most 1000 FeatureViews will be returned. The maximum value is 1000; any value greater than 1000 will be coerced to 1000.

page_token

string

A page token, received from a previous FeatureOnlineStoreAdminService.ListFeatureViews call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to FeatureOnlineStoreAdminService.ListFeatureViews must match the call that provided the page token.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.

Supported fields:

  • feature_view_id
  • create_time
  • update_time

ListFeatureViewsResponse

Response message for FeatureOnlineStoreAdminService.ListFeatureViews.

Fields
feature_views[]

FeatureView

The FeatureViews matching the request.

next_page_token

string

A token, which can be sent as ListFeatureViewsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListFeaturesRequest

Request message for FeaturestoreService.ListFeatures and FeatureRegistryService.ListFeatures.

Fields
parent

string

Required. The resource name of the Location to list Features. Format for entity_type as parent: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type} Format for feature_group as parent: projects/{project}/locations/{location}/featureGroups/{feature_group}

filter

string

Lists the Features that match the filter expression. The following filters are supported:

  • value_type: Supports = and != comparisons.
  • create_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
  • update_time: Supports =, !=, <, >, >=, and <= comparisons. Values must be in RFC 3339 format.
  • labels: Supports key-value equality as well as key presence.

Examples:

  • value_type = DOUBLE --> Features whose type is DOUBLE.
  • create_time > \"2020-01-31T15:30:00.000000Z\" OR update_time > \"2020-01-31T15:30:00.000000Z\" --> EntityTypes created or updated after 2020-01-31T15:30:00.000000Z.
  • labels.active = yes AND labels.env = prod --> Features having both (active: yes) and (env: prod) labels.
  • labels.env: * --> Any Feature which has a label with 'env' as the key.
page_size

int32

The maximum number of Features to return. The service may return fewer than this value. If unspecified, at most 1000 Features will be returned. The maximum value is 1000; any value greater than 1000 will be coerced to 1000.

page_token

string

A page token, received from a previous FeaturestoreService.ListFeatures call or FeatureRegistryService.ListFeatures call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to FeaturestoreService.ListFeatures or FeatureRegistryService.ListFeatures must match the call that provided the page token.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:

  • feature_id
  • value_type (Not supported for FeatureRegistry Feature)
  • create_time
  • update_time
read_mask

FieldMask

Mask specifying which fields to read.

latest_stats_count

int32

Only applicable for Vertex AI Feature Store (Legacy). If set, returns the most recent ListFeaturesRequest.latest_stats_count stats for each Feature in the response. Valid values are in the range [0, 10]. If fewer stats exist than ListFeaturesRequest.latest_stats_count, all existing stats are returned.
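A minimal sketch that sets latest_stats_count alongside a value_type filter; generated Python client, all resource IDs hypothetical.

```python
from google.cloud import aiplatform_v1beta1

PROJECT = "my-project"       # hypothetical
LOCATION = "us-central1"     # hypothetical
FEATURESTORE = "my_store"    # hypothetical Featurestore ID
ENTITY_TYPE = "users"        # hypothetical EntityType ID

client = aiplatform_v1beta1.FeaturestoreServiceClient(
    client_options={"api_endpoint": f"{LOCATION}-aiplatform.googleapis.com"}
)

request = aiplatform_v1beta1.ListFeaturesRequest(
    parent=(
        f"projects/{PROJECT}/locations/{LOCATION}"
        f"/featurestores/{FEATURESTORE}/entityTypes/{ENTITY_TYPE}"
    ),
    filter="value_type = DOUBLE",
    latest_stats_count=3,  # Feature Store (Legacy) only: last 3 stats entries
)

for feature in client.list_features(request=request):
    print(feature.name)
```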

ListFeaturesResponse

Response message for FeaturestoreService.ListFeatures and FeatureRegistryService.ListFeatures.

Fields
features[]

Feature

The Features matching the request.

next_page_token

string

A token, which can be sent as ListFeaturesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListFeaturestoresRequest

Request message for FeaturestoreService.ListFeaturestores.

Fields
parent

string

Required. The resource name of the Location to list Featurestores. Format: projects/{project}/locations/{location}

filter

string

Lists the featurestores that match the filter expression. The following fields are supported:

  • create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • update_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • online_serving_config.fixed_node_count: Supports =, !=, <, >, <=, and >= comparisons.
  • labels: Supports key-value equality and key presence.

Examples:

  • create_time > "2020-01-01" OR update_time > "2020-01-01" Featurestores created or updated after 2020-01-01.
  • labels.env = "prod" Featurestores with label "env" set to "prod".
page_size

int32

The maximum number of Featurestores to return. The service may return fewer than this value. If unspecified, at most 100 Featurestores will be returned. The maximum value is 100; any value greater than 100 will be coerced to 100.

page_token

string

A page token, received from a previous FeaturestoreService.ListFeaturestores call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to FeaturestoreService.ListFeaturestores must match the call that provided the page token.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported Fields:

  • create_time
  • update_time
  • online_serving_config.fixed_node_count
read_mask

FieldMask

Mask specifying which fields to read.

ListFeaturestoresResponse

Response message for FeaturestoreService.ListFeaturestores.

Fields
featurestores[]

Featurestore

The Featurestores matching the request.

next_page_token

string

A token, which can be sent as ListFeaturestoresRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListHyperparameterTuningJobsRequest

Request message for JobService.ListHyperparameterTuningJobs.

Fields
parent

string

Required. The resource name of the Location to list the HyperparameterTuningJobs from. Format: projects/{project}/locations/{location}

filter

string

The standard list filter.

Supported fields:

  • display_name supports =, != comparisons, and : wildcard.
  • state supports =, != comparisons.
  • create_time supports =, !=, <, <=, >, >= comparisons. create_time must be in RFC 3339 format.
  • labels supports general map functions, that is: labels.key=value - key:value equality; labels.key:* - key existence

Some examples of using the filter are:

  • state="JOB_STATE_SUCCEEDED" AND display_name:"my_job_*"
  • state!="JOB_STATE_FAILED" OR display_name="my_job"
  • NOT display_name="my_job"
  • create_time>"2021-05-18T00:00:00Z"
  • labels.keyA=valueA
  • labels.keyB:*
page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via ListHyperparameterTuningJobsResponse.next_page_token of the previous JobService.ListHyperparameterTuningJobs call.

read_mask

FieldMask

Mask specifying which fields to read.

ListHyperparameterTuningJobsResponse

Response message for JobService.ListHyperparameterTuningJobs

Fields
hyperparameter_tuning_jobs[]

HyperparameterTuningJob

List of HyperparameterTuningJobs in the requested page. HyperparameterTuningJob.trials of the jobs will not be returned.

next_page_token

string

A token to retrieve the next page of results. Pass to ListHyperparameterTuningJobsRequest.page_token to obtain that page.

ListIndexEndpointsRequest

Request message for IndexEndpointService.ListIndexEndpoints.

Fields
parent

string

Required. The resource name of the Location from which to list the IndexEndpoints. Format: projects/{project}/locations/{location}

filter

string

Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

  • index_endpoint supports = and !=. index_endpoint represents the IndexEndpoint ID, i.e. the last segment of the IndexEndpoint's resource name.
  • display_name supports =, != and regex() (uses re2 syntax)
  • labels supports general map functions, that is: labels.key=value - key:value equality; labels.key:* or labels:key - key existence. A key including a space must be quoted. labels."a key".

Some examples (a Python sketch follows this list):

  • index_endpoint="1"
  • display_name="myDisplayName"
  • regex(display_name, "^A") --> the display name starts with an A
  • labels.myKey="myValue"

page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token. Typically obtained via ListIndexEndpointsResponse.next_page_token of the previous IndexEndpointService.ListIndexEndpoints call.

read_mask

FieldMask

Optional. Mask specifying which fields to read.

ListIndexEndpointsResponse

Response message for IndexEndpointService.ListIndexEndpoints.

Fields
index_endpoints[]

IndexEndpoint

List of IndexEndpoints in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListIndexEndpointsRequest.page_token to obtain that page.

ListIndexesRequest

Request message for IndexService.ListIndexes.

Fields
parent

string

Required. The resource name of the Location from which to list the Indexes. Format: projects/{project}/locations/{location}

filter

string

The standard list filter.

page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via ListIndexesResponse.next_page_token of the previous IndexService.ListIndexes call.

read_mask

FieldMask

Mask specifying which fields to read.

ListIndexesResponse

Response message for IndexService.ListIndexes.

Fields
indexes[]

Index

List of indexes in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListIndexesRequest.page_token to obtain that page.

ListMetadataSchemasRequest

Request message for MetadataService.ListMetadataSchemas.

Fields
parent

string

Required. The MetadataStore whose MetadataSchemas should be listed. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

page_size

int32

The maximum number of MetadataSchemas to return. The service may return fewer. Must be in range 1-1000, inclusive. Defaults to 100.

page_token

string

A page token, received from a previous MetadataService.ListMetadataSchemas call. Provide this to retrieve the next page.

When paginating, all other provided parameters must match the call that provided the page token. (Otherwise the request will fail with INVALID_ARGUMENT error.)

filter

string

A query to filter available MetadataSchemas for matching results.

ListMetadataSchemasResponse

Response message for MetadataService.ListMetadataSchemas.

Fields
metadata_schemas[]

MetadataSchema

The MetadataSchemas found for the MetadataStore.

next_page_token

string

A token, which can be sent as ListMetadataSchemasRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages.

ListMetadataStoresRequest

Request message for MetadataService.ListMetadataStores.

Fields
parent

string

Required. The Location whose MetadataStores should be listed. Format: projects/{project}/locations/{location}

page_size

int32

The maximum number of Metadata Stores to return. The service may return fewer. Must be in range 1-1000, inclusive. Defaults to 100.

page_token

string

A page token, received from a previous MetadataService.ListMetadataStores call. Provide this to retrieve the subsequent page.

When paginating, all other provided parameters must match the call that provided the page token. (Otherwise the request will fail with INVALID_ARGUMENT error.)

ListMetadataStoresResponse

Response message for MetadataService.ListMetadataStores.

Fields
metadata_stores[]

MetadataStore

The MetadataStores found for the Location.

next_page_token

string

A token, which can be sent as ListMetadataStoresRequest.page_token to retrieve the next page. If this field is not populated, there are no subsequent pages.

ListModelDeploymentMonitoringJobsRequest

Request message for JobService.ListModelDeploymentMonitoringJobs.

Fields
parent

string

Required. The parent of the ModelDeploymentMonitoringJob. Format: projects/{project}/locations/{location}

filter

string

The standard list filter.

Supported fields:

  • display_name supports =, != comparisons, and : wildcard.
  • state supports =, != comparisons.
  • create_time supports =, !=, <, <=, >, >= comparisons. create_time must be in RFC 3339 format.
  • labels supports general map functions, that is: labels.key=value - key:value equality; labels.key:* - key existence

Some examples of using the filter are:

  • state="JOB_STATE_SUCCEEDED" AND display_name:"my_job_*"
  • state!="JOB_STATE_FAILED" OR display_name="my_job"
  • NOT display_name="my_job"
  • create_time>"2021-05-18T00:00:00Z"
  • labels.keyA=valueA
  • labels.keyB:*
page_size

int32

The standard list page size.

page_token

string

The standard list page token.

read_mask

FieldMask

Mask specifying which fields to read.

ListModelDeploymentMonitoringJobsResponse

Response message for JobService.ListModelDeploymentMonitoringJobs.

Fields
model_deployment_monitoring_jobs[]

ModelDeploymentMonitoringJob

A list of ModelDeploymentMonitoringJobs that matches the specified filter in the request.

next_page_token

string

The standard List next-page token.

ListModelEvaluationSlicesRequest

Request message for ModelService.ListModelEvaluationSlices.

Fields
parent

string

Required. The resource name of the ModelEvaluation to list the ModelEvaluationSlices from. Format: projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}

filter

string

The standard list filter.

  • slice.dimension: supports =.
page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via ListModelEvaluationSlicesResponse.next_page_token of the previous ModelService.ListModelEvaluationSlices call.

read_mask

FieldMask

Mask specifying which fields to read.

ListModelEvaluationSlicesResponse

Response message for ModelService.ListModelEvaluationSlices.

Fields
model_evaluation_slices[]

ModelEvaluationSlice

List of ModelEvaluationSlices in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListModelEvaluationSlicesRequest.page_token to obtain that page.

ListModelEvaluationsRequest

Request message for ModelService.ListModelEvaluations.

Fields
parent

string

Required. The resource name of the Model to list the ModelEvaluations from. Format: projects/{project}/locations/{location}/models/{model}

filter

string

The standard list filter.

page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via ListModelEvaluationsResponse.next_page_token of the previous ModelService.ListModelEvaluations call.

read_mask

FieldMask

Mask specifying which fields to read.

ListModelEvaluationsResponse

Response message for ModelService.ListModelEvaluations.

Fields
model_evaluations[]

ModelEvaluation

List of ModelEvaluations in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListModelEvaluationsRequest.page_token to obtain that page.

ListModelMonitoringJobsRequest

Request message for ModelMonitoringService.ListModelMonitoringJobs.

Fields
parent

string

Required. The parent of the ModelMonitoringJob. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}

filter

string

The standard list filter. More detail in AIP-160.

page_size

int32

The standard list page size.

page_token

string

The standard list page token.

read_mask

FieldMask

Mask specifying which fields to read.

ListModelMonitoringJobsResponse

Response message for ModelMonitoringService.ListModelMonitoringJobs.

Fields
model_monitoring_jobs[]

ModelMonitoringJob

A list of ModelMonitoringJobs that matches the specified filter in the request.

next_page_token

string

The standard List next-page token.

ListModelMonitorsRequest

Request message for ModelMonitoringService.ListModelMonitors.

Fields
parent

string

Required. The resource name of the Location to list the ModelMonitors from. Format: projects/{project}/locations/{location}

filter

string

The standard list filter. More detail in AIP-160.

page_size

int32

The standard list page size.

page_token

string

The standard list page token.

read_mask

FieldMask

Mask specifying which fields to read.

ListModelMonitorsResponse

Response message for ModelMonitoringService.ListModelMonitors

Fields
model_monitors[]

ModelMonitor

List of ModelMonitor in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListModelMonitorsRequest.page_token to obtain that page.

ListModelVersionsRequest

Request message for ModelService.ListModelVersions.

Fields
name

string

Required. The name of the model to list versions for.

page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via next_page_token of the previous ListModelVersions call.

filter

string

An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

  • labels supports general map functions, that is:
    • labels.key=value - key:value equality
    • labels.key:* or labels:key - key existence
    • A key including a space must be quoted. labels."a key".

Some examples:

  • labels.myKey="myValue"
read_mask

FieldMask

Mask specifying which fields to read.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:

  • create_time
  • update_time

Example: update_time asc, create_time desc.
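A minimal sketch; note that this RPC is keyed by the Model's resource name rather than a parent Location. Generated Python client, hypothetical IDs.

```python
from google.cloud import aiplatform_v1beta1

PROJECT = "my-project"    # hypothetical
LOCATION = "us-central1"  # hypothetical
MODEL = "1234567890"      # hypothetical Model ID

client = aiplatform_v1beta1.ModelServiceClient(
    client_options={"api_endpoint": f"{LOCATION}-aiplatform.googleapis.com"}
)

request = aiplatform_v1beta1.ListModelVersionsRequest(
    name=f"projects/{PROJECT}/locations/{LOCATION}/models/{MODEL}",
    order_by="update_time asc, create_time desc",
)

for version in client.list_model_versions(request=request):
    print(version.name, version.version_id)
```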

ListModelVersionsResponse

Response message for ModelService.ListModelVersions

Fields
models[]

Model

List of Model versions in the requested page. In the returned Model name field, the version ID is included instead of the revision tag.

next_page_token

string

A token to retrieve the next page of results. Pass to ListModelVersionsRequest.page_token to obtain that page.

ListModelsRequest

Request message for ModelService.ListModels.

Fields
parent

string

Required. The resource name of the Location to list the Models from. Format: projects/{project}/locations/{location}

filter

string

An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

  • model supports = and !=. model represents the Model ID, i.e. the last segment of the Model's resource name.
  • display_name supports = and !=
  • labels supports general map functions, that is:
    • labels.key=value - key:value equality
    • labels.key:* or labels:key - key existence
    • A key including a space must be quoted. labels."a key".
  • base_model_name only supports =

Some examples:

  • model=1234
  • displayName="myDisplayName"
  • labels.myKey="myValue"
  • baseModelName="text-bison"
page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via ListModelsResponse.next_page_token of the previous ModelService.ListModels call.

read_mask

FieldMask

Mask specifying which fields to read.

ListModelsResponse

Response message for ModelService.ListModels

Fields
models[]

Model

List of Models in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListModelsRequest.page_token to obtain that page.

ListNotebookExecutionJobsRequest

Request message for NotebookService.ListNotebookExecutionJobs.

Fields
parent

string

Required. The resource name of the Location from which to list the NotebookExecutionJobs. Format: projects/{project}/locations/{location}

filter

string

Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

  • notebookExecutionJob supports = and !=. notebookExecutionJob represents the NotebookExecutionJob ID.
  • displayName supports = and != and regex.
  • schedule supports = and != and regex.

Some examples (a Python sketch follows this list):

  • notebookExecutionJob="123"
  • notebookExecutionJob="my-execution-job"
  • displayName="myDisplayName" and displayName=~"myDisplayNameRegex"
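A minimal sketch of the regex-style displayName match, assuming the v1beta1 Python client exposes NotebookServiceClient.list_notebook_execution_jobs; project and location are hypothetical.

```python
from google.cloud import aiplatform_v1beta1

PROJECT = "my-project"    # hypothetical
LOCATION = "us-central1"  # hypothetical

client = aiplatform_v1beta1.NotebookServiceClient(
    client_options={"api_endpoint": f"{LOCATION}-aiplatform.googleapis.com"}
)

request = aiplatform_v1beta1.ListNotebookExecutionJobsRequest(
    parent=f"projects/{PROJECT}/locations/{LOCATION}",
    filter='displayName=~"myDisplayNameRegex"',  # regex match on display name
)

for job in client.list_notebook_execution_jobs(request=request):
    print(job.name, job.display_name)
```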

page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token. Typically obtained via ListNotebookExecutionJobsResponse.next_page_token of the previous NotebookService.ListNotebookExecutionJobs call.

order_by

string

Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:

  • display_name
  • create_time
  • update_time

Example: display_name, create_time desc.

view

NotebookExecutionJobView

Optional. The NotebookExecutionJob view. Defaults to BASIC.

ListNotebookExecutionJobsResponse

Response message for NotebookService.ListNotebookExecutionJobs.

Fields
notebook_execution_jobs[]

NotebookExecutionJob

List of NotebookExecutionJobs in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListNotebookExecutionJobsRequest.page_token to obtain that page.

ListNotebookRuntimeTemplatesRequest

Request message for NotebookService.ListNotebookRuntimeTemplates.

Fields
parent

string

Required. The resource name of the Location from which to list the NotebookRuntimeTemplates. Format: projects/{project}/locations/{location}

filter

string

Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

  • notebookRuntimeTemplate supports = and !=. notebookRuntimeTemplate represents the NotebookRuntimeTemplate ID, i.e. the last segment of the NotebookRuntimeTemplate's resource name.
  • display_name supports = and !=
  • labels supports general map functions, that is:
    • labels.key=value - key:value equality
    • labels.key:* or labels:key - key existence
    • A key including a space must be quoted. labels."a key".
  • notebookRuntimeType supports = and !=. notebookRuntimeType enum: [USER_DEFINED, ONE_CLICK].
  • machineType supports = and !=.
  • acceleratorType supports = and !=.

Some examples:

  • notebookRuntimeTemplate=notebookRuntimeTemplate123
  • displayName="myDisplayName"
  • labels.myKey="myValue"
  • notebookRuntimeType=USER_DEFINED
  • machineType=e2-standard-4
  • acceleratorType=NVIDIA_TESLA_T4
page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token. Typically obtained via ListNotebookRuntimeTemplatesResponse.next_page_token of the previous NotebookService.ListNotebookRuntimeTemplates call.

read_mask

FieldMask

Optional. Mask specifying which fields to read.

order_by

string

Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:

  • display_name
  • create_time
  • update_time

Example: display_name, create_time desc.

ListNotebookRuntimeTemplatesResponse

Response message for NotebookService.ListNotebookRuntimeTemplates.

Fields
notebook_runtime_templates[]

NotebookRuntimeTemplate

List of NotebookRuntimeTemplates in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListNotebookRuntimeTemplatesRequest.page_token to obtain that page.

ListNotebookRuntimesRequest

Request message for NotebookService.ListNotebookRuntimes.

Fields
parent

string

Required. The resource name of the Location from which to list the NotebookRuntimes. Format: projects/{project}/locations/{location}

filter

string

Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

  • notebookRuntime supports = and !=. notebookRuntime represents the NotebookRuntime ID, i.e. the last segment of the NotebookRuntime's resource name.
  • displayName supports = and != and regex.
  • notebookRuntimeTemplate supports = and !=. notebookRuntimeTemplate represents the NotebookRuntimeTemplate ID, i.e. the last segment of the NotebookRuntimeTemplate's resource name.
  • healthState supports = and !=. healthState enum: [HEALTHY, UNHEALTHY, HEALTH_STATE_UNSPECIFIED].
  • runtimeState supports = and !=. runtimeState enum: [RUNTIME_STATE_UNSPECIFIED, RUNNING, BEING_STARTED, BEING_STOPPED, STOPPED, BEING_UPGRADED, ERROR, INVALID].
  • runtimeUser supports = and !=.
  • uiState supports = and != (this field is used by the UI only). uiState enum: [UI_RESOURCE_STATE_UNSPECIFIED, UI_RESOURCE_STATE_BEING_CREATED, UI_RESOURCE_STATE_ACTIVE, UI_RESOURCE_STATE_BEING_DELETED, UI_RESOURCE_STATE_CREATION_FAILED].
  • notebookRuntimeType supports = and !=. notebookRuntimeType enum: [USER_DEFINED, ONE_CLICK].
  • machineType supports = and !=.
  • acceleratorType supports = and !=.

Some examples:

  • notebookRuntime="notebookRuntime123"
  • displayName="myDisplayName" and displayName=~"myDisplayNameRegex"
  • notebookRuntimeTemplate="notebookRuntimeTemplate321"
  • healthState=HEALTHY
  • runtimeState=RUNNING
  • runtimeUser="test@google.com"
  • uiState=UI_RESOURCE_STATE_BEING_DELETED
  • notebookRuntimeType=USER_DEFINED
  • machineType=e2-standard-4
  • acceleratorType=NVIDIA_TESLA_T4
page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token. Typically obtained via ListNotebookRuntimesResponse.next_page_token of the previous NotebookService.ListNotebookRuntimes call.

read_mask

FieldMask

Optional. Mask specifying which fields to read.

order_by

string

Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields:

  • display_name
  • create_time
  • update_time

Example: display_name, create_time desc.

ListNotebookRuntimesResponse

Response message for NotebookService.ListNotebookRuntimes.

Fields
notebook_runtimes[]

NotebookRuntime

List of NotebookRuntimes in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListNotebookRuntimesRequest.page_token to obtain that page.

ListOptimalTrialsRequest

Request message for VizierService.ListOptimalTrials.

Fields
parent

string

Required. The name of the Study that the optimal Trial belongs to.

ListOptimalTrialsResponse

Response message for VizierService.ListOptimalTrials.

Fields
optimal_trials[]

Trial

The Pareto-optimal Trials for a multi-objective Study, or the optimal Trial for a single-objective Study. For the definition of Pareto optimality, see https://en.wikipedia.org/wiki/Pareto_efficiency.

ListPersistentResourcesRequest

Request message for PersistentResourceService.ListPersistentResources.

Fields
parent

string

Required. The resource name of the Location to list the PersistentResources from. Format: projects/{project}/locations/{location}

page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token. Typically obtained via ListPersistentResourcesResponse.next_page_token of the previous PersistentResourceService.ListPersistentResources call.

ListPersistentResourcesResponse

Response message for PersistentResourceService.ListPersistentResources

Fields
persistent_resources[]

PersistentResource

List of PersistentResources in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListPersistentResourcesRequest.page_token to obtain that page.

ListPipelineJobsRequest

Request message for PipelineService.ListPipelineJobs.

Fields
parent

string

Required. The resource name of the Location to list the PipelineJobs from. Format: projects/{project}/locations/{location}

filter

string

Lists the PipelineJobs that match the filter expression. The following fields are supported:

  • pipeline_name: Supports = and != comparisons.
  • display_name: Supports =, != comparisons, and : wildcard.
  • pipeline_job_user_id: Supports =, != comparisons, and : wildcard. For example, to check whether a pipeline's display_name contains step, use display_name:"*step*".
  • state: Supports = and != comparisons.
  • create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • update_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • end_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • labels: Supports key-value equality and key presence.
  • template_uri: Supports =, != comparisons, and : wildcard.
  • template_metadata.version: Supports =, != comparisons, and : wildcard.

Filter expressions can be combined together using logical operators (AND & OR). For example: pipeline_name="test" AND create_time>"2020-05-18T13:30:00Z".

The syntax to define filter expression is based on https://google.aip.dev/160.

Examples (a Python sketch follows this list):

  • create_time>"2021-05-18T00:00:00Z" OR update_time>"2020-05-18T00:00:00Z" PipelineJobs created or updated after 2020-05-18 00:00:00 UTC.
  • labels.env = "prod" PipelineJobs with label "env" set to "prod".
page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via ListPipelineJobsResponse.next_page_token of the previous PipelineService.ListPipelineJobs call.

order_by

string

A comma-separated list of fields to order by. The default sort order is ascending. Use "desc" after a field name for descending. Multiple order_by fields can be provided, e.g. "create_time desc, end_time" or "end_time, start_time, update_time". For example, "create_time desc, end_time" orders results by create time in descending order and, for jobs with the same create time, by end time in ascending order. If order_by is not specified, results are ordered by create time in descending order. Supported fields:

  • create_time
  • update_time
  • end_time
  • start_time
read_mask

FieldMask

Mask specifying which fields to read.

ListPipelineJobsResponse

Response message for PipelineService.ListPipelineJobs

Fields
pipeline_jobs[]

PipelineJob

List of PipelineJobs in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListPipelineJobsRequest.page_token to obtain that page.

ListPublisherModelsRequest

Request message for ModelGardenService.ListPublisherModels.

Fields
parent

string

Required. The name of the Publisher from which to list the PublisherModels. Format: publishers/{publisher}

filter

string

Optional. The standard list filter.

page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token. Typically obtained via ListPublisherModelsResponse.next_page_token of the previous ModelGardenService.ListPublisherModels call.

view

PublisherModelView

Optional. PublisherModel view specifying which fields to read.

order_by

string

Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.

language_code

string

Optional. The IETF BCP-47 language code representing the language in which the publisher models' text information should be written. If not set, defaults to English (en).

list_all_versions

bool

Optional. List all publisher model versions if the flag is set to true.

ListPublisherModelsResponse

Response message for ModelGardenService.ListPublisherModels.

Fields
publisher_models[]

PublisherModel

List of PublisherModels in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListPublisherModelsRequest.page_token to obtain that page.

ListRagCorporaRequest

Request message for VertexRagDataService.ListRagCorpora.

Fields
parent

string

Required. The resource name of the Location from which to list the RagCorpora. Format: projects/{project}/locations/{location}

page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token. Typically obtained via ListRagCorporaResponse.next_page_token of the previous VertexRagDataService.ListRagCorpora call.

ListRagCorporaResponse

Response message for VertexRagDataService.ListRagCorpora.

Fields
rag_corpora[]

RagCorpus

List of RagCorpora in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListRagCorporaRequest.page_token to obtain that page.
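
To make the next_page_token contract explicit, a sketch that pages through RagCorpora by hand instead of relying on the client's pager; resource names are placeholders:

from google.cloud import aiplatform_v1beta1 as aip

client = aip.VertexRagDataServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
parent = "projects/my-project/locations/us-central1"
page_token = ""
while True:
    response = client.list_rag_corpora(
        request=aip.ListRagCorporaRequest(
            parent=parent, page_size=100, page_token=page_token
        )
    )
    for corpus in response.rag_corpora:
        print(corpus.name)
    page_token = response.next_page_token
    if not page_token:  # an empty token means there are no further pages
        break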

ListRagFilesRequest

Request message for VertexRagDataService.ListRagFiles.

Fields
parent

string

Required. The resource name of the RagCorpus from which to list the RagFiles. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}

page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token. Typically obtained via ListRagFilesResponse.next_page_token of the previous VertexRagDataService.ListRagFiles call.

ListRagFilesResponse

Response message for VertexRagDataService.ListRagFiles.

Fields
rag_files[]

RagFile

List of RagFiles in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListRagFilesRequest.page_token to obtain that page.

ListReasoningEnginesRequest

Request message for ReasoningEngineService.ListReasoningEngines.

Fields
parent

string

Required. The resource name of the Location to list the ReasoningEngines from. Format: projects/{project}/locations/{location}

filter

string

Optional. The standard list filter. More detail in AIP-160.

page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token.

ListReasoningEnginesResponse

Response message for ReasoningEngineService.ListReasoningEngines

Fields
reasoning_engines[]

ReasoningEngine

List of ReasoningEngines in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListReasoningEnginesRequest.page_token to obtain that page.

ListSavedQueriesRequest

Request message for DatasetService.ListSavedQueries.

Fields
parent

string

Required. The resource name of the Dataset to list SavedQueries from. Format: projects/{project}/locations/{location}/datasets/{dataset}

filter

string

The standard list filter.

page_size

int32

The standard list page size.

page_token

string

The standard list page token.

read_mask

FieldMask

Mask specifying which fields to read.

order_by

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.

ListSavedQueriesResponse

Response message for DatasetService.ListSavedQueries.

Fields
saved_queries[]

SavedQuery

A list of SavedQueries that match the specified filter in the request.

next_page_token

string

The standard List next-page token.

ListSchedulesRequest

Request message for ScheduleService.ListSchedules.

Fields
parent

string

Required. The resource name of the Location to list the Schedules from. Format: projects/{project}/locations/{location}

filter

string

Lists the Schedules that match the filter expression. The following fields are supported:

  • display_name: Supports =, != comparisons, and : wildcard.
  • state: Supports = and != comparisons.
  • request: Supports an existence check on the request type (e.g. create_pipeline_job_request:* --> Schedule has create_pipeline_job_request).
  • create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • start_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.
  • end_time: Supports =, !=, <, >, <=, >= comparisons and :* existence check. Values must be in RFC 3339 format.
  • next_run_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.

Filter expressions can be combined using logical operators (NOT, AND, and OR). The filter expression syntax is based on https://google.aip.dev/160.

Examples:

  • state="ACTIVE" AND display_name:"my_schedule_*"
  • NOT display_name="my_schedule"
  • create_time>"2021-05-18T00:00:00Z"
  • end_time>"2021-05-18T00:00:00Z" OR NOT end_time:*
  • create_pipeline_job_request:*
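
A sketch applying one of these filters through the Python GAPIC client; the parent resource name is a placeholder:

from google.cloud import aiplatform_v1beta1 as aip

client = aip.ScheduleServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
# Active Schedules whose display name starts with "my_schedule_".
request = aip.ListSchedulesRequest(
    parent="projects/my-project/locations/us-central1",
    filter='state="ACTIVE" AND display_name:"my_schedule_*"',
)
for schedule in client.list_schedules(request=request):
    print(schedule.name, schedule.next_run_time)
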
page_size

int32

The standard list page size. Defaults to 100 if not specified.

page_token

string

The standard list page token. Typically obtained via ListSchedulesResponse.next_page_token of the previous ScheduleService.ListSchedules call.

order_by

string

A comma-separated list of fields to order by. The default sort order is ascending. Use "desc" after a field name for descending. You can provide multiple order_by fields.

For example, "create_time desc, end_time" orders results by create time in descending order, and schedules with the same create time are ordered by end time in ascending order.

If order_by is not specified, results are ordered by create_time in descending order.

Supported fields:

  • create_time
  • start_time
  • end_time
  • next_run_time

ListSchedulesResponse

Response message for ScheduleService.ListSchedules

Fields
schedules[]

Schedule

List of Schedules in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListSchedulesRequest.page_token to obtain that page.

ListSpecialistPoolsRequest

Request message for SpecialistPoolService.ListSpecialistPools.

Fields
parent

string

Required. The name of the SpecialistPool's parent resource. Format: projects/{project}/locations/{location}

page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via ListSpecialistPoolsResponse.next_page_token of the previous SpecialistPoolService.ListSpecialistPools call. Returns the first page if empty.

read_mask

FieldMask

Mask specifying which fields to read.

ListSpecialistPoolsResponse

Response message for SpecialistPoolService.ListSpecialistPools.

Fields
specialist_pools[]

SpecialistPool

A list of SpecialistPools that match the specified filter in the request.

next_page_token

string

The standard List next-page token.

ListStudiesRequest

Request message for VizierService.ListStudies.

Fields
parent

string

Required. The resource name of the Location to list the Study from. Format: projects/{project}/locations/{location}

page_token

string

Optional. A page token to request the next page of results. If unspecified, there are no subsequent pages.

page_size

int32

Optional. The maximum number of studies to return per "page" of results. If unspecified, the service will pick an appropriate default.

ListStudiesResponse

Response message for VizierService.ListStudies.

Fields
studies[]

Study

The studies associated with the project.

next_page_token

string

Pass this token as the page_token field of the request for a subsequent call. If this field is omitted, there are no subsequent pages.

ListTensorboardExperimentsRequest

Request message for TensorboardService.ListTensorboardExperiments.

Fields
parent

string

Required. The resource name of the Tensorboard to list TensorboardExperiments. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

filter

string

Lists the TensorboardExperiments that match the filter expression.

page_size

int32

The maximum number of TensorboardExperiments to return. The service may return fewer than this value. If unspecified, at most 50 TensorboardExperiments are returned. The maximum value is 1000; values above 1000 are coerced to 1000.

page_token

string

A page token, received from a previous TensorboardService.ListTensorboardExperiments call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to TensorboardService.ListTensorboardExperiments must match the call that provided the page token.

order_by

string

Field to use to sort the list.

read_mask

FieldMask

Mask specifying which fields to read.

ListTensorboardExperimentsResponse

Response message for TensorboardService.ListTensorboardExperiments.

Fields
tensorboard_experiments[]

TensorboardExperiment

The TensorboardExperiments matching the request.

next_page_token

string

A token, which can be sent as ListTensorboardExperimentsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListTensorboardRunsRequest

Request message for TensorboardService.ListTensorboardRuns.

Fields
parent

string

Required. The resource name of the TensorboardExperiment to list TensorboardRuns. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}

filter

string

Lists the TensorboardRuns that match the filter expression.

page_size

int32

The maximum number of TensorboardRuns to return. The service may return fewer than this value. If unspecified, at most 50 TensorboardRuns are returned. The maximum value is 1000; values above 1000 are coerced to 1000.

page_token

string

A page token, received from a previous TensorboardService.ListTensorboardRuns call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to TensorboardService.ListTensorboardRuns must match the call that provided the page token.

order_by

string

Field to use to sort the list.

read_mask

FieldMask

Mask specifying which fields to read.

ListTensorboardRunsResponse

Response message for TensorboardService.ListTensorboardRuns.

Fields
tensorboard_runs[]

TensorboardRun

The TensorboardRuns matching the request.

next_page_token

string

A token, which can be sent as ListTensorboardRunsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListTensorboardTimeSeriesRequest

Request message for TensorboardService.ListTensorboardTimeSeries.

Fields
parent

string

Required. The resource name of the TensorboardRun to list TensorboardTimeSeries. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}

filter

string

Lists the TensorboardTimeSeries that match the filter expression.

page_size

int32

The maximum number of TensorboardTimeSeries to return. The service may return fewer than this value. If unspecified, at most 50 TensorboardTimeSeries are returned. The maximum value is 1000; values above 1000 are coerced to 1000.

page_token

string

A page token, received from a previous TensorboardService.ListTensorboardTimeSeries call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to TensorboardService.ListTensorboardTimeSeries must match the call that provided the page token.

order_by

string

Field to use to sort the list.

read_mask

FieldMask

Mask specifying which fields to read.

ListTensorboardTimeSeriesResponse

Response message for TensorboardService.ListTensorboardTimeSeries.

Fields
tensorboard_time_series[]

TensorboardTimeSeries

The TensorboardTimeSeries matching the request.

next_page_token

string

A token, which can be sent as ListTensorboardTimeSeriesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListTensorboardsRequest

Request message for TensorboardService.ListTensorboards.

Fields
parent

string

Required. The resource name of the Location to list Tensorboards. Format: projects/{project}/locations/{location}

filter

string

Lists the Tensorboards that match the filter expression.

page_size

int32

The maximum number of Tensorboards to return. The service may return fewer than this value. If unspecified, at most 100 Tensorboards are returned. The maximum value is 100; values above 100 are coerced to 100.

page_token

string

A page token, received from a previous TensorboardService.ListTensorboards call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to TensorboardService.ListTensorboards must match the call that provided the page token.

order_by

string

Field to use to sort the list.

read_mask

FieldMask

Mask specifying which fields to read.

ListTensorboardsResponse

Response message for TensorboardService.ListTensorboards.

Fields
tensorboards[]

Tensorboard

The Tensorboards matching the request.

next_page_token

string

A token, which can be sent as ListTensorboardsRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

ListTrainingPipelinesRequest

Request message for PipelineService.ListTrainingPipelines.

Fields
parent

string

Required. The resource name of the Location to list the TrainingPipelines from. Format: projects/{project}/locations/{location}

filter

string

The standard list filter.

Supported fields:

  • display_name: Supports =, != comparisons, and : wildcard.
  • state: Supports = and != comparisons.
  • training_task_definition: Supports =, != comparisons, and : wildcard.
  • create_time: Supports =, !=, <, <=, >, and >= comparisons. Values must be in RFC 3339 format.
  • labels: Supports general map functions, that is: labels.key=value (key-value equality) and labels.key:* (key existence).

Some examples of using the filter are:

  • state="PIPELINE_STATE_SUCCEEDED" AND display_name:"my_pipeline_*"
  • state!="PIPELINE_STATE_FAILED" OR display_name="my_pipeline"
  • NOT display_name="my_pipeline"
  • create_time>"2021-05-18T00:00:00Z"
  • training_task_definition:"*automl_text_classification*"
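
A sketch of one of these filters issued through the Python GAPIC client; the parent resource name is a placeholder:

from google.cloud import aiplatform_v1beta1 as aip

client = aip.PipelineServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
# Succeeded TrainingPipelines whose display name starts with "my_pipeline_".
request = aip.ListTrainingPipelinesRequest(
    parent="projects/my-project/locations/us-central1",
    filter='state="PIPELINE_STATE_SUCCEEDED" AND display_name:"my_pipeline_*"',
)
for pipeline in client.list_training_pipelines(request=request):
    print(pipeline.name, pipeline.display_name)
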
page_size

int32

The standard list page size.

page_token

string

The standard list page token. Typically obtained via ListTrainingPipelinesResponse.next_page_token of the previous PipelineService.ListTrainingPipelines call.

read_mask

FieldMask

Mask specifying which fields to read.

ListTrainingPipelinesResponse

Response message for PipelineService.ListTrainingPipelines

Fields
training_pipelines[]

TrainingPipeline

List of TrainingPipelines in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListTrainingPipelinesRequest.page_token to obtain that page.

ListTrialsRequest

Request message for VizierService.ListTrials.

Fields
parent

string

Required. The resource name of the Study to list the Trial from. Format: projects/{project}/locations/{location}/studies/{study}

page_token

string

Optional. A page token to request the next page of results. If unspecified, there are no subsequent pages.

page_size

int32

Optional. The number of Trials to retrieve per "page" of results. If unspecified, the service will pick an appropriate default.

ListTrialsResponse

Response message for VizierService.ListTrials.

Fields
trials[]

Trial

The Trials associated with the Study.

next_page_token

string

Pass this token as the page_token field of the request for a subsequent call. If this field is omitted, there are no subsequent pages.

ListTuningJobsRequest

Request message for GenAiTuningService.ListTuningJobs.

Fields
parent

string

Required. The resource name of the Location to list the TuningJobs from. Format: projects/{project}/locations/{location}

filter

string

Optional. The standard list filter.

page_size

int32

Optional. The standard list page size.

page_token

string

Optional. The standard list page token. Typically obtained via ListTuningJobsResponse.next_page_token of the previous GenAiTuningService.ListTuningJobs call.

ListTuningJobsResponse

Response message for GenAiTuningService.ListTuningJobs

Fields
tuning_jobs[]

TuningJob

List of TuningJobs in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass to ListTuningJobsRequest.page_token to obtain that page.

LogprobsResult

Logprobs Result

Fields
top_candidates[]

TopCandidates

Length = total number of decoding steps.

chosen_candidates[]

Candidate

Length = total number of decoding steps. The chosen candidates may or may not be in top_candidates.

Candidate

Candidate for the logprobs token and score.

Fields
token

string

The candidate's token string value.

token_id

int32

The candidate's token id value.

log_probability

float

The candidate's log probability.

TopCandidates

Candidates with top log probabilities at each decoding step.

Fields
candidates[]

Candidate

Sorted by log probability in descending order.
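
The two lists run in parallel, one entry per decoding step. A sketch that walks them together; the message is hand-built, and the token strings, IDs, and log probabilities are invented for illustration:

from google.cloud import aiplatform_v1beta1 as aip

# Hand-built example; token strings, IDs, and scores are made up.
lp = aip.LogprobsResult(
    chosen_candidates=[
        aip.LogprobsResult.Candidate(token="Hello", token_id=31373, log_probability=-0.05),
    ],
    top_candidates=[
        aip.LogprobsResult.TopCandidates(
            candidates=[  # sorted by log probability, descending
                aip.LogprobsResult.Candidate(token="Hello", token_id=31373, log_probability=-0.05),
                aip.LogprobsResult.Candidate(token="Hi", token_id=17250, log_probability=-3.20),
            ]
        ),
    ],
)

# One entry per decoding step in both lists, so zip pairs them up.
for step, (chosen, top) in enumerate(zip(lp.chosen_candidates, lp.top_candidates)):
    alternatives = ", ".join(
        f"{c.token!r} ({c.log_probability:.2f})" for c in top.candidates
    )
    print(f"step {step}: chose {chosen.token!r}; top candidates: {alternatives}")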

LookupStudyRequest

Request message for VizierService.LookupStudy.

Fields
parent

string

Required. The resource name of the Location to get the Study from. Format: projects/{project}/locations/{location}

display_name

string

Required. The user-defined display name of the Study

MachineSpec

Specification of a single machine.

Fields
machine_type

string

Immutable. The type of the machine.

See the list of machine types supported for prediction

See the list of machine types supported for custom training.

For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.

accelerator_type

AcceleratorType

Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.

accelerator_count

int32

The number of accelerators to attach to the machine.

tpu_topology

string

Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").

reservation_affinity

ReservationAffinity

Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
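
For example, a MachineSpec pairing one GPU with a general-purpose machine type might be built as follows; whether a given combination is valid depends on the supported machine type lists referenced above:

from google.cloud import aiplatform_v1beta1 as aip

# One NVIDIA T4 attached to an n1-standard-4; check the supported lists
# for valid machine/accelerator pairings in your region and use case.
machine_spec = aip.MachineSpec(
    machine_type="n1-standard-4",
    accelerator_type=aip.AcceleratorType.NVIDIA_TESLA_T4,
    accelerator_count=1,
)
print(machine_spec)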

ManualBatchTuningParameters

Manual batch tuning parameters.

Fields
batch_size

int32

Immutable. The number of records (e.g. instances) of the operation given in each batch to a machine replica. Consider the machine type and the size of a single record when setting this parameter: a higher value speeds up the batch operation's execution, but a value that is too high can result in a whole batch not fitting in a machine's memory, causing the whole operation to fail. The default value is 64.

Measurement

A message representing a Measurement of a Trial. A Measurement contains the Metrics obtained by executing the Trial using suggested hyperparameter values.

Fields
elapsed_duration

Duration

Output only. Time that the Trial has been running at the point of this Measurement.

step_count

int64

Output only. The number of steps the machine learning model has been trained for. Must be non-negative.

metrics[]

Metric

Output only. A list of metrics obtained by evaluating the objective functions using suggested Parameter values.

Metric

A message representing a metric in the measurement.

Fields
metric_id

string

Output only. The ID of the Metric. The Metric should be defined in StudySpec's Metrics.

value

double

Output only. The value for this metric.

MergeVersionAliasesRequest

Request message for ModelService.MergeVersionAliases.

Fields
name

string

Required. The name of the model version to merge aliases, with a version ID explicitly included.

Example: projects/{project}/locations/{location}/models/{model}@1234

version_aliases[]

string

Required. The set of version aliases to merge. An alias should be at most 128 characters and match [a-z][a-zA-Z0-9-]{0,126}[a-z-0-9]. Adding the - prefix to an alias means removing that alias from the version; the - is NOT counted in the 128 characters. Example: -golden means removing the golden alias from the version.

There is NO ordering in aliases, which means: 1) the aliases returned by the GetModel API might not be in exactly the same order as in this MergeVersionAliases request; 2) adding and deleting the same alias in one request is not recommended, as the two operations cancel each other out.
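
A sketch of the add/remove semantics through the Python GAPIC client; the model resource name and alias values are placeholders:

from google.cloud import aiplatform_v1beta1 as aip

client = aip.ModelServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
# Adds the "golden" alias to version 1234 and removes its "staging" alias.
model = client.merge_version_aliases(
    name="projects/my-project/locations/us-central1/models/my-model@1234",
    version_aliases=["golden", "-staging"],
)
print(model.version_aliases)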

MetadataSchema

Instance of a general MetadataSchema.

Fields
name

string

Output only. The resource name of the MetadataSchema.

schema_version

string

The version of the MetadataSchema. The version's format must match the following regular expression: ^[0-9]+[.][0-9]+[.][0-9]+$, which allows ordering and comparing different versions. Examples: 1.0.0, 1.0.1, etc.

schema

string

Required. The raw YAML string representation of the MetadataSchema. The combination of [MetadataSchema.version] and the schema name given by title in [MetadataSchema.schema] must be unique within a MetadataStore.

The schema is defined as an OpenAPI 3.0.2 MetadataSchema Object

schema_type

MetadataSchemaType

The type of the MetadataSchema. This is a property that identifies which metadata types will use the MetadataSchema.

create_time

Timestamp

Output only. Timestamp when this MetadataSchema was created.

description

string

Description of the Metadata Schema

MetadataSchemaType

Describes the type of the MetadataSchema.

Enums
METADATA_SCHEMA_TYPE_UNSPECIFIED Unspecified type for the MetadataSchema.
ARTIFACT_TYPE A type indicating that the MetadataSchema will be used by Artifacts.
EXECUTION_TYPE A type indicating that the MetadataSchema will be used by Executions.
CONTEXT_TYPE A type indicating that the MetadataSchema will be used by Contexts.

MetadataStore

Instance of a metadata store. Contains a set of metadata that can be queried.

Fields
name

string

Output only. The resource name of the MetadataStore instance.

create_time

Timestamp

Output only. Timestamp when this MetadataStore was created.

update_time

Timestamp

Output only. Timestamp when this MetadataStore was last updated.

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for a Metadata Store. If set, this Metadata Store and all sub-resources of this Metadata Store are secured using this key.

description

string

Description of the MetadataStore.

state

MetadataStoreState

Output only. State information of the MetadataStore.

dataplex_config

DataplexConfig

Optional. Dataplex integration settings.

DataplexConfig

Represents Dataplex integration settings.

Fields
enabled_pipelines_lineage

bool

Optional. Whether or not Data Lineage synchronization is enabled for Vertex Pipelines.

MetadataStoreState

Represents state information for a MetadataStore.

Fields
disk_utilization_bytes

int64

The disk utilization of the MetadataStore in bytes.

MigratableResource

Represents one resource that exists in automl.googleapis.com, datalabeling.googleapis.com or ml.googleapis.com.

Fields
last_migrate_time

Timestamp

Output only. Timestamp when the last migration attempt on this MigratableResource started. Will not be set if there's no migration attempt on this MigratableResource.

last_update_time

Timestamp

Output only. Timestamp when this MigratableResource was last updated.

Union field resource.

resource can be only one of the following:

ml_engine_model_version

MlEngineModelVersion

Output only. Represents one Version in ml.googleapis.com.

automl_model

AutomlModel

Output only. Represents one Model in automl.googleapis.com.

automl_dataset

AutomlDataset

Output only. Represents one Dataset in automl.googleapis.com.

data_labeling_dataset

DataLabelingDataset

Output only. Represents one Dataset in datalabeling.googleapis.com.

AutomlDataset

Represents one Dataset in automl.googleapis.com.

Fields
dataset

string

Full resource name of automl Dataset. Format: projects/{project}/locations/{location}/datasets/{dataset}.

dataset_display_name

string

The Dataset's display name in automl.googleapis.com.

AutomlModel

Represents one Model in automl.googleapis.com.

Fields
model

string

Full resource name of automl Model. Format: projects/{project}/locations/{location}/models/{model}.

model_display_name

string

The Model's display name in automl.googleapis.com.

DataLabelingDataset

Represents one Dataset in datalabeling.googleapis.com.

Fields
dataset

string

Full resource name of data labeling Dataset. Format: projects/{project}/datasets/{dataset}.

dataset_display_name

string

The Dataset's display name in datalabeling.googleapis.com.

data_labeling_annotated_datasets[]

DataLabelingAnnotatedDataset

The migratable AnnotatedDatasets in datalabeling.googleapis.com that belong to the data labeling Dataset.

DataLabelingAnnotatedDataset

Represents one AnnotatedDataset in datalabeling.googleapis.com.

Fields
annotated_dataset

string

Full resource name of data labeling AnnotatedDataset. Format: projects/{project}/datasets/{dataset}/annotatedDatasets/{annotated_dataset}.

annotated_dataset_display_name

string

The AnnotatedDataset's display name in datalabeling.googleapis.com.

MlEngineModelVersion

Represents one model Version in ml.googleapis.com.

Fields
endpoint

string

The ml.googleapis.com endpoint that this model Version currently lives in. Example values:

  • ml.googleapis.com
  • us-central1-ml.googleapis.com
  • europe-west4-ml.googleapis.com
  • asia-east1-ml.googleapis.com
version

string

Full resource name of ml engine model Version. Format: projects/{project}/models/{model}/versions/{version}.

MigrateResourceRequest

Config of migrating one resource from automl.googleapis.com, datalabeling.googleapis.com and ml.googleapis.com to Vertex AI.

Fields

Union field request.

request can be only one of the following:

migrate_ml_engine_model_version_config

MigrateMlEngineModelVersionConfig

Config for migrating Version in ml.googleapis.com to Vertex AI's Model.

migrate_automl_model_config

MigrateAutomlModelConfig

Config for migrating Model in automl.googleapis.com to Vertex AI's Model.

migrate_automl_dataset_config

MigrateAutomlDatasetConfig

Config for migrating Dataset in automl.googleapis.com to Vertex AI's Dataset.

migrate_data_labeling_dataset_config

MigrateDataLabelingDatasetConfig

Config for migrating Dataset in datalabeling.googleapis.com to Vertex AI's Dataset.

MigrateAutomlDatasetConfig

Config for migrating Dataset in automl.googleapis.com to Vertex AI's Dataset.

Fields
dataset

string

Required. Full resource name of automl Dataset. Format: projects/{project}/locations/{location}/datasets/{dataset}.

dataset_display_name

string

Required. Display name of the Dataset in Vertex AI. System will pick a display name if unspecified.

MigrateAutomlModelConfig

Config for migrating Model in automl.googleapis.com to Vertex AI's Model.

Fields
model

string

Required. Full resource name of automl Model. Format: projects/{project}/locations/{location}/models/{model}.

model_display_name

string

Optional. Display name of the model in Vertex AI. System will pick a display name if unspecified.

MigrateDataLabelingDatasetConfig

Config for migrating Dataset in datalabeling.googleapis.com to Vertex AI's Dataset.

Fields
dataset

string

Required. Full resource name of data labeling Dataset. Format: projects/{project}/datasets/{dataset}.

dataset_display_name

string

Optional. Display name of the Dataset in Vertex AI. System will pick a display name if unspecified.

migrate_data_labeling_annotated_dataset_configs[]

MigrateDataLabelingAnnotatedDatasetConfig

Optional. Configs for migrating AnnotatedDataset in datalabeling.googleapis.com to Vertex AI's SavedQuery. The specified AnnotatedDatasets have to belong to the datalabeling Dataset.

MigrateDataLabelingAnnotatedDatasetConfig

Config for migrating AnnotatedDataset in datalabeling.googleapis.com to Vertex AI's SavedQuery.

Fields
annotated_dataset

string

Required. Full resource name of data labeling AnnotatedDataset. Format: projects/{project}/datasets/{dataset}/annotatedDatasets/{annotated_dataset}.

MigrateMlEngineModelVersionConfig

Config for migrating version in ml.googleapis.com to Vertex AI's Model.

Fields
endpoint

string

Required. The ml.googleapis.com endpoint that this model version should be migrated from. Example values:

  • ml.googleapis.com

  • us-central1-ml.googleapis.com

  • europe-west4-ml.googleapis.com

  • asia-east1-ml.googleapis.com

model_version

string

Required. Full resource name of ml engine model version. Format: projects/{project}/models/{model}/versions/{version}.

model_display_name

string

Required. Display name of the model in Vertex AI. System will pick a display name if unspecified.

MigrateResourceResponse

Describes a successfully migrated resource.

Fields
migratable_resource

MigratableResource

Before migration, the identifier in ml.googleapis.com, automl.googleapis.com or datalabeling.googleapis.com.

Union field migrated_resource. After migration, the resource name in Vertex AI. migrated_resource can be only one of the following:
dataset

string

Migrated Dataset's resource name.

model

string

Migrated Model's resource name.

Model

A trained machine learning Model.

Fields
name

string

The resource name of the Model.

version_id

string

Output only. Immutable. The version ID of the model. A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.

version_aliases[]

string

User provided version aliases so that a model version can be referenced via alias (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_alias}) instead of the auto-generated version ID (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_id}). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.

version_create_time

Timestamp

Output only. Timestamp when this version was created.

version_update_time

Timestamp

Output only. Timestamp when this version was most recently updated.

display_name

string

Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

The description of the Model.

version_description

string

The description of this version.

predict_schemata

PredictSchemata

The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.

metadata_schema_uri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.

metadata

Value

Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema. Unset if the Model does not have any additional information.

supported_export_formats[]

ExportFormat

Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.

training_pipeline

string

Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.

container_spec

ModelContainerSpec

Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel, and all binaries it contains are copied and stored internally by Vertex AI. Not required for AutoML Models.

artifact_uri

string

Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not required for AutoML Models.

supported_deployment_resources_types[]

DeploymentResourcesType

Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.

supported_input_storage_formats[]

string

Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema.

The possible formats are:

  • jsonl The JSON Lines format, where each instance is a single line. Uses GcsSource.

  • csv The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.

  • tf-record The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.

  • tf-record-gzip Similar to tf-record, but the file is gzipped. Uses GcsSource.

  • bigquery Each instance is a single row in BigQuery. Uses BigQuerySource.

  • file-list Each line of the file is the location of an instance to process. Uses the gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

supported_output_storage_formats[]

string

Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema).

The possible formats are:

  • jsonl The JSON Lines format, where each prediction is a single line. Uses GcsDestination.

  • csv The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.

  • bigquery Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.

create_time

Timestamp

Output only. Timestamp when this Model was uploaded into Vertex AI.

update_time

Timestamp

Output only. Timestamp when this Model was most recently updated.

deployed_models[]

DeployedModelRef

Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.

explanation_spec

ExplanationSpec

The default explanation specification for this Model.

If populated, the Model can be used for requesting explanation after being deployed, and for batch explanation.

All fields of the explanation_spec can be overridden by explanation_spec of DeployModelRequest.deployed_model, or explanation_spec of BatchPredictionJob.

If the default explanation specification is not set for this Model, this Model can still be used for requesting explanation by setting explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting explanation_spec of BatchPredictionJob.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

The labels with user-defined metadata to organize your Models.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.

model_source_info

ModelSourceInfo

Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or a model saved and tuned from Genie or Model Garden.

original_model_info

OriginalModelInfo

Output only. If this Model is a copy of another Model, this contains info about the original.

metadata_artifact

string

Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.

base_model_source

BaseModelSource

Optional. User input field to specify the base model source. Currently it only supports specifying Model Garden models and Genie models.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

BaseModelSource

User input field to specify the base model source. Currently it only supports specifying Model Garden models and Genie models.

Fields

Union field source.

source can be only one of the following:

model_garden_source

ModelGardenSource

Source information of Model Garden models.

genie_source

GenieSource

Information about the base model of Genie models.

DeploymentResourcesType

Identifies a type of Model's prediction resources.

Enums
DEPLOYMENT_RESOURCES_TYPE_UNSPECIFIED Should not be used.
DEDICATED_RESOURCES Resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
AUTOMATIC_RESOURCES Resources that are decided, to a large degree, by Vertex AI and require only modest additional configuration.
SHARED_RESOURCES Resources that can be shared by multiple DeployedModels. A pre-configured DeploymentResourcePool is required.

ExportFormat

Represents export format supported by the Model. All formats export to Google Cloud Storage.

Fields
id

string

Output only. The ID of the export format. The possible format IDs are:

  • tflite Used for Android mobile devices.

  • edgetpu-tflite Used for Edge TPU devices.

  • tf-saved-model A TensorFlow model in SavedModel format.

  • tf-js A TensorFlow.js model that can be used in the browser and in Node.js using JavaScript.

  • core-ml Used for iOS mobile devices.

  • custom-trained A Model that was uploaded or trained by custom code.

exportable_contents[]

ExportableContent

Output only. The content of this Model that may be exported.

ExportableContent

The Model content that can be exported.

Enums
EXPORTABLE_CONTENT_UNSPECIFIED Should not be used.
ARTIFACT Model artifact and any of its supported files. Will be exported to the location specified by the artifactDestination field of the ExportModelRequest.output_config object.
IMAGE The container image that is to be used when deploying this Model. Will be exported to the location specified by the imageDestination field of the ExportModelRequest.output_config object.

OriginalModelInfo

Contains information about the original Model if this Model is a copy.

Fields
model

string

Output only. The resource name of the Model this Model is a copy of, including the revision. Format: projects/{project}/locations/{location}/models/{model_id}@{version_id}

ModelContainerSpec

Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification.

Fields
image_uri

string

Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the container publishing requirements, including permissions requirements for the Vertex AI Service Agent.

The container image is ingested upon ModelService.UploadModel, stored internally, and this original path is afterwards not used.

To learn about the requirements for the Docker image itself, see Custom container requirements.

You can use the URI of one of Vertex AI's pre-built container images for prediction in this field.

command[]

string

Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form.

If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact.

If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD.

In this field, you can reference environment variables set by Vertex AI and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax:

$(VARIABLE_NAME)

Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example:

$$(VARIABLE_NAME)

This field corresponds to the command field of the Kubernetes Containers v1 core API.

args[]

string

Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form.

If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD.

If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact.

In this field, you can reference environment variables set by Vertex AI and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax:

$(VARIABLE_NAME)

Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example:

$$(VARIABLE_NAME)

This field corresponds to the args field of the Kubernetes Containers v1 core API.

env[]

EnvVar

Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables.

Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:

[
  {
    "name": "VAR_1",
    "value": "foo"
  },
  {
    "name": "VAR_2",
    "value": "$(VAR_1) bar"
  }
]

If you switch the order of the variables in the example, then the expansion does not occur.

This field corresponds to the env field of the Kubernetes Containers v1 core API.

ports[]

Port

Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port.

If you do not specify this field, it defaults to the following value:

[
  {
    "containerPort": 8080
  }
]

Vertex AI does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.

predict_route

string

Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response.

For example, if you set this field to /foo, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of this ModelContainerSpec's ports field.

If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint:

/v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict

The placeholders in this value are replaced as follows:

health_route

string

Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks.

For example, if you set this field to /bar, then Vertex AI intermittently sends a GET request to the /bar path on the port of your container specified by the first value of this ModelContainerSpec's ports field.

If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint:

/v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict

The placeholders in this value are replaced as follows:

  • ENDPOINT: the last segment (following endpoints/) of the Endpoint's resource name. Vertex AI exposes this value to the container as the AIP_ENDPOINT_ID environment variable.
  • DEPLOYED_MODEL: the ID of the DeployedModel. Vertex AI exposes this value to the container as the AIP_DEPLOYED_MODEL_ID environment variable.

grpc_ports[]

Port

Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port.

If you do not specify this field, gRPC requests to the container will be disabled.

Vertex AI does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.

deployment_timeout

Duration

Immutable. Deployment timeout. Limit for deployment timeout is 2 hours.

shared_memory_size_mb

int64

Immutable. The amount of VM memory to reserve as shared memory for the model, in megabytes.

startup_probe

Probe

Immutable. Specification for Kubernetes startup probe.

health_probe

Probe

Immutable. Specification for Kubernetes readiness probe.
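
Pulling the fields above together, a sketch of a container spec; the image URI, routes, and variable names are placeholders:

from google.cloud import aiplatform_v1beta1 as aip

container_spec = aip.ModelContainerSpec(
    image_uri="us-docker.pkg.dev/my-project/my-repo/my-server:latest",
    command=["python", "server.py"],  # exec-form override of the image ENTRYPOINT
    args=["--workers", "2"],          # override of the image CMD
    env=[
        aip.EnvVar(name="VAR_1", value="foo"),
        aip.EnvVar(name="VAR_2", value="$(VAR_1) bar"),  # expands to "foo bar"
    ],
    ports=[aip.Port(container_port=8080)],  # predictions go to the first port
    predict_route="/foo",
    health_route="/bar",
)
print(container_spec)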

ModelDeploymentMonitoringBigQueryTable

ModelDeploymentMonitoringBigQueryTable specifies the BigQuery table name as well as some information of the logs stored in this table.

Fields
log_source

LogSource

The source of log.

log_type

LogType

The type of log.

bigquery_table_path

string

The created BigQuery table to store logs. Customers can run their own queries and analyses. Format: bq://<project_id>.model_deployment_monitoring_<endpoint_id>.<tolower(log_source)>_<tolower(log_type)>

request_response_logging_schema_version

string

Output only. The schema version of the request/response logging BigQuery table. Defaults to v1 if unset.

LogSource

Indicates where the log comes from.

Enums
LOG_SOURCE_UNSPECIFIED Unspecified source.
TRAINING Logs coming from Training dataset.
SERVING Logs coming from Serving traffic.

LogType

Indicates what type of traffic the log belongs to.

Enums
LOG_TYPE_UNSPECIFIED Unspecified type.
PREDICT Predict logs.
EXPLAIN Explain logs.

ModelDeploymentMonitoringJob

Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors.

Fields
name

string

Output only. Resource name of a ModelDeploymentMonitoringJob.

display_name

string

Required. The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

endpoint

string

Required. Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

state

JobState

Output only. The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'; once it is successfully created, the state becomes 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'.

schedule_state

MonitoringScheduleState

Output only. Schedule state when the monitoring job is in Running state.

latest_monitoring_pipeline_metadata

LatestMonitoringPipelineMetadata

Output only. Latest triggered monitoring pipeline metadata.

model_deployment_monitoring_objective_configs[]

ModelDeploymentMonitoringObjectiveConfig

Required. The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.

model_deployment_monitoring_schedule_config

ModelDeploymentMonitoringScheduleConfig

Required. Schedule config for running the monitoring job.

logging_sampling_strategy

SamplingStrategy

Required. Sampling strategy for logging.

model_monitoring_alert_config

ModelMonitoringAlertConfig

Alert config for model monitoring.

predict_instance_schema_uri

string

YAML schema file URI describing the format of a single instance, which is used to format this Endpoint's prediction (and explanation) requests. If not set, the predict schema will be generated from collected predict requests.

sample_predict_instance

Value

Sample Predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema will be generated from collected predict requests.

analysis_instance_schema_uri

string

YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze.

If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all the fields in the predict instance formatted as strings.

bigquery_tables[]

ModelDeploymentMonitoringBigQueryTable

Output only. The BigQuery tables created for the job under the customer project. Customers can run their own queries and analyses. There can be at most 4 log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response

log_ttl

Duration

The TTL of BigQuery tables in user projects which store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (one day). E.g. { seconds: 3600 } indicates a TTL of 1 day.

labels

map<string, string>

The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

create_time

Timestamp

Output only. Timestamp when this ModelDeploymentMonitoringJob was created.

update_time

Timestamp

Output only. Timestamp when this ModelDeploymentMonitoringJob was updated most recently.

next_schedule_time

Timestamp

Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.

stats_anomalies_base_directory

GcsDestination

Stats anomalies base folder path.

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.

enable_monitoring_pipeline_logs

bool

If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur a cost, which is subject to Cloud Logging pricing.

error

Status

Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

LatestMonitoringPipelineMetadata

All metadata of most recent monitoring pipelines.

Fields
run_time

Timestamp

The run time of the most recent monitoring pipeline related to this job.

status

Status

The status of the most recent monitoring pipeline.

MonitoringScheduleState

The schedule state of the monitoring pipeline.

Enums
MONITORING_SCHEDULE_STATE_UNSPECIFIED Unspecified state.
PENDING The pipeline is picked up and waiting to run.
OFFLINE The pipeline is offline and will be scheduled for next run.
RUNNING The pipeline is running.

ModelDeploymentMonitoringObjectiveConfig

ModelDeploymentMonitoringObjectiveConfig contains the pair of deployed_model_id to ModelMonitoringObjectiveConfig.

Fields
deployed_model_id

string

The DeployedModel ID of the objective config.

objective_config

ModelMonitoringObjectiveConfig

The objective config for the model monitoring job of this deployed model.

ModelDeploymentMonitoringObjectiveType

The Model Monitoring Objective types.

Enums
MODEL_DEPLOYMENT_MONITORING_OBJECTIVE_TYPE_UNSPECIFIED Default value, should not be set.
RAW_FEATURE_SKEW Raw feature values' stats to detect skew between Training-Prediction datasets.
RAW_FEATURE_DRIFT Raw feature values' stats to detect drift between Serving-Prediction datasets.
FEATURE_ATTRIBUTION_SKEW Feature attribution scores to detect skew between Training-Prediction datasets.
FEATURE_ATTRIBUTION_DRIFT Feature attribution scores to detect drift between Prediction datasets collected within different time windows.

ModelDeploymentMonitoringScheduleConfig

The config for scheduling the monitoring job.

Fields
monitor_interval

Duration

Required. The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.

monitor_window

Duration

The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
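
Following the worked example above, a sketch of an hourly schedule with a one-hour lookback window; Duration values are in seconds:

from google.cloud import aiplatform_v1beta1 as aip
from google.protobuf import duration_pb2

schedule_config = aip.ModelDeploymentMonitoringScheduleConfig(
    monitor_interval=duration_pb2.Duration(seconds=3600),  # trigger every hour
    monitor_window=duration_pb2.Duration(seconds=3600),    # aggregate the last hour of data
)
print(schedule_config)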

ModelEvaluation

A collection of metrics calculated by comparing Model's predictions on all of the test data against annotations from the test data.

Fields
name

string

Output only. The resource name of the ModelEvaluation.

display_name

string

The display name of the ModelEvaluation.

metrics_schema_uri

string

Points to a YAML file stored on Google Cloud Storage describing the metrics of this ModelEvaluation. The schema is defined as an OpenAPI 3.0.2 Schema Object.

metrics

Value

Evaluation metrics of the Model. The schema of the metrics is stored in metrics_schema_uri

create_time

Timestamp

Output only. Timestamp when this ModelEvaluation was created.

slice_dimensions[]

string

All possible dimensions of ModelEvaluationSlices. The dimensions can be used as the filter of the ModelService.ListModelEvaluationSlices request, in the form of slice.dimension = <dimension>.

model_explanation

ModelExplanation

Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for AutoML tabular Models.

explanation_specs[]

ModelEvaluationExplanationSpec

Describes the values of ExplanationSpec that are used for explaining the predicted values on the evaluated data.

metadata

Value

The metadata of the ModelEvaluation. For the ModelEvaluation uploaded from Managed Pipeline, metadata contains a structured value with keys of "pipeline_job_id", "evaluation_dataset_type", "evaluation_dataset_path", "row_based_metrics_path".

bias_configs

BiasConfig

Specify the configuration for bias detection.

BiasConfig

Configuration for bias detection.

Fields
bias_slices

SliceSpec

Specification for how the data should be sliced for bias. It contains a list of slices, with a limit of two slices. The first slice of data will be slice_a. The second slice in the list (slice_b) will be compared against the first slice. If only a single slice is provided, then slice_a will be compared against "not slice_a". Below are examples with the feature "education", which takes the values "low", "medium", and "high" in the dataset:

Example 1:

bias_slices = [{'education': 'low'}]

A single slice provided. In this case, slice_a is the collection of data with 'education' equal to 'low', and slice_b is the collection of data with 'education' equal to 'medium' or 'high'.

Example 2:

bias_slices = [{'education': 'low'},
               {'education': 'high'}]

Two slices provided. In this case, slice_a is the collection of data with 'education' equal to 'low', and slice_b is the collection of data with 'education' equal to 'high'.

labels[]

string

Positive labels selection on the target field.

ModelEvaluationExplanationSpec

Fields
explanation_type

string

Explanation type.

For AutoML Image Classification models, possible values are:

  • image-integrated-gradients
  • image-xrai
explanation_spec

ExplanationSpec

Explanation spec details.

ModelEvaluationSlice

A collection of metrics calculated by comparing Model's predictions on a slice of the test data against ground truth annotations.

Fields
name

string

Output only. The resource name of the ModelEvaluationSlice.

slice

Slice

Output only. The slice of the test data that is used to evaluate the Model.

metrics_schema_uri

string

Output only. Points to a YAML file stored on Google Cloud Storage describing the metrics of this ModelEvaluationSlice. The schema is defined as an OpenAPI 3.0.2 Schema Object.

metrics

Value

Output only. Sliced evaluation metrics of the Model. The schema of the metrics is stored in metrics_schema_uri

create_time

Timestamp

Output only. Timestamp when this ModelEvaluationSlice was created.

model_explanation

ModelExplanation

Output only. Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for tabular Models.

Slice

Definition of a slice.

Fields
dimension

string

Output only. The dimension of the slice. Well-known dimensions are:

  • annotationSpec: This slice is on the test data that has either ground truth or prediction with AnnotationSpec.display_name equal to value.
  • slice: This slice is a user-customized slice defined by its SliceSpec.

value

string

Output only. The value of the dimension in this slice.

slice_spec

SliceSpec

Output only. Specification for how the data was sliced.

SliceSpec

Specification for how the data should be sliced.

Fields
configs

map<string, SliceConfig>

Mapping configuration for this SliceSpec. The key is the name of the feature. By default, the key will be prefixed with "instance" as a dictionary prefix for the Vertex Batch Prediction output format.

Range

A range of values for slice(s). low is inclusive, high is exclusive.

Fields
low

float

Inclusive low value for the range.

high

float

Exclusive high value for the range.

SliceConfig

Specification message containing the config for this SliceSpec. When kind is value and/or range, only a single slice will be computed. When all_values is present, a separate slice will be computed for each possible label/value of the corresponding key in the config. Examples, with a feature zip_code taking values 12345, 23334, 88888 and a feature country taking values "US", "Canada", "Mexico" in the dataset:

Example 1:

{
  "zip_code": { "value": { "float_value": 12345.0 } }
}

A single slice for any data with zip_code 12345 in the dataset.

Example 2:

{
  "zip_code": { "range": { "low": 12345, "high": 20000 } }
}

A single slice containing data where the zip_code is between 12345 and 20000. For this example, data with a zip_code of 12345 will be in this slice.

Example 3:

{
  "zip_code": { "range": { "low": 10000, "high": 20000 } },
  "country": { "value": { "string_value": "US" } }
}

A single slice containing data where the zip_code is between 10000 and 20000 and the country is "US". For this example, data with a zip_code of 12345 and country "US" will be in this slice.

Example 4:

{ "country": {"all_values": { "value": true } } }

Three slices are computed, one for each unique country in the dataset.

Example 5:

{
  "country": { "all_values": { "value": true } },
  "zip_code": { "value": { "float_value": 12345.0 } }
}

Three slices are computed, one for each unique country in the dataset where the zip_code is also 12345. For this example, data with zip_code 12345 and country "US" will be in one slice, zip_code 12345 and country "Canada" in another slice, and zip_code 12345 and country "Mexico" in another slice, totaling 3 slices.

Fields

Union field kind.

kind can be only one of the following:

value

Value

A unique specific value for a given feature. Example: { "value": { "string_value": "12345" } }

range

Range

A range of values for a numerical feature. Example: {"range":{"low":10000.0,"high":50000.0}} will capture 12345 and 23334 in the slice.

all_values

BoolValue

If all_values is set to true, then all possible labels of the keyed feature will have another slice computed. Example: {"all_values":{"value":true}}

Value

Single value that supports strings and floats.

Fields

Union field kind.

kind can be only one of the following:

string_value

string

String type.

float_value

float

Float type.

ModelExplanation

Aggregated explanation metrics for a Model over a set of instances.

Fields
mean_attributions[]

Attribution

Output only. Aggregated attributions explaining the Model's prediction outputs over the set of instances. The attributions are grouped by outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.

The baselineOutputValue, instanceOutputValue and featureAttributions fields are averaged over the test data.

NOTE: Currently AutoML tabular classification Models produce only one attribution, which averages attributions over all the classes it predicts. Attribution.approximation_error is not populated.

ModelGardenSource

Contains information about the source of the models generated from Model Garden.

Fields
public_model_name

string

Required. The model garden source model resource name.

ModelMonitor

Vertex AI Model Monitoring Service serves as a central hub for the analysis and visualization of data quality and performance related to models. A ModelMonitor is a top-level resource for overseeing your model monitoring tasks.

Fields
name

string

Immutable. Resource name of the ModelMonitor. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}.

display_name

string

The display name of the ModelMonitor. The name can be up to 128 characters long and can consist of any UTF-8 characters.

model_monitoring_target

ModelMonitoringTarget

The entity that is subject to analysis. Currently only models in Vertex AI Model Registry are supported. If you want to analyze a model that is outside Vertex AI, you can register it in Vertex AI Model Registry using just a display name.

training_dataset

ModelMonitoringInput

Optional training dataset used to train the model. It can serve as a reference dataset to identify changes in production.

notification_spec

ModelMonitoringNotificationSpec

Optional default notification spec; it can be overridden in the ModelMonitoringJob notification spec.

output_spec

ModelMonitoringOutputSpec

Optional default monitoring metrics/logs export spec; it can be overridden in the ModelMonitoringJob output spec. If not specified, a default Google Cloud Storage bucket will be created under your project.

explanation_spec

ExplanationSpec

Optional model explanation spec. It is used for feature attribution monitoring.

model_monitoring_schema

ModelMonitoringSchema

Monitoring Schema specifies the model's features, prediction outputs, and ground truth properties. It is used to extract pertinent data from the dataset and to process features based on their properties. Make sure that the schema aligns with your dataset; if it does not, we will be unable to extract data from the dataset. It is required for most models, but optional for Vertex AI AutoML Tables unless the schema information is not available.

create_time

Timestamp

Output only. Timestamp when this ModelMonitor was created.

update_time

Timestamp

Output only. Timestamp when this ModelMonitor was updated most recently.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

Union field default_objective. Optional default monitoring objective; it can be overridden in the ModelMonitoringJob objective spec. default_objective can be only one of the following:
tabular_objective

TabularObjective

Optional default tabular model monitoring objective.

ModelMonitoringTarget

The monitoring target refers to the entity that is subject to analysis, e.g., a Vertex AI Model version.

Fields

Union field source.

source can be only one of the following:

vertex_model

VertexModelSource

Model in Vertex AI Model Registry.

VertexModelSource

Model in Vertex AI Model Registry.

Fields
model

string

Model resource name. Format: projects/{project}/locations/{location}/models/{model}.

model_version_id

string

Model version id.

ModelMonitoringAlert

Represents a single monitoring alert. This is currently used in the SearchModelMonitoringAlerts API; the alert wrapped in this message therefore belongs to the resource specified in the request.

Fields
stats_name

string

The stats name.

objective_type

string

One of the supported monitoring objectives:

  • raw-feature-drift
  • prediction-output-drift
  • feature-attribution

alert_time

Timestamp

Alert creation time.

anomaly

ModelMonitoringAnomaly

Anomaly details.

ModelMonitoringAlertCondition

Monitoring alert triggered condition.

Fields
Union field condition. Alert triggered condition. condition can be only one of the following:
threshold

double

A condition that compares a stats value against a threshold. An alert will be triggered if the value is above the threshold.

ModelMonitoringAlertConfig

The alert config for model monitoring.

Fields
enable_logging

bool

Dump the anomalies to Cloud Logging. The anomalies will be written as a JSON payload encoded from the proto ModelMonitoringStatsAnomalies. This can be further synced to Pub/Sub or any other services supported by Cloud Logging.

notification_channels[]

string

Resource names of the NotificationChannels to send the alert to. Must be of the format projects/<project_id_or_number>/notificationChannels/<channel_id>

Union field alert.

alert can be only one of the following:

email_alert_config

EmailAlertConfig

Email alert config.

EmailAlertConfig

The config for email alert.

Fields
user_emails[]

string

The email addresses to send the alert.

ModelMonitoringAnomaly

Represents a single model monitoring anomaly.

Fields
model_monitoring_job

string

Model monitoring job resource name.

algorithm

string

Algorithm used to calculate the metrics, e.g., jensen_shannon_divergence, l_infinity.

Union field anomaly.

anomaly can be only one of the following:

tabular_anomaly

TabularAnomaly

Tabular anomaly.

TabularAnomaly

Tabular anomaly details.

Fields
anomaly_uri

string

Additional anomaly information, e.g., a Google Cloud Storage URI.

summary

string

Overview of this anomaly.

anomaly

Value

Anomaly body.

trigger_time

Timestamp

The time the anomaly was triggered.

condition

ModelMonitoringAlertCondition

The alert condition associated with this anomaly.

ModelMonitoringConfig

The model monitoring configuration used for Batch Prediction Job.

Fields
objective_configs[]

ModelMonitoringObjectiveConfig

Model monitoring objective config.

alert_config

ModelMonitoringAlertConfig

Model monitoring alert config.

analysis_instance_schema_uri

string

YAML schema file uri in Cloud Storage describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze.

If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all the fields of the predict instance formatted as strings.

stats_anomalies_base_directory

GcsDestination

A Google Cloud Storage location for batch prediction model monitoring to dump statistics and anomalies. If not provided, a folder will be created in the customer's project to hold statistics and anomalies.

ModelMonitoringInput

Model monitoring data input spec.

Fields
Union field dataset. Dataset source. dataset can be only one of the following:
columnized_dataset

ModelMonitoringDataset

Columnized dataset.

batch_prediction_output

BatchPredictionOutput

Vertex AI Batch prediction Job.

vertex_endpoint_logs

VertexEndpointLogs

Vertex AI Endpoint request & response logging.

Union field time_spec. Time specification for the dataset. time_spec can be only one of the following:
time_interval

Interval

The time interval (pair of start_time and end_time) for which results should be returned.

time_offset

TimeOffset

The time offset setting for which results should be returned.

BatchPredictionOutput

Data from Vertex AI Batch prediction job output.

Fields
batch_prediction_job

string

Vertex AI Batch prediction job resource name. The job must match the model version specified in [ModelMonitor].[model_monitoring_target].

ModelMonitoringDataset

Input dataset spec.

Fields
timestamp_field

string

The timestamp field. Usually for serving data.

Union field data_location. Choose one of supported data location for columnized dataset. data_location can be only one of the following:
vertex_dataset

string

Resource name of the Vertex AI managed dataset.

gcs_source

ModelMonitoringGcsSource

Google Cloud Storage data source.

bigquery_source

ModelMonitoringBigQuerySource

BigQuery data source.

ModelMonitoringBigQuerySource

Dataset spec for data stored in BigQuery.

Fields

Union field connection.

connection can be only one of the following:

table_uri

string

BigQuery URI to a table, up to 2000 characters long. All the columns in the table will be selected. Accepted forms:

  • BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
query

string

Standard SQL to be used instead of the table_uri.

ModelMonitoringGcsSource

Dataset spec for data stored in Google Cloud Storage.

Fields
gcs_uri

string

Google Cloud Storage URI to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.

format

DataFormat

Data format of the dataset.

DataFormat

Supported data format.

Enums
DATA_FORMAT_UNSPECIFIED Data format unspecified, used when this field is unset.
CSV CSV files.
TF_RECORD TFRecord files.
JSONL JSONL files.
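As an illustration only (field names as documented above, bucket path and column name hypothetical), a columnized Cloud Storage dataset combined with a time offset could be written as a Python dict mirroring the JSON shape of ModelMonitoringInput:

# Hedged sketch of a ModelMonitoringInput; the URI and timestamp column
# are placeholders, not values from this reference.
model_monitoring_input = {
    "columnized_dataset": {
        "gcs_source": {
            "gcs_uri": "gs://my-bucket/serving-data/*.csv",  # hypothetical URI
            "format": "CSV",
        },
        "timestamp_field": "event_time",  # hypothetical column name
    },
    "time_offset": {"offset": "1d", "window": "1d"},
}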

TimeOffset

Time offset setting.

Fields
offset

string

[offset] is the time difference from the cut-off time. For scheduled jobs, the cut-off time is the scheduled time. For non-scheduled jobs, it's the time when the job was created. The currently supported format units are 'w|W' (week), 'd|D' (day), and 'h|H' (hour); e.g., '1h' stands for 1 hour and '2d' stands for 2 days.

window

string

[window] refers to the scope of data selected for analysis; it lets you specify the quantity of data you wish to examine. The currently supported format units are 'w|W' (week), 'd|D' (day), and 'h|H' (hour); e.g., '1h' stands for 1 hour and '2d' stands for 2 days.
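A minimal sketch (assuming only the 'w', 'd', 'h' units listed above) of turning such a string into a duration:

from datetime import timedelta

# Hypothetical parser for the 'w|W', 'd|D', 'h|H' format described above.
_UNITS = {"w": "weeks", "d": "days", "h": "hours"}

def parse_offset(text: str) -> timedelta:
    unit = _UNITS[text[-1].lower()]       # last character selects the unit
    return timedelta(**{unit: int(text[:-1])})

assert parse_offset("1h") == timedelta(hours=1)
assert parse_offset("2d") == timedelta(days=2)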

VertexEndpointLogs

Data from Vertex AI Endpoint request response logging.

Fields
endpoints[]

string

List of endpoint resource names. The endpoints must enable the logging with the [Endpoint].[request_response_logging_config], and must contain the deployed model corresponding to the model version specified in [ModelMonitor].[model_monitoring_target].

ModelMonitoringJob

Represents a model monitoring job that analyzes a dataset using different monitoring algorithms.

Fields
name

string

Output only. Resource name of a ModelMonitoringJob. Format: projects/{project_id}/locations/{location_id}/modelMonitors/{model_monitor_id}/modelMonitoringJobs/{model_monitoring_job_id}

display_name

string

The display name of the ModelMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

model_monitoring_spec

ModelMonitoringSpec

Model monitoring job spec. It outlines the specifications for monitoring objectives, notifications, and result exports. If left blank, the default monitoring specifications from the top-level resource 'ModelMonitor' will be applied. If provided, the specification defined here is used rather than the default one.

create_time

Timestamp

Output only. Timestamp when this ModelMonitoringJob was created.

update_time

Timestamp

Output only. Timestamp when this ModelMonitoringJob was updated most recently.

state

JobState

Output only. The state of the monitoring job.

  • When the job is still creating, the state will be 'JOB_STATE_PENDING'.
  • Once the job is successfully created, the state will be 'JOB_STATE_RUNNING'.
  • Once the job is finished, the state will be one of 'JOB_STATE_FAILED', 'JOB_STATE_SUCCEEDED', or 'JOB_STATE_PARTIALLY_SUCCEEDED'.

schedule

string

Output only. Schedule resource name. It will only appear when this job is triggered by a schedule.

job_execution_detail

ModelMonitoringJobExecutionDetail

Output only. Execution results for all the monitoring objectives.

schedule_time

Timestamp

Output only. Timestamp when this ModelMonitoringJob was scheduled. It will only appear when this job is triggered by a schedule.

ModelMonitoringJobExecutionDetail

Represents the execution details of the job.

Fields
baseline_datasets[]

ProcessedDataset

Processed baseline datasets.

target_datasets[]

ProcessedDataset

Processed target datasets.

objective_status

map<string, Status>

Status of data processing for each monitoring objective. Key is the objective.

error

Status

Additional job error status.

ProcessedDataset

Processed dataset information.

Fields
location

string

Actual data location of the processed dataset.

time_range

Interval

Dataset time range information if any.

ModelMonitoringNotificationSpec

Notification spec (email, notification channel) for model monitoring statistics/alerts.

Fields
email_config

EmailConfig

Email alert config.

enable_cloud_logging

bool

Dump the anomalies to Cloud Logging. The anomalies will be written as a JSON payload encoded from the proto [google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry][]. This can be further routed to Pub/Sub or any other services supported by Cloud Logging.

notification_channel_configs[]

NotificationChannelConfig

Notification channel config.

EmailConfig

The config for email alerts.

Fields
user_emails[]

string

The email addresses to send the alerts.

NotificationChannelConfig

Google Cloud Notification Channel config.

Fields
notification_channel

string

Resource names of the NotificationChannels. Must be of the format projects/<project_id_or_number>/notificationChannels/<channel_id>
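For illustration, a ModelMonitoringNotificationSpec combining the fields above might look like the following (email address, project number, and channel id are hypothetical):

# Hedged sketch of a ModelMonitoringNotificationSpec as a Python dict
# mirroring the JSON field names documented above; ids are placeholders.
notification_spec = {
    "email_config": {"user_emails": ["ml-oncall@example.com"]},
    "enable_cloud_logging": True,
    "notification_channel_configs": [
        {"notification_channel": "projects/12345/notificationChannels/67890"}
    ],
}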

ModelMonitoringObjectiveConfig

The objective configuration for model monitoring, including the information needed to detect anomalies for one particular model.

Fields
training_dataset

TrainingDataset

Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.

training_prediction_skew_detection_config

TrainingPredictionSkewDetectionConfig

The config for skew between training data and prediction data.

prediction_drift_detection_config

PredictionDriftDetectionConfig

The config for drift of prediction data.

explanation_config

ExplanationConfig

The config for integrating with Vertex Explainable AI.

ExplanationConfig

The config for integrating with Vertex Explainable AI. Only applicable if the Model has explanation_spec populated.

Fields
enable_feature_attributes

bool

Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and do skew/drift detection for them.

explanation_baseline

ExplanationBaseline

Predictions generated by the BatchPredictionJob using the baseline dataset.

ExplanationBaseline

Output from BatchPredictionJob for Model Monitoring baseline dataset, which can be used to generate baseline attribution scores.

Fields
prediction_format

PredictionFormat

The storage format of the predictions generated by the BatchPrediction job.

Union field destination. The configuration specifying of BatchExplain job output. This can be used to generate the baseline of feature attribution scores. destination can be only one of the following:
gcs

GcsDestination

Cloud Storage location for BatchExplain output.

bigquery

BigQueryDestination

BigQuery location for BatchExplain output.

PredictionFormat

The storage format of the predictions generated by the BatchPrediction job.

Enums
PREDICTION_FORMAT_UNSPECIFIED Should not be set.
JSONL Predictions are in JSONL files.
BIGQUERY Predictions are in BigQuery.

PredictionDriftDetectionConfig

The config for Prediction data drift detection.

Fields
drift_thresholds

map<string, ThresholdConfig>

Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.

attribution_score_drift_thresholds

map<string, ThresholdConfig>

Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.

default_drift_threshold

ThresholdConfig

Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.

TrainingDataset

Training Dataset information.

Fields
data_format

string

Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are:

  • "tf-record": The source file is a TFRecord file.
  • "csv": The source file is a CSV file.
  • "jsonl": The source file is a JSONL file.

target_field

string

The target field name the model is to predict. This field will be excluded when doing Predict and/or Explain on the training data.

logging_sampling_strategy

SamplingStrategy

Strategy to sample data from the Training Dataset. If not set, the whole dataset is processed.

Union field data_source.

data_source can be only one of the following:

dataset

string

The resource name of the Dataset used to train this Model.

gcs_source

GcsSource

The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.

bigquery_source

BigQuerySource

The BigQuery table of the unmanaged Dataset used to train this Model.

TrainingPredictionSkewDetectionConfig

The config for Training & Prediction data skew detection. It specifies the training dataset sources and the skew detection parameters.

Fields
skew_thresholds

map<string, ThresholdConfig>

Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.

attribution_score_skew_thresholds

map<string, ThresholdConfig>

Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.

default_skew_threshold

ThresholdConfig

Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
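Putting the skew fields together, here is a hedged sketch of a TrainingPredictionSkewDetectionConfig (feature names and threshold values are hypothetical, and it assumes ThresholdConfig carries a single numeric value field):

# Sketch only: per-feature thresholds plus a default, as described above.
skew_detection_config = {
    "skew_thresholds": {
        "age": {"value": 0.3},      # monitored with its own threshold
        "income": {"value": 0.2},
    },
    "default_skew_threshold": {"value": 0.5},  # applies to remaining features
}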

ModelMonitoringObjectiveSpec

Monitoring objectives spec.

Fields
explanation_spec

ExplanationSpec

The explanation spec. This spec is required when the objectives spec includes feature attribution objectives.

baseline_dataset

ModelMonitoringInput

Baseline dataset. It could be the training dataset or production serving dataset from a previous period.

target_dataset

ModelMonitoringInput

Target dataset.

Union field objective. The monitoring objective. objective can be only one of the following:
tabular_objective

TabularObjective

Tabular monitoring objective.

DataDriftSpec

Data drift monitoring spec. Data drift measures the distribution distance between the current dataset and a baseline dataset. A typical use case is to detect data drift between the recent production serving dataset and the training dataset, or to compare the recent production dataset with a dataset from a previous period.

Fields
features[]

string

Feature names / prediction output names to monitor. These should be a subset of the input feature names or prediction output names specified in the monitoring schema. If the field is not specified, all features / prediction outputs outlined in the monitoring schema will be used.

categorical_metric_type

string

Supported metric types:

  • l_infinity
  • jensen_shannon_divergence

numeric_metric_type

string

Supported metric types:

  • jensen_shannon_divergence

default_categorical_alert_condition

ModelMonitoringAlertCondition

Default alert condition for all the categorical features.

default_numeric_alert_condition

ModelMonitoringAlertCondition

Default alert condition for all the numeric features.

feature_alert_conditions

map<string, ModelMonitoringAlertCondition>

Per-feature alert conditions override the default alert condition.

FeatureAttributionSpec

Feature attribution monitoring spec.

Fields
features[]

string

Feature names to monitor. These should be a subset of the input feature names specified in the monitoring schema. If the field is not specified, all features outlined in the monitoring schema will be used.

default_alert_condition

ModelMonitoringAlertCondition

Default alert condition for all the features.

feature_alert_conditions

map<string, ModelMonitoringAlertCondition>

Per-feature alert conditions override the default alert condition.

batch_explanation_dedicated_resources

BatchDedicatedResources

The config of resources used by the Model Monitoring during the batch explanation for non-AutoML models. If not set, the n1-standard-2 machine type will be used by default.

TabularObjective

Tabular monitoring objective.

Fields
feature_drift_spec

DataDriftSpec

Input feature distribution drift monitoring spec.

prediction_output_drift_spec

DataDriftSpec

Prediction output distribution drift monitoring spec.

feature_attribution_spec

FeatureAttributionSpec

Feature attribution monitoring spec.
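A hedged end-to-end sketch of a TabularObjective combining the specs above (metric names are taken from the supported lists; feature and output names are hypothetical):

# Sketch only: a TabularObjective dict mirroring the fields documented above.
tabular_objective = {
    "feature_drift_spec": {
        "features": ["age", "country"],            # hypothetical features
        "categorical_metric_type": "l_infinity",
        "numeric_metric_type": "jensen_shannon_divergence",
        "default_categorical_alert_condition": {"threshold": 0.3},
        "feature_alert_conditions": {"age": {"threshold": 0.2}},
    },
    "prediction_output_drift_spec": {
        "features": ["predicted_score"],           # hypothetical output name
        "numeric_metric_type": "jensen_shannon_divergence",
        "default_numeric_alert_condition": {"threshold": 0.4},
    },
}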

ModelMonitoringOutputSpec

Specification for the export destination of monitoring results, including metrics, logs, etc.

Fields
gcs_base_directory

GcsDestination

Google Cloud Storage base folder path for metrics, error logs, etc.

ModelMonitoringSchema

The Model Monitoring Schema definition.

Fields
feature_fields[]

FieldSchema

Feature names of the model. Vertex AI will try to match the features from your dataset as follows:

  • For 'csv' files, the header names are required, and we will extract the corresponding feature values when the header names align with the feature names.
  • For 'jsonl' files, we will extract the corresponding feature values if the key names match the feature names. Note: Nested features are not supported, so please ensure your features are flattened. Ensure the feature values are scalar or an array of scalars.
  • For 'bigquery' datasets, we will extract the corresponding feature values if the column names match the feature names. Note: The column type can be a scalar or an array of scalars. STRUCT or JSON types are not supported. You may use SQL queries to select or aggregate the relevant features from your original table, but ensure that the 'schema' of the query results meets our requirements.
  • For the Vertex AI Endpoint Request Response Logging table or Vertex AI Batch Prediction Job results, if the instance_type is an array, ensure that the sequence in feature_fields matches the order of features in the prediction instance. We will match the feature with the array in the order specified in [feature_fields].

prediction_fields[]

FieldSchema

Prediction output names of the model. The requirements are the same as for feature_fields. For AutoML Tables, the prediction output name presented in the schema will be predicted_{target_column}, where target_column is the one you specified when training the model. For prediction output drift analysis:

  • AutoML Classification: the distribution of the argmax label will be analyzed.
  • AutoML Regression: the distribution of the value will be analyzed.

ground_truth_fields[]

FieldSchema

Target / ground truth names of the model.

FieldSchema

Schema field definition.

Fields
name

string

Field name.

data_type

string

Supported data types are: float, integer, boolean, string, categorical.

repeated

bool

Describes whether the schema field is an array of the given data type.
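For example, a hedged ModelMonitoringSchema for a simple tabular model (all field names hypothetical) could be expressed as:

# Sketch of a ModelMonitoringSchema using the FieldSchema fields above.
monitoring_schema = {
    "feature_fields": [
        {"name": "age", "data_type": "integer", "repeated": False},
        {"name": "country", "data_type": "categorical", "repeated": False},
    ],
    "prediction_fields": [
        {"name": "predicted_income", "data_type": "float", "repeated": False}
    ],
    "ground_truth_fields": [
        {"name": "income", "data_type": "float", "repeated": False}
    ],
}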

ModelMonitoringSpec

Model monitoring job spec. It outlines the specifications for monitoring objectives, notifications, and result exports.

Fields
objective_spec

ModelMonitoringObjectiveSpec

The monitoring objective spec.

notification_spec

ModelMonitoringNotificationSpec

The model monitoring notification spec.

output_spec

ModelMonitoringOutputSpec

The Output destination spec for metrics, error logs, etc.

ModelMonitoringStats

Represents the collection of statistics for a metric.

Fields

Union field stats.

stats can be only one of the following:

tabular_stats

ModelMonitoringTabularStats

Generated tabular statistics.

ModelMonitoringStatsAnomalies

Statistics and anomalies generated by Model Monitoring.

Fields
objective

ModelDeploymentMonitoringObjectiveType

The Model Monitoring Objective that these stats and anomalies belong to.

deployed_model_id

string

Deployed Model ID.

anomaly_count

int32

Number of anomalies within all stats.

feature_stats[]

FeatureHistoricStatsAnomalies

A list of historical Stats and Anomalies generated for all Features.

FeatureHistoricStatsAnomalies

Historical Stats (and Anomalies) for a specific Feature.

Fields
feature_display_name

string

Display Name of the Feature.

threshold

ThresholdConfig

Threshold for anomaly detection.

training_stats

FeatureStatsAnomaly

Stats calculated for the Training Dataset.

prediction_stats[]

FeatureStatsAnomaly

A list of historical stats generated by different time windows' Prediction Datasets.

ModelMonitoringStatsDataPoint

Represents a single statistics data point.

Fields
current_stats

TypedValue

Statistics from current dataset.

baseline_stats

TypedValue

Statistics from baseline dataset.

threshold_value

double

Threshold value.

has_anomaly

bool

Indicates whether the statistics have an anomaly.

model_monitoring_job

string

Model monitoring job resource name.

schedule

string

Schedule resource name.

create_time

Timestamp

Statistics create time.

algorithm

string

Algorithm used to calculate the metrics, e.g., jensen_shannon_divergence, l_infinity.

TypedValue

Typed value of the statistics.

Fields
Union field value. The typed value. value can be only one of the following:
double_value

double

Double.

distribution_value

DistributionDataValue

Distribution.

DistributionDataValue

Summary statistics for a population of values.

Fields
distribution

Value

Predictive monitoring drift distribution in tensorflow.metadata.v0.DatasetFeatureStatistics format.

distribution_deviation

double

Distribution distance deviation from the current dataset's statistics to the baseline dataset's statistics.

  • For categorical features, the distribution distance is calculated by L-infinity norm or Jensen–Shannon divergence.
  • For numerical features, the distribution distance is calculated by Jensen–Shannon divergence.

ModelMonitoringTabularStats

A collection of data points that describes the time-varying values of a tabular metric.

Fields
stats_name

string

The stats name.

objective_type

string

One of the supported monitoring objectives:

  • raw-feature-drift
  • prediction-output-drift
  • feature-attribution

data_points[]

ModelMonitoringStatsDataPoint

The data points of this time series. When listing time series, points are returned in reverse time order.

ModelSourceInfo

Detailed description of the source information of the model.

Fields
source_type

ModelSourceType

Type of the model source.

ModelSourceType

Source of the model. Different from the objective field, this ModelSourceType enum indicates the source from which the model was accessed or obtained, whereas the objective indicates the overall aim or function of the model.

Enums
MODEL_SOURCE_TYPE_UNSPECIFIED Should not be used.
AUTOML The Model is uploaded by an AutoML training pipeline.
CUSTOM The Model is uploaded by the user or a custom training pipeline.
BQML The Model is registered and synced from BigQuery ML.
MODEL_GARDEN The Model is saved or tuned from Model Garden.
GENIE The Model is saved or tuned from Genie.
CUSTOM_TEXT_EMBEDDING The Model is uploaded by text embedding finetuning pipeline.
MARKETPLACE The Model is saved or tuned from Marketplace.

MutateDeployedIndexOperationMetadata

Runtime operation information for IndexEndpointService.MutateDeployedIndex.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

deployed_index_id

string

The unique index id specified by the user.

MutateDeployedIndexRequest

Request message for IndexEndpointService.MutateDeployedIndex.

Fields
index_endpoint

string

Required. The name of the IndexEndpoint resource into which to deploy an Index. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}

deployed_index

DeployedIndex

Required. The DeployedIndex to be updated within the IndexEndpoint. Currently, the updatable fields are DeployedIndex.automatic_resources and DeployedIndex.dedicated_resources

MutateDeployedIndexResponse

Response message for IndexEndpointService.MutateDeployedIndex.

Fields
deployed_index

DeployedIndex

The DeployedIndex that had been updated in the IndexEndpoint.

MutateDeployedModelOperationMetadata

Runtime operation information for EndpointService.MutateDeployedModel.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

MutateDeployedModelRequest

Request message for EndpointService.MutateDeployedModel.

Fields
endpoint

string

Required. The name of the Endpoint resource into which to mutate a DeployedModel. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

deployed_model

DeployedModel

Required. The DeployedModel to be mutated within the Endpoint. Only the following fields can be mutated:

update_mask

FieldMask

Required. The update mask applies to the resource. See google.protobuf.FieldMask.

MutateDeployedModelResponse

Response message for EndpointService.MutateDeployedModel.

Fields
deployed_model

DeployedModel

The DeployedModel that's being mutated.

NearestNeighborQuery

A query to find a number of similar entities.

Fields
neighbor_count

int32

Optional. The number of similar entities to be retrieved from feature view for each query.

string_filters[]

StringFilter

Optional. The list of string filters.

numeric_filters[]

NumericFilter

Optional. The list of numeric filters.

per_crowding_attribute_neighbor_count

int32

Optional. Crowding is a constraint on a neighbor list produced by nearest neighbor search, requiring that no more than per_crowding_attribute_neighbor_count of the k neighbors returned have the same value of crowding_attribute. It's used for improving result diversity.

parameters

Parameters

Optional. Parameters that can be set to tune query on the fly.

Union field instance.

instance can be only one of the following:

entity_id

string

Optional. The entity id whose similar entities should be searched for. If embedding is set, search will use embedding instead of entity_id.

embedding

Embedding

Optional. The embedding vector to be used for similarity search.

Embedding

The embedding vector.

Fields
value[]

float

Optional. Individual value in the embedding.

NumericFilter

Numeric filter is used to search a subset of the entities by using boolean rules on numeric columns. For example:

Database Point 0: {name: "a" value_int: 42} {name: "b" value_float: 1.0}
Database Point 1: {name: "a" value_int: 10} {name: "b" value_float: 2.0}
Database Point 2: {name: "a" value_int: -1} {name: "b" value_float: 3.0}

Query: {name: "a" value_int: 12 operator: LESS}        // Matches Point 1, 2
       {name: "b" value_float: 2.0 operator: EQUAL}    // Matches Point 1
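To make the matching rules concrete, here is a minimal sketch (not the service implementation) that applies the Operator semantics documented below to the database points above:

import operator

# Hypothetical evaluator for the Operator enum documented below.
_OPS = {"LESS": operator.lt, "LESS_EQUAL": operator.le, "EQUAL": operator.eq,
        "GREATER_EQUAL": operator.ge, "GREATER": operator.gt,
        "NOT_EQUAL": operator.ne}

def matches(point_value, query_value, op: str) -> bool:
    # A point matches when `point_value <op> query_value` holds.
    return _OPS[op](point_value, query_value)

# Query {name: "a", value_int: 12, operator: LESS} against the points above:
assert matches(42, 12, "LESS") is False   # Point 0: 42 < 12 fails
assert matches(10, 12, "LESS") is True    # Point 1: 10 < 12 matches
assert matches(-1, 12, "LESS") is True    # Point 2: -1 < 12 matches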

Fields
name

string

Required. Column name in BigQuery that is used as the filter.

Union field Value. The type of Value must be consistent for all datapoints with a given name. This is verified at runtime. Value can be only one of the following:
value_int

int64

int value type.

value_float

float

float value type.

value_double

double

double value type.

op

Operator

Optional. This MUST be specified for queries and must NOT be specified for database points.

Operator

Datapoints for which Operator is true relative to the query's Value field will be allowlisted.

Enums
OPERATOR_UNSPECIFIED Unspecified operator.
LESS Entities are eligible if their value is < the query's.
LESS_EQUAL Entities are eligible if their value is <= the query's.
EQUAL Entities are eligible if their value is == the query's.
GREATER_EQUAL Entities are eligible if their value is >= the query's.
GREATER Entities are eligible if their value is > the query's.
NOT_EQUAL Entities are eligible if their value is != the query's.

Parameters

Parameters that can be overridden in each query to tune query latency and recall.

Fields
approximate_neighbor_candidates

int32

Optional. The number of neighbors to find via approximate search before exact reordering is performed; if set, this value must be > neighbor_count.

leaf_nodes_search_fraction

double

Optional. The fraction of the number of leaves to search, set at query time, allows the user to tune search performance. Increasing this value results in both higher search accuracy and higher latency. The value should be between 0.0 and 1.0.

StringFilter

String filter is used to search a subset of the entities by using boolean rules on string columns. For example: if a query specifies a string filter with 'name = color, allow_tokens = {red, blue}, deny_tokens = {purple}', then the query will match entities that are red or blue; but if those points are also purple, they will be excluded even if they are red/blue. Only string filter is supported for now; numeric filter will be supported in the near future.
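A minimal sketch of the allow/deny semantics described above (not the service implementation):

# Hypothetical evaluator: an entity matches if it has at least one allowed
# token and no denied token for the filtered column.
def string_filter_matches(entity_tokens: set,
                          allow_tokens: set,
                          deny_tokens: set) -> bool:
    if entity_tokens & deny_tokens:          # any denied token excludes it
        return False
    return bool(entity_tokens & allow_tokens)

# color filter: allow {red, blue}, deny {purple}
assert string_filter_matches({"red"}, {"red", "blue"}, {"purple"}) is True
assert string_filter_matches({"red", "purple"}, {"red", "blue"}, {"purple"}) is False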

Fields
name

string

Required. Column name in BigQuery that is used as the filter.

allow_tokens[]

string

Optional. The allowed tokens.

deny_tokens[]

string

Optional. The denied tokens.

NearestNeighborSearchOperationMetadata

Runtime operation metadata with regard to Matching Engine Index.

Fields
content_validation_stats[]

ContentValidationStats

The validation stats of the content (per file) to be inserted or updated on the Matching Engine Index resource. Populated if contentsDeltaUri is provided as part of Index.metadata. Please note that currently, for files that are broken or have an unsupported file format, we will not have stats for those files.

data_bytes_count

int64

The ingested data size in bytes.

ContentValidationStats

Fields
source_gcs_uri

string

Cloud Storage URI pointing to the original file in user's bucket.

valid_record_count

int64

Number of records in this file that were successfully processed.

invalid_record_count

int64

Number of records in this file that were skipped due to validation errors.

partial_errors[]

RecordError

The detailed information of the partial failures encountered for those invalid records that couldn't be parsed. Up to 50 partial errors will be reported.

valid_sparse_record_count

int64

Number of sparse records in this file that were successfully processed.

invalid_sparse_record_count

int64

Number of sparse records in this file that were skipped due to validation errors.

RecordError

Fields
error_type

RecordErrorType

The error type of this record.

error_message

string

A human-readable message that is shown to the user to help them fix the error. Note that this message may change from time to time; your code should check against error_type as the source of truth.

source_gcs_uri

string

Cloud Storage URI pointing to the original file in user's bucket.

embedding_id

string

Empty if the embedding id failed to parse.

raw_record

string

The original content of this record.

RecordErrorType

Enums
ERROR_TYPE_UNSPECIFIED Default, shall not be used.
EMPTY_LINE The record is empty.
INVALID_JSON_SYNTAX Invalid json format.
INVALID_CSV_SYNTAX Invalid csv format.
INVALID_AVRO_SYNTAX Invalid avro format.
INVALID_EMBEDDING_ID The embedding id is not valid.
EMBEDDING_SIZE_MISMATCH The size of the dense embedding vectors does not match with the specified dimension.
NAMESPACE_MISSING The namespace field is missing.
PARSING_ERROR Generic catch-all error. Only used for validation failure where the root cause cannot be easily retrieved programmatically.
DUPLICATE_NAMESPACE There are multiple restricts with the same namespace value.
OP_IN_DATAPOINT Numeric restrict has operator specified in datapoint.
MULTIPLE_VALUES Numeric restrict has multiple values specified.
INVALID_NUMERIC_VALUE Numeric restrict has invalid numeric value specified.
INVALID_ENCODING File is not in UTF_8 format.
INVALID_SPARSE_DIMENSIONS Error parsing sparse dimensions field.
INVALID_TOKEN_VALUE Token restrict value is invalid.
INVALID_SPARSE_EMBEDDING Invalid sparse embedding.
INVALID_EMBEDDING Invalid dense embedding.

NearestNeighbors

Nearest neighbors for one query.

Fields
neighbors[]

Neighbor

All its neighbors.

Neighbor

A neighbor of the query vector.

Fields
entity_id

string

The id of the similar entity.

distance

double

The distance between the neighbor and the query vector.

entity_key_values

FetchFeatureValuesResponse

The attributes of the neighbor, e.g., filters, crowding, and metadata. Note that full entities are returned only when "return_full_entity" is set to true. Otherwise, only the "entity_id" and "distance" fields are populated.

Neighbor

Neighbors for example-based explanations.

Fields
neighbor_id

string

Output only. The neighbor id.

neighbor_distance

double

Output only. The neighbor distance.

NetworkSpec

Network spec.

Fields
enable_internet_access

bool

Whether to enable public internet access. Default false.

network

string

The full name of the Google Compute Engine network.

subnetwork

string

The name of the subnet that this instance is in. Format: projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}

NfsMount

Represents a mount configuration for Network File System (NFS) to mount.

Fields
server

string

Required. IP address of the NFS server.

path

string

Required. Source path exported from the NFS server. Has to start with '/', and combined with the IP address, it indicates the source mount path in the form of server:path.

mount_point

string

Required. Destination mount path. The NFS will be mounted for the user under /mnt/nfs/

NotebookEucConfig

The EUC configuration of NotebookRuntimeTemplate.

Fields
euc_disabled

bool

Input only. Whether EUC is disabled in this NotebookRuntimeTemplate. In proto3, the default value of a boolean is false, so by default EUC is enabled for a NotebookRuntimeTemplate.

bypass_actas_check

bool

Output only. Whether the ActAs check is bypassed for the service account attached to the VM. If false, we need the ActAs check for the default Compute Engine service account. When a Runtime is created, a VM is allocated using the default Compute Engine service account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, the Runtime owner is using EUC and does not require the above permission, as the VM no longer uses the default Compute Engine SA, but a P4SA.

NotebookExecutionJob

NotebookExecutionJob represents an instance of a notebook execution.

Fields
name

string

Output only. The resource name of this NotebookExecutionJob. Format: projects/{project_id}/locations/{location}/notebookExecutionJobs/{job_id}

display_name

string

The display name of the NotebookExecutionJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

execution_timeout

Duration

Max running time of the execution job in seconds (default 86400s / 24 hrs).

schedule_resource_name

string

Output only. The Schedule resource name if this job is triggered by one. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}

job_state

JobState

Output only. The state of the NotebookExecutionJob.

status

Status

Output only. Populated when the NotebookExecutionJob is completed. When there is an error during notebook execution, the error details are populated.

create_time

Timestamp

Output only. Timestamp when this NotebookExecutionJob was created.

update_time

Timestamp

Output only. Timestamp when this NotebookExecutionJob was most recently updated.

labels

map<string, string>

The labels with user-defined metadata to organize NotebookExecutionJobs.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

kernel_name

string

The name of the kernel to use during notebook execution. If unset, the default kernel is used.

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for the notebook execution job. This field is auto-populated if the NotebookRuntimeTemplate has an encryption spec.

Union field notebook_source. The input notebook. notebook_source can be only one of the following:
dataform_repository_source

DataformRepositorySource

The Dataform Repository pointing to a single file notebook repository.

gcs_notebook_source

GcsNotebookSource

The Cloud Storage url pointing to the ipynb file. Format: gs://bucket/notebook_file.ipynb

direct_notebook_source

DirectNotebookSource

The contents of an input notebook file.

Union field environment_spec. The compute config to use for an execution job. environment_spec can be only one of the following:
notebook_runtime_template_resource_name

string

The NotebookRuntimeTemplate to source compute configuration from.

custom_environment_spec

CustomEnvironmentSpec

The custom compute configuration for an execution job.

Union field execution_sink. The location to store the notebook execution result. execution_sink can be only one of the following:
gcs_output_uri

string

The Cloud Storage location to upload the result to. Format: gs://bucket-name

Union field execution_identity. The identity to run the execution as. execution_identity can be only one of the following:
execution_user

string

The user email to run the execution as. Only supported by Colab runtimes.

service_account

string

The service account to run the execution as.

Union field runtime_environment. Runtime environment for the notebook execution job. If unspecified, the default runtime of Colab is used. runtime_environment can be only one of the following:
workbench_runtime

WorkbenchRuntime

The Workbench runtime configuration to use for the notebook execution.
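Tying the unions together, here is a hedged sketch of a NotebookExecutionJob body choosing one member from each union (all resource names, URIs, and the email are hypothetical):

# Sketch only: one choice per union field documented above.
notebook_execution_job = {
    "display_name": "nightly-report",
    "gcs_notebook_source": {                     # notebook_source union
        "uri": "gs://my-bucket/report.ipynb"
    },
    "notebook_runtime_template_resource_name":   # environment_spec union
        "projects/my-project/locations/us-central1/notebookRuntimeTemplates/my-template",
    "gcs_output_uri": "gs://my-bucket/results",  # execution_sink union
    "execution_user": "analyst@example.com",     # execution_identity union
}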

CustomEnvironmentSpec

Compute configuration to use for an execution job.

Fields
machine_spec

MachineSpec

The specification of a single machine for the execution job.

persistent_disk_spec

PersistentDiskSpec

The specification of a persistent disk to attach for the execution job.

network_spec

NetworkSpec

The network configuration to use for the execution job.

DataformRepositorySource

The Dataform Repository containing the input notebook.

Fields
dataform_repository_resource_name

string

The resource name of the Dataform Repository. Format: projects/{project_id}/locations/{location}/repositories/{repository_id}

commit_sha

string

The commit SHA to read the repository with. If unset, the file will be read at HEAD.

DirectNotebookSource

The content of the input notebook in ipynb format.

Fields
content

bytes

The base64-encoded contents of the input notebook file.
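Since content must be base64-encoded, a minimal sketch of preparing it from a local .ipynb file (the path is hypothetical):

import base64

# Read a local notebook and base64-encode it for DirectNotebookSource.content.
with open("analysis.ipynb", "rb") as f:      # hypothetical local file
    content = base64.b64encode(f.read()).decode("ascii")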

GcsNotebookSource

The Cloud Storage uri for the input notebook.

Fields
uri

string

The Cloud Storage uri pointing to the ipynb file. Format: gs://bucket/notebook_file.ipynb

generation

string

The version of the Cloud Storage object to read. If unset, the current version of the object is read. See https://cloud.google.com/storage/docs/metadata#generation-number.

WorkbenchRuntime

This type has no fields.

Configuration for a Workbench Instances-based environment.

NotebookExecutionJobView

Views for Get/List NotebookExecutionJob

Enums
NOTEBOOK_EXECUTION_JOB_VIEW_UNSPECIFIED When unspecified, the API defaults to the BASIC view.
NOTEBOOK_EXECUTION_JOB_VIEW_BASIC Includes all fields except for direct notebook inputs.
NOTEBOOK_EXECUTION_JOB_VIEW_FULL Includes all fields.

NotebookIdleShutdownConfig

The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field.

Fields
idle_timeout

Duration

Required. Duration is accurate to the second. In Notebooks, Idle Timeout is accurate to the minute, so the range of idle_timeout (in seconds) is 10 * 60 ~ 1440 * 60.

idle_shutdown_disabled

bool

Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate.

NotebookRuntime

A runtime is a virtual machine allocated to a particular user for a particular Notebook file on a temporary basis, with a lifetime limited to 24 hours.

Fields
name

string

Output only. The resource name of the NotebookRuntime.

runtime_user

string

Required. The user email of the NotebookRuntime.

notebook_runtime_template_ref

NotebookRuntimeTemplateRef

Output only. The pointer to NotebookRuntimeTemplate this NotebookRuntime is created from.

proxy_uri

string

Output only. The proxy endpoint used to access the NotebookRuntime.

create_time

Timestamp

Output only. Timestamp when this NotebookRuntime was created.

update_time

Timestamp

Output only. Timestamp when this NotebookRuntime was most recently updated.

health_state

HealthState

Output only. The health state of the NotebookRuntime.

display_name

string

Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

The description of the NotebookRuntime.

service_account

string

Output only. Deprecated: This field is no longer used and the "Vertex AI Notebook Service Account" (service-PROJECT_NUMBER@gcp-sa-aiplatform-vm.iam.gserviceaccount.com) is used for the runtime workload identity. See https://cloud.google.com/iam/docs/service-agents#vertex-ai-notebook-service-account for more details.

The service account that the NotebookRuntime workload runs as.

runtime_state

RuntimeState

Output only. The runtime (instance) state of the NotebookRuntime.

is_upgradable

bool

Output only. Whether NotebookRuntime is upgradable.

labels

map<string, string>

The labels with user-defined metadata to organize your NotebookRuntime.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one NotebookRuntime (System labels are excluded).

See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for NotebookRuntime:

  • "aiplatform.googleapis.com/notebook_runtime_gce_instance_id": output only, its value is the Compute Engine instance id.
  • "aiplatform.googleapis.com/colab_enterprise_entry_service": its value is either "bigquery" or "vertex"; if absent, it should be "vertex". This is to describe the entry service, either BigQuery or Vertex.
expiration_time

Timestamp

Output only. Timestamp when this NotebookRuntime will expire:

  1. System predefined NotebookRuntime: 24 hours after creation. After expiration, the system predefined runtime will be deleted.
  2. User created NotebookRuntime: 6 months after last upgrade. After expiration, the user created runtime will be stopped and allowed for upgrade.

version

string

Output only. The VM OS image version of NotebookRuntime.

notebook_runtime_type

NotebookRuntimeType

Output only. The type of the notebook runtime.

idle_shutdown_config

NotebookIdleShutdownConfig

Output only. The idle shutdown configuration of the notebook runtime.

euc_config

NotebookEucConfig

Output only. EUC configuration of the notebook runtime.

shielded_vm_config

ShieldedVmConfig

Output only. Runtime Shielded VM spec.

network_tags[]

string

Optional. The Compute Engine tags to add to runtime (see Tagging instances).

encryption_spec

EncryptionSpec

Output only. Customer-managed encryption key spec for the notebook runtime.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

HealthState

The substate of the NotebookRuntime to display health information.

Enums
HEALTH_STATE_UNSPECIFIED Unspecified health state.
HEALTHY NotebookRuntime is in healthy state. Applies to ACTIVE state.
UNHEALTHY NotebookRuntime is in unhealthy state. Applies to ACTIVE state.

RuntimeState

The substate of the NotebookRuntime to display the state of the runtime. The NotebookRuntime resource is in ACTIVE state for these substates.

Enums
RUNTIME_STATE_UNSPECIFIED Unspecified runtime state.
RUNNING NotebookRuntime is in running state.
BEING_STARTED NotebookRuntime is in starting state.
BEING_STOPPED NotebookRuntime is in stopping state.
STOPPED NotebookRuntime is in stopped state.
BEING_UPGRADED NotebookRuntime is in upgrading state. It is in the middle of upgrading process.
ERROR NotebookRuntime was unable to start/stop properly.
INVALID NotebookRuntime is in invalid state. Cannot be recovered.

NotebookRuntimeTemplate

A template that specifies runtime configurations such as machine type, runtime version, network configurations, etc. Multiple runtimes can be created from a runtime template.

Fields
name

string

The resource name of the NotebookRuntimeTemplate.

display_name

string

Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

The description of the NotebookRuntimeTemplate.

is_default
(deprecated)

bool

Output only. Deprecated: This field has no behavior. Use notebook_runtime_type = 'ONE_CLICK' instead.

The default template to use if not specified.

machine_spec

MachineSpec

Optional. Immutable. The specification of a single machine for the template.

data_persistent_disk_spec

PersistentDiskSpec

Optional. The specification of a persistent disk (see https://cloud.google.com/compute/docs/disks/persistent-disks) attached to the runtime as data disk storage.

network_spec

NetworkSpec

Optional. Network spec.

service_account
(deprecated)

string

Deprecated: This field is ignored and the "Vertex AI Notebook Service Account" (service-PROJECT_NUMBER@gcp-sa-aiplatform-vm.iam.gserviceaccount.com) is used for the runtime workload identity. See https://cloud.google.com/iam/docs/service-agents#vertex-ai-notebook-service-account for more details. For NotebookExecutionJob, use NotebookExecutionJob.service_account instead.

The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance.

If not specified, the Compute Engine default service account is used.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

labels

map<string, string>

The labels with user-defined metadata to organize the NotebookRuntimeTemplates.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

idle_shutdown_config

NotebookIdleShutdownConfig

The idle shutdown configuration of NotebookRuntimeTemplate. This config will only be set when idle shutdown is enabled.

euc_config

NotebookEucConfig

EUC configuration of the NotebookRuntimeTemplate.

create_time

Timestamp

Output only. Timestamp when this NotebookRuntimeTemplate was created.

update_time

Timestamp

Output only. Timestamp when this NotebookRuntimeTemplate was most recently updated.

notebook_runtime_type

NotebookRuntimeType

Optional. Immutable. The type of the notebook runtime template.

shielded_vm_config

ShieldedVmConfig

Optional. Immutable. Runtime Shielded VM spec.

network_tags[]

string

Optional. The Compute Engine tags to add to the runtime (see Tagging instances).

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for the notebook runtime.
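
For orientation, here is a minimal sketch that builds this message with the google-cloud-aiplatform Python client; the display name, machine type, and disk values are illustrative assumptions, not service defaults.

    from google.cloud import aiplatform_v1beta1 as aip

    # Minimal sketch of a NotebookRuntimeTemplate; only display_name is required.
    template = aip.NotebookRuntimeTemplate(
        display_name="team-runtime-template",  # assumed name
        description="Shared template for team notebooks",
        machine_spec=aip.MachineSpec(machine_type="n1-standard-4"),
        data_persistent_disk_spec=aip.PersistentDiskSpec(
            disk_type="pd-standard",  # see PersistentDiskSpec below for valid values
            disk_size_gb=100,
        ),
    )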

NotebookRuntimeTemplateRef

A reference that points to a NotebookRuntimeTemplate.

Fields
notebook_runtime_template

string

Immutable. A resource name of the NotebookRuntimeTemplate.

NotebookRuntimeType

Represents a notebook runtime type.

Enums
NOTEBOOK_RUNTIME_TYPE_UNSPECIFIED Unspecified notebook runtime type; NotebookRuntimeType will default to USER_DEFINED.
USER_DEFINED Runtime or template with customized configurations from the user.
ONE_CLICK Runtime or template with system-defined configurations.

PSCAutomationConfig

PSC config that is used to automatically create a forwarding rule via ServiceConnectionMap.

Fields
project_id

string

Required. Project ID used to create the forwarding rule.

network

string

Required. The full name of the Google Compute Engine network. Format: projects/{project}/global/networks/{network}, where {project} is a project number (as in '12345') and {network} is a network name.

PairwiseChoice

Pairwise prediction autorater preference.

Enums
PAIRWISE_CHOICE_UNSPECIFIED Unspecified prediction choice.
BASELINE Baseline prediction wins.
CANDIDATE Candidate prediction wins.
TIE Winner cannot be determined.

PairwiseMetricInput

Input for pairwise metric.

Fields
metric_spec

PairwiseMetricSpec

Required. Spec for pairwise metric.

instance

PairwiseMetricInstance

Required. Pairwise metric instance.

PairwiseMetricInstance

Pairwise metric instance. Usually one instance corresponds to one row in an evaluation dataset.

Fields
Union field instance. Instance for pairwise metric. instance can be only one of the following:
json_instance

string

Instance specified as a JSON string. String key-value pairs are expected in the json_instance to render PairwiseMetricSpec.instance_prompt_template.

PairwiseMetricResult

Spec for pairwise metric result.

Fields
pairwise_choice

PairwiseChoice

Output only. Pairwise metric choice.

explanation

string

Output only. Explanation for pairwise metric score.

PairwiseMetricSpec

Spec for pairwise metric.

Fields
metric_prompt_template

string

Required. Metric prompt template for pairwise metric.
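
To show how the pairwise metric messages compose, the sketch below pairs a PairwiseMetricSpec with a PairwiseMetricInstance whose json_instance supplies the keys referenced by the prompt template; the template text and keys are illustrative assumptions.

    import json

    from google.cloud import aiplatform_v1beta1 as aip

    spec = aip.PairwiseMetricSpec(
        # Assumed template; {response_a} and {response_b} are illustrative keys.
        metric_prompt_template=(
            "Which response is better?\nA: {response_a}\nB: {response_b}"
        )
    )
    instance = aip.PairwiseMetricInstance(
        json_instance=json.dumps({
            "response_a": "Paris is the capital of France.",
            "response_b": "France's capital city is Paris.",
        })
    )
    metric_input = aip.PairwiseMetricInput(metric_spec=spec, instance=instance)
    # The evaluation returns a PairwiseMetricResult whose pairwise_choice is
    # BASELINE, CANDIDATE, or TIE (see PairwiseChoice above).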

PairwiseQuestionAnsweringQualityInput

Input for pairwise question answering quality metric.

Fields
metric_spec

PairwiseQuestionAnsweringQualitySpec

Required. Spec for pairwise question answering quality score metric.

instance

PairwiseQuestionAnsweringQualityInstance

Required. Pairwise question answering quality instance.

PairwiseQuestionAnsweringQualityInstance

Spec for pairwise question answering quality instance.

Fields
prediction

string

Required. Output of the candidate model.

baseline_prediction

string

Required. Output of the baseline model.

reference

string

Optional. Ground truth used to compare against the prediction.

context

string

Required. Text to answer the question.

instruction

string

Required. Question Answering prompt for LLM.

PairwiseQuestionAnsweringQualityResult

Spec for pairwise question answering quality result.

Fields
pairwise_choice

PairwiseChoice

Output only. Pairwise question answering prediction choice.

explanation

string

Output only. Explanation for question answering quality score.

confidence

float

Output only. Confidence for question answering quality score.

PairwiseQuestionAnsweringQualitySpec

Spec for pairwise question answering quality score metric.

Fields
use_reference

bool

Optional. Whether to use instance.reference to compute question answering quality.

version

int32

Optional. Which version to use for evaluation.

PairwiseSummarizationQualityInput

Input for pairwise summarization quality metric.

Fields
metric_spec

PairwiseSummarizationQualitySpec

Required. Spec for pairwise summarization quality score metric.

instance

PairwiseSummarizationQualityInstance

Required. Pairwise summarization quality instance.

PairwiseSummarizationQualityInstance

Spec for pairwise summarization quality instance.

Fields
prediction

string

Required. Output of the candidate model.

baseline_prediction

string

Required. Output of the baseline model.

reference

string

Optional. Ground truth used to compare against the prediction.

context

string

Required. Text to be summarized.

instruction

string

Required. Summarization prompt for LLM.

PairwiseSummarizationQualityResult

Spec for pairwise summarization quality result.

Fields
pairwise_choice

PairwiseChoice

Output only. Pairwise summarization prediction choice.

explanation

string

Output only. Explanation for summarization quality score.

confidence

float

Output only. Confidence for summarization quality score.

PairwiseSummarizationQualitySpec

Spec for pairwise summarization quality score metric.

Fields
use_reference

bool

Optional. Whether to use instance.reference to compute pairwise summarization quality.

version

int32

Optional. Which version to use for evaluation.

Part

A datatype containing media that is part of a multi-part Content message.

A Part consists of data which has an associated datatype. A Part can only contain one of the accepted types in Part.data.

A Part must have a fixed IANA MIME type identifying the type and subtype of the media if inline_data or file_data field is filled with raw bytes.

Fields
thought

bool

Output only. Indicates if the part is thought from the model.

Union field data.

data can be only one of the following:

text

string

Optional. Text part (can be code).

inline_data

Blob

Optional. Inlined bytes data.

file_data

FileData

Optional. URI based data.

function_call

FunctionCall

Optional. A predicted FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name with the parameters and their values.

function_response

FunctionResponse

Optional. The result output of a FunctionCall that contains a string representing the FunctionDeclaration.name and a structured JSON object containing any output from the function call. It is used as context to the model.

executable_code

ExecutableCode

Optional. Code generated by the model that is meant to be executed.

code_execution_result

CodeExecutionResult

Optional. Result of executing the ExecutableCode.

Union field metadata.

metadata can be only one of the following:

video_metadata

VideoMetadata

Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
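
Because data is a union field, each Part carries exactly one payload. A minimal sketch with the Python client follows; the bucket URI is an assumed example.

    from google.cloud import aiplatform_v1beta1 as aip

    # A text-only part.
    text_part = aip.Part(text="Describe the attached video.")

    # A file-based part: a fixed IANA MIME type is required for file_data.
    video_part = aip.Part(
        file_data=aip.FileData(
            mime_type="video/mp4",
            file_uri="gs://my-bucket/clip.mp4",  # assumed URI
        ),
    )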

PartnerModelTuningSpec

Tuning spec for Partner models.

Fields
training_dataset_uri

string

Required. Cloud Storage path to file containing training dataset for tuning. The dataset must be formatted as a JSONL file.

validation_dataset_uri

string

Optional. Cloud Storage path to file containing validation dataset for tuning. The dataset must be formatted as a JSONL file.

hyper_parameters

map<string, Value>

Hyperparameters for tuning. The accepted hyper_parameters and their valid range of values will differ depending on the base model.
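
Because hyper_parameters is a map of string to Value, plain Python values can be supplied when using the proto-plus based Python client. A minimal sketch follows; the hyperparameter names are illustrative assumptions, since accepted keys and ranges vary by base model.

    from google.cloud import aiplatform_v1beta1 as aip

    tuning_spec = aip.PartnerModelTuningSpec(
        training_dataset_uri="gs://my-bucket/train.jsonl",        # must be JSONL
        validation_dataset_uri="gs://my-bucket/validation.jsonl", # optional, JSONL
        hyper_parameters={
            # Assumed names; accepted keys and ranges depend on the base model.
            "learning_rate_multiplier": 1.0,
            "epoch_count": 3,
        },
    )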

PauseModelDeploymentMonitoringJobRequest

Request message for JobService.PauseModelDeploymentMonitoringJob.

Fields
name

string

Required. The resource name of the ModelDeploymentMonitoringJob to pause. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}

PauseScheduleRequest

Request message for ScheduleService.PauseSchedule.

Fields
name

string

Required. The name of the Schedule resource to be paused. Format: projects/{project}/locations/{location}/schedules/{schedule}

PersistentDiskSpec

Represents the spec of persistent disk (https://cloud.google.com/compute/docs/disks/persistent-disks) options.

Fields
disk_type

string

Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk)

disk_size_gb

int64

Size in GB of the disk (default is 100 GB).

PersistentResource

Represents long-lasting resources that are dedicated to users to run custom workloads. A PersistentResource can have multiple node pools and each node pool can have its own machine spec.

Fields
name

string

Immutable. Resource name of a PersistentResource.

display_name

string

Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters.

resource_pools[]

ResourcePool

Required. The spec of the pools of different resources.

state

State

Output only. The detailed state of the PersistentResource.

error

Status

Output only. Only populated when persistent resource's state is STOPPING or ERROR.

create_time

Timestamp

Output only. Time when the PersistentResource was created.

start_time

Timestamp

Output only. Time when the PersistentResource for the first time entered the RUNNING state.

update_time

Timestamp

Output only. Time when the PersistentResource was most recently updated.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize PersistentResource.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

network

string

Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name.

To specify this field, you must have already configured VPC Network Peering for Vertex AI.

If this field is left unspecified, the resources aren't peered with any network.

encryption_spec

EncryptionSpec

Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key.

resource_runtime_spec

ResourceRuntimeSpec

Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration.

resource_runtime

ResourceRuntime

Output only. Runtime information of the Persistent Resource.

reserved_ip_ranges[]

string

Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource.

If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network.

Example: ['vertex-ai-ip-range'].

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.
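
For orientation, a minimal sketch of the message with the Python client; ResourcePool is documented elsewhere, so its machine_spec and replica_count fields here are assumptions, as are the resource names and network.

    from google.cloud import aiplatform_v1beta1 as aip

    resource = aip.PersistentResource(
        display_name="shared-training-pool",
        resource_pools=[
            aip.ResourcePool(
                machine_spec=aip.MachineSpec(machine_type="n1-standard-8"),
                replica_count=2,
            )
        ],
        # Optional: peer with an existing VPC; VPC Network Peering must
        # already be configured for Vertex AI.
        network="projects/12345/global/networks/myVPC",
    )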

State

Describes the PersistentResource state.

Enums
STATE_UNSPECIFIED Not set.
PROVISIONING The PROVISIONING state indicates the persistent resources is being created.
RUNNING The RUNNING state indicates the persistent resource is healthy and fully usable.
STOPPING The STOPPING state indicates the persistent resource is being deleted.
ERROR The ERROR state indicates the persistent resource may be unusable. Details can be found in the error field.
REBOOTING The REBOOTING state indicates the persistent resource is being rebooted (PR is not available right now but is expected to be ready again later).
UPDATING The UPDATING state indicates the persistent resource is being updated.

PipelineFailurePolicy

Represents the failure policy of a pipeline. Currently, the default of a pipeline is that the pipeline will continue to run until no more tasks can be executed, also known as PIPELINE_FAILURE_POLICY_FAIL_SLOW. However, if a pipeline is set to PIPELINE_FAILURE_POLICY_FAIL_FAST, it will stop scheduling any new tasks when a task has failed. Any scheduled tasks will continue to completion.

Enums
PIPELINE_FAILURE_POLICY_UNSPECIFIED Default value, and follows fail slow behavior.
PIPELINE_FAILURE_POLICY_FAIL_SLOW Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed.
PIPELINE_FAILURE_POLICY_FAIL_FAST Indicates that the pipeline should stop scheduling new tasks after a task has failed.

PipelineJob

An instance of a machine learning PipelineJob.

Fields
name

string

Output only. The resource name of the PipelineJob.

display_name

string

The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.

create_time

Timestamp

Output only. Pipeline creation time.

start_time

Timestamp

Output only. Pipeline start time.

end_time

Timestamp

Output only. Pipeline end time.

update_time

Timestamp

Output only. Timestamp when this PipelineJob was most recently updated.

pipeline_spec

Struct

The spec of the pipeline.

state

PipelineState

Output only. The detailed state of the job.

job_detail

PipelineJobDetail

Output only. The details of pipeline run. Not available in the list view.

error

Status

Output only. The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED.

labels

map<string, string>

The labels with user-defined metadata to organize PipelineJob.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

Note that some label keys are reserved for Vertex AI Pipelines: vertex-ai-pipelines-run-billing-id (any user-set value will be overridden).

runtime_config

RuntimeConfig

Runtime config of the pipeline.

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for a pipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.

service_account

string

The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project will be used. See https://cloud.google.com/compute/docs/access/service-accounts#default_service_account

Users starting the pipeline must have the iam.serviceAccounts.actAs permission on this service account.

network

string

The full name of the Compute Engine network to which the Pipeline Job's workload should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name.

Private services access must already be configured for the network. The pipeline job will apply the network configuration to the Google Cloud resources being launched, where applicable, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network.

reserved_ip_ranges[]

string

A list of names for the reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload.

If set, we will deploy the Pipeline Job's workload within the provided IP ranges. Otherwise, the job will be deployed to any IP ranges under the provided VPC network.

Example: ['vertex-ai-ip-range'].

psc_interface_config

PscInterfaceConfig

Optional. Configuration for PSC-I for PipelineJob.

template_uri

string

A template URI from which PipelineJob.pipeline_spec, if empty, will be downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template.

template_metadata

PipelineTemplateMetadata

Output only. Pipeline template metadata. Fields are populated if PipelineJob.template_uri is from a supported template registry.

schedule_name

string

Output only. The schedule resource name. Only returned if the Pipeline is created by Schedule API.

preflight_validations

bool

Optional. Whether to do component-level validations before job creation.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

original_pipeline_job_id

int64

Optional. The original pipeline job id if this pipeline job is a rerun of a previous pipeline job.

pipeline_task_rerun_configs[]

PipelineTaskRerunConfig

Optional. The rerun configs for each task in the pipeline job. By default, the rerun will: 1. Use the same input artifacts as the original run. 2. Use the same input parameters as the original run. 3. Skip all the tasks that already succeeded in the original run. 4. Rerun all the tasks that did not succeed in the original run. By providing this field, users can override the default behavior and specify the rerun config for each task.

RuntimeConfig

The runtime config of a PipelineJob.

Fields
parameters
(deprecated)

map<string, Value>

Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

gcs_output_directory

string

Required. A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.

parameter_values

map<string, Value>

The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.

failure_policy

PipelineFailurePolicy

Represents the failure policy of a pipeline. Currently, the default of a pipeline is that the pipeline will continue to run until no more tasks can be executed, also known as PIPELINE_FAILURE_POLICY_FAIL_SLOW. However, if a pipeline is set to PIPELINE_FAILURE_POLICY_FAIL_FAST, it will stop scheduling any new tasks when a task has failed. Any scheduled tasks will continue to completion.

input_artifacts

map<string, InputArtifact>

The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is an InputArtifact.

default_runtime

DefaultRuntime

Optional. The default runtime for the PipelineJob. If not set, a standard Vertex Custom Job (https://cloud.google.com/vertex-ai/docs/training/overview) is used as the runtime. If set, all pipeline tasks will run on the default runtime unless a task is a GCPC custom job component (https://cloud.google.com/vertex-ai/docs/pipelines/customjob-component) based task. If the task is based on a GCPC custom job component, it runs solely according to the component's configuration.
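
A sketch of a v2-style RuntimeConfig attached to a PipelineJob; the bucket, parameters, and template URI are assumed examples.

    from google.cloud import aiplatform_v1beta1 as aip

    runtime_config = aip.PipelineJob.RuntimeConfig(
        gcs_output_directory="gs://my-bucket/pipeline-root",    # assumed bucket
        parameter_values={"learning_rate": 0.01, "epochs": 5},  # assumed parameters
        failure_policy=aip.PipelineFailurePolicy.PIPELINE_FAILURE_POLICY_FAIL_FAST,
    )
    job = aip.PipelineJob(
        display_name="my-pipeline-run",
        # Assumed Artifact Registry template URI.
        template_uri="https://us-central1-kfp.pkg.dev/my-project/my-repo/my-pipeline/latest",
        runtime_config=runtime_config,
    )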

DefaultRuntime

The default runtime for the PipelineJob.

Fields

Union field runtime_detail.

runtime_detail can be only one of the following:

persistent_resource_runtime_detail

PersistentResourceRuntimeDetail

Persistent resource based runtime detail.

InputArtifact

The type of an input artifact.

Fields

Union field kind.

kind can be only one of the following:

artifact_id

string

Artifact resource ID from MLMD, which is the last portion of an artifact resource name: projects/{project}/locations/{location}/metadataStores/default/artifacts/{artifact_id}. The artifact must stay within the same project, location, and default metadata store as the pipeline.

PersistentResourceRuntimeDetail

Persistent resource based runtime detail. For more information about persistent resource, refer to https://cloud.google.com/vertex-ai/docs/training/persistent-resource-overview

Fields
persistent_resource_name

string

Persistent resource name. Format: projects/{project}/locations/{location}/persistentResources/{persistent_resource}

task_resource_unavailable_wait_time_ms

int64

The maximum time a pipeline task waits for the required CPU, memory, or accelerator resource to become available from the specified persistent resource. The default wait time is 0.

task_resource_unavailable_timeout_behavior

TaskResourceUnavailableTimeoutBehavior

Specifies the behavior to take if the timeout is reached.

TaskResourceUnavailableTimeoutBehavior

An enum that specifies the behavior to take if the timeout is reached.

Enums
TASK_RESOURCE_UNAVAILABLE_TIMEOUT_BEHAVIOR_UNSPECIFIED Unspecified. Behavior is the same as FAIL.
FAIL Fail the task if the timeout is reached.
FALL_BACK_TO_ON_DEMAND Fall back to on-demand execution if the timeout is reached.

PipelineJobDetail

The runtime detail of PipelineJob.

Fields
pipeline_context

Context

Output only. The context of the pipeline.

pipeline_run_context

Context

Output only. The context of the current pipeline run.

task_details[]

PipelineTaskDetail

Output only. The runtime details of the tasks under the pipeline.

PipelineState

Describes the state of a pipeline.

Enums
PIPELINE_STATE_UNSPECIFIED The pipeline state is unspecified.
PIPELINE_STATE_QUEUED The pipeline has been created or resumed, and processing has not yet begun.
PIPELINE_STATE_PENDING The service is preparing to run the pipeline.
PIPELINE_STATE_RUNNING The pipeline is in progress.
PIPELINE_STATE_SUCCEEDED The pipeline completed successfully.
PIPELINE_STATE_FAILED The pipeline failed.
PIPELINE_STATE_CANCELLING The pipeline is being cancelled. From this state, the pipeline may only go to either PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED.
PIPELINE_STATE_CANCELLED The pipeline has been cancelled.
PIPELINE_STATE_PAUSED The pipeline has been stopped, and can be resumed.

PipelineTaskDetail

The runtime detail of a task execution.

Fields
task_id

int64

Output only. The system generated ID of the task.

parent_task_id

int64

Output only. The id of the parent task if the task is within a component scope. Empty if the task is at the root level.

task_name

string

Output only. The user-specified name of the task that is defined in pipeline_spec.

create_time

Timestamp

Output only. Task create time.

start_time

Timestamp

Output only. Task start time.

end_time

Timestamp

Output only. Task end time.

executor_detail

PipelineTaskExecutorDetail

Output only. The detailed execution info.

state

State

Output only. State of the task.

execution

Execution

Output only. The execution metadata of the task.

error

Status

Output only. The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED.

pipeline_task_status[]

PipelineTaskStatus

Output only. A list of task statuses. This field keeps a record of the task status as it evolves over time.

inputs

map<string, ArtifactList>

Output only. The runtime input artifacts of the task.

outputs

map<string, ArtifactList>

Output only. The runtime output artifacts of the task.

ArtifactList

A list of artifact metadata.

Fields
artifacts[]

Artifact

Output only. A list of artifact metadata.

PipelineTaskStatus

A single record of the task status.

Fields
update_time

Timestamp

Output only. Update time of this status.

state

State

Output only. The state of the task.

error

Status

Output only. The error that occurred during the state. May be set when the state is any non-final state (PENDING, RUNNING, CANCELLING) or the FAILED state. If the state is FAILED, the error here is final and will not be retried. If the state is non-final, the error indicates a system error that is being retried.

State

Specifies the state of a TaskExecution.

Enums
STATE_UNSPECIFIED Unspecified.
PENDING Specifies the pending state for the task.
RUNNING Specifies that the task is being executed.
SUCCEEDED Specifies that the task completed successfully.
CANCEL_PENDING Specifies that a cancel request for the task is pending.
CANCELLING Specifies that the task is being cancelled.
CANCELLED Specifies that the task was cancelled.
FAILED Specifies that the task failed.
SKIPPED Specifies that the task was skipped due to a cache hit.
NOT_TRIGGERED Specifies that the task was not triggered because the task's trigger policy is not satisfied. The trigger policy is specified in the condition field of PipelineJob.pipeline_spec.

PipelineTaskExecutorDetail

The runtime detail of a pipeline executor.

Fields

Union field details.

details can be only one of the following:

container_detail

ContainerDetail

Output only. The detailed info for a container executor.

custom_job_detail

CustomJobDetail

Output only. The detailed info for a custom job executor.

ContainerDetail

The detail of a container execution. It contains the job names of the lifecycle of a container execution.

Fields
main_job

string

Output only. The name of the CustomJob for the main container execution.

pre_caching_check_job

string

Output only. The name of the CustomJob for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events.

failed_main_jobs[]

string

Output only. The names of the previously failed CustomJob for the main container executions. The list includes all attempts in chronological order.

failed_pre_caching_check_jobs[]

string

Output only. The names of the previously failed CustomJob for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events. The list includes all attempts in chronological order.

CustomJobDetail

The detailed info for a custom job executor.

Fields
job

string

Output only. The name of the CustomJob.

failed_jobs[]

string

Output only. The names of the previously failed CustomJob. The list includes all attempts in chronological order.

PipelineTaskRerunConfig

User-provided rerun config used to submit a rerun PipelineJob. This includes: 1. Which tasks to rerun. 2. User-overridden input parameters and artifacts.

Fields
task_id

int64

Optional. The system generated ID of the task. Retrieved from original run.

task_name

string

Optional. The name of the task.

inputs

Inputs

Optional. The runtime input of the task overridden by the user.

skip_task

bool

Optional. Whether to skip this task. Default value is False.

skip_downstream_tasks

bool

Optional. Whether to skip downstream tasks. Default value is False.
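
As a sketch, the config below forces one task (identified by its task_id from the original run) to rerun with an overridden parameter; the task name and parameter are illustrative assumptions.

    from google.cloud import aiplatform_v1beta1 as aip

    rerun_config = aip.PipelineTaskRerunConfig(
        task_id=12345,           # system-generated ID from the original run
        task_name="train-step",  # assumed task name from pipeline_spec
        skip_task=False,              # rerun this task
        skip_downstream_tasks=False,  # and let downstream tasks run too
        inputs=aip.PipelineTaskRerunConfig.Inputs(
            parameter_values={"learning_rate": 0.005},  # assumed override
        ),
    )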

ArtifactList

A list of artifact metadata.

Fields
artifacts[]

RuntimeArtifact

Optional. A list of artifact metadata.

Inputs

Runtime input data of the task.

Fields
artifacts

map<string, ArtifactList>

Optional. Input artifacts.

parameter_values

map<string, Value>

Optional. Input parameters.

PipelineTemplateMetadata

Pipeline template metadata if PipelineJob.template_uri is from supported template registry. Currently, the only supported registry is Artifact Registry.

Fields
version

string

The version_name in Artifact Registry.

Always present in the output if PipelineJob.template_uri is from a supported template registry.

Format is "sha256:abcdef123456...".

PointwiseMetricInput

Input for pointwise metric.

Fields
metric_spec

PointwiseMetricSpec

Required. Spec for pointwise metric.

instance

PointwiseMetricInstance

Required. Pointwise metric instance.

PointwiseMetricInstance

Pointwise metric instance. Usually one instance corresponds to one row in an evaluation dataset.

Fields
Union field instance. Instance for pointwise metric. instance can be only one of the following:
json_instance

string

Instance specified as a JSON string. String key-value pairs are expected in the json_instance to render PointwiseMetricSpec.instance_prompt_template.

PointwiseMetricResult

Spec for pointwise metric result.

Fields
explanation

string

Output only. Explanation for pointwise metric score.

score

float

Output only. Pointwise metric score.

PointwiseMetricSpec

Spec for pointwise metric.

Fields
metric_prompt_template

string

Required. Metric prompt template for pointwise metric.

Port

Represents a network port in a container.

Fields
container_port

int32

The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive.

PredefinedSplit

Assigns input data to training, validation, and test sets based on the value of a provided key.

Supported only for tabular Datasets.

Fields
key

string

Required. The key is a name of one of the Dataset's data columns. The value of the key (either the label's value or value in the column) must be one of {training, validation, test}, and it defines to which set the given piece of data is assigned. If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline.
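
For example, if a tabular Dataset has a column named custom_split whose rows hold training, validation, or test, the split can be declared as below; the column name is an assumption.

    from google.cloud import aiplatform_v1beta1 as aip

    # Rows whose custom_split column holds "training", "validation", or "test"
    # are assigned to the corresponding set; rows with any other value are
    # ignored by the pipeline.
    split = aip.PredefinedSplit(key="custom_split")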

PredictLongRunningMetadata

This type has no fields.

Metadata for PredictLongRunning long running operations.

PredictLongRunningResponse

Response message for PredictionService.PredictLongRunning.

Fields
Union field response. The response of the long running operation. response can be only one of the following:
generate_video_response

GenerateVideoResponse

The response of the video generation prediction.

PredictRequest

Request message for PredictionService.Predict.

Fields
endpoint

string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances[]

Value

Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request, and when it is exceeded the prediction call errors in case of AutoML Models, or, in case of customer-created Models, the behavior is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.

parameters

Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.
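
A minimal sketch of an online prediction call with the Python client. Because instances are protobuf Value objects, the instance dict is converted with json_format; the project, endpoint ID, and payload shape are assumptions, since the real payload must match the model's instance_schema_uri.

    from google.cloud import aiplatform_v1beta1 as aip
    from google.protobuf import json_format
    from google.protobuf.struct_pb2 import Value

    client = aip.PredictionServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    # Assumed payload; the real shape is defined by the model's instance schema.
    instance = json_format.ParseDict({"content": "some input text"}, Value())
    response = client.predict(
        endpoint="projects/my-project/locations/us-central1/endpoints/123",
        instances=[instance],
    )
    print(response.deployed_model_id, response.predictions)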

PredictRequestResponseLoggingConfig

Configuration for logging request-response to a BigQuery table.

Fields
enabled

bool

If logging is enabled or not.

sampling_rate

double

Percentage of requests to be logged, expressed as a fraction in the range (0, 1].

bigquery_destination

BigQueryDestination

BigQuery table for logging. If only given a project, a new dataset will be created with the name logging_<endpoint-display-name>_<endpoint-id>, where <endpoint-display-name> is made BigQuery-dataset-name compatible (e.g., most special characters become underscores). If no table name is given, a new table will be created with the name request_response_logging.

PredictResponse

Response message for PredictionService.Predict.

Fields
predictions[]

Value

The predictions that are the output of the predictions call. The schema of any single prediction may be specified via Endpoint's DeployedModels' Model's PredictSchemata's prediction_schema_uri.

deployed_model_id

string

ID of the Endpoint's DeployedModel that served this prediction.

model

string

Output only. The resource name of the Model which is deployed as the DeployedModel that this prediction hits.

model_version_id

string

Output only. The version ID of the Model which is deployed as the DeployedModel that this prediction hits.

model_display_name

string

Output only. The display name of the Model which is deployed as the DeployedModel that this prediction hits.

metadata

Value

Output only. Request-level metadata returned by the model. The metadata type will be dependent upon the model implementation.

PredictSchemata

Contains the schemata used in Model's predictions and explanations via PredictionService.Predict, PredictionService.Explain and BatchPredictionJob.

Fields
instance_schema_uri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing the format of a single instance, which is used in PredictRequest.instances, ExplainRequest.instances and BatchPredictionJob.input_config. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.

parameters_schema_uri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing the parameters of prediction and explanation via PredictRequest.parameters, ExplainRequest.parameters and BatchPredictionJob.model_parameters. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no parameters are supported, it is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.

prediction_schema_uri

string

Immutable. Points to a YAML file stored on Google Cloud Storage describing the format of a single prediction produced by this Model, which is returned via PredictResponse.predictions, ExplainResponse.explanations, and BatchPredictionJob.output_config. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.

Presets

Preset configuration for example-based explanations.

Fields
modality

Modality

The modality of the uploaded model, which automatically configures the distance measurement and feature normalization for the underlying example index and queries. If your model does not precisely fit one of these types, it is okay to choose the closest type.

query

Query

Preset option controlling parameters for speed-precision trade-off when querying for examples. If omitted, defaults to PRECISE.

Modality

Preset option controlling parameters for different modalities.

Enums
MODALITY_UNSPECIFIED Should not be set. Added as a recommended best practice for enums.
IMAGE IMAGE modality
TEXT TEXT modality
TABULAR TABULAR modality

Query

Preset option controlling parameters for the query speed-precision trade-off.

Enums
PRECISE More precise neighbors as a trade-off against slower response.
FAST Faster response as a trade-off against less precise neighbors.

PrivateEndpoints

The PrivateEndpoints proto is used to provide paths for users to send requests privately. To send requests via private service access, use predict_http_uri, explain_http_uri, or health_http_uri. To send requests via Private Service Connect, use service_attachment.

Fields
predict_http_uri

string

Output only. HTTP(S) path to send prediction requests.

explain_http_uri

string

Output only. HTTP(S) path to send explain requests.

health_http_uri

string

Output only. HTTP(S) path to send health check requests.

service_attachment

string

Output only. The name of the service attachment resource. Populated if private service connect is enabled.

PrivateServiceConnectConfig

Represents configuration for private service connect.

Fields
enable_private_service_connect

bool

Required. If true, expose the IndexEndpoint via private service connect.

project_allowlist[]

string

A list of Projects from which the forwarding rule will target the service attachment.

enable_secure_private_service_connect

bool

Optional. If set to true, enable secure private service connect with IAM authorization. Otherwise, private service connect will be done without authorization. Note that latency will be slightly increased if authorization is enabled.

service_attachment

string

Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.

Probe

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.

Fields
period_seconds

int32

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. Must be less than timeout_seconds.

Maps to Kubernetes probe argument 'periodSeconds'.

timeout_seconds

int32

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater than or equal to period_seconds.

Maps to Kubernetes probe argument 'timeoutSeconds'.

Union field probe_type.

probe_type can be only one of the following:

exec

ExecAction

ExecAction probes the health of a container by executing a command.

ExecAction

ExecAction specifies a command to execute.

Fields
command[]

string

Command is the command line to execute inside the container; the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd; it is not run inside a shell, so traditional shell instructions ('|', etc.) won't work. To use a shell, you need to explicitly call out to that shell. An exit status of 0 is treated as live/healthy and non-zero is unhealthy.
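
A sketch of an exec-based probe, constructed from a mapping so that the proto field names can be used verbatim; the command is an assumed example.

    from google.cloud import aiplatform_v1beta1 as aip

    probe = aip.Probe(
        {
            # Exit status 0 means live/healthy; non-zero means unhealthy.
            "exec": {"command": ["/bin/grep", "ready", "/tmp/status"]},
            "period_seconds": 10,   # probe every 10 seconds
            "timeout_seconds": 15,  # >= period_seconds, per the field docs above
        }
    )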

PscAutomatedEndpoints

PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.

Fields
project_id

string

Corresponding project_id in pscAutomationConfigs.

network

string

Corresponding network in pscAutomationConfigs.

match_address

string

IP address created by the automated forwarding rule.

PscInterfaceConfig

Configuration for PSC-I.

Fields
network_attachment

string

Optional. The name of the Compute Engine network attachment to attach to the resource within the region and user project. To specify this field, you must have already created a network attachment. This field is only used for resources using PSC-I.

PublisherModel

A Model Garden Publisher Model.

Fields
name

string

Output only. The resource name of the PublisherModel.

version_id

string

Output only. Immutable. The version ID of the PublisherModel. A new version is committed when a new model version is uploaded under an existing model id. It is an auto-incrementing decimal number in string representation.

open_source_category

OpenSourceCategory

Required. Indicates the open source category of the publisher model.

parent

Parent

Optional. The parent that this model was customized from. E.g., Vision API, Natural Language API, LaMDA, T5, etc. Foundation models don't have parents.

supported_actions

CallToAction

Optional. Supported call-to-action options.

frameworks[]

string

Optional. Additional information about the model's Frameworks.

launch_stage

LaunchStage

Optional. Indicates the launch stage of the model.

version_state

VersionState

Optional. Indicates the state of the model version.

publisher_model_template

string

Optional. Output only. Immutable. Used to indicate that this model has a publisher model, and provides the template of the publisher model resource name.

predict_schemata

PredictSchemata

Optional. The schemata that describes formats of the PublisherModel's predictions and explanations as given and returned via PredictionService.Predict.

CallToAction

Actions that can be taken on this Publisher Model.

Fields
view_rest_api

ViewRestApi

Optional. To view REST API docs.

open_notebook

RegionalResourceReferences

Optional. Open notebook of the PublisherModel.

create_application

RegionalResourceReferences

Optional. Create application using the PublisherModel.

open_fine_tuning_pipeline

RegionalResourceReferences

Optional. Open fine-tuning pipeline of the PublisherModel.

open_prompt_tuning_pipeline

RegionalResourceReferences

Optional. Open prompt-tuning pipeline of the PublisherModel.

open_genie

RegionalResourceReferences

Optional. Open Genie / Playground.

deploy

Deploy

Optional. Deploy the PublisherModel to Vertex Endpoint.

deploy_gke

DeployGke

Optional. Deploy PublisherModel to Google Kubernetes Engine.

open_generation_ai_studio

RegionalResourceReferences

Optional. Open in Generation AI Studio.

request_access

RegionalResourceReferences

Optional. Request for access.

open_evaluation_pipeline

RegionalResourceReferences

Optional. Open evaluation pipeline of the PublisherModel.

open_notebooks

OpenNotebooks

Optional. Open notebooks of the PublisherModel.

open_fine_tuning_pipelines

OpenFineTuningPipelines

Optional. Open fine-tuning pipelines of the PublisherModel.

Deploy

Model metadata that is needed for UploadModel or DeployModel/CreateEndpoint requests.

Fields
model_display_name

string

Optional. Default model display name.

large_model_reference

LargeModelReference

Optional. Large model reference. When this is set, model_artifact_spec is not needed.

container_spec

ModelContainerSpec

Optional. The specification of the container that is to be used when deploying this Model in Vertex AI. Not present for Large Models.

artifact_uri

string

Optional. The path to the directory containing the Model artifact and any of its supporting files.

title

string

Required. The title of the regional resource reference.

public_artifact_uri

string

Optional. The signed URI for ephemeral Cloud Storage access to the model artifact.

Union field prediction_resources. The prediction (for example, the machine) resources that the DeployedModel uses. prediction_resources can be only one of the following:
dedicated_resources

DedicatedResources

A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.

automatic_resources

AutomaticResources

A description of resources that are, to a large degree, decided by Vertex AI and require only a modest additional configuration.

shared_resources

string

The resource name of the shared DeploymentResourcePool to deploy on. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

deploy_task_name

string

Optional. The name of the deploy task (e.g., "text to image generation").

deploy_metadata

DeployMetadata

Optional. Metadata information about this deployment config.

DeployMetadata

Metadata information about the deployment for managing deployment config.

Fields
labels

map<string, string>

Optional. Labels for the deployment config. Used for managing the deployment config, such as verification and tracking the source of the config.

sample_request

string

Optional. Sample request for deployed endpoint.

DeployGke

Configurations for PublisherModel GKE deployment.

Fields
gke_yaml_configs[]

string

Optional. GKE deployment configuration in YAML format.

OpenFineTuningPipelines

Open fine-tuning pipelines.

Fields
fine_tuning_pipelines[]

RegionalResourceReferences

Required. Regional resource references to fine-tuning pipelines.

OpenNotebooks

Open notebooks.

Fields
notebooks[]

RegionalResourceReferences

Required. Regional resource references to notebooks.

RegionalResourceReferences

The regional resource name or the URI. The key is a region, e.g., us-central1, europe-west2, global, etc.

Fields
references

map<string, ResourceReference>

Required.

title

string

Required.

resource_title

string

Optional. Title of the resource.

resource_use_case

string

Optional. Use case (CUJ) of the resource.

resource_description

string

Optional. Description of the resource.

ViewRestApi

REST API docs.

Fields
documentations[]

Documentation

Required.

title

string

Required. The title of the View REST API.

Documentation

A named piece of documentation.

Fields
title

string

Required. E.g., OVERVIEW, USE CASES, DOCUMENTATION, SDK & SAMPLES, JAVA, NODE.JS, etc.

content

string

Required. Content of this piece of document (in Markdown format).

LaunchStage

An enum representing the launch stage of a PublisherModel.

Enums
LAUNCH_STAGE_UNSPECIFIED The model launch stage is unspecified.
EXPERIMENTAL Used to indicate the PublisherModel is at Experimental launch stage, available to a small set of customers.
PRIVATE_PREVIEW Used to indicate the PublisherModel is at Private Preview launch stage, only available to a small set of customers, although a larger set of customers than an Experimental launch. Previews are the first launch stage used to get feedback from customers.
PUBLIC_PREVIEW Used to indicate the PublisherModel is at Public Preview launch stage, available to all customers, although not supported for production workloads.
GA Used to indicate the PublisherModel is at GA launch stage, available to all customers and ready for production workloads.

OpenSourceCategory

An enum representing the open source category of a PublisherModel.

Enums
OPEN_SOURCE_CATEGORY_UNSPECIFIED The open source category is unspecified, which should not be used.
PROPRIETARY Used to indicate the PublisherModel is not open sourced.
GOOGLE_OWNED_OSS_WITH_GOOGLE_CHECKPOINT Used to indicate the PublisherModel is a Google-owned open source model with a Google checkpoint.
THIRD_PARTY_OWNED_OSS_WITH_GOOGLE_CHECKPOINT Used to indicate the PublisherModel is a third-party-owned open source model with a Google checkpoint.
GOOGLE_OWNED_OSS Used to indicate the PublisherModel is a Google-owned pure open source model.
THIRD_PARTY_OWNED_OSS Used to indicate the PublisherModel is a third-party-owned pure open source model.

Parent

The information about the parent of a model.

Fields
display_name

string

Required. The display name of the parent. E.g., LaMDA, T5, Vision API, Natural Language API.

reference

ResourceReference

Optional. The Google Cloud resource name or the URI reference.

ResourceReference

Reference to a resource.

Fields

Union field reference.

reference can be only one of the following:

uri

string

The URI of the resource.

resource_name

string

The resource name of the Google Cloud resource.

use_case
(deprecated)

string

Use case (CUJ) of the resource.

description
(deprecated)

string

Description of the resource.

VersionState

An enum representing the state of the PublicModelVersion.

Enums
VERSION_STATE_UNSPECIFIED The version state is unspecified.
VERSION_STATE_STABLE Used to indicate the version is stable.
VERSION_STATE_UNSTABLE Used to indicate the version is unstable.

PublisherModelView

View enumeration of PublisherModel.

Enums
PUBLISHER_MODEL_VIEW_UNSPECIFIED The default / unset value. The API will default to the BASIC view.
PUBLISHER_MODEL_VIEW_BASIC Include basic metadata about the publisher model, but not the full contents.
PUBLISHER_MODEL_VIEW_FULL Include everything.
PUBLISHER_MODEL_VERSION_VIEW_BASIC Include: VersionId, ModelVersionExternalName, and SupportedActions.

PurgeArtifactsMetadata

Details of operations that perform MetadataService.PurgeArtifacts.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for purging Artifacts.

PurgeArtifactsRequest

Request message for MetadataService.PurgeArtifacts.

Fields
parent

string

Required. The metadata store to purge Artifacts from. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

filter

string

Required. A required filter matching the Artifacts to be purged. E.g., update_time <= 2020-11-19T11:30:00-04:00.

force

bool

Optional. Flag that indicates whether to actually perform the purge. If force is set to false, the method returns a sample of the Artifact names that would be deleted.
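
A sketch of the dry-run-then-purge pattern with the MetadataService client; the project and filter values are assumed examples, and purge_artifacts returns a long-running operation.

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.MetadataServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    request = aip.PurgeArtifactsRequest(
        parent="projects/my-project/locations/us-central1/metadataStores/default",
        filter='update_time <= "2020-11-19T11:30:00-04:00"',  # assumed cutoff
        force=False,  # dry run: nothing is deleted; a sample of names is returned
    )
    response = client.purge_artifacts(request=request).result()
    print(response.purge_count, list(response.purge_sample))
    # To actually delete, repeat the call with force=True.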

PurgeArtifactsResponse

Response message for MetadataService.PurgeArtifacts.

Fields
purge_count

int64

The number of Artifacts that this request deleted (or, if force is false, the number of Artifacts that will be deleted). This can be an estimate.

purge_sample[]

string

A sample of the Artifact names that will be deleted. Only populated if force is set to false. The maximum number of samples is 100 (it is possible to return fewer).

PurgeContextsMetadata

Details of operations that perform MetadataService.PurgeContexts.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for purging Contexts.

PurgeContextsRequest

Request message for MetadataService.PurgeContexts.

Fields
parent

string

Required. The metadata store to purge Contexts from. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

filter

string

Required. A required filter matching the Contexts to be purged. E.g., update_time <= 2020-11-19T11:30:00-04:00.

force

bool

Optional. Flag that indicates whether to actually perform the purge. If force is set to false, the method returns a sample of the Context names that would be deleted.

PurgeContextsResponse

Response message for MetadataService.PurgeContexts.

Fields
purge_count

int64

The number of Contexts that this request deleted (or, if force is false, the number of Contexts that will be deleted). This can be an estimate.

purge_sample[]

string

A sample of the Context names that will be deleted. Only populated if force is set to false. The maximum number of samples is 100 (it is possible to return fewer).

PurgeExecutionsMetadata

Details of operations that perform MetadataService.PurgeExecutions.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for purging Executions.

PurgeExecutionsRequest

Request message for MetadataService.PurgeExecutions.

Fields
parent

string

Required. The metadata store to purge Executions from. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}

filter

string

Required. A required filter matching the Executions to be purged. E.g., update_time <= 2020-11-19T11:30:00-04:00.

force

bool

Optional. Flag that indicates whether to actually perform the purge. If force is set to false, the method returns a sample of the Execution names that would be deleted.

PurgeExecutionsResponse

Response message for MetadataService.PurgeExecutions.

Fields
purge_count

int64

The number of Executions that this request deleted (or, if force is false, the number of Executions that will be deleted). This can be an estimate.

purge_sample[]

string

A sample of the Execution names that will be deleted. Only populated if force is set to false. The maximum number of samples is 100 (it is possible to return fewer).

PythonPackageSpec

The spec of a Python packaged code.

Fields
executor_image_uri

string

Required. The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.

package_uris[]

string

Required. The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.

Authorization requires the following IAM permission on the specified resource packageUris:

  • storage.objects.get

python_module

string

Required. The Python module name to run after installing the packages.

args[]

string

Command line arguments to be passed to the Python task.

env[]

EnvVar

Environment variables to be passed to the Python module. The maximum number of environment variables is 100.
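
A minimal sketch of the spec; the executor image, bucket paths, module name, and environment variable are assumed examples, and the image must come from the list of pre-built containers for training.

    from google.cloud import aiplatform_v1beta1 as aip

    python_package_spec = aip.PythonPackageSpec(
        # Assumed pre-built trainer image URI; pick one from the official list.
        executor_image_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-12.py310:latest",
        package_uris=["gs://my-bucket/trainer-0.1.tar.gz"],  # up to 100 URIs
        python_module="trainer.task",
        args=["--epochs=10"],
        env=[aip.EnvVar(name="DATA_DIR", value="/gcs/my-bucket/data")],
    )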

QueryArtifactLineageSubgraphRequest

Request message for MetadataService.QueryArtifactLineageSubgraph.

Fields
artifact

string

Required. The resource name of the Artifact whose Lineage needs to be retrieved as a LineageSubgraph. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}

The request may error with FAILED_PRECONDITION if the number of Artifacts, the number of Executions, or the number of Events that would be returned for the Context exceeds 1000.

max_hops

int32

Specifies the size of the lineage graph in terms of the number of hops from the specified artifact.

  • Negative value: an INVALID_ARGUMENT error is returned.
  • 0: Only the input artifact is returned.
  • No value: Transitive closure is performed to return the complete graph.

filter

string

Filter specifying the boolean condition for the Artifacts to satisfy in order to be part of the Lineage Subgraph. The syntax to define filter query is based on https://google.aip.dev/160. The supported set of filters include the following:

  • Attribute filtering: For example: display_name = "test". Supported fields include: name, display_name, uri, state, schema_title, create_time, and update_time. Time fields, such as create_time and update_time, require values specified in RFC-3339 format. For example: create_time = "2020-11-19T11:30:00-04:00"
  • Metadata field: To filter on metadata fields, use the traversal operation as follows: metadata.<field_name>.<type_value>. For example: metadata.field_1.number_value = 10.0. If the field name contains special characters (such as a colon), you can embed it inside double quotes. For example: metadata."field:1".number_value = 10.0

Each of the above supported filter types can be combined using logical operators (AND & OR). The maximum nested expression depth allowed is 5.

For example: display_name = "test" AND metadata.field1.bool_value = true.
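
Putting the hop limit and the filter rules together, a sketch with the MetadataService client; the resource names and filter values are assumed examples.

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.MetadataServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    subgraph = client.query_artifact_lineage_subgraph(
        request=aip.QueryArtifactLineageSubgraphRequest(
            artifact=(
                "projects/my-project/locations/us-central1/"
                "metadataStores/default/artifacts/my-artifact"
            ),
            max_hops=2,  # 0 = only this artifact; unset = full transitive closure
            filter='display_name = "test" AND metadata.field1.bool_value = true',
        )
    )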

QueryContextLineageSubgraphRequest

Request message for MetadataService.QueryContextLineageSubgraph.

Fields
context

string

Required. The resource name of the Context whose Artifacts and Executions should be retrieved as a LineageSubgraph. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}

The request may error with FAILED_PRECONDITION if the number of Artifacts, the number of Executions, or the number of Events that would be returned for the Context exceeds 1000.

QueryDeployedModelsRequest

Request message for QueryDeployedModels method.

Fields
deployment_resource_pool

string

Required. The name of the target DeploymentResourcePool to query. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

page_size

int32

The maximum number of DeployedModels to return. The service may return fewer than this value.

page_token

string

A page token, received from a previous QueryDeployedModels call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to QueryDeployedModels must match the call that provided the page token.

QueryDeployedModelsResponse

Response message for QueryDeployedModels method.

Fields
deployed_models[]
(deprecated)

DeployedModel

Deprecated: Use deployed_model_refs instead.

next_page_token

string

A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

deployed_model_refs[]

DeployedModelRef

References to the DeployedModels that share the specified deploymentResourcePool.

total_deployed_model_count

int32

The total number of DeployedModels on this DeploymentResourcePool.

total_endpoint_count

int32

The total number of Endpoints that have DeployedModels on this DeploymentResourcePool.
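
A sketch of paging through the results with the Python client; the pool name is an assumed example, and DeployedModelRef's endpoint and deployed_model_id fields are assumptions here. The returned pager follows next_page_token automatically.

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.DeploymentResourcePoolServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    pager = client.query_deployed_models(
        request=aip.QueryDeployedModelsRequest(
            deployment_resource_pool=(
                "projects/my-project/locations/us-central1/"
                "deploymentResourcePools/my-pool"
            ),
            page_size=50,
        )
    )
    for page in pager.pages:  # each page is a QueryDeployedModelsResponse
        for ref in page.deployed_model_refs:
            print(ref.endpoint, ref.deployed_model_id)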

QueryExecutionInputsAndOutputsRequest

Request message for MetadataService.QueryExecutionInputsAndOutputs.

Fields
execution

string

Required. The resource name of the Execution whose input and output Artifacts should be retrieved as a LineageSubgraph. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}

QueryExtensionRequest

Request message for ExtensionExecutionService.QueryExtension.

Fields
name

string

Required. Name (identifier) of the extension. Format: projects/{project}/locations/{location}/extensions/{extension}

contents[]

Content

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.

QueryExtensionResponse

Response message for ExtensionExecutionService.QueryExtension.

Fields
steps[]

Content

Steps of the extension or LLM interaction; each step can contain a function call, a function response, or a text response. The last step contains the final response to the query.

failure_message

string

Failure message if any.

QueryReasoningEngineRequest

Request message for ReasoningEngineExecutionService.Query.

Fields
name

string

Required. The name of the ReasoningEngine resource to use. Format: projects/{project}/locations/{location}/reasoningEngines/{reasoning_engine}

input

Struct

Optional. Input content provided by users in JSON object format. Examples include text query, function calling parameters, media bytes, etc.

class_method

string

Optional. Class method to be used for the query. It is optional and defaults to "query" if unspecified.
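A hedged sketch of building and sending this request with the v1beta1 Python client (the client and method names are assumptions; the resource name is a placeholder). The dict passed as input is converted to a Struct by the client library:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.ReasoningEngineExecutionServiceClient()
    response = client.query_reasoning_engine(
        request=aip.QueryReasoningEngineRequest(
            name=("projects/my-project/locations/us-central1/"
                  "reasoningEngines/my-engine"),
            input={"query": "What is the capital of France?"},
            # class_method defaults to "query" when unspecified.
        )
    )
    print(response.output)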

QueryReasoningEngineResponse

Response message for ReasoningEngineExecutionService.Query.

Fields
output

Value

Response provided by users in JSON object format.

QuestionAnsweringCorrectnessInput

Input for question answering correctness metric.

Fields
metric_spec

QuestionAnsweringCorrectnessSpec

Required. Spec for question answering correctness score metric.

instance

QuestionAnsweringCorrectnessInstance

Required. Question answering correctness instance.

QuestionAnsweringCorrectnessInstance

Spec for question answering correctness instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Optional. Ground truth used to compare against the prediction.

context

string

Optional. Text provided as context to answer the question.

instruction

string

Required. The question asked and other instructions in the inference prompt.

QuestionAnsweringCorrectnessResult

Spec for question answering correctness result.

Fields
explanation

string

Output only. Explanation for question answering correctness score.

score

float

Output only. Question Answering Correctness score.

confidence

float

Output only. Confidence for question answering correctness score.

QuestionAnsweringCorrectnessSpec

Spec for question answering correctness metric.

Fields
use_reference

bool

Optional. Whether to use instance.reference to compute question answering correctness.

version

int32

Optional. Which version to use for evaluation.
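For illustration, a sketch of assembling this metric input and sending it through the evaluation service (assuming the v1beta1 EvaluationServiceClient and an EvaluateInstancesRequest with a question_answering_correctness_input one-of field; all values are placeholders):

    from google.cloud import aiplatform_v1beta1 as aip

    qa_input = aip.QuestionAnsweringCorrectnessInput(
        metric_spec=aip.QuestionAnsweringCorrectnessSpec(use_reference=True),
        instance=aip.QuestionAnsweringCorrectnessInstance(
            prediction="Paris",
            reference="Paris",
            instruction="What is the capital of France?",
        ),
    )
    client = aip.EvaluationServiceClient()
    response = client.evaluate_instances(
        request=aip.EvaluateInstancesRequest(
            location="projects/my-project/locations/us-central1",
            question_answering_correctness_input=qa_input,
        )
    )

The other QuestionAnswering* metrics below follow the same input/spec/result shape.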

QuestionAnsweringHelpfulnessInput

Input for question answering helpfulness metric.

Fields
metric_spec

QuestionAnsweringHelpfulnessSpec

Required. Spec for question answering helpfulness score metric.

instance

QuestionAnsweringHelpfulnessInstance

Required. Question answering helpfulness instance.

QuestionAnsweringHelpfulnessInstance

Spec for question answering helpfulness instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Optional. Ground truth used to compare against the prediction.

context

string

Optional. Text provided as context to answer the question.

instruction

string

Required. The question asked and other instructions in the inference prompt.

QuestionAnsweringHelpfulnessResult

Spec for question answering helpfulness result.

Fields
explanation

string

Output only. Explanation for question answering helpfulness score.

score

float

Output only. Question Answering Helpfulness score.

confidence

float

Output only. Confidence for question answering helpfulness score.

QuestionAnsweringHelpfulnessSpec

Spec for question answering helpfulness metric.

Fields
use_reference

bool

Optional. Whether to use instance.reference to compute question answering helpfulness.

version

int32

Optional. Which version to use for evaluation.

QuestionAnsweringQualityInput

Input for question answering quality metric.

Fields
metric_spec

QuestionAnsweringQualitySpec

Required. Spec for question answering quality score metric.

instance

QuestionAnsweringQualityInstance

Required. Question answering quality instance.

QuestionAnsweringQualityInstance

Spec for question answering quality instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Optional. Ground truth used to compare against the prediction.

context

string

Required. Text to answer the question.

instruction

string

Required. Question Answering prompt for LLM.

QuestionAnsweringQualityResult

Spec for question answering quality result.

Fields
explanation

string

Output only. Explanation for question answering quality score.

score

float

Output only. Question Answering Quality score.

confidence

float

Output only. Confidence for question answering quality score.

QuestionAnsweringQualitySpec

Spec for question answering quality score metric.

Fields
use_reference

bool

Optional. Whether to use instance.reference to compute question answering quality.

version

int32

Optional. Which version to use for evaluation.

QuestionAnsweringRelevanceInput

Input for question answering relevance metric.

Fields
metric_spec

QuestionAnsweringRelevanceSpec

Required. Spec for question answering relevance score metric.

instance

QuestionAnsweringRelevanceInstance

Required. Question answering relevance instance.

QuestionAnsweringRelevanceInstance

Spec for question answering relevance instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Optional. Ground truth used to compare against the prediction.

context

string

Optional. Text provided as context to answer the question.

instruction

string

Required. The question asked and other instructions in the inference prompt.

QuestionAnsweringRelevanceResult

Spec for question answering relevance result.

Fields
explanation

string

Output only. Explanation for question answering relevance score.

score

float

Output only. Question Answering Relevance score.

confidence

float

Output only. Confidence for question answering relevance score.

QuestionAnsweringRelevanceSpec

Spec for question answering relevance metric.

Fields
use_reference

bool

Optional. Whether to use instance.reference to compute question answering relevance.

version

int32

Optional. Which version to use for evaluation.

RagContexts

Relevant contexts for one query.

Fields
contexts[]

Context

All its contexts.

Context

A context of the query.

Fields
source_uri

string

If the file is imported from Cloud Storage or Google Drive, source_uri will be the original file URI in Cloud Storage or Google Drive; if the file is uploaded, source_uri will be the file display name.

source_display_name

string

The file display name.

text

string

The text chunk.

distance
(deprecated)

double

The distance between the query dense embedding vector and the context text vector.

sparse_distance
(deprecated)

double

The distance between the query sparse embedding vector and the context text vector.

score

double

Depending on the underlying Vector DB and the selected metric type, the score can be either the distance or the similarity between the query and the context; its range depends on the metric type.

For example, if the metric type is COSINE_DISTANCE, the score represents the distance between the query and the context. The larger the distance, the less relevant the context is to the query. The range is [0, 2], where 0 means most relevant and 2 means least relevant.

RagCorpus

A RagCorpus is a RagFile container and a project can have multiple RagCorpora.

Fields
name

string

Output only. The resource name of the RagCorpus.

display_name

string

Required. The display name of the RagCorpus. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

Optional. The description of the RagCorpus.

rag_embedding_model_config
(deprecated)

RagEmbeddingModelConfig

Optional. Immutable. The embedding model config of the RagCorpus.

rag_vector_db_config
(deprecated)

RagVectorDbConfig

Optional. Immutable. The Vector DB config of the RagCorpus.

create_time

Timestamp

Output only. Timestamp when this RagCorpus was created.

update_time

Timestamp

Output only. Timestamp when this RagCorpus was last updated.

corpus_status

CorpusStatus

Output only. RagCorpus state.

Union field backend_config. The backend config of the RagCorpus. It can be data store and/or retrieval engine. backend_config can be only one of the following:
vector_db_config

RagVectorDbConfig

Optional. Immutable. The config for the Vector DBs.

vertex_ai_search_config

VertexAiSearchConfig

Optional. Immutable. The config for the Vertex AI Search.

RagEmbeddingModelConfig

Config for the embedding model to use for RAG.

Fields
Union field model_config. The model config to use. model_config can be only one of the following:
vertex_prediction_endpoint

VertexPredictionEndpoint

The Vertex AI Prediction Endpoint that either refers to a publisher model or an endpoint that is hosting a 1P fine-tuned text embedding model. Endpoints hosting non-1P fine-tuned text embedding models are currently not supported. This is used for dense vector search.

VertexPredictionEndpoint

Config representing a model hosted on Vertex Prediction Endpoint.

Fields
endpoint

string

Required. The endpoint resource name. Format: projects/{project}/locations/{location}/publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/endpoints/{endpoint}

model

string

Output only. The resource name of the model that is deployed on the endpoint. Present only when the endpoint is not a publisher model. Pattern: projects/{project}/locations/{location}/models/{model}

model_version_id

string

Output only. Version ID of the model that is deployed on the endpoint. Present only when the endpoint is not a publisher model.
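A minimal sketch of an embedding model config that points at a publisher model endpoint in the format above (the specific model ID is illustrative, and the nesting of VertexPredictionEndpoint under RagEmbeddingModelConfig mirrors this reference):

    from google.cloud import aiplatform_v1beta1 as aip

    embedding_config = aip.RagEmbeddingModelConfig(
        vertex_prediction_endpoint=aip.RagEmbeddingModelConfig.VertexPredictionEndpoint(
            endpoint=("projects/my-project/locations/us-central1/"
                      "publishers/google/models/text-embedding-004"),
        )
    )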

RagFile

A RagFile contains user data for chunking, embedding and indexing.

Fields
name

string

Output only. The resource name of the RagFile.

display_name

string

Required. The display name of the RagFile. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

Optional. The description of the RagFile.

size_bytes

int64

Output only. The size of the RagFile in bytes.

rag_file_type

RagFileType

Output only. The type of the RagFile.

create_time

Timestamp

Output only. Timestamp when this RagFile was created.

update_time

Timestamp

Output only. Timestamp when this RagFile was last updated.

file_status

FileStatus

Output only. State of the RagFile.

Union field rag_file_source. The origin location of the RagFile if it is imported from Google Cloud Storage or Google Drive. rag_file_source can be only one of the following:
gcs_source

GcsSource

Output only. Google Cloud Storage location of the RagFile. Wildcards in the Cloud Storage URI are currently not supported.

google_drive_source

GoogleDriveSource

Output only. Google Drive location. Supports importing individual files as well as Google Drive folders.

direct_upload_source

DirectUploadSource

Output only. The RagFile is encapsulated and uploaded in the UploadRagFile request.

slack_source

SlackSource

The RagFile is imported from a Slack channel.

jira_source

JiraSource

The RagFile is imported from a Jira query.

share_point_sources

SharePointSources

The RagFile is imported from a SharePoint source.

RagFileType

The type of the RagFile.

Enums
RAG_FILE_TYPE_UNSPECIFIED RagFile type is unspecified.
RAG_FILE_TYPE_TXT RagFile type is TXT.
RAG_FILE_TYPE_PDF RagFile type is PDF.

RagFileChunkingConfig

Specifies the size and overlap of chunks for RagFiles.

Fields
chunk_size
(deprecated)

int32

The size of the chunks.

chunk_overlap
(deprecated)

int32

The overlap between chunks.

Union field chunking_config. Specifies the chunking config for RagFiles. chunking_config can be only one of the following:
fixed_length_chunking

FixedLengthChunking

Specifies the fixed length chunking config.

FixedLengthChunking

Specifies the fixed length chunking config.

Fields
chunk_size

int32

The size of the chunks.

chunk_overlap

int32

The overlap between chunks.

RagFileParsingConfig

Specifies the parsing config for RagFiles.

Fields
use_advanced_pdf_parsing
(deprecated)

bool

Whether to use advanced PDF parsing.

Union field parser. The parser to use for RagFiles. parser can be only one of the following:
advanced_parser

AdvancedParser

The Advanced Parser to use for RagFiles.

layout_parser

LayoutParser

The Layout Parser to use for RagFiles.

llm_parser

LlmParser

The LLM Parser to use for RagFiles.

AdvancedParser

Specifies the advanced parsing for RagFiles.

Fields
use_advanced_pdf_parsing

bool

Whether to use advanced PDF parsing.

LayoutParser

Document AI Layout Parser config.

Fields
processor_name

string

The full resource name of a Document AI processor or processor version. The processor must have type LAYOUT_PARSER_PROCESSOR. If specified, the additional_config.parse_as_scanned_pdf field must be false. Format:

  • projects/{project_id}/locations/{location}/processors/{processor_id}
  • projects/{project_id}/locations/{location}/processors/{processor_id}/processorVersions/{processor_version_id}

max_parsing_requests_per_min

int32

The maximum number of requests the job is allowed to make to the Document AI processor per minute. Consult https://cloud.google.com/document-ai/quotas and the Quota page for your project to set an appropriate value here. If unspecified, a default value of 120 QPM is used.

LlmParser

Specifies the LLM parsing for RagFiles.

Fields
model_name

string

The name of an LLM model used for parsing. Format: gemini-1.5-pro-002

max_parsing_requests_per_min

int32

The maximum number of requests the job is allowed to make to the LLM model per minute. Consult https://cloud.google.com/vertex-ai/generative-ai/docs/quotas and your document size to set an appropriate value here. If unspecified, a default value of 5000 QPM is used.

custom_parsing_prompt

string

The prompt to use for parsing. If not specified, a default prompt will be used.

RagFileTransformationConfig

Specifies the transformation config for RagFiles.

Fields
rag_file_chunking_config

RagFileChunkingConfig

Specifies the chunking config for RagFiles.
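Putting the two messages together, a minimal sketch of a transformation config that uses fixed-length chunking (the chunk sizes are illustrative only):

    from google.cloud import aiplatform_v1beta1 as aip

    transformation_config = aip.RagFileTransformationConfig(
        rag_file_chunking_config=aip.RagFileChunkingConfig(
            fixed_length_chunking=aip.RagFileChunkingConfig.FixedLengthChunking(
                chunk_size=512,    # size of each chunk
                chunk_overlap=64,  # overlap shared between adjacent chunks
            )
        )
    )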

RagQuery

A query to retrieve relevant contexts.

Fields
similarity_top_k
(deprecated)

int32

Optional. The number of contexts to retrieve.

ranking
(deprecated)

Ranking

Optional. Configurations for hybrid search results ranking.

rag_retrieval_config

RagRetrievalConfig

Optional. The retrieval config for the query.

Union field query. The query to retrieve contexts. Currently only text query is supported. query can be only one of the following:
text

string

Optional. The query in text format to get relevant contexts.

Ranking

Configurations for hybrid search results ranking.

Fields
alpha

float

Optional. The alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], where 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5, which balances sparse and dense vector search equally.

RagRetrievalConfig

Specifies the context retrieval config.

Fields
top_k

int32

Optional. The number of contexts to retrieve.

filter

Filter

Optional. Config for filters.

ranking

Ranking

Optional. Config for ranking and reranking.

Filter

Config for filters.

Fields
metadata_filter

string

Optional. String for metadata filtering.

Union field vector_db_threshold. Filter contexts retrieved from the vector DB based on either vector distance or vector similarity. vector_db_threshold can be only one of the following:
vector_distance_threshold

double

Optional. Only returns contexts with vector distance smaller than the threshold.

vector_similarity_threshold

double

Optional. Only returns contexts with vector similarity larger than the threshold.
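For example, a minimal retrieval config that keeps at most 10 contexts and drops anything farther than an illustrative distance threshold:

    from google.cloud import aiplatform_v1beta1 as aip

    retrieval_config = aip.RagRetrievalConfig(
        top_k=10,
        filter=aip.RagRetrievalConfig.Filter(
            # Only one vector_db_threshold member may be set.
            vector_distance_threshold=0.6,
        ),
    )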

HybridSearch

Config for Hybrid Search.

Fields
alpha

float

Optional. The alpha value controls the weight between dense and sparse vector search results. The range is [0, 1], where 0 means sparse vector search only and 1 means dense vector search only. The default value is 0.5, which balances sparse and dense vector search equally.

Ranking

Config for ranking and reranking.

Fields
Union field ranking_config. Config options for ranking. Currently only Rank Service is supported. ranking_config can be only one of the following:
rank_service

RankService

Optional. Config for Rank Service.

llm_ranker

LlmRanker

Optional. Config for LlmRanker.

LlmRanker

Config for LlmRanker.

Fields
model_name

string

Optional. The model name used for ranking. Format: gemini-1.5-pro

RankService

Config for Rank Service.

Fields
model_name

string

Optional. The model name of the rank service. Format: semantic-ranker-512@latest

RagVectorDbConfig

Config for the Vector DB to use for RAG.

Fields
api_auth

ApiAuth

Authentication config for the chosen Vector DB.

rag_embedding_model_config

RagEmbeddingModelConfig

Optional. Immutable. The embedding model config of the Vector DB.

Union field vector_db. The config for the Vector DB. vector_db can be only one of the following:
rag_managed_db

RagManagedDb

The config for the RAG-managed Vector DB.

weaviate

Weaviate

The config for the Weaviate.

pinecone

Pinecone

The config for the Pinecone.

vertex_feature_store

VertexFeatureStore

The config for the Vertex Feature Store.

Pinecone

The config for the Pinecone.

Fields
index_name

string

Pinecone index name. This value cannot be changed after it's set.

RagManagedDb

This type has no fields.

The config for the default RAG-managed Vector DB.

VertexFeatureStore

The config for the Vertex Feature Store.

Fields
feature_view_resource_name

string

The resource name of the FeatureView. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}

VertexVectorSearch

The config for the Vertex Vector Search.

Fields
index_endpoint

string

The resource name of the Index Endpoint. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}

index

string

The resource name of the Index. Format: projects/{project}/locations/{location}/indexes/{index}

Weaviate

The config for the Weaviate.

Fields
http_endpoint

string

Weaviate DB instance HTTP endpoint, for example 34.56.78.90:8080. Vertex RAG only supports HTTP connections to Weaviate. This value cannot be changed after it's set.

collection_name

string

The corresponding collection this corpus maps to. This value cannot be changed after it's set.

RawPredictRequest

Request message for PredictionService.RawPredict.

Fields
endpoint

string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

http_body

HttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.

You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the RawPredict method.
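A hedged sketch of a raw prediction call with the v1beta1 Python client (the endpoint name and payload are placeholders; HttpBody comes from googleapis-common-protos):

    from google.api import httpbody_pb2
    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.PredictionServiceClient()
    response = client.raw_predict(
        request=aip.RawPredictRequest(
            endpoint="projects/my-project/locations/us-central1/endpoints/123",
            http_body=httpbody_pb2.HttpBody(
                content_type="application/json",
                data=b'{"instances": [{"feature": 1.0}]}',
            ),
        )
    )
    print(response.data)  # raw bytes returned by the model server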

RayLogsSpec

Configuration for the Ray OSS Logs.

Fields
disabled

bool

Optional. Flag to disable the export of Ray OSS logs to Cloud Logging.

RayMetricSpec

Configuration for the Ray metrics.

Fields
disabled

bool

Optional. Flag to disable the Ray metrics collection.

RaySpec

Configuration information for the Ray cluster. For the experimental launch, Ray cluster creation and Persistent cluster creation have a 1:1 mapping: all the nodes within the Persistent cluster are provisioned as Ray nodes.

Fields
image_uri

string

Optional. Default image that lets the user choose a preferred ML framework (for example, TensorFlow or PyTorch) from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image; otherwise, use the resource_pool_images field.

nfs_mounts[]

NfsMount

Optional. Use this field if you want to mount any NFS storage.

resource_pool_images

map<string, string>

Optional. Required if image_uri isn't set. A map of resource_pool_id to a prebuilt Ray image, for when the user needs different images for different head/worker pools. This map must cover all the resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }

head_node_resource_pool_id

string

Optional. Indicates which resource pool will serve as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.

ray_metric_spec

RayMetricSpec

Optional. Ray metrics configurations.

ray_logs_spec

RayLogsSpec

Optional. OSS Ray logging configurations.

ReadFeatureValuesRequest

Request message for FeaturestoreOnlineServingService.ReadFeatureValues.

Fields
entity_type

string

Required. The resource name of the EntityType for the entity being read. Value format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}. For example, for a machine learning model predicting user clicks on a website, an EntityType ID could be user.

entity_id

string

Required. ID for a specific entity. For example, for a machine learning model predicting user clicks on a website, an entity ID could be user_123.

feature_selector

FeatureSelector

Required. Selector choosing Features of the target EntityType.
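A sketch of reading two features for one entity (assuming FeatureSelector uses an IdMatcher with feature IDs, as elsewhere in the Featurestore API; all names are placeholders):

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeaturestoreOnlineServingServiceClient()
    response = client.read_feature_values(
        request=aip.ReadFeatureValuesRequest(
            entity_type=("projects/my-project/locations/us-central1/"
                         "featurestores/my_store/entityTypes/user"),
            entity_id="user_123",
            feature_selector=aip.FeatureSelector(
                id_matcher=aip.IdMatcher(ids=["age", "country"]),
            ),
        )
    )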

ReadFeatureValuesResponse

Response message for FeaturestoreOnlineServingService.ReadFeatureValues.

Fields
header

Header

Response header.

entity_view

EntityView

Entity view with Feature values. This may be the entity in the Featurestore if values for all Features were requested, or a projection of the entity in the Featurestore if values for only some Features were requested.

EntityView

Entity view with Feature values.

Fields
entity_id

string

ID of the requested entity.

data[]

Data

Each piece of data holds the k requested values for one requested Feature. If no values for the requested Feature exist, the corresponding cell will be empty. This has the same size and is in the same order as the features from the header ReadFeatureValuesResponse.header.

Data

Container to hold value(s), successive in time, for one Feature from the request.

Fields

Union field data.

data can be only one of the following:

value

FeatureValue

Feature value if a single value is requested.

values

FeatureValueList

Feature values list if values, successive in time, are requested. If the requested number of values is greater than the number of existing Feature values, nonexistent values are omitted instead of being returned as empty.

FeatureDescriptor

Metadata for requested Features.

Fields
id

string

Feature ID.

Header

Response header with metadata for the requested ReadFeatureValuesRequest.entity_type and Features.

Fields
entity_type

string

The resource name of the EntityType from the ReadFeatureValuesRequest. Value format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}.

feature_descriptors[]

FeatureDescriptor

List of Feature metadata corresponding to each piece of ReadFeatureValuesResponse.EntityView.data.

ReadTensorboardBlobDataRequest

Request message for TensorboardService.ReadTensorboardBlobData.

Fields
time_series

string

Required. The resource name of the TensorboardTimeSeries to list Blobs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}

blob_ids[]

string

IDs of the blobs to read.

ReadTensorboardBlobDataResponse

Response message for TensorboardService.ReadTensorboardBlobData.

Fields
blobs[]

TensorboardBlob

Blob messages containing blob bytes.

ReadTensorboardSizeRequest

Request message for TensorboardService.ReadTensorboardSize.

Fields
tensorboard

string

Required. The name of the Tensorboard resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

ReadTensorboardSizeResponse

Response message for TensorboardService.ReadTensorboardSize.

Fields
storage_size_byte

int64

Payload storage size for the TensorBoard, in bytes.

ReadTensorboardTimeSeriesDataRequest

Request message for TensorboardService.ReadTensorboardTimeSeriesData.

Fields
tensorboard_time_series

string

Required. The resource name of the TensorboardTimeSeries to read data from. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}

max_data_points

int32

The maximum number of TensorboardTimeSeries' data to return.

This value should be a positive integer. This value can be set to -1 to return all data.

filter

string

Reads the TensorboardTimeSeries' data that match the filter expression.

ReadTensorboardTimeSeriesDataResponse

Response message for TensorboardService.ReadTensorboardTimeSeriesData.

Fields
time_series_data

TimeSeriesData

The returned time series data.

ReadTensorboardUsageRequest

Request message for TensorboardService.ReadTensorboardUsage.

Fields
tensorboard

string

Required. The name of the Tensorboard resource. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

ReadTensorboardUsageResponse

Response message for TensorboardService.ReadTensorboardUsage.

Fields
monthly_usage_data

map<string, PerMonthUsageData>

Maps a year-month (YYYYMM) string to per-month usage data.

PerMonthUsageData

Per-month usage data.

Fields
user_usage_data[]

PerUserUsageData

Usage data for each user in the given month.

PerUserUsageData

Per user usage data.

Fields
username

string

The user's username.

view_count

int64

Number of times the user has read data within the Tensorboard.

ReasoningEngine

ReasoningEngine provides a customizable runtime for models to determine which actions to take and in which order.

Fields
name

string

Identifier. The resource name of the ReasoningEngine.

display_name

string

Required. The display name of the ReasoningEngine.

description

string

Optional. The description of the ReasoningEngine.

spec

ReasoningEngineSpec

Required. Configurations of the ReasoningEngine

create_time

Timestamp

Output only. Timestamp when this ReasoningEngine was created.

update_time

Timestamp

Output only. Timestamp when this ReasoningEngine was most recently updated.

etag

string

Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

ReasoningEngineSpec

ReasoningEngine configurations

Fields
package_spec

PackageSpec

Required. User provided package spec of the ReasoningEngine.

class_methods[]

Struct

Optional. Declarations for object class methods in OpenAPI specification format.

PackageSpec

User provided package spec like pickled object and package requirements.

Fields
pickle_object_gcs_uri

string

Optional. The Cloud Storage URI of the pickled python object.

dependency_files_gcs_uri

string

Optional. The Cloud Storage URI of the dependency files in tar.gz format.

requirements_gcs_uri

string

Optional. The Cloud Storage URI of the requirements.txt file

python_version

string

Optional. The Python version. Supported versions: 3.8, 3.9, 3.10, 3.11. If not specified, the default is 3.10.
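A minimal sketch of a ReasoningEngineSpec built from this PackageSpec (the Cloud Storage URIs are placeholders):

    from google.cloud import aiplatform_v1beta1 as aip

    spec = aip.ReasoningEngineSpec(
        package_spec=aip.ReasoningEngineSpec.PackageSpec(
            pickle_object_gcs_uri="gs://my-bucket/agent.pkl",
            requirements_gcs_uri="gs://my-bucket/requirements.txt",
            python_version="3.10",  # the documented default
        )
    )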

RebaseTunedModelOperationMetadata

Runtime operation information for GenAiTuningService.RebaseTunedModel.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation generic information.

RebaseTunedModelRequest

Request message for GenAiTuningService.RebaseTunedModel.

Fields
parent

string

Required. The resource name of the Location into which to rebase the Model. Format: projects/{project}/locations/{location}

tuned_model_ref

TunedModelRef

Required. TunedModel reference to retrieve the legacy model information.

tuning_job

TuningJob

Optional. The TuningJob to be updated. Users can use this TuningJob field to overwrite tuning configs.

artifact_destination

GcsDestination

Optional. The Google Cloud Storage location to write the artifacts.

deploy_to_same_endpoint

bool

Optional. By default, bison-to-gemini migration always creates a new model and endpoint, but for gemini-1.0 to gemini-1.5 migration, the default is to deploy to the same endpoint.

RebootPersistentResourceOperationMetadata

Details of operations that perform reboot PersistentResource.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for PersistentResource.

progress_message

string

Progress message for the reboot LRO.

RebootPersistentResourceRequest

Request message for PersistentResourceService.RebootPersistentResource.

Fields
name

string

Required. The name of the PersistentResource resource. Format: projects/{project_id_or_number}/locations/{location_id}/persistentResources/{persistent_resource_id}

RemoveContextChildrenRequest

Request message for MetadataService.RemoveContextChildren.

Fields
context

string

Required. The resource name of the parent Context.

Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}

child_contexts[]

string

The resource names of the child Contexts.

RemoveContextChildrenResponse

This type has no fields.

Response message for MetadataService.RemoveContextChildren.

RemoveDatapointsRequest

Request message for IndexService.RemoveDatapoints

Fields
index

string

Required. The name of the Index resource to be updated. Format: projects/{project}/locations/{location}/indexes/{index}

datapoint_ids[]

string

A list of datapoint ids to be deleted.

RemoveDatapointsResponse

This type has no fields.

Response message for IndexService.RemoveDatapoints

ReservationAffinity

A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity.

Fields
reservation_affinity_type

Type

Required. Specifies the reservation affinity type.

key

string

Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value.

values[]

string

Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation.

Type

Identifies a type of reservation affinity.

Enums
TYPE_UNSPECIFIED Default value. This should not be used.
NO_RESERVATION Do not consume from any reserved capacity, only use on-demand.
ANY_RESERVATION Consume any reservation available, falling back to on-demand.
SPECIFIC_RESERVATION Consume from a specific reservation. When chosen, the reservation must be identified via the key and values fields.

ResourcePool

Represents the spec of a group of resources of the same type, for example machine type, disk, and accelerators, in a PersistentResource.

Fields
id

string

Immutable. The unique ID in a PersistentResource for referring to this resource pool. User can specify it if necessary. Otherwise, it's generated automatically.

machine_spec

MachineSpec

Required. Immutable. The specification of a single machine.

disk_spec

DiskSpec

Optional. Disk spec for the machine in this node pool.

used_replica_count

int64

Output only. The number of machines currently in use by training jobs for this resource pool. Will replace idle_replica_count.

autoscaling_spec

AutoscalingSpec

Optional. Spec to configure GKE or Ray-on-Vertex autoscaling.

replica_count

int64

Optional. The total number of machines to use for this resource pool.

AutoscalingSpec

The min/max number of replicas allowed if autoscaling is enabled.

Fields
min_replica_count

int64

Optional. The minimum number of replicas in the node pool. Must be ≤ replica_count and < max_replica_count, otherwise an error is thrown. For autoscaling-enabled Ray-on-Vertex, the min_replica_count of a resource_pool is allowed to be 0 to match the OSS Ray behavior (https://docs.ray.io/en/latest/cluster/vms/user-guides/configuring-autoscaling.html#cluster-config-parameters). For a Persistent Resource, min_replica_count must be > 0; this is validated when the resource is created.

max_replica_count

int64

Optional. The maximum number of replicas in the node pool. Must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
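For example, a minimal sketch of a resource pool that starts with 2 replicas and may autoscale between 1 and 4 (the machine type is illustrative):

    from google.cloud import aiplatform_v1beta1 as aip

    pool = aip.ResourcePool(
        id="ray_worker_node_pool1",
        machine_spec=aip.MachineSpec(machine_type="n1-standard-8"),
        replica_count=2,
        autoscaling_spec=aip.ResourcePool.AutoscalingSpec(
            min_replica_count=1,  # <= replica_count and < max_replica_count
            max_replica_count=4,  # >= replica_count and > min_replica_count
        ),
    )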

ResourceRuntime

Persistent Cluster runtime information as output

Fields
access_uris

map<string, string>

Output only. URIs for the user to connect to the Cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }

notebook_runtime_template
(deprecated)

string

Output only. The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python version as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"

ResourceRuntimeSpec

Configuration for the runtime on a PersistentResource instance, including but not limited to:

  • Service accounts used to run the workloads.
  • Whether to make it a dedicated Ray Cluster.
Fields
service_account_spec

ServiceAccountSpec

Optional. Configure the use of workload identity on the PersistentResource

ray_spec

RaySpec

Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.

ResourcesConsumed

Statistics information about resource consumption.

Fields
replica_hours

double

Output only. The number of replica hours used. Note that many replicas may run in parallel, and additionally any given work may be queued for some time. Therefore this value is not strictly related to wall time.

RestoreDatasetVersionOperationMetadata

Runtime operation information for DatasetService.RestoreDatasetVersion.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

RestoreDatasetVersionRequest

Request message for DatasetService.RestoreDatasetVersion.

Fields
name

string

Required. The name of the DatasetVersion resource. Format: projects/{project}/locations/{location}/datasets/{dataset}/datasetVersions/{dataset_version}

ResumeModelDeploymentMonitoringJobRequest

Request message for JobService.ResumeModelDeploymentMonitoringJob.

Fields
name

string

Required. The resource name of the ModelDeploymentMonitoringJob to resume. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}

ResumeScheduleRequest

Request message for ScheduleService.ResumeSchedule.

Fields
name

string

Required. The name of the Schedule resource to be resumed. Format: projects/{project}/locations/{location}/schedules/{schedule}

catch_up

bool

Optional. Whether to backfill missed runs when the schedule is resumed from PAUSED state. If set to true, all missed runs will be scheduled, and new runs will be scheduled after the backfill is complete. This will also update the Schedule.catch_up field. Defaults to false.

Retrieval

Defines a retrieval tool that the model can call to access external knowledge.

Fields
disable_attribution
(deprecated)

bool

Optional. Deprecated. This option is no longer supported.

Union field source. The source of the retrieval. source can be only one of the following:
vertex_rag_store

VertexRagStore

Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.

RetrievalMetadata

Metadata related to retrieval in the grounding flow.

Fields
google_search_dynamic_retrieval_score

float

Optional. Score indicating how likely information from Google Search could help answer the prompt. The score is in the range [0, 1], where 0 is the least likely and 1 is the most likely. This score is only populated when Google Search grounding and dynamic retrieval is enabled. It will be compared to the threshold to determine whether to trigger Google Search.

RetrieveContextsRequest

Request message for VertexRagService.RetrieveContexts.

Fields
parent

string

Required. The resource name of the Location from which to retrieve RagContexts. The user must have permission to make a call in the project. Format: projects/{project}/locations/{location}.

query

RagQuery

Required. Single RAG retrieve query.

Union field data_source. Data Source to retrieve contexts. data_source can be only one of the following:
vertex_rag_store

VertexRagStore

The data source for Vertex RagStore.

VertexRagStore

The data source for Vertex RagStore.

Fields
rag_corpora[]
(deprecated)

string

Optional. Deprecated. Please use rag_resources to specify the data source.

rag_resources[]

RagResource

Optional. The representation of the RAG source. It can be used to specify a corpus only, or RagFiles. Currently only one corpus, or multiple files from one corpus, is supported. Support for multiple corpora may be opened up in the future.

vector_distance_threshold
(deprecated)

double

Optional. Only return contexts with vector distance smaller than the threshold.

RagResource

The definition of the Rag resource.

Fields
rag_corpus

string

Optional. RagCorpora resource name. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}

rag_file_ids[]

string

Optional. A list of RagFile IDs. The files must belong to the corpus specified in the rag_corpus field.
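A hedged end-to-end sketch of a retrieval call (assuming the v1beta1 VertexRagServiceClient and the nesting of VertexRagStore/RagResource under RetrieveContextsRequest shown in this reference; resource names are placeholders):

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.VertexRagServiceClient()
    response = client.retrieve_contexts(
        request=aip.RetrieveContextsRequest(
            parent="projects/my-project/locations/us-central1",
            query=aip.RagQuery(text="What is our refund policy?"),
            vertex_rag_store=aip.RetrieveContextsRequest.VertexRagStore(
                rag_resources=[
                    aip.RetrieveContextsRequest.VertexRagStore.RagResource(
                        rag_corpus=("projects/my-project/locations/"
                                    "us-central1/ragCorpora/123"),
                    )
                ],
            ),
        )
    )
    for context in response.contexts.contexts:
        print(context.source_uri, context.text)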

RetrieveContextsResponse

Response message for VertexRagService.RetrieveContexts.

Fields
contexts

RagContexts

The contexts of the query.

RougeInput

Input for rouge metric.

Fields
metric_spec

RougeSpec

Required. Spec for rouge score metric.

instances[]

RougeInstance

Required. Repeated rouge instances.

RougeInstance

Spec for rouge instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Required. Ground truth used to compare against the prediction.

RougeMetricValue

Rouge metric value for an instance.

Fields
score

float

Output only. Rouge score.

RougeResults

Results for rouge metric.

Fields
rouge_metric_values[]

RougeMetricValue

Output only. Rouge metric values.

RougeSpec

Spec for the ROUGE score metric, which calculates the recall of n-grams in the prediction as compared to the reference, and returns a score ranging between 0 and 1.

Fields
rouge_type

string

Optional. Supported rouge types are rougen[1-9], rougeL, and rougeLsum.

use_stemmer

bool

Optional. Whether to use stemmer to compute rouge score.

split_summaries

bool

Optional. Whether to split summaries while using rougeLsum.
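A minimal sketch of a ROUGE-L input built from these messages (the strings are illustrative; how the input is submitted, e.g. via an EvaluateInstancesRequest.rouge_input field, is an assumption):

    from google.cloud import aiplatform_v1beta1 as aip

    rouge_input = aip.RougeInput(
        metric_spec=aip.RougeSpec(rouge_type="rougeL", use_stemmer=True),
        instances=[
            aip.RougeInstance(
                prediction="The cat sat on the mat.",
                reference="A cat was sitting on the mat.",
            ),
        ],
    )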

RuntimeArtifact

The definition of a runtime artifact.

Fields
name

string

The name of an artifact.

type

ArtifactTypeSchema

The type of the artifact.

uri

string

The URI of the artifact.

properties
(deprecated)

map<string, Value>

The properties of the artifact. Deprecated. Use RuntimeArtifact.metadata instead.

custom_properties
(deprecated)

map<string, Value>

The custom properties of the artifact. Deprecated. Use RuntimeArtifact.metadata instead.

metadata

Struct

Properties of the Artifact.

RuntimeConfig

Runtime configuration to run the extension.

Fields
default_params

Struct

Optional. Default parameters that will be set for all executions of this extension. If specified, the parameter values can be overridden by values in ExecuteExtensionRequest.operation_params at request time.

The struct should be in the form of a map, with the param name as the key and the actual param value as the value. For example, if this operation requires a param "name" to be set to "abc", you can set this to {"name": "abc"}.

Union field GoogleFirstPartyExtensionConfig. Runtime configurations for Google first party extensions. GoogleFirstPartyExtensionConfig can be only one of the following:
code_interpreter_runtime_config

CodeInterpreterRuntimeConfig

Code execution runtime configurations for code interpreter extension.

vertex_ai_search_runtime_config

VertexAISearchRuntimeConfig

Runtime configuration for Vertex AI Search extension.

CodeInterpreterRuntimeConfig

Fields
file_input_gcs_bucket

string

Optional. The Cloud Storage bucket for file input of this Extension. If specified, input from the Cloud Storage bucket is supported. The Vertex Extension Custom Code Service Agent should be granted file reader access to this bucket. If not specified, the extension only accepts file contents from the request body and rejects Cloud Storage file inputs.

file_output_gcs_bucket

string

Optional. The Cloud Storage bucket for file output of this Extension. If specified, all output files are written to the Cloud Storage bucket. The Vertex Extension Custom Code Service Agent should be granted file writer access to this bucket. If not specified, the file content is returned in the response body.

VertexAISearchRuntimeConfig

Fields
serving_config_name

string

Optional. Vertex AI Search serving config name. Format: projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}

engine_id

string

Optional. Vertex AI Search engine ID. This is used to construct the search request. By setting this engine_id, the API will construct the serving config using the default value to call the search API on the user's behalf. engine_id and serving_config_name cannot both be empty.

SafetyInput

Input for safety metric.

Fields
metric_spec

SafetySpec

Required. Spec for safety metric.

instance

SafetyInstance

Required. Safety instance.

SafetyInstance

Spec for safety instance.

Fields
prediction

string

Required. Output of the evaluated model.

SafetyRating

Safety rating corresponding to the generated content.

Fields
category

HarmCategory

Output only. Harm category.

probability

HarmProbability

Output only. Harm probability levels in the content.

probability_score

float

Output only. Harm probability score.

severity

HarmSeverity

Output only. Harm severity levels in the content.

severity_score

float

Output only. Harm severity score.

blocked

bool

Output only. Indicates whether the content was filtered out because of this rating.

HarmProbability

Harm probability levels in the content.

Enums
HARM_PROBABILITY_UNSPECIFIED Harm probability unspecified.
NEGLIGIBLE Negligible level of harm.
LOW Low level of harm.
MEDIUM Medium level of harm.
HIGH High level of harm.

HarmSeverity

Harm severity levels.

Enums
HARM_SEVERITY_UNSPECIFIED Harm severity unspecified.
HARM_SEVERITY_NEGLIGIBLE Negligible level of harm severity.
HARM_SEVERITY_LOW Low level of harm severity.
HARM_SEVERITY_MEDIUM Medium level of harm severity.
HARM_SEVERITY_HIGH High level of harm severity.

SafetyResult

Spec for safety result.

Fields
explanation

string

Output only. Explanation for safety score.

score

float

Output only. Safety score.

confidence

float

Output only. Confidence for safety score.

SafetySetting

Safety settings.

Fields
category

HarmCategory

Required. Harm category.

threshold

HarmBlockThreshold

Required. The harm block threshold.

method

HarmBlockMethod

Optional. Specifies whether the threshold is applied to the probability or the severity score. If not specified, the threshold is applied to the probability score.

HarmBlockMethod

Probability vs severity.

Enums
HARM_BLOCK_METHOD_UNSPECIFIED The harm block method is unspecified.
SEVERITY The harm block method uses both probability and severity scores.
PROBABILITY The harm block method uses the probability score.

HarmBlockThreshold

Probability-based threshold levels for blocking.

Enums
HARM_BLOCK_THRESHOLD_UNSPECIFIED Unspecified harm block threshold.
BLOCK_LOW_AND_ABOVE Block low threshold and above (i.e. block more).
BLOCK_MEDIUM_AND_ABOVE Block medium threshold and above.
BLOCK_ONLY_HIGH Block only high threshold (i.e. block less).
BLOCK_NONE Block none.
OFF Turn off the safety filter.
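For example, a hedged sketch of a safety setting that blocks medium-probability content and above for one category (HARM_CATEGORY_HARASSMENT is an assumed HarmCategory value; HarmCategory itself is documented elsewhere in this reference):

    from google.cloud import aiplatform_v1beta1 as aip

    setting = aip.SafetySetting(
        category=aip.HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=aip.SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        # method is omitted, so the threshold applies to the probability score.
    )

Settings like this are typically passed in a generation request's safety_settings list.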

SafetySpec

Spec for safety metric.

Fields
version

int32

Optional. Which version to use for evaluation.

SampledShapleyAttribution

An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.

Fields
path_count

int32

Required. The number of feature permutations to consider when approximating the Shapley values.

The valid range is [1, 50], inclusive.

SamplingStrategy

Sampling strategy for logging. It can be applied to both training and prediction datasets.

Fields
random_sample_config

RandomSampleConfig

Random sample config. Will support more sampling strategies later.

RandomSampleConfig

Requests are randomly selected.

Fields
sample_rate

double

Sample rate, in the range (0, 1].

SavedQuery

A SavedQuery is a view of the dataset. It references a subset of annotations by problem type and filters.

Fields
name

string

Output only. Resource name of the SavedQuery.

display_name

string

Required. The user-defined name of the SavedQuery. The name can be up to 128 characters long and can consist of any UTF-8 characters.

metadata

Value

Some additional information about the SavedQuery.

create_time

Timestamp

Output only. Timestamp when this SavedQuery was created.

update_time

Timestamp

Output only. Timestamp when SavedQuery was last updated.

annotation_filter

string

Output only. Filters on the Annotations in the dataset.

problem_type

string

Required. Problem type of the SavedQuery. Allowed values:

  • IMAGE_CLASSIFICATION_SINGLE_LABEL
  • IMAGE_CLASSIFICATION_MULTI_LABEL
  • IMAGE_BOUNDING_POLY
  • IMAGE_BOUNDING_BOX
  • TEXT_CLASSIFICATION_SINGLE_LABEL
  • TEXT_CLASSIFICATION_MULTI_LABEL
  • TEXT_EXTRACTION
  • TEXT_SENTIMENT
  • VIDEO_CLASSIFICATION
  • VIDEO_OBJECT_TRACKING
annotation_spec_count

int32

Output only. Number of AnnotationSpecs in the context of the SavedQuery.

etag

string

Used to perform a consistent read-modify-write update. If not set, a blind "overwrite" update happens.

support_automl_training

bool

Output only. Whether the Annotations belonging to the SavedQuery can be used for AutoML training.

Scalar

One point viewable on a scalar metric plot.

Fields
value

double

Value of the point at this step / timestamp.

Schedule

An instance of a Schedule periodically launches runs to make API calls based on a user-specified time specification and API request type.

Fields
name

string

Immutable. The resource name of the Schedule.

display_name

string

Required. User provided name of the Schedule. The name can be up to 128 characters long and can consist of any UTF-8 characters.

start_time

Timestamp

Optional. Timestamp after which the first run can be scheduled. Defaults to the Schedule create time if not specified.

end_time

Timestamp

Optional. Timestamp after which no new runs can be scheduled. If specified, the schedule will be completed when either end_time is reached or when scheduled_run_count >= max_run_count. If not specified, new runs will keep getting scheduled until this Schedule is paused or deleted. Already scheduled runs will be allowed to complete. Unset if not specified.

max_run_count

int64

Optional. Maximum run count of the schedule. If specified, the schedule will be completed when either started_run_count >= max_run_count or when end_time is reached. If not specified, new runs will keep getting scheduled until this Schedule is paused or deleted. Already scheduled runs will be allowed to complete. Unset if not specified.

started_run_count

int64

Output only. The number of runs started by this schedule.

state

State

Output only. The state of this Schedule.

create_time

Timestamp

Output only. Timestamp when this Schedule was created.

update_time

Timestamp

Output only. Timestamp when this Schedule was updated.

next_run_time

Timestamp

Output only. Timestamp when this Schedule should schedule the next run. Having a next_run_time in the past means the runs are being started behind schedule.

last_pause_time

Timestamp

Output only. Timestamp when this Schedule was last paused. Unset if never paused.

last_resume_time

Timestamp

Output only. Timestamp when this Schedule was last resumed. Unset if never resumed from pause.

max_concurrent_run_count

int64

Required. Maximum number of runs that can be started concurrently for this Schedule. This is the limit for starting the scheduled requests and not the execution of the operations/jobs created by the requests (if applicable).

allow_queueing

bool

Optional. Whether new scheduled runs can be queued when the max_concurrent_runs limit is reached. If set to true, new runs will be queued instead of skipped. Defaults to false.

catch_up

bool

Output only. Whether to backfill missed runs when the schedule is resumed from PAUSED state. If set to true, all missed runs will be scheduled. New runs will be scheduled after the backfill is complete. Defaults to false.

last_scheduled_run_response

RunResponse

Output only. Response of the last scheduled run. This is the response for starting the scheduled requests and not the execution of the operations/jobs created by the requests (if applicable). Unset if no run has been scheduled yet.

Union field time_specification. Required. The time specification to launch scheduled runs. time_specification can be only one of the following:
cron

string

Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone for the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". ${IANA_TIME_ZONE} must be a valid string from the IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *" or "TZ=America/New_York 1 * * * *".

Union field request. Required. The API request template to launch the scheduled runs. User-specified ID is not supported in the request template. request can be only one of the following:
create_pipeline_job_request

CreatePipelineJobRequest

Request for PipelineService.CreatePipelineJob. CreatePipelineJobRequest.parent field is required (format: projects/{project}/locations/{location}).

create_model_monitoring_job_request

CreateModelMonitoringJobRequest

Request for ModelMonitoringService.CreateModelMonitoringJob.

create_notebook_execution_job_request

CreateNotebookExecutionJobRequest

Request for NotebookService.CreateNotebookExecutionJob.
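Tying the pieces together, a hedged sketch of a Schedule that launches a pipeline run hourly in the America/New_York timezone (the pipeline job payload is elided and all names are placeholders):

    from google.cloud import aiplatform_v1beta1 as aip

    schedule = aip.Schedule(
        display_name="hourly-pipeline",
        cron="CRON_TZ=America/New_York 0 * * * *",
        max_concurrent_run_count=1,
        create_pipeline_job_request=aip.CreatePipelineJobRequest(
            parent="projects/my-project/locations/us-central1",
            pipeline_job=aip.PipelineJob(),  # configure the real job here
        ),
    )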

RunResponse

Status of a scheduled run.

Fields
scheduled_run_time

Timestamp

The scheduled run time based on the user-specified schedule.

run_response

string

The response of the scheduled run.

State

Possible state of the schedule.

Enums
STATE_UNSPECIFIED Unspecified.
ACTIVE The Schedule is active. Runs are being scheduled on the user-specified timespec.
PAUSED The schedule is paused. No new runs will be created until the schedule is resumed. Already started runs will be allowed to complete.
COMPLETED The Schedule is completed. No new runs will be scheduled. Already started runs will be allowed to complete. Schedules in completed state cannot be paused or resumed.

ScheduleConfig

Schedule configuration for the FeatureMonitor.

Fields
cron

string

Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone for the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". ${IANA_TIME_ZONE} must be a valid string from the IANA time zone database. For example, "CRON_TZ=America/New_York 1 * * * *" or "TZ=America/New_York 1 * * * *".

Scheduling

All parameters related to queuing and scheduling of custom jobs.

Fields
timeout

Duration

Optional. The maximum job running time. The default is 7 days.

restart_job_on_worker_restart

bool

Optional. Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.

strategy

Strategy

Optional. This determines which type of scheduling strategy to use.

disable_retries

bool

Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.

max_wait_duration

Duration

Optional. This is the maximum duration that a job will wait for the requested resources to be provisioned if the scheduling strategy is set to Strategy.DWS_FLEX_START. If set to 0, the job will wait indefinitely. The default is 24 hours.

Strategy

Optional. This determines which type of scheduling strategy to use. Currently users have two options: STANDARD, which uses regular on-demand resources to schedule the job, and SPOT, which leverages spot resources along with regular resources to schedule the job.

Enums
STRATEGY_UNSPECIFIED Strategy will default to STANDARD.
ON_DEMAND

Deprecated. Regular on-demand provisioning strategy.

LOW_COST

Deprecated. Low cost by making potential use of spot resources.

STANDARD Standard provisioning strategy uses regular on-demand resources.
SPOT Spot provisioning strategy uses spot resources.
FLEX_START Flex Start strategy uses DWS to queue for resources.

Schema

Schema is used to define the format of input/output data. Represents a select subset of an OpenAPI 3.0 schema object. More fields may be added in the future as needed.

Fields
type

Type

Optional. The type of the data.

format

string

Optional. The format of the data. Supported formats: for NUMBER type: "float", "double"; for INTEGER type: "int32", "int64"; for STRING type: "email", "byte", etc.

title

string

Optional. The title of the Schema.

description

string

Optional. The description of the data.

nullable

bool

Optional. Indicates if the value may be null.

default

Value

Optional. Default value of the data.

items

Schema

Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.

min_items

int64

Optional. Minimum number of the elements for Type.ARRAY.

max_items

int64

Optional. Maximum number of the elements for Type.ARRAY.

enum[]

string

Optional. Possible values of the element of primitive type with enum format. Examples: 1. We can define direction as: {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]} 2. We can define apartment number as: {type:INTEGER, format:enum, enum:["101", "201", "301"]}

properties

map<string, Schema>

Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.

property_ordering[]

string

Optional. The order of the properties. Not a standard field in open api spec. Only used to support the order of the properties.

required[]

string

Optional. Required properties of Type.OBJECT.

min_properties

int64

Optional. Minimum number of the properties for Type.OBJECT.

max_properties

int64

Optional. Maximum number of the properties for Type.OBJECT.

minimum

double

Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER

maximum

double

Optional. Maximum value of the Type.INTEGER and Type.NUMBER

min_length

int64

Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING

max_length

int64

Optional. Maximum length of the Type.STRING

pattern

string

Optional. Pattern of the Type.STRING to restrict a string to a regular expression.

example

Value

Optional. Example of the object. Will only be populated when the object is the root.

any_of[]

Schema

Optional. The value should be validated against any (one or more) of the subschemas in the list.
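A minimal sketch of this OpenAPI subset: an object schema with one required enum-formatted string property. The field names follow the Python client's proto-plus naming (type and format become type_ and format_, an assumption about the generated client), and Type is the OpenAPI type enum from the same package:

    from google.cloud import aiplatform_v1beta1 as aip

    schema = aip.Schema(
        type_=aip.Type.OBJECT,
        properties={
            "direction": aip.Schema(
                type_=aip.Type.STRING,
                format_="enum",
                enum=["EAST", "NORTH", "SOUTH", "WEST"],
            ),
        },
        required=["direction"],
    )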

SearchDataItemsRequest

Request message for DatasetService.SearchDataItems.

Fields
dataset

string

Required. The resource name of the Dataset from which to search DataItems. Format: projects/{project}/locations/{location}/datasets/{dataset}

saved_query
(deprecated)

string

The resource name of a SavedQuery (an annotation set in the UI). Format: projects/{project}/locations/{location}/datasets/{dataset}/savedQueries/{saved_query} All of the search will be done in the context of this SavedQuery.

data_labeling_job

string

The resource name of a DataLabelingJob. Format: projects/{project}/locations/{location}/dataLabelingJobs/{data_labeling_job} If this field is set, all of the search will be done in the context of this DataLabelingJob.

data_item_filter

string

An expression for filtering the DataItems that will be returned.

  • data_item_id - for = or !=.
  • labeled - for = or !=.
  • has_annotation(ANNOTATION_SPEC_ID) - true only for DataItem that have at least one annotation with annotation_spec_id = ANNOTATION_SPEC_ID in the context of SavedQuery or DataLabelingJob.

For example:

  • data_item=1
  • has_annotation(5)
annotations_filter
(deprecated)

string

An expression for filtering the Annotations that will be returned per DataItem. * annotation_spec_id - for = or !=.

annotation_filters[]

string

An expression that specifies which Annotations will be returned per DataItem. Annotations satisfying either of the conditions will be returned. * annotation_spec_id - for = or !=. Must also specify saved_query_id=, the saved query ID that the annotations should belong to.

field_mask

FieldMask

Mask specifying which fields of DataItemView to read.

annotations_limit

int32

If set, only up to this many Annotations will be returned per DataItemView. The maximum value is 1000. If not set, the maximum value will be used.

page_size

int32

Requested page size. Server may return fewer results than requested. Default and maximum page size is 100.

order_by
(deprecated)

string

A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending.

page_token

string

A token identifying a page of results for the server to return. Typically obtained via SearchDataItemsResponse.next_page_token of the previous DatasetService.SearchDataItems call.

Union field order.

order can be only one of the following:

order_by_data_item

string

A comma-separated list of data item fields to order by, sorted in ascending order. Use "desc" after a field name for descending.

order_by_annotation

OrderByAnnotation

Expression that allows ranking results based on annotation's property.
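
A minimal sketch of a SearchDataItems call exercising the data_item_filter and the order union field above, assuming the Python client; the project, dataset ID, annotation spec ID, and ordering field are placeholders.

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.DatasetServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    request = aip.SearchDataItemsRequest(
        dataset="projects/my-project/locations/us-central1/datasets/123",
        data_item_filter="has_annotation(5)",   # see data_item_filter above
        annotations_limit=10,
        order_by_data_item="create_time desc",  # one arm of the order union
    )
    for view in client.search_data_items(request=request):  # pager handles page_token
        print(view)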

OrderByAnnotation

Expression that allows ranking results based on annotation's property.

Fields
saved_query

string

Required. Saved query of the Annotation. Only Annotations belonging to this saved query will be considered for ordering.

order_by

string

A comma-separated list of annotation fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Must also specify saved_query.

SearchDataItemsResponse

Response message for DatasetService.SearchDataItems.

Fields
data_item_views[]

DataItemView

The DataItemViews read.

next_page_token

string

A token to retrieve the next page of results. Pass to SearchDataItemsRequest.page_token to obtain that page.

SearchEntryPoint

Google search entry point.

Fields
rendered_content

string

Optional. Web content snippet that can be embedded in a web page or an app webview.

sdk_blob

bytes

Optional. Base64-encoded JSON representing an array of <search term, search url> tuples.

SearchFeaturesRequest

Request message for FeaturestoreService.SearchFeatures.

Fields
location

string

Required. The resource name of the Location in which to search for Features. Format: projects/{project}/locations/{location}

query

string

Query string that is a conjunction of field-restricted queries and/or field-restricted filters. Field-restricted queries and filters can be combined using AND to form a conjunction.

A field query is in the form FIELD:QUERY. This implicitly checks if QUERY exists as a substring within Feature's FIELD. The QUERY and the FIELD are converted to a sequence of words (i.e. tokens) for comparison. This is done by:

  • Removing leading/trailing whitespace and tokenizing the search value. Characters that are not one of alphanumeric [a-zA-Z0-9], underscore _, or asterisk * are treated as delimiters for tokens. * is treated as a wildcard that matches characters within a token.
  • Ignoring case.
  • Prepending an asterisk to the first and appending an asterisk to the last token in QUERY.

A QUERY must be either a singular token or a phrase. A phrase is one or multiple words enclosed in double quotation marks ("). With phrases, the order of the words is important. Words in the phrase must match in order and consecutively.

Supported FIELDs for field-restricted queries:

  • feature_id
  • description
  • entity_type_id

Examples:

  • feature_id: foo --> Matches a Feature with ID containing the substring foo (e.g. foo, foofeature, barfoo).
  • feature_id: foo*feature --> Matches a Feature with ID containing the substring foo*feature (e.g. foobarfeature).
  • feature_id: foo AND description: bar --> Matches a Feature with ID containing the substring foo and description containing the substring bar.

Besides field queries, the following exact-match filters are supported. The exact-match filters do not support wildcards. Unlike field-restricted queries, exact-match filters are case-sensitive.

  • feature_id: Supports = comparisons.
  • description: Supports = comparisons. Multi-token filters should be enclosed in quotes.
  • entity_type_id: Supports = comparisons.
  • value_type: Supports = and != comparisons.
  • labels: Supports key-value equality as well as key presence.
  • featurestore_id: Supports = comparisons.

Examples:

  • description = "foo bar" --> Any Feature with description exactly equal to foo bar
  • value_type = DOUBLE --> Features whose type is DOUBLE.
  • labels.active = yes AND labels.env = prod --> Features having both (active: yes) and (env: prod) labels.
  • labels.env: * --> Any Feature which has a label with env as the key.

page_size

int32

The maximum number of Features to return. The service may return fewer than this value. If unspecified, at most 100 Features will be returned. The maximum value is 100; any value greater than 100 will be coerced to 100.

page_token

string

A page token, received from a previous FeaturestoreService.SearchFeatures call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to FeaturestoreService.SearchFeatures, except page_size, must match the call that provided the page token.
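
To make the query syntax above concrete, here is a minimal sketch of a SearchFeatures call combining a field-restricted query and an exact-match filter, assuming the Python client; the project and location are placeholders.

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeaturestoreServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    request = aip.SearchFeaturesRequest(
        location="projects/my-project/locations/us-central1",
        query='feature_id: user* AND value_type = DOUBLE',
        page_size=50,
    )
    for feature in client.search_features(request=request):
        print(feature.name)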

SearchFeaturesResponse

Response message for FeaturestoreService.SearchFeatures.

Fields
features[]

Feature

The Features matching the request.

Fields returned:

  • name
  • description
  • labels
  • create_time
  • update_time
next_page_token

string

A token, which can be sent as SearchFeaturesRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.

SearchMigratableResourcesRequest

Request message for MigrationService.SearchMigratableResources.

Fields
parent

string

Required. The location that the migratable resources should be searched from. It's the Vertex AI location that the resources can be migrated to, not the resources' original location. Format: projects/{project}/locations/{location}

page_size

int32

The standard page size. The default and maximum value is 100.

page_token

string

The standard page token.

filter

string

A filter for your search. You can use the following types of filters:

  • Resource type filters. The following strings filter for a specific type of MigratableResource:
    • ml_engine_model_version:*
    • automl_model:*
    • automl_dataset:*
    • data_labeling_dataset:*
  • "Migrated or not" filters. The following strings filter for resources that either have or have not already been migrated:
    • last_migrate_time:* filters for migrated resources.
    • NOT last_migrate_time:* filters for not yet migrated resources.
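
A minimal sketch of how the "migrated or not" filter above might be used, assuming the Python client; the project and location are placeholders, and combining several filter types in one expression is not shown since the reference does not spell out that behavior.

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.MigrationServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    request = aip.SearchMigratableResourcesRequest(
        parent="projects/my-project/locations/us-central1",
        filter="NOT last_migrate_time:*",  # resources not yet migrated
    )
    for resource in client.search_migratable_resources(request=request):
        print(resource)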

SearchMigratableResourcesResponse

Response message for MigrationService.SearchMigratableResources.

Fields
migratable_resources[]

MigratableResource

All migratable resources that can be migrated to the location specified in the request.

next_page_token

string

The standard next-page token. The migratable_resources may not fill page_size in SearchMigratableResourcesRequest even when there are subsequent pages.

SearchModelDeploymentMonitoringStatsAnomaliesRequest

Request message for JobService.SearchModelDeploymentMonitoringStatsAnomalies.

Fields
model_deployment_monitoring_job

string

Required. ModelDeploymentMonitoring Job resource name. Format: projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}

deployed_model_id

string

Required. The DeployedModel ID of the ModelDeploymentMonitoringObjectiveConfig.deployed_model_id.

feature_display_name

string

The feature display name. If specified, only return the stats belonging to this feature. Format: ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies.feature_display_name, example: "user_destination".

objectives[]

StatsAnomaliesObjective

Required. Objectives of the stats to retrieve.

page_size

int32

The standard list page size.

page_token

string

A page token received from a previous JobService.SearchModelDeploymentMonitoringStatsAnomalies call.

start_time

Timestamp

The earliest timestamp of stats being generated. If not set, stats are fetched back to the earliest possible time.

end_time

Timestamp

The latest timestamp of stats being generated. If not set, stats are fetched up to the latest possible time.

StatsAnomaliesObjective

Stats requested for specific objective.

Fields
type

ModelDeploymentMonitoringObjectiveType

top_feature_count

int32

If set, all attribution scores between SearchModelDeploymentMonitoringStatsAnomaliesRequest.start_time and SearchModelDeploymentMonitoringStatsAnomaliesRequest.end_time are fetched, and the page token doesn't take effect in this case. Only used to retrieve attribution scores for the top Features with the highest attribution scores in the latest monitoring run.

SearchModelDeploymentMonitoringStatsAnomaliesResponse

Response message for JobService.SearchModelDeploymentMonitoringStatsAnomalies.

Fields
monitoring_stats[]

ModelMonitoringStatsAnomalies

Stats retrieved for requested objectives. There are at most 1000 ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies.prediction_stats in the response.

next_page_token

string

The page token that can be used by the next JobService.SearchModelDeploymentMonitoringStatsAnomalies call.

SearchModelMonitoringAlertsRequest

Request message for ModelMonitoringService.SearchModelMonitoringAlerts.

Fields
model_monitor

string

Required. ModelMonitor resource name. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}

model_monitoring_job

string

If non-empty, returns the alerts of this model monitoring job.

alert_time_interval

Interval

If non-empty, returns the alerts in this time interval.

stats_name

string

If non-empty, returns the alerts of this stats_name.

objective_type

string

If non-empty, returns the alerts of this objective type. Supported monitoring objectives: raw-feature-drift, prediction-output-drift, feature-attribution.

page_size

int32

The standard list page size.

page_token

string

A page token received from a previous ModelMonitoringService.SearchModelMonitoringAlerts call.

SearchModelMonitoringAlertsResponse

Response message for ModelMonitoringService.SearchModelMonitoringAlerts.

Fields
model_monitoring_alerts[]

ModelMonitoringAlert

Alerts retrieved for the requested objectives. Sorted by alert time in descending order.

total_number_alerts

int64

The total number of alerts retrieved by the requested objectives.

next_page_token

string

The page token that can be used by the next ModelMonitoringService.SearchModelMonitoringAlerts call.

SearchModelMonitoringStatsFilter

Filter for searching ModelMonitoringStats.

Fields

Union field filter.

filter can be only one of the following:

tabular_stats_filter

TabularStatsFilter

Tabular statistics filter.

TabularStatsFilter

Tabular statistics filter.

Fields
stats_name

string

If not specified, all the stats_names will be returned.

objective_type

string

One of the supported monitoring objectives: raw-feature-drift, prediction-output-drift, feature-attribution.

model_monitoring_job

string

From a particular monitoring job.

model_monitoring_schedule

string

From a particular monitoring schedule.

algorithm

string

Specify the algorithm type used for distance calculation, e.g., jensen_shannon_divergence, l_infinity.

SearchModelMonitoringStatsRequest

Request message for ModelMonitoringService.SearchModelMonitoringStats.

Fields
model_monitor

string

Required. ModelMonitor resource name. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}

stats_filter

SearchModelMonitoringStatsFilter

Filter for searching different stats.

time_interval

Interval

The time interval for which results should be returned.

page_size

int32

The standard list page size.

page_token

string

A page token received from a previous ModelMonitoringService.SearchModelMonitoringStats call.
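
A minimal sketch tying this request to the TabularStatsFilter above, assuming the Python client; the monitor name and filter values are placeholders.

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.ModelMonitoringServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    stats_filter = aip.SearchModelMonitoringStatsFilter(
        tabular_stats_filter=aip.SearchModelMonitoringStatsFilter.TabularStatsFilter(
            objective_type="raw-feature-drift",
            algorithm="jensen_shannon_divergence",
        )
    )
    request = aip.SearchModelMonitoringStatsRequest(
        model_monitor="projects/my-project/locations/us-central1/modelMonitors/my-monitor",
        stats_filter=stats_filter,
    )
    for stats in client.search_model_monitoring_stats(request=request):
        print(stats)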

SearchModelMonitoringStatsResponse

Response message for ModelMonitoringService.SearchModelMonitoringStats.

Fields
monitoring_stats[]

ModelMonitoringStats

Stats retrieved for requested objectives.

next_page_token

string

The page token that can be used by the next ModelMonitoringService.SearchModelMonitoringStats call.

SearchNearestEntitiesRequest

The request message for FeatureOnlineStoreService.SearchNearestEntities.

Fields
feature_view

string

Required. FeatureView resource name. Format: projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}/featureViews/{featureView}

query

NearestNeighborQuery

Required. The query.

return_full_entity

bool

Optional. If set to true, the full entities (including all vector values and metadata) of the nearest neighbors are returned; otherwise only the entity IDs of the nearest neighbors will be returned. Note that returning full entities will significantly increase the latency and cost of the query.
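
A minimal sketch of a SearchNearestEntities call, assuming the Python client and assuming NearestNeighborQuery accepts entity_id and neighbor_count (placeholders throughout; check the NearestNeighborQuery reference for the exact query fields).

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.FeatureOnlineStoreServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    response = client.search_nearest_entities(
        request=aip.SearchNearestEntitiesRequest(
            feature_view=(
                "projects/my-project/locations/us-central1/"
                "featureOnlineStores/my-store/featureViews/my-view"
            ),
            query=aip.NearestNeighborQuery(entity_id="user_123", neighbor_count=5),
            return_full_entity=False,  # IDs only, to keep latency and cost down
        )
    )
    print(response.nearest_neighbors)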

SearchNearestEntitiesResponse

Response message for FeatureOnlineStoreService.SearchNearestEntities

Fields
nearest_neighbors

NearestNeighbors

The nearest neighbors of the query entity.

Segment

Segment of the content.

Fields
part_index

int32

Output only. The index of a Part object within its parent Content object.

start_index

int32

Output only. Start index in the given Part, measured in bytes. Offset from the start of the Part, inclusive, starting at zero.

end_index

int32

Output only. End index in the given Part, measured in bytes. Offset from the start of the Part, exclusive, starting at zero.

text

string

Output only. The text corresponding to the segment from the response.

ServiceAccountSpec

Configuration for the use of custom service account to run the workloads.

Fields
enable_custom_service_account

bool

Required. If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.

service_account

string

Optional. Required when all of the following conditions are met:

  • enable_custom_service_account is true;
  • any runtime is specified via ResourceRuntimeSpec at creation time, for example, Ray.

The users must have the iam.serviceAccounts.actAs permission on this service account, and the specified runtime containers will then run as it.

Do not set this field if you want jobs submitted to this PersistentResource after creation to use a custom service account; in that case, specify the service_account inside the job only.

SharePointSources

The SharePointSources to pass to ImportRagFiles.

Fields
share_point_sources[]

SharePointSource

The SharePoint sources.

SharePointSource

An individual SharePointSource.

Fields
client_id

string

The Application ID for the app registered in Microsoft Azure Portal. The application must also be configured with MS Graph permissions "Files.ReadAll", "Sites.ReadAll" and BrowserSiteLists.Read.All.

client_secret

ApiKeyConfig

The application secret for the app registered in Azure.

tenant_id

string

Unique identifier of the Azure Active Directory Instance.

sharepoint_site_name

string

The name of the SharePoint site to download from. This can be the site name or the site id.

file_id

string

Output only. The SharePoint file ID.

Union field folder_source. The SharePoint folder source. If not provided, uses "root". folder_source can be only one of the following:
sharepoint_folder_path

string

The path of the SharePoint folder to download from.

sharepoint_folder_id

string

The ID of the SharePoint folder to download from.

Union field drive_source. The SharePoint drive source. drive_source can be only one of the following:
drive_name

string

The name of the drive to download from.

drive_id

string

The ID of the drive to download from.

ShieldedVmConfig

A set of Shielded Instance options. See Images using supported Shielded VM features.

Fields
enable_secure_boot

bool

Defines whether the instance has Secure Boot enabled.

Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.

SlackSource

The Slack source for the ImportRagFilesRequest.

Fields
channels[]

SlackChannels

Required. The Slack channels.

SlackChannels

SlackChannels contains the Slack channels and corresponding access token.

Fields
channels[]

SlackChannel

Required. The Slack channel IDs.

api_key_config

ApiKeyConfig

Required. The SecretManager secret version resource name (e.g. projects/{project}/secrets/{secret}/versions/{version}) storing the Slack channel access token that has access to the Slack channel IDs. See: https://api.slack.com/tutorials/tracks/getting-a-token.

SlackChannel

SlackChannel contains the Slack channel ID and the time range to import.

Fields
channel_id

string

Required. The Slack channel ID.

start_time

Timestamp

Optional. The starting timestamp for messages to import.

end_time

Timestamp

Optional. The ending timestamp for messages to import.
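
A minimal sketch of assembling a SlackSource for ImportRagFiles from the messages above. This assumes the Python classes are nested as SlackSource.SlackChannels.SlackChannel and that ApiKeyConfig lives under ApiAuth with an api_key_secret_version field; verify both against your SDK version, and treat all resource names as placeholders.

    from google.cloud import aiplatform_v1beta1 as aip
    from google.protobuf import timestamp_pb2

    start = timestamp_pb2.Timestamp(seconds=1704067200)  # 2024-01-01T00:00:00Z
    slack_source = aip.SlackSource(
        channels=[
            aip.SlackSource.SlackChannels(
                channels=[
                    aip.SlackSource.SlackChannels.SlackChannel(
                        channel_id="C0123456789",  # placeholder channel ID
                        start_time=start,
                    )
                ],
                api_key_config=aip.ApiAuth.ApiKeyConfig(
                    # Assumed field name; stores the Secret Manager secret
                    # version holding the Slack access token.
                    api_key_secret_version="projects/my-project/secrets/slack-token/versions/1",
                ),
            )
        ]
    )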

SmoothGradConfig

Config for SmoothGrad approximation of gradients.

When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf

Fields
noisy_sample_count

int32

The number of gradient samples to use for approximation. The higher this number, the more accurate the gradient is, but the runtime complexity increases by this factor as well. Valid range of its value is [1, 50]. Defaults to 3.

Union field GradientNoiseSigma. Represents the standard deviation of the gaussian kernel that will be used to add noise to the interpolated inputs prior to computing gradients. GradientNoiseSigma can be only one of the following:
noise_sigma

float

This is a single float value and will be used to add noise to all the features. Use this field when all features are normalized to have the same distribution: scale to range [0, 1], [-1, 1] or z-scoring, where features are normalized to have 0-mean and 1-variance. Learn more about normalization.

For best results the recommended value is about 10% - 20% of the standard deviation of the input feature. Refer to section 3.2 of the SmoothGrad paper: https://arxiv.org/pdf/1706.03825.pdf. Defaults to 0.1.

If the distribution is different per feature, set feature_noise_sigma instead for each feature.

feature_noise_sigma

FeatureNoiseSigma

This is similar to noise_sigma, but provides additional flexibility. A separate noise sigma can be provided for each feature, which is useful if their distributions are different. No noise is added to features that are not set. If this field is unset, noise_sigma will be used for all features.
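
A minimal sketch of a SmoothGradConfig where all features share one distribution, so the scalar noise_sigma arm of the GradientNoiseSigma union is used; the value 0.15 simply follows the 10%-20%-of-standard-deviation guidance above.

    from google.cloud import aiplatform_v1beta1 as aip

    # 10 noisy samples per input; Gaussian noise drawn with sigma 0.15.
    smooth_grad = aip.SmoothGradConfig(noise_sigma=0.15, noisy_sample_count=10)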

SpecialistPool

SpecialistPool represents customers' own workforce to work on their data labeling jobs. It includes a group of specialist managers and workers. Managers are responsible for managing the workers in this pool as well as customers' data labeling jobs associated with this pool. Customers create a specialist pool and start data labeling jobs on Cloud; managers and workers handle the jobs using the CrowdCompute console.

Fields
name

string

Required. The resource name of the SpecialistPool.

display_name

string

Required. The user-defined name of the SpecialistPool. The name can be up to 128 characters long and can consist of any UTF-8 characters. This field should be unique on project-level.

specialist_managers_count

int32

Output only. The number of managers in this SpecialistPool.

specialist_manager_emails[]

string

The email addresses of the managers in the SpecialistPool.

pending_data_labeling_jobs[]

string

Output only. The resource name of the pending data labeling jobs.

specialist_worker_emails[]

string

The email addresses of workers in the SpecialistPool.

StartNotebookRuntimeOperationMetadata

Metadata information for NotebookService.StartNotebookRuntime.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

progress_message

string

A human-readable message that shows the intermediate progress details of NotebookRuntime.

StartNotebookRuntimeRequest

Request message for NotebookService.StartNotebookRuntime.

Fields
name

string

Required. The name of the NotebookRuntime resource to be started. Instead of checking whether the name is in a valid NotebookRuntime resource name format, the service directly throws a NotFound exception if there is no such NotebookRuntime.

StartNotebookRuntimeResponse

This type has no fields.

Response message for NotebookService.StartNotebookRuntime.

StopNotebookRuntimeOperationMetadata

Metadata information for NotebookService.StopNotebookRuntime.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

StopNotebookRuntimeRequest

Request message for NotebookService.StopNotebookRuntime.

Fields
name

string

Required. The name of the NotebookRuntime resource to be stopped. Instead of checking whether the name is in a valid NotebookRuntime resource name format, the service directly throws a NotFound exception if there is no such NotebookRuntime.

StopNotebookRuntimeResponse

This type has no fields.

Response message for NotebookService.StopNotebookRuntime.

StopTrialRequest

Request message for VizierService.StopTrial.

Fields
name

string

Required. The Trial's name. Format: projects/{project}/locations/{location}/studies/{study}/trials/{trial}

StratifiedSplit

Assigns input data to the training, validation, and test sets so that the distribution of values found in the categorical column (as specified by the key field) is mirrored within each split. The fraction values determine the relative sizes of the splits.

For example, if the specified column has three values, with 50% of the rows having value "A", 25% value "B", and 25% value "C", and the split fractions are specified as 80/10/10, then the training set will constitute 80% of the input data, with about 50% of the training set rows having the value "A" for the specified column, about 25% having the value "B", and about 25% having the value "C".

Only the top 500 occurring values are used; any values not in the top 500 are randomly assigned to a split. If fewer than three rows contain a specific value, those rows are randomly assigned.

Supported only for tabular Datasets.

Fields
training_fraction

double

The fraction of the input data that is to be used to train the Model.

validation_fraction

double

The fraction of the input data that is to be used to validate the Model.

test_fraction

double

The fraction of the input data that is to be used to evaluate the Model.

key

string

Required. The key is a name of one of the Dataset's data columns. The key provided must be for a categorical column.
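
A minimal sketch of the 80/10/10 split from the example above, stratified on a hypothetical categorical column named "city":

    from google.cloud import aiplatform_v1beta1 as aip

    split = aip.StratifiedSplit(
        training_fraction=0.8,
        validation_fraction=0.1,
        test_fraction=0.1,
        key="city",  # must name a categorical column in the Dataset
    )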

StreamDirectPredictRequest

Request message for PredictionService.StreamDirectPredict.

The first message must contain the endpoint field and optionally inputs. The subsequent messages must contain inputs.

Fields
endpoint

string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

inputs[]

Tensor

Optional. The prediction input.

parameters

Tensor

Optional. The parameters that govern the prediction.

StreamDirectPredictResponse

Response message for PredictionService.StreamDirectPredict.

Fields
outputs[]

Tensor

The prediction output.

parameters

Tensor

The parameters that govern the prediction.

StreamDirectRawPredictRequest

Request message for PredictionService.StreamDirectRawPredict.

The first message must contain the endpoint and method_name fields and optionally input. The subsequent messages must contain input. method_name in the subsequent messages has no effect.

Fields
endpoint

string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

method_name

string

Optional. Fully qualified name of the API method being invoked to perform predictions.

Format: /namespace.Service/Method/ Example: /tensorflow.serving.PredictionService/Predict

input

bytes

Optional. The prediction input.

StreamDirectRawPredictResponse

Response message for PredictionService.StreamDirectRawPredict.

Fields
output

bytes

The prediction output.

StreamQueryReasoningEngineRequest

Request message for ReasoningEngineExecutionService.StreamQuery.

Fields
name

string

Required. The name of the ReasoningEngine resource to use. Format: projects/{project}/locations/{location}/reasoningEngines/{reasoning_engine}

input

Struct

Optional. Input content provided by users in JSON object format. Examples include text query, function calling parameters, media bytes, etc.

class_method

string

Optional. Class method to be used for the stream query. It is optional and defaults to "stream_query" if unspecified.

StreamRawPredictRequest

Request message for PredictionService.StreamRawPredict.

Fields
endpoint

string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

http_body

HttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

StreamingFetchFeatureValuesRequest

Request message for FeatureOnlineStoreService.StreamingFetchFeatureValues. For the entities requested, all features under the requested feature view will be returned.

Fields
feature_view

string

Required. FeatureView resource name. Format: projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}/featureViews/{featureView}

data_keys[]

FeatureViewDataKey

data_format

FeatureViewDataFormat

Specify response data format. If not set, KeyValue format will be used.

StreamingFetchFeatureValuesResponse

Response message for FeatureOnlineStoreService.StreamingFetchFeatureValues.

Fields
status

Status

Response status. If OK, then StreamingFetchFeatureValuesResponse.data will be populated. Otherwise StreamingFetchFeatureValuesResponse.data_keys_with_error will be populated with the appropriate data keys. The error only applies to the listed data keys - the stream will remain open for further FeatureOnlineStoreService.StreamingFetchFeatureValues requests.

data[]

FetchFeatureValuesResponse

data_keys_with_error[]

FeatureViewDataKey

StreamingPredictRequest

Request message for PredictionService.StreamingPredict.

The first message must contain the endpoint field and optionally inputs. The subsequent messages must contain inputs.

Fields
endpoint

string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

inputs[]

Tensor

The prediction input.

parameters

Tensor

The parameters that govern the prediction.
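
Because StreamingPredict is a bidirectional streaming RPC, the Python client takes an iterator of requests; per the note above, only the first message carries endpoint. A minimal sketch with placeholder resource names and a toy FLOAT tensor:

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.PredictionServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    endpoint = "projects/my-project/locations/us-central1/endpoints/123"

    def requests():
        # First message: endpoint plus optional inputs.
        yield aip.StreamingPredictRequest(
            endpoint=endpoint,
            inputs=[aip.Tensor(dtype=aip.Tensor.DataType.FLOAT, float_val=[1.0])],
        )
        # Subsequent messages: inputs only.
        yield aip.StreamingPredictRequest(
            inputs=[aip.Tensor(dtype=aip.Tensor.DataType.FLOAT, float_val=[2.0])],
        )

    for response in client.streaming_predict(requests=requests()):
        print(response.outputs)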

StreamingPredictResponse

Response message for PredictionService.StreamingPredict.

Fields
outputs[]

Tensor

The prediction output.

parameters

Tensor

The parameters that govern the prediction.

StreamingRawPredictRequest

Request message for PredictionService.StreamingRawPredict.

The first message must contain the endpoint and method_name fields and optionally input. The subsequent messages must contain input. method_name in the subsequent messages has no effect.

Fields
endpoint

string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

method_name

string

Fully qualified name of the API method being invoked to perform predictions.

Format: /namespace.Service/Method/ Example: /tensorflow.serving.PredictionService/Predict

input

bytes

The prediction input.

StreamingRawPredictResponse

Response message for PredictionService.StreamingRawPredict.

Fields
output

bytes

The prediction output.

StreamingReadFeatureValuesRequest

Request message for FeaturestoreOnlineServingService.StreamingReadFeatureValues.

Fields
entity_type

string

Required. The resource name of the entities' type. Value format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}. For example, for a machine learning model predicting user clicks on a website, an EntityType ID could be user.

entity_ids[]

string

Required. IDs of entities to read Feature values of. The maximum number of IDs is 100. For example, for a machine learning model predicting user clicks on a website, an entity ID could be user_123.

feature_selector

FeatureSelector

Required. Selector choosing Features of the target EntityType. Feature IDs will be deduplicated.

StringArray

A list of string values.

Fields
values[]

string

A list of string values.

StructFieldValue

One field of a Struct (or object) type feature value.

Fields
name

string

Name of the field in the struct feature.

value

FeatureValue

The value for this field.

StructValue

Struct (or object) type feature value.

Fields
values[]

StructFieldValue

A list of field values.

Study

A message representing a Study.

Fields
name

string

Output only. The name of a study. The study's globally unique identifier. Format: projects/{project}/locations/{location}/studies/{study}

display_name

string

Required. Describes the Study; the default value is an empty string.

study_spec

StudySpec

Required. Configuration of the Study.

state

State

Output only. The detailed state of a Study.

create_time

Timestamp

Output only. Time at which the study was created.

inactive_reason

string

Output only. A human readable reason why the Study is inactive. This should be empty if a study is ACTIVE or COMPLETED.

State

Describes the Study state.

Enums
STATE_UNSPECIFIED The study state is unspecified.
ACTIVE The study is active.
INACTIVE The study is stopped due to an internal error.
COMPLETED The study is done when the service exhausts the parameter search space or max_trial_count is reached.

StudySpec

Represents specification of a Study.

Fields
metrics[]

MetricSpec

Required. Metric specs for the Study.

parameters[]

ParameterSpec

Required. The set of parameters to tune.

algorithm

Algorithm

The search algorithm specified for the Study.

observation_noise

ObservationNoise

The observation noise level of the study. Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.

measurement_selection_type

MeasurementSelectionType

Describes which measurement selection type will be used.

transfer_learning_config

TransferLearningConfig

The configuration info/options for transfer learning. Currently supported by the Vertex AI Vizier service, not HyperparameterTuningJob.

Union field automated_stopping_spec.

automated_stopping_spec can be only one of the following:

decay_curve_stopping_spec

DecayCurveAutomatedStoppingSpec

The automated early stopping spec using decay curve rule.

median_automated_stopping_spec

MedianAutomatedStoppingSpec

The automated early stopping spec using median rule.

convex_stop_config
(deprecated)

ConvexStopConfig

Deprecated. The automated early stopping using convex stopping rule.

convex_automated_stopping_spec

ConvexAutomatedStoppingSpec

The automated early stopping spec using convex stopping rule.

study_stopping_config

StudyStoppingConfig

Conditions for automated stopping of a Study. Enable automated stopping by configuring at least one condition.

Algorithm

The available search algorithms for the Study.

Enums
ALGORITHM_UNSPECIFIED The default algorithm used by Vertex AI for hyperparameter tuning and Vertex AI Vizier.

ConvexAutomatedStoppingSpec

Configuration for ConvexAutomatedStoppingSpec. When there are enough completed trials (configured by min_measurement_count), for pending trials with enough measurements and steps, the policy first computes an overestimate of the objective value at max_num_steps according to the slope of the incomplete objective value curve. No prediction can be made if the curve is completely flat. If the overestimation is worse than the best objective value of the completed trials, this pending trial will be early-stopped, but a last measurement will be added to the pending trial with max_num_steps and predicted objective value from the autoregression model.

Fields
max_step_count

int64

Steps used in predicting the final objective for early stopped trials. In general, it's set to be the same as the defined steps in training / tuning. If not defined, it will be learned from the completed trials. When use_elapsed_duration is true, this field is set to the maximum elapsed seconds.

min_step_count

int64

Minimum number of steps for a trial to complete. Trials which do not have a measurement with step_count > min_step_count won't be considered for early stopping. It's ok to set it to 0, and a trial can be early stopped at any stage. By default, min_step_count is set to be one-tenth of the max_step_count. When use_elapsed_duration is true, this field is set to the minimum elapsed seconds.

min_measurement_count

int64

The minimal number of measurements in a Trial. Early-stopping checks will not trigger if there are fewer than min_measurement_count+1 completed trials, or for pending trials with fewer than min_measurement_count measurements. If not defined, the default value is 5.

learning_rate_parameter_name

string

The hyperparameter name used in the tuning job that stands for learning rate. Leave it blank if learning rate is not a parameter in tuning. The learning_rate is used to estimate the objective value of the ongoing trial.

use_elapsed_duration

bool

This bool determines whether or not the rule is applied based on elapsed_secs or steps. If use_elapsed_duration==false, the early stopping decision is made according to the predicted objective values at the target steps. If use_elapsed_duration==true, elapsed_secs is used instead of steps. Also, in this case, the parameters max_step_count and min_step_count are overloaded to contain max_elapsed_seconds and min_elapsed_seconds.

update_all_stopped_trials

bool

ConvexAutomatedStoppingSpec by default only updates the trials that need to be early stopped using a newly trained auto-regressive model. When this flag is set to True, all stopped trials from the beginning are potentially updated in terms of their final_measurement. Also note that the training logic of autoregressive models is different in this case. Enabling this option has shown better results, and it may become the default option in the future.

ConvexStopConfig

Configuration for ConvexStopPolicy.

Fields
max_num_steps

int64

Steps used in predicting the final objective for early stopped trials. In general, it's set to be the same as the defined steps in training / tuning. When use_seconds is true, this field is set to the maximum elapsed seconds.

min_num_steps

int64

Minimum number of steps for a trial to complete. Trials which do not have a measurement with num_steps > min_num_steps won't be considered for early stopping. It's ok to set it to 0, and a trial can be early stopped at any stage. By default, min_num_steps is set to be one-tenth of the max_num_steps. When use_seconds is true, this field is set to the minimum elapsed seconds.

autoregressive_order

int64

The number of Trial measurements used in the autoregressive model for value prediction. A trial won't be considered for early stopping if it has fewer measurement points.

learning_rate_parameter_name

string

The hyperparameter name used in the tuning job that stands for learning rate. Leave it blank if learning rate is not a parameter in tuning. The learning_rate is used to estimate the objective value of the ongoing trial.

use_seconds

bool

This bool determines whether or not the rule is applied based on elapsed_secs or steps. If use_seconds==false, the early stopping decision is made according to the predicted objective values according to the target steps. If use_seconds==true, elapsed_secs is used instead of steps. Also, in this case, the parameters max_num_steps and min_num_steps are overloaded to contain max_elapsed_seconds and min_elapsed_seconds.

DecayCurveAutomatedStoppingSpec

The decay curve automated stopping rule builds a Gaussian Process Regressor to predict the final objective value of a Trial based on the already completed Trials and the intermediate measurements of the current Trial. Early stopping is requested for the current Trial if there is very low probability to exceed the optimal value found so far.

Fields
use_elapsed_duration

bool

True if Measurement.elapsed_duration is used as the x-axis of each Trial's Decay Curve. Otherwise, Measurement.step_count will be used as the x-axis.

MeasurementSelectionType

This indicates which measurement to use if/when the service automatically selects the final measurement from previously reported intermediate measurements. Choose this based on two considerations: A) Do you expect your measurements to monotonically improve? If so, choose LAST_MEASUREMENT. On the other hand, if you're in a situation where your system can "over-train" and you expect the performance to get better for a while but then start declining, choose BEST_MEASUREMENT. B) Are your measurements significantly noisy and/or irreproducible? If so, BEST_MEASUREMENT will tend to be over-optimistic, and it may be better to choose LAST_MEASUREMENT. If both or neither of (A) and (B) apply, it doesn't matter which selection type is chosen.

Enums
MEASUREMENT_SELECTION_TYPE_UNSPECIFIED Will be treated as LAST_MEASUREMENT.
LAST_MEASUREMENT Use the last measurement reported.
BEST_MEASUREMENT Use the best measurement reported.

MedianAutomatedStoppingSpec

The median automated stopping rule stops a pending Trial if the Trial's best objective_value is strictly below the median 'performance' of all completed Trials reported up to the Trial's last measurement. Currently, 'performance' refers to the running average of the objective values reported by the Trial in each measurement.

Fields
use_elapsed_duration

bool

True if the median automated stopping rule applies on Measurement.elapsed_duration. This means the elapsed_duration field of the current Trial's latest measurement is used to compute the median objective value for each completed Trial.

MetricSpec

Represents a metric to optimize.

Fields
metric_id

string

Required. The ID of the metric. Must not contain whitespaces and must be unique amongst all MetricSpecs.

goal

GoalType

Required. The optimization goal of the metric.

safety_config

SafetyMetricConfig

Used for safe search. In this case, the metric will be a safety metric. You must provide a separate metric as the objective metric.

GoalType

The available types of optimization goals.

Enums
GOAL_TYPE_UNSPECIFIED Goal Type will default to maximize.
MAXIMIZE Maximize the goal metric.
MINIMIZE Minimize the goal metric.

SafetyMetricConfig

Used in safe optimization to specify threshold levels and risk tolerance.

Fields
safety_threshold

double

Safety threshold (boundary value between safe and unsafe). NOTE that if you leave SafetyMetricConfig unset, a default value of 0 will be used.

desired_min_safe_trials_fraction

double

Desired minimum fraction of safe trials (over total number of trials) that should be targeted by the algorithm at any time during the study (best effort). This should be between 0.0 and 1.0 and a value of 0.0 means that there is no minimum and an algorithm proceeds without targeting any specific fraction. A value of 1.0 means that the algorithm attempts to only Suggest safe Trials.

ObservationNoise

Describes the noise level of the repeated observations.

"Noisy" means that the repeated observations with the same Trial parameters may lead to different metric evaluations.

Enums
OBSERVATION_NOISE_UNSPECIFIED The default noise level chosen by Vertex AI.
LOW Vertex AI assumes that the objective function is (nearly) perfectly reproducible, and will never repeat the same Trial parameters.
HIGH Vertex AI will estimate the amount of noise in metric evaluations; it may repeat the same Trial parameters more than once.

ParameterSpec

Represents a single parameter to optimize.

Fields
parameter_id

string

Required. The ID of the parameter. Must not contain whitespaces and must be unique amongst all ParameterSpecs.

scale_type

ScaleType

How the parameter should be scaled. Leave unset for CATEGORICAL parameters.

conditional_parameter_specs[]

ConditionalParameterSpec

A conditional parameter node is active if the parameter's value matches the conditional node's parent_value_condition.

If two items in conditional_parameter_specs have the same name, they must have disjoint parent_value_condition.

Union field parameter_value_spec.

parameter_value_spec can be only one of the following:

double_value_spec

DoubleValueSpec

The value spec for a 'DOUBLE' parameter.

integer_value_spec

IntegerValueSpec

The value spec for an 'INTEGER' parameter.

categorical_value_spec

CategoricalValueSpec

The value spec for a 'CATEGORICAL' parameter.

discrete_value_spec

DiscreteValueSpec

The value spec for a 'DISCRETE' parameter.

CategoricalValueSpec

Value specification for a parameter in CATEGORICAL type.

Fields
values[]

string

Required. The list of possible categories.

default_value

string

A default value for a CATEGORICAL parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point.

Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.

ConditionalParameterSpec

Represents a parameter spec with condition from its parent parameter.

Fields
parameter_spec

ParameterSpec

Required. The spec for a conditional parameter.

Union field parent_value_condition. A set of parameter values from the parent ParameterSpec's feasible space. parent_value_condition can be only one of the following:
parent_discrete_values

DiscreteValueCondition

The spec for matching values from a parent parameter of DISCRETE type.

parent_int_values

IntValueCondition

The spec for matching values from a parent parameter of INTEGER type.

parent_categorical_values

CategoricalValueCondition

The spec for matching values from a parent parameter of CATEGORICAL type.

CategoricalValueCondition

Represents the spec to match categorical values from parent parameter.

Fields
values[]

string

Required. Matches values of the parent parameter of 'CATEGORICAL' type. All values must exist in categorical_value_spec of parent parameter.

DiscreteValueCondition

Represents the spec to match discrete values from parent parameter.

Fields
values[]

double

Required. Matches values of the parent parameter of 'DISCRETE' type. All values must exist in discrete_value_spec of parent parameter.

The Epsilon of the value matching is 1e-10.

IntValueCondition

Represents the spec to match integer values from parent parameter.

Fields
values[]

int64

Required. Matches values of the parent parameter of 'INTEGER' type. All values must lie in integer_value_spec of parent parameter.

DiscreteValueSpec

Value specification for a parameter in DISCRETE type.

Fields
values[]

double

Required. A list of possible values. The list should be in increasing order and at least 1e-10 apart. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values.

default_value

double

A default value for a DISCRETE parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point. It automatically rounds to the nearest feasible discrete point.

Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.

DoubleValueSpec

Value specification for a parameter in DOUBLE type.

Fields
min_value

double

Required. Inclusive minimum value of the parameter.

max_value

double

Required. Inclusive maximum value of the parameter.

default_value

double

A default value for a DOUBLE parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point.

Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.

IntegerValueSpec

Value specification for a parameter in INTEGER type.

Fields
min_value

int64

Required. Inclusive minimum value of the parameter.

max_value

int64

Required. Inclusive maximum value of the parameter.

default_value

int64

A default value for an INTEGER parameter that is assumed to be a relatively good starting point. Unset value signals that there is no offered starting point.

Currently only supported by the Vertex AI Vizier service. Not supported by HyperparameterTuningJob or TrainingPipeline.

ScaleType

The type of scaling that should be applied to this parameter.

Enums
SCALE_TYPE_UNSPECIFIED By default, no scaling is applied.
UNIT_LINEAR_SCALE Scales the feasible space to (0, 1) linearly.
UNIT_LOG_SCALE Scales the feasible space logarithmically to (0, 1). The entire feasible space must be strictly positive.
UNIT_REVERSE_LOG_SCALE Scales the feasible space "reverse" logarithmically to (0, 1). The result is that values close to the top of the feasible space are spread out more than points near the bottom. The entire feasible space must be strictly positive.
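
Pulling the pieces above together, a minimal sketch of a StudySpec with one metric and two parameters (one DOUBLE on a log scale, one CATEGORICAL), assuming the Python client; the metric and parameter IDs are made up for illustration.

    from google.cloud import aiplatform_v1beta1 as aip

    study_spec = aip.StudySpec(
        metrics=[
            aip.StudySpec.MetricSpec(
                metric_id="accuracy",
                goal=aip.StudySpec.MetricSpec.GoalType.MAXIMIZE,
            )
        ],
        parameters=[
            aip.StudySpec.ParameterSpec(
                parameter_id="learning_rate",
                double_value_spec=aip.StudySpec.ParameterSpec.DoubleValueSpec(
                    min_value=1e-4, max_value=1e-1
                ),
                scale_type=aip.StudySpec.ParameterSpec.ScaleType.UNIT_LOG_SCALE,
            ),
            aip.StudySpec.ParameterSpec(
                parameter_id="optimizer",
                categorical_value_spec=aip.StudySpec.ParameterSpec.CategoricalValueSpec(
                    values=["adam", "sgd"]
                ),
            ),
        ],
    )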

StudyStoppingConfig

The configuration (stopping conditions) for automated stopping of a Study. Conditions include trial budgets, time budgets, and convergence detection.

Fields
should_stop_asap

BoolValue

If true, a Study enters STOPPING_ASAP whenever it would normally enter the STOPPING state.

The bottom line is: set to true if you want to interrupt on-going evaluations of Trials as soon as the study stopping condition is met. (Please see Study.State documentation for the source of truth).

minimum_runtime_constraint

StudyTimeConstraint

Each "stopping rule" in this proto specifies an "if" condition. Before Vizier would generate a new suggestion, it first checks each specified stopping rule, from top to bottom in this list. Note that the first few rules (e.g. minimum_runtime_constraint, min_num_trials) will prevent other stopping rules from being evaluated until they are met. For example, setting min_num_trials=5 and always_stop_after= 1 hour means that the Study will ONLY stop after it has 5 COMPLETED trials, even if more than an hour has passed since its creation. It follows the first applicable rule (whose "if" condition is satisfied) to make a stopping decision. If none of the specified rules are applicable, then Vizier decides that the study should not stop. If Vizier decides that the study should stop, the study enters STOPPING state (or STOPPING_ASAP if should_stop_asap = true). IMPORTANT: The automatic study state transition happens precisely as described above; that is, deleting trials or updating StudyConfig NEVER automatically moves the study state back to ACTIVE. If you want to resume a Study that was stopped, 1) change the stopping conditions if necessary, 2) activate the study, and then 3) ask for suggestions. If the specified time or duration has not passed, do not stop the study.

maximum_runtime_constraint

StudyTimeConstraint

If the specified time or duration has passed, stop the study.

min_num_trials

Int32Value

If there are fewer than this many COMPLETED trials, do not stop the study.

max_num_trials

Int32Value

If there are more than this many trials, stop the study.

max_num_trials_no_progress

Int32Value

If the objective value has not improved for this many consecutive trials, stop the study.

WARNING: Effective only for single-objective studies.

max_duration_no_progress

Duration

If the objective value has not improved for this much time, stop the study.

WARNING: Effective only for single-objective studies.

TransferLearningConfig

This contains flag for manually disabling transfer learning for a study. The names of prior studies being used for transfer learning (if any) are also listed here.

Fields
disable_transfer_learning

bool

Flag to manually prevent Vizier from using transfer learning on a new study. Otherwise, Vizier will automatically determine whether or not to use transfer learning.

prior_study_names[]

string

Output only. Names of previously completed studies.

StudyTimeConstraint

Time-based Constraint for Study

Fields

Union field constraint.

constraint can be only one of the following:

max_duration

Duration

Counts the wallclock time passed since the creation of this Study.

end_time

Timestamp

Compares the wallclock time to this time. Must use UTC timezone.

SuggestTrialsMetadata

Details of operations that perform Trials suggestion.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for suggesting Trials.

client_id

string

The identifier of the client that is requesting the suggestion.

If multiple SuggestTrialsRequests have the same client_id, the service will return the identical suggested Trial if the Trial is pending, and provide a new Trial if the last suggested Trial was completed.

SuggestTrialsRequest

Request message for VizierService.SuggestTrials.

Fields
parent

string

Required. The project and location that the Study belongs to. Format: projects/{project}/locations/{location}/studies/{study}

suggestion_count

int32

Required. The number of suggestions requested. It must be positive.

client_id

string

Required. The identifier of the client that is requesting the suggestion.

If multiple SuggestTrialsRequests have the same client_id, the service will return the identical suggested Trial if the Trial is pending, and provide a new Trial if the last suggested Trial was completed.

contexts[]

TrialContext

Optional. This allows you to specify the "context" for a Trial; a context is a slice (a subspace) of the search space.

Typical uses for contexts:

  1. You are using Vizier to tune a server for best performance, but there's a strong weekly cycle. The context specifies the day-of-week. This allows Tuesday to generalize from Wednesday without assuming that everything is identical.
  2. Imagine you're optimizing some medical treatment for people. As they walk in the door, you know certain facts about them (e.g. sex, weight, height, blood-pressure). Put that information in the context, and Vizier will adapt its suggestions to the patient.
  3. You want to do a fair A/B test efficiently. Specify the "A" and "B" conditions as contexts, and Vizier will generalize between "A" and "B" conditions. If they are similar, this will allow Vizier to converge to the optimum faster than if "A" and "B" were separate Studies.

NOTE: You can also enter contexts as REQUESTED Trials, e.g. via the CreateTrial() RPC; that's the asynchronous option where you don't need a close association between contexts and suggestions.

Additional notes:

  • All the Parameters you set in a context MUST be defined in the Study.
  • You must supply 0 or $suggestion_count contexts. If you don't supply any contexts, Vizier will make suggestions from the full search space specified in the StudySpec; if you supply a full set of contexts, each suggestion will match the corresponding context.
  • A Context with no features set matches anything, and allows suggestions from the full search space.
  • Contexts MUST lie within the search space specified in the StudySpec. It's an error if they don't.
  • Contexts preferentially match ACTIVE then REQUESTED trials before new suggestions are generated.
  • Generation of suggestions involves a match between a Context and (optionally) a REQUESTED trial; if that match is not fully specified, a suggestion will be generated in the merged subspace.
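
A minimal sketch of a SuggestTrials call, assuming the Python client; SuggestTrials is a long-running operation, so the response is read from operation.result(). Resource names and the client_id are placeholders.

    from google.cloud import aiplatform_v1beta1 as aip

    client = aip.VizierServiceClient(
        client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    )
    operation = client.suggest_trials(
        request=aip.SuggestTrialsRequest(
            parent="projects/my-project/locations/us-central1/studies/456",
            suggestion_count=2,
            client_id="worker-0",  # reusing this id replays pending suggestions
        )
    )
    response = operation.result()  # blocks until the LRO completes
    for trial in response.trials:
        print(trial.name)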

SuggestTrialsResponse

Response message for VizierService.SuggestTrials.

Fields
trials[]

Trial

A list of Trials.

study_state

State

The state of the Study.

start_time

Timestamp

The time at which the operation was started.

end_time

Timestamp

The time at which operation processing completed.

SummarizationHelpfulnessInput

Input for summarization helpfulness metric.

Fields
metric_spec

SummarizationHelpfulnessSpec

Required. Spec for summarization helpfulness score metric.

instance

SummarizationHelpfulnessInstance

Required. Summarization helpfulness instance.

SummarizationHelpfulnessInstance

Spec for summarization helpfulness instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Optional. Ground truth used to compare against the prediction.

context

string

Required. Text to be summarized.

instruction

string

Optional. Summarization prompt for LLM.

SummarizationHelpfulnessResult

Spec for summarization helpfulness result.

Fields
explanation

string

Output only. Explanation for summarization helpfulness score.

score

float

Output only. Summarization Helpfulness score.

confidence

float

Output only. Confidence for summarization helpfulness score.

SummarizationHelpfulnessSpec

Spec for summarization helpfulness score metric.

Fields
use_reference

bool

Optional. Whether to use instance.reference to compute summarization helpfulness.

version

int32

Optional. Which version to use for evaluation.

SummarizationQualityInput

Input for summarization quality metric.

Fields
metric_spec

SummarizationQualitySpec

Required. Spec for summarization quality score metric.

instance

SummarizationQualityInstance

Required. Summarization quality instance.

SummarizationQualityInstance

Spec for summarization quality instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Optional. Ground truth used to compare against the prediction.

context

string

Required. Text to be summarized.

instruction

string

Required. Summarization prompt for LLM.

SummarizationQualityResult

Spec for summarization quality result.

Fields
explanation

string

Output only. Explanation for summarization quality score.

score

float

Output only. Summarization Quality score.

confidence

float

Output only. Confidence for summarization quality score.

SummarizationQualitySpec

Spec for summarization quality score metric.

Fields
use_reference

bool

Optional. Whether to use instance.reference to compute summarization quality.

version

int32

Optional. Which version to use for evaluation.

SummarizationVerbosityInput

Input for summarization verbosity metric.

Fields
metric_spec

SummarizationVerbositySpec

Required. Spec for summarization verbosity score metric.

instance

SummarizationVerbosityInstance

Required. Summarization verbosity instance.

SummarizationVerbosityInstance

Spec for summarization verbosity instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Optional. Ground truth used to compare against the prediction.

context

string

Required. Text to be summarized.

instruction

string

Optional. Summarization prompt for LLM.

SummarizationVerbosityResult

Spec for summarization verbosity result.

Fields
explanation

string

Output only. Explanation for summarization verbosity score.

score

float

Output only. Summarization Verbosity score.

confidence

float

Output only. Confidence for summarization verbosity score.

SummarizationVerbositySpec

Spec for summarization verbosity score metric.

Fields
use_reference

bool

Optional. Whether to use instance.reference to compute summarization verbosity.

version

int32

Optional. Which version to use for evaluation.

SupervisedHyperParameters

Hyperparameters for SFT.

Fields
epoch_count

int64

Optional. Number of complete passes the model makes over the entire training dataset during training.

learning_rate_multiplier

double

Optional. Multiplier for adjusting the default learning rate.

adapter_size

AdapterSize

Optional. Adapter size for tuning.

AdapterSize

Supported adapter sizes for tuning.

Enums
ADAPTER_SIZE_UNSPECIFIED Adapter size is unspecified.
ADAPTER_SIZE_ONE Adapter size 1.
ADAPTER_SIZE_FOUR Adapter size 4.
ADAPTER_SIZE_EIGHT Adapter size 8.
ADAPTER_SIZE_SIXTEEN Adapter size 16.
ADAPTER_SIZE_THIRTY_TWO Adapter size 32.

SupervisedTuningDataStats

Tuning data statistics for Supervised Tuning.

Fields
tuning_dataset_example_count

int64

Output only. Number of examples in the tuning dataset.

total_tuning_character_count

int64

Output only. Number of tuning characters in the tuning dataset.

total_billable_character_count
(deprecated)

int64

Output only. Number of billable characters in the tuning dataset.

total_billable_token_count

int64

Output only. Number of billable tokens in the tuning dataset.

tuning_step_count

int64

Output only. Number of tuning steps for this Tuning Job.

user_input_token_distribution

SupervisedTuningDatasetDistribution

Output only. Dataset distributions for the user input tokens.

user_output_token_distribution

SupervisedTuningDatasetDistribution

Output only. Dataset distributions for the user output tokens.

user_message_per_example_distribution

SupervisedTuningDatasetDistribution

Output only. Dataset distributions for the messages per example.

user_dataset_examples[]

Content

Output only. Sample user messages from the dataset at the training dataset URI.

total_truncated_example_count

int64

The number of examples in the dataset that have been truncated by any amount.

truncated_example_indices[]

int64

A partial sample of the indices (starting from 1) of the truncated examples.

SupervisedTuningDatasetDistribution

Dataset distribution for Supervised Tuning.

Fields
sum

int64

Output only. Sum of a given population of values.

billable_sum

int64

Output only. Sum of a given population of values that are billable.

min

double

Output only. The minimum of the population values.

max

double

Output only. The maximum of the population values.

mean

double

Output only. The arithmetic mean of the values in the population.

median

double

Output only. The median of the values in the population.

p5

double

Output only. The 5th percentile of the values in the population.

p95

double

Output only. The 95th percentile of the values in the population.

buckets[]

DatasetBucket

Output only. Defines the histogram buckets.

DatasetBucket

Dataset bucket used to create a histogram for the distribution given a population of values.

Fields
count

double

Output only. Number of values in the bucket.

left

double

Output only. Left bound of the bucket.

right

double

Output only. Right bound of the bucket.
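
To make the statistics above concrete, here is an illustrative (not service-accurate) computation of a SupervisedTuningDatasetDistribution-style summary from a population of per-example values such as input token counts; the service's exact percentile method is not documented here:

    import statistics

    def percentile(sorted_vals, p):
        # Nearest-rank percentile over an already-sorted list (simplified).
        k = round(p / 100 * (len(sorted_vals) - 1))
        return sorted_vals[max(0, min(len(sorted_vals) - 1, k))]

    values = [12, 30, 7, 44, 25, 18, 9, 61]  # hypothetical per-example token counts
    s = sorted(values)
    distribution = {
        "sum": sum(values),
        "min": s[0],
        "max": s[-1],
        "mean": statistics.fmean(values),
        "median": statistics.median(values),
        "p5": percentile(s, 5),
        "p95": percentile(s, 95),
        # "buckets" would additionally carry DatasetBucket entries
        # (count, left, right) describing a histogram of these values.
    }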

SupervisedTuningSpec

Tuning Spec for Supervised Tuning for first party models.

Fields
training_dataset_uri

string

Required. Cloud Storage path to the file containing the training dataset for tuning. The dataset must be formatted as a JSONL file.

validation_dataset_uri

string

Optional. Cloud Storage path to the file containing the validation dataset for tuning. The dataset must be formatted as a JSONL file.

hyper_parameters

SupervisedHyperParameters

Optional. Hyperparameters for SFT.
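
A minimal sketch of a SupervisedTuningSpec, expressed as a Python dict with the proto field names above (paths and values are hypothetical):

    supervised_tuning_spec = {
        "training_dataset_uri": "gs://my-bucket/train.jsonl",         # hypothetical JSONL file
        "validation_dataset_uri": "gs://my-bucket/validation.jsonl",  # hypothetical JSONL file
        "hyper_parameters": {
            "epoch_count": 3,
            "learning_rate_multiplier": 1.0,
            "adapter_size": "ADAPTER_SIZE_FOUR",
        },
    }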

SyncFeatureViewRequest

Request message for FeatureOnlineStoreAdminService.SyncFeatureView.

Fields
feature_view

string

Required. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}

SyncFeatureViewResponse

Response message for FeatureOnlineStoreAdminService.SyncFeatureView.

Fields
feature_view_sync

string

Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}/featureViewSyncs/{feature_view_sync}

TFRecordDestination

The storage details for TFRecord output content.

Fields
gcs_destination

GcsDestination

Required. Google Cloud Storage location.

Tensor

A tensor value type.

Fields
dtype

DataType

The data type of the tensor.

shape[]

int64

Shape of the tensor.

bool_val[]

bool

Type specific representations that make it easy to create tensor protos in all languages. Only the representation corresponding to "dtype" can be set. The values hold the flattened representation of the tensor in row major order.

BOOL

string_val[]

string

STRING

bytes_val[]

bytes

STRING

float_val[]

float

FLOAT

double_val[]

double

DOUBLE

int_val[]

int32

INT_8 INT_16 INT_32

int64_val[]

int64

INT64

uint_val[]

uint32

UINT8 UINT16 UINT32

uint64_val[]

uint64

UINT64

list_val[]

Tensor

A list of tensor values.

struct_val

map<string, Tensor>

A map of string to tensor.

tensor_val

bytes

Serialized raw tensor content.

DataType

Data type of the tensor.

Enums
DATA_TYPE_UNSPECIFIED Not a legal value for DataType. Used to indicate a DataType field has not been set.
BOOL Data types that all computation devices are expected to be capable of supporting.
STRING
FLOAT
DOUBLE
INT8
INT16
INT32
INT64
UINT8
UINT16
UINT32
UINT64
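
A short sketch of the row-major flattening described above: a 2x3 FLOAT tensor sets only float_val (the representation matching dtype), with row 0 first:

    matrix = [[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]]
    tensor = {
        "dtype": "FLOAT",
        "shape": [2, 3],
        # Flattened in row-major order: [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
        "float_val": [v for row in matrix for v in row],
    }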

Tensorboard

Tensorboard is a physical database that stores users' training metrics. A default Tensorboard is provided in each region of a Google Cloud project. If needed, users can also create extra Tensorboards in their projects.

Fields
name

string

Output only. Name of the Tensorboard. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

display_name

string

Required. User provided name of this Tensorboard.

description

string

Description of this Tensorboard.

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for a Tensorboard. If set, this Tensorboard and all sub-resources of this Tensorboard will be secured by this key.

blob_storage_path_prefix

string

Output only. Consumer project Cloud Storage path prefix used to store blob data, which can either be a bucket or directory. Does not end with a '/'.

run_count

int32

Output only. The number of Runs stored in this Tensorboard.

create_time

Timestamp

Output only. Timestamp when this Tensorboard was created.

update_time

Timestamp

Output only. Timestamp when this Tensorboard was last updated.

labels

map<string, string>

The labels with user-defined metadata to organize your Tensorboards.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Tensorboard (System labels are excluded).

See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

is_default

bool

Used to indicate if the TensorBoard instance is the default one. Each project and region can have at most one default TensorBoard instance. Creating a default TensorBoard instance or updating an existing TensorBoard instance to be the default will mark all other TensorBoard instances (if any) as non-default.

satisfies_pzs

bool

Output only. Reserved for future use.

satisfies_pzi

bool

Output only. Reserved for future use.

TensorboardBlob

One blob (e.g., image, graph) viewable on a blob metric plot.

Fields
id

string

Output only. A URI-safe key uniquely identifying a blob. Can be used to locate the blob stored in the Cloud Storage bucket of the consumer project.

data

bytes

Optional. The bytes of the blob are not present unless returned by the ReadTensorboardBlobData endpoint.

TensorboardBlobSequence

One point viewable on a blob metric plot, but mostly just a wrapper message to work around the fact that repeated fields can't be used directly within oneof fields.

Fields
values[]

TensorboardBlob

List of blobs contained within the sequence.

TensorboardExperiment

A TensorboardExperiment is a group of TensorboardRuns in a Tensorboard, typically the results of a training job run.

Fields
name

string

Output only. Name of the TensorboardExperiment. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}

display_name

string

User provided name of this TensorboardExperiment.

description

string

Description of this TensorboardExperiment.

create_time

Timestamp

Output only. Timestamp when this TensorboardExperiment was created.

update_time

Timestamp

Output only. Timestamp when this TensorboardExperiment was last updated.

labels

map<string, string>

The labels with user-defined metadata to organize your TensorboardExperiment.

Label keys and values cannot be longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one TensorboardExperiment (System labels are excluded).

See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with aiplatform.googleapis.com/ and are immutable.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

source

string

Immutable. Source of the TensorboardExperiment. Example: a custom training job.

TensorboardRun

TensorboardRun maps to a specific execution of a training job with a given set of hyperparameter values, model definition, dataset, and so on.

Fields
name

string

Output only. Name of the TensorboardRun. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}

display_name

string

Required. User provided name of this TensorboardRun. This value must be unique among all TensorboardRuns belonging to the same parent TensorboardExperiment.

description

string

Description of this TensorboardRun.

create_time

Timestamp

Output only. Timestamp when this TensorboardRun was created.

update_time

Timestamp

Output only. Timestamp when this TensorboardRun was last updated.

labels

map<string, string>

The labels with user-defined metadata to organize your TensorboardRuns.

This field will be used to filter and visualize Runs in the Tensorboard UI. For example, a Vertex AI training job can set a label aiplatform.googleapis.com/training_job_id=xxxxx to all the runs created within that job. An end user can set a label experiment_id=xxxxx for all the runs produced in a Jupyter notebook. These runs can be grouped by a label value and visualized together in the Tensorboard UI.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one TensorboardRun (System labels are excluded).

See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

TensorboardTensor

One point viewable on a tensor metric plot.

Fields
value

bytes

Required. Serialized form of the TensorProto defined at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/tensor.proto

version_number

int32

Optional. Version number of TensorProto used to serialize value.

TensorboardTimeSeries

A TensorboardTimeSeries maps to a time series produced in training runs.

Fields
name

string

Output only. Name of the TensorboardTimeSeries.

display_name

string

Required. User provided name of this TensorboardTimeSeries. This value should be unique among all TensorboardTimeSeries resources belonging to the same TensorboardRun resource (parent resource).

description

string

Description of this TensorboardTimeSeries.

value_type

ValueType

Required. Immutable. Type of TensorboardTimeSeries value.

create_time

Timestamp

Output only. Timestamp when this TensorboardTimeSeries was created.

update_time

Timestamp

Output only. Timestamp when this TensorboardTimeSeries was last updated.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

plugin_name

string

Immutable. Name of the plugin this time series pertains to, such as Scalar, Tensor, or Blob.

plugin_data

bytes

Data of the current plugin, with the size limited to 65KB.

metadata

Metadata

Output only. Scalar, Tensor, or Blob metadata for this TensorboardTimeSeries.

Metadata

Describes metadata for a TensorboardTimeSeries.

Fields
max_step

int64

Output only. Max step index of all data points within a TensorboardTimeSeries.

max_wall_time

Timestamp

Output only. Max wall clock timestamp of all data points within a TensorboardTimeSeries.

max_blob_sequence_length

int64

Output only. The largest blob sequence length (number of blobs) of all data points in this time series, if its ValueType is BLOB_SEQUENCE.

ValueType

An enum representing the value type of a TensorboardTimeSeries.

Enums
VALUE_TYPE_UNSPECIFIED The value type is unspecified.
SCALAR Used for TensorboardTimeSeries that is a list of scalars. E.g. accuracy of a model over epochs/time.
TENSOR Used for TensorboardTimeSeries that is a list of tensors. E.g. histograms of weights of layer in a model over epoch/time.
BLOB_SEQUENCE Used for TensorboardTimeSeries that is a list of blob sequences. E.g. set of sample images with labels over epochs/time.

ThresholdConfig

The config for feature monitoring threshold.

Fields

Union field threshold.

threshold can be only one of the following:

value

double

Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical features, the distribution distance is calculated by the L-infinity norm. 2. For numerical features, the distribution distance is calculated by the Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
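
As an illustration of the two distances named above (simplified; the service's binning and implementation details are not documented here), with alerting triggered when the distance exceeds the configured value:

    import math

    def l_infinity(p, q):
        # Categorical features: max absolute difference across categories.
        cats = set(p) | set(q)
        return max(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in cats)

    def js_divergence(p, q):
        # Numerical features (as binned probabilities): Jensen-Shannon divergence.
        def kl(a, b):
            return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
        m = [(x + y) / 2 for x, y in zip(p, q)]
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    threshold = {"value": 0.25}           # non-zero value enables monitoring for the feature
    baseline = {"red": 0.5, "blue": 0.5}  # hypothetical category frequencies
    current = {"red": 0.8, "blue": 0.2}
    alert = l_infinity(baseline, current) > threshold["value"]  # 0.3 > 0.25 -> True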

TimeSeriesData

All the data stored in a TensorboardTimeSeries.

Fields
tensorboard_time_series_id

string

Required. The ID of the TensorboardTimeSeries, which will become the final component of the TensorboardTimeSeries' resource name.

value_type

ValueType

Required. Immutable. The value type of this time series. All the values in this time series data must match this value type.

values[]

TimeSeriesDataPoint

Required. Data points in this time series.

TimeSeriesDataPoint

A TensorboardTimeSeries data point.

Fields
wall_time

Timestamp

Wall clock timestamp when this data point is generated by the end user.

step

int64

Step index of this data point within the run.

Union field value. Value of this time series data point. value can be only one of the following:
scalar

Scalar

A scalar value.

tensor

TensorboardTensor

A tensor value.

blobs

TensorboardBlobSequence

A blob sequence value.
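
A minimal sketch of a scalar TimeSeriesData payload; the Scalar message is assumed here to carry a single double "value" field, and the ID is hypothetical:

    time_series_data = {
        "tensorboard_time_series_id": "5678901234",
        "value_type": "SCALAR",
        "values": [
            {"step": 1, "wall_time": "2024-01-01T00:00:00Z", "scalar": {"value": 0.91}},
            {"step": 2, "wall_time": "2024-01-01T00:01:00Z", "scalar": {"value": 0.93}},
        ],
    }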

TimestampSplit

Assigns input data to training, validation, and test sets based on provided timestamps. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set.

Supported only for tabular Datasets.

Fields
training_fraction

double

The fraction of the input data that is to be used to train the Model.

validation_fraction

double

The fraction of the input data that is to be used to validate the Model.

test_fraction

double

The fraction of the input data that is to be used to evaluate the Model.

key

string

Required. The key is a name of one of the Dataset's data columns. The values of the key (the values in the column) must be in RFC 3339 date-time format, where time-offset = "Z" (e.g. 1985-04-12T23:20:50.52Z). If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline.
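
For example, an 80/10/10 split keyed on a hypothetical event_time column would look like:

    timestamp_split = {
        "training_fraction": 0.8,
        "validation_fraction": 0.1,
        "test_fraction": 0.1,
        # Column values must be RFC 3339 date-times with a "Z" offset,
        # e.g. "1985-04-12T23:20:50.52Z".
        "key": "event_time",
    }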

TokensInfo

Tokens info with a list of tokens and the corresponding list of token ids.

Fields
tokens[]

bytes

A list of tokens from the input.

token_ids[]

int64

A list of token ids from the input.

role

string

Optional. The role from the corresponding Content, if present.

Tool

Tool details that the model may use to generate a response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g., FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).

Fields
function_declarations[]

FunctionDeclaration

Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. The model may decide to call a subset of these functions by populating FunctionCall in the response. The user should provide a FunctionResponse for each function call in the next turn. Based on the function responses, the model will generate the final response back to the user. A maximum of 128 function declarations can be provided.

retrieval

Retrieval

Optional. Retrieval tool type. The system will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.

google_search_retrieval

GoogleSearchRetrieval

Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.

code_execution

CodeExecution

Optional. CodeExecution tool type. Enables the model to execute code as part of generation. This field is only used by the Gemini Developer API services.

CodeExecution

This type has no fields.

Tool that executes code generated by the model, and automatically returns the result to the model.

See also ExecutableCode and CodeExecutionResult, which are the input and output to this tool.

GoogleSearch

This type has no fields.

GoogleSearch tool type. Tool to support Google Search in Model. Powered by Google.
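
A minimal sketch of a Tool that carries exactly one tool type, here a single function declaration. The FunctionDeclaration field names (name, description, parameters) and the OBJECT/STRING schema types are assumed from the wider API rather than defined in this section; the function itself is hypothetical:

    weather_tool = {
        "function_declarations": [
            {
                "name": "get_current_weather",
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "OBJECT",
                    "properties": {"city": {"type": "STRING"}},
                    "required": ["city"],
                },
            }
        ],
        # Exactly one of function_declarations, retrieval,
        # google_search_retrieval, or code_execution should be set.
    }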

ToolCall

Spec for tool call.

Fields
tool_name

string

Required. Spec for the tool name.

tool_input

string

Optional. Spec for the tool input.

ToolCallValidInput

Input for tool call valid metric.

Fields
metric_spec

ToolCallValidSpec

Required. Spec for tool call valid metric.

instances[]

ToolCallValidInstance

Required. Repeated tool call valid instances.

ToolCallValidInstance

Spec for tool call valid instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Required. Ground truth used to compare against the prediction.

ToolCallValidMetricValue

Tool call valid metric value for an instance.

Fields
score

float

Output only. Tool call valid score.

ToolCallValidResults

Results for tool call valid metric.

Fields
tool_call_valid_metric_values[]

ToolCallValidMetricValue

Output only. Tool call valid metric values.

ToolCallValidSpec

This type has no fields.

Spec for tool call valid metric.

ToolConfig

Tool config. This config is shared for all tools provided in the request.

Fields
function_calling_config

FunctionCallingConfig

Optional. Function calling config.

ToolNameMatchInput

Input for tool name match metric.

Fields
metric_spec

ToolNameMatchSpec

Required. Spec for tool name match metric.

instances[]

ToolNameMatchInstance

Required. Repeated tool name match instances.

ToolNameMatchInstance

Spec for tool name match instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Required. Ground truth used to compare against the prediction.

ToolNameMatchMetricValue

Tool name match metric value for an instance.

Fields
score

float

Output only. Tool name match score.

ToolNameMatchResults

Results for tool name match metric.

Fields
tool_name_match_metric_values[]

ToolNameMatchMetricValue

Output only. Tool name match metric values.

ToolNameMatchSpec

This type has no fields.

Spec for tool name match metric.

ToolParameterKVMatchInput

Input for tool parameter key value match metric.

Fields
metric_spec

ToolParameterKVMatchSpec

Required. Spec for tool parameter key value match metric.

instances[]

ToolParameterKVMatchInstance

Required. Repeated tool parameter key value match instances.

ToolParameterKVMatchInstance

Spec for tool parameter key value match instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Required. Ground truth used to compare against the prediction.

ToolParameterKVMatchMetricValue

Tool parameter key value match metric value for an instance.

Fields
score

float

Output only. Tool parameter key value match score.

ToolParameterKVMatchResults

Results for tool parameter key value match metric.

Fields
tool_parameter_kv_match_metric_values[]

ToolParameterKVMatchMetricValue

Output only. Tool parameter key value match metric values.

ToolParameterKVMatchSpec

Spec for tool parameter key value match metric.

Fields
use_strict_string_match

bool

Optional. Whether to use STRICT string match on parameter values.

ToolParameterKeyMatchInput

Input for tool parameter key match metric.

Fields
metric_spec

ToolParameterKeyMatchSpec

Required. Spec for tool parameter key match metric.

instances[]

ToolParameterKeyMatchInstance

Required. Repeated tool parameter key match instances.

ToolParameterKeyMatchInstance

Spec for tool parameter key match instance.

Fields
prediction

string

Required. Output of the evaluated model.

reference

string

Required. Ground truth used to compare against the prediction.

ToolParameterKeyMatchMetricValue

Tool parameter key match metric value for an instance.

Fields
score

float

Output only. Tool parameter key match score.

ToolParameterKeyMatchResults

Results for tool parameter key match metric.

Fields
tool_parameter_key_match_metric_values[]

ToolParameterKeyMatchMetricValue

Output only. Tool parameter key match metric values.

ToolParameterKeyMatchSpec

This type has no fields.

Spec for tool parameter key match metric.

ToolUseExample

A single example of the tool usage.

Fields
display_name

string

Required. The display name for this example.

query

string

Required. Query that should be routed to this tool.

request_params

Struct

Request parameters used for executing this tool.

response_params

Struct

Response parameters generated by this tool.

response_summary

string

Summary of the tool response to the user query.

Union field Target. Target tool to use. Target can be only one of the following:
extension_operation

ExtensionOperation

Extension operation to call.

function_name

string

Function name to call.

ExtensionOperation

Identifies one operation of the extension.

Fields
extension

string

Resource name of the extension.

operation_id

string

Required. Operation ID of the extension.

TrainingPipeline

The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from Vertex AI's Dataset which becomes the training input, upload the Model to Vertex AI, and evaluate the Model.

Fields
name

string

Output only. Resource name of the TrainingPipeline.

display_name

string

Required. The user-defined name of this TrainingPipeline.

input_data_config

InputDataConfig

Specifies Vertex AI owned input data that may be used for training the Model. The TrainingPipeline's training_task_definition should make clear whether this config is used and if there are any special requirements on how it should be filled. If nothing about this config is mentioned in the training_task_definition, then it should be assumed that the TrainingPipeline does not depend on this configuration.

training_task_definition

string

Required. A Google Cloud Storage path to the YAML file that defines the training task which is responsible for producing the model artifact, and may also include additional auxiliary work. The definition files that can be used here are found in gs://google-cloud-aiplatform/schema/trainingjob/definition/. Note: the URI given on output will be immutable and probably different from the one given on input, including the URI scheme. The output URI will point to a location where the user only has read access.

training_task_inputs

Value

Required. The training task's parameter(s), as specified in the training_task_definition's inputs.

training_task_metadata

Value

Output only. The metadata information as specified in the training_task_definition's metadata. This metadata is auxiliary runtime and final information about the training task. While the pipeline is running, this information is populated only on a best-effort basis. Only present if the pipeline's training_task_definition contains a metadata object.

model_to_upload

Model

Describes the Model that may be uploaded (via ModelService.UploadModel) by this TrainingPipeline. The TrainingPipeline's training_task_definition should make clear whether this Model description should be populated, and if there are any special requirements regarding how it should be filled. If nothing is mentioned in the training_task_definition, then it should be assumed that this field should not be filled and the training task either uploads the Model without a need of this information, or that training task does not support uploading a Model as part of the pipeline. When the Pipeline's state becomes PIPELINE_STATE_SUCCEEDED and the trained Model has been uploaded into Vertex AI, the model_to_upload's resource name is populated. The Model is always uploaded into the Project and Location in which this pipeline runs.

model_id

string

Optional. The ID to use for the uploaded Model, which will become the final component of the model resource name.

This value may be up to 63 characters, and valid characters are [a-z0-9_-]. The first character cannot be a number or hyphen.

parent_model

string

Optional. When this field is specified, the model_to_upload will not be uploaded as a new model; instead, it will become a new version of this parent_model.

state

PipelineState

Output only. The detailed state of the pipeline.

error

Status

Output only. Only populated when the pipeline's state is PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED.

create_time

Timestamp

Output only. Time when the TrainingPipeline was created.

start_time

Timestamp

Output only. Time when the TrainingPipeline for the first time entered the PIPELINE_STATE_RUNNING state.

end_time

Timestamp

Output only. Time when the TrainingPipeline entered any of the following states: PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED, PIPELINE_STATE_CANCELLED.

update_time

Timestamp

Output only. Time when the TrainingPipeline was most recently updated.

labels

map<string, string>

The labels with user-defined metadata to organize TrainingPipelines.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

encryption_spec

EncryptionSpec

Customer-managed encryption key spec for a TrainingPipeline. If set, this TrainingPipeline will be secured by this key.

Note: Model trained by this TrainingPipeline is also secured by this key if model_to_upload is not set separately.

Trajectory

Spec for trajectory.

Fields
tool_calls[]

ToolCall

Required. Tool calls in the trajectory.

TrajectoryAnyOrderMatchInput

Instances and metric spec for TrajectoryAnyOrderMatch metric.

Fields
metric_spec

TrajectoryAnyOrderMatchSpec

Required. Spec for TrajectoryAnyOrderMatch metric.

instances[]

TrajectoryAnyOrderMatchInstance

Required. Repeated TrajectoryAnyOrderMatch instances.

TrajectoryAnyOrderMatchInstance

Spec for TrajectoryAnyOrderMatch instance.

Fields
predicted_trajectory

Trajectory

Required. Spec for predicted tool call trajectory.

reference_trajectory

Trajectory

Required. Spec for reference tool call trajectory.

TrajectoryAnyOrderMatchMetricValue

TrajectoryAnyOrderMatch metric value for an instance.

Fields
score

float

Output only. TrajectoryAnyOrderMatch score.

TrajectoryAnyOrderMatchResults

Results for TrajectoryAnyOrderMatch metric.

Fields
trajectory_any_order_match_metric_values[]

TrajectoryAnyOrderMatchMetricValue

Output only. TrajectoryAnyOrderMatch metric values.

TrajectoryAnyOrderMatchSpec

This type has no fields.

Spec for TrajectoryAnyOrderMatch metric - returns 1 if all tool calls in the reference trajectory appear in the predicted trajectory in any order, else 0.

TrajectoryExactMatchInput

Instances and metric spec for TrajectoryExactMatch metric.

Fields
metric_spec

TrajectoryExactMatchSpec

Required. Spec for TrajectoryExactMatch metric.

instances[]

TrajectoryExactMatchInstance

Required. Repeated TrajectoryExactMatch instances.

TrajectoryExactMatchInstance

Spec for TrajectoryExactMatch instance.

Fields
predicted_trajectory

Trajectory

Required. Spec for predicted tool call trajectory.

reference_trajectory

Trajectory

Required. Spec for reference tool call trajectory.

TrajectoryExactMatchMetricValue

TrajectoryExactMatch metric value for an instance.

Fields
score

float

Output only. TrajectoryExactMatch score.

TrajectoryExactMatchResults

Results for TrajectoryExactMatch metric.

Fields
trajectory_exact_match_metric_values[]

TrajectoryExactMatchMetricValue

Output only. TrajectoryExactMatch metric values.

TrajectoryExactMatchSpec

This type has no fields.

Spec for TrajectoryExactMatch metric - returns 1 if tool calls in the reference trajectory exactly match the predicted trajectory, else 0.

TrajectoryInOrderMatchInput

Instances and metric spec for TrajectoryInOrderMatch metric.

Fields
metric_spec

TrajectoryInOrderMatchSpec

Required. Spec for TrajectoryInOrderMatch metric.

instances[]

TrajectoryInOrderMatchInstance

Required. Repeated TrajectoryInOrderMatch instances.

TrajectoryInOrderMatchInstance

Spec for TrajectoryInOrderMatch instance.

Fields
predicted_trajectory

Trajectory

Required. Spec for predicted tool call trajectory.

reference_trajectory

Trajectory

Required. Spec for reference tool call trajectory.

TrajectoryInOrderMatchMetricValue

TrajectoryInOrderMatch metric value for an instance.

Fields
score

float

Output only. TrajectoryInOrderMatch score.

TrajectoryInOrderMatchResults

Results for TrajectoryInOrderMatch metric.

Fields
trajectory_in_order_match_metric_values[]

TrajectoryInOrderMatchMetricValue

Output only. TrajectoryInOrderMatch metric values.

TrajectoryInOrderMatchSpec

This type has no fields.

Spec for TrajectoryInOrderMatch metric - returns 1 if tool calls in the reference trajectory appear in the predicted trajectory in the same order, else 0.

TrajectoryPrecisionInput

Instances and metric spec for TrajectoryPrecision metric.

Fields
metric_spec

TrajectoryPrecisionSpec

Required. Spec for TrajectoryPrecision metric.

instances[]

TrajectoryPrecisionInstance

Required. Repeated TrajectoryPrecision instances.

TrajectoryPrecisionInstance

Spec for TrajectoryPrecision instance.

Fields
predicted_trajectory

Trajectory

Required. Spec for predicted tool call trajectory.

reference_trajectory

Trajectory

Required. Spec for reference tool call trajectory.

TrajectoryPrecisionMetricValue

TrajectoryPrecision metric value for an instance.

Fields
score

float

Output only. TrajectoryPrecision score.

TrajectoryPrecisionResults

Results for TrajectoryPrecision metric.

Fields
trajectory_precision_metric_values[]

TrajectoryPrecisionMetricValue

Output only. TrajectoryPrecision metric values.

TrajectoryPrecisionSpec

This type has no fields.

Spec for TrajectoryPrecision metric - returns a float score based on average precision of individual tool calls.

TrajectoryRecallInput

Instances and metric spec for TrajectoryRecall metric.

Fields
metric_spec

TrajectoryRecallSpec

Required. Spec for TrajectoryRecall metric.

instances[]

TrajectoryRecallInstance

Required. Repeated TrajectoryRecall instances.

TrajectoryRecallInstance

Spec for TrajectoryRecall instance.

Fields
predicted_trajectory

Trajectory

Required. Spec for predicted tool call trajectory.

reference_trajectory

Trajectory

Required. Spec for reference tool call trajectory.

TrajectoryRecallMetricValue

TrajectoryRecall metric value for an instance.

Fields
score

float

Output only. TrajectoryRecall score.

TrajectoryRecallResults

Results for TrajectoryRecall metric.

Fields
trajectory_recall_metric_values[]

TrajectoryRecallMetricValue

Output only. TrajectoryRecall metric values.

TrajectoryRecallSpec

This type has no fields.

Spec for TrajectoryRecall metric - returns a float score based on average recall of individual tool calls.

TrajectorySingleToolUseInput

Instances and metric spec for TrajectorySingleToolUse metric.

Fields
metric_spec

TrajectorySingleToolUseSpec

Required. Spec for TrajectorySingleToolUse metric.

instances[]

TrajectorySingleToolUseInstance

Required. Repeated TrajectorySingleToolUse instances.

TrajectorySingleToolUseInstance

Spec for TrajectorySingleToolUse instance.

Fields
predicted_trajectory

Trajectory

Required. Spec for predicted tool call trajectory.

TrajectorySingleToolUseMetricValue

TrajectorySingleToolUse metric value for an instance.

Fields
score

float

Output only. TrajectorySingleToolUse score.

TrajectorySingleToolUseResults

Results for TrajectorySingleToolUse metric.

Fields
trajectory_single_tool_use_metric_values[]

TrajectorySingleToolUseMetricValue

Output only. TrajectorySingleToolUse metric values.

TrajectorySingleToolUseSpec

Spec for TrajectorySingleToolUse metric - returns 1 if the tool is present in the predicted trajectory, else 0.

Fields
tool_name

string

Required. Spec for tool name to be checked for in the predicted trajectory.
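
To make the Trajectory* metric definitions above concrete, here is a simplified Python rendering of their semantics; tool calls are compared as (tool_name, tool_input) pairs, and the service's exact matching rules may differ:

    def calls(traj):
        return [(c["tool_name"], c.get("tool_input", "")) for c in traj["tool_calls"]]

    def exact_match(pred, ref):
        # 1 if the predicted calls equal the reference calls, in order.
        return 1.0 if calls(pred) == calls(ref) else 0.0

    def any_order_match(pred, ref):
        # 1 if every reference call appears somewhere in the prediction.
        p = calls(pred)
        return 1.0 if all(c in p for c in calls(ref)) else 0.0

    def in_order_match(pred, ref):
        # 1 if the reference calls appear in the prediction as a subsequence.
        it = iter(calls(pred))
        return 1.0 if all(c in it for c in calls(ref)) else 0.0

    def precision(pred, ref):
        # Fraction of predicted calls that also occur in the reference.
        p, r = calls(pred), calls(ref)
        return sum(c in r for c in p) / len(p) if p else 0.0

    def recall(pred, ref):
        # Fraction of reference calls that also occur in the prediction.
        p, r = calls(pred), calls(ref)
        return sum(c in p for c in r) / len(r) if r else 0.0

    def single_tool_use(pred, tool_name):
        # 1 if the named tool appears anywhere in the predicted trajectory.
        return 1.0 if any(n == tool_name for n, _ in calls(pred)) else 0.0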

Trial

A message representing a Trial. A Trial contains a unique set of Parameters that has been or will be evaluated, along with the objective metrics obtained by running the Trial.

Fields
name

string

Output only. Resource name of the Trial assigned by the service.

id

string

Output only. The identifier of the Trial assigned by the service.

state

State

Output only. The detailed state of the Trial.

parameters[]

Parameter

Output only. The parameters of the Trial.

final_measurement

Measurement

Output only. The final measurement containing the objective value.

measurements[]

Measurement

Output only. A list of measurements that are strictly lexicographically ordered by their induced tuples (steps, elapsed_duration). These are used for early stopping computations.

start_time

Timestamp

Output only. Time when the Trial was started.

end_time

Timestamp

Output only. Time when the Trial's status changed to SUCCEEDED or INFEASIBLE.

client_id

string

Output only. The identifier of the client that originally requested this Trial. Each client is identified by a unique client_id. When a client asks for a suggestion, Vertex AI Vizier will assign it a Trial. The client should evaluate the Trial, complete it, and report back to Vertex AI Vizier. If a suggestion is requested again by the same client_id before the Trial is completed, the same Trial will be returned. Multiple clients with different client_ids can ask for suggestions simultaneously; each of them will get its own Trial.

infeasible_reason

string

Output only. A human readable string describing why the Trial is infeasible. This is set only if Trial state is INFEASIBLE.

custom_job

string

Output only. The CustomJob name linked to the Trial. It's set for a HyperparameterTuningJob's Trial.

web_access_uris

map<string, string>

Output only. URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a HyperparameterTuningJob and the job's trial_job_spec.enable_web_access field is true.

The keys are names of each node used for the trial; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool.

The values are the URIs for each node's interactive shell.

Parameter

A message representing a parameter to be tuned.

Fields
parameter_id

string

Output only. The ID of the parameter. The parameter should be defined in StudySpec's Parameters.

value

Value

Output only. The value of the parameter. number_value will be set if a parameter defined in StudySpec is of type 'INTEGER', 'DOUBLE' or 'DISCRETE'. string_value will be set if a parameter defined in StudySpec is of type 'CATEGORICAL'.

State

Describes a Trial state.

Enums
STATE_UNSPECIFIED The Trial state is unspecified.
REQUESTED Indicates that a specific Trial has been requested, but it has not yet been suggested by the service.
ACTIVE Indicates that the Trial has been suggested.
STOPPING Indicates that the Trial should stop according to the service.
SUCCEEDED Indicates that the Trial is completed successfully.
INFEASIBLE Indicates that the Trial should not be attempted again. The service will set a Trial to INFEASIBLE when it's done but missing the final_measurement.

TrialContext

Fields
description

string

A human-readable field which can store a description of this context. This will become part of the resulting Trial's description field.

parameters[]

Parameter

If/when a Trial is generated or selected from this Context, its Parameters will match any parameters specified here. (I.e. if this context specifies parameter name:'a' int_value:3, then a resulting Trial will have int_value:3 for its parameter named 'a'.) Note that we first attempt to match existing REQUESTED Trials with contexts, and if there are no matches, we generate suggestions in the subspace defined by the parameters specified here. NOTE: a Context without any Parameters matches the entire feasible search space.

TunedModel

The Model Registry Model and Online Prediction Endpoint associated with this TuningJob.

Fields
model

string

Output only. The resource name of the TunedModel. Format: projects/{project}/locations/{location}/models/{model}.

endpoint

string

Output only. A resource name of an Endpoint. Format: projects/{project}/locations/{location}/endpoints/{endpoint}.

TunedModelRef

TunedModel Reference for legacy model migration.

Fields
Union field tuned_model_ref. The Tuned Model Reference for the model. tuned_model_ref can be only one of the following:
tuned_model

string

Supports migration from the Model Registry.

tuning_job

string

Supports migration from the tuning job list page, from gemini-1.0-pro-002 to 1.5 and above.

pipeline_job

string

Supports migration from the tuning job list page, from a bison model to a gemini model.

TuningDataStats

The tuning data statistic values for TuningJob.

Fields

Union field tuning_data_stats.

tuning_data_stats can be only one of the following:

supervised_tuning_data_stats

SupervisedTuningDataStats

The SFT Tuning data stats.

distillation_data_stats

DistillationDataStats

Output only. Statistics for distillation.

TuningJob

Represents a TuningJob that runs with Google-owned models.

Fields
name

string

Output only. Identifier. Resource name of a TuningJob. Format: projects/{project}/locations/{location}/tuningJobs/{tuning_job}

tuned_model_display_name

string

Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters.

description

string

Optional. The description of the TuningJob.

state

JobState

Output only. The detailed state of the job.

create_time

Timestamp

Output only. Time when the TuningJob was created.

start_time

Timestamp

Output only. Time when the TuningJob for the first time entered the JOB_STATE_RUNNING state.

end_time

Timestamp

Output only. Time when the TuningJob entered any of the following JobStates: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED, JOB_STATE_EXPIRED.

update_time

Timestamp

Output only. Time when the TuningJob was most recently updated.

error

Status

Output only. Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

labels

map<string, string>

Optional. The labels with user-defined metadata to organize TuningJob and generated resources such as Model and Endpoint.

Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed.

See https://goo.gl/xmQnxf for more information and examples of labels.

experiment

string

Output only. The Experiment associated with this TuningJob.

tuned_model

TunedModel

Output only. The tuned model resources associated with this TuningJob.

tuning_data_stats

TuningDataStats

Output only. The tuning data statistics associated with this TuningJob.

pipeline_job
(deprecated)

string

Output only. The resource name of the PipelineJob associated with the TuningJob. Format: projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}.

encryption_spec

EncryptionSpec

Customer-managed encryption key options for a TuningJob. If this is set, then all resources created by the TuningJob will be encrypted with the provided encryption key.

service_account

string

The service account that the tuningJob workload runs as. If not specified, the Vertex AI Secure Fine-Tuned Service Agent in the project will be used. See https://cloud.google.com/iam/docs/service-agents#vertex-ai-secure-fine-tuning-service-agent

Users starting the pipeline must have the iam.serviceAccounts.actAs permission on this service account.

Union field source_model.

source_model can be only one of the following:

base_model

string

The base model that is being tuned, e.g., "gemini-1.0-pro-002".

Union field tuning_spec.

tuning_spec can be only one of the following:

supervised_tuning_spec

SupervisedTuningSpec

Tuning Spec for Supervised Fine Tuning.

distillation_spec

DistillationSpec

Tuning Spec for Distillation.

partner_model_tuning_spec

PartnerModelTuningSpec

Tuning Spec for open-source and third-party partner models.
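
A minimal sketch of a TuningJob body that sets one member of each union (display name and path are hypothetical):

    tuning_job = {
        "base_model": "gemini-1.0-pro-002",            # source_model union member
        "tuned_model_display_name": "my-tuned-model",
        "supervised_tuning_spec": {                    # tuning_spec union member
            "training_dataset_uri": "gs://my-bucket/train.jsonl",
        },
    }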

Type

Type contains the list of OpenAPI data types as defined by https://swagger.io/docs/specification/data-models/data-types/

Enums
TYPE_UNSPECIFIED Not specified, should not be used.
STRING OpenAPI string type
NUMBER OpenAPI number type
INTEGER OpenAPI integer type
BOOLEAN OpenAPI boolean type
ARRAY OpenAPI array type
OBJECT OpenAPI object type

UndeployIndexOperationMetadata

Runtime operation information for IndexEndpointService.UndeployIndex.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

UndeployIndexRequest

Request message for IndexEndpointService.UndeployIndex.

Fields
index_endpoint

string

Required. The name of the IndexEndpoint resource from which to undeploy an Index. Format: projects/{project}/locations/{location}/indexEndpoints/{index_endpoint}

deployed_index_id

string

Required. The ID of the DeployedIndex to be undeployed from the IndexEndpoint.

UndeployIndexResponse

This type has no fields.

Response message for IndexEndpointService.UndeployIndex.

UndeployModelOperationMetadata

Runtime operation information for EndpointService.UndeployModel.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

UndeployModelRequest

Request message for EndpointService.UndeployModel.

Fields
endpoint

string

Required. The name of the Endpoint resource from which to undeploy a Model. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

deployed_model_id

string

Required. The ID of the DeployedModel to be undeployed from the Endpoint.

traffic_split

map<string, int32>

If this field is provided, then the Endpoint's traffic_split will be overwritten with it. If the last DeployedModel is being undeployed from the Endpoint, the Endpoint.traffic_split will always end up empty when this call returns. A DeployedModel can be successfully undeployed only if it has no traffic assigned to it when this method executes, or if this field unassigns all traffic from it.
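
For example (resource names hypothetical), undeploying model "a" from an endpoint that currently splits traffic 60/40 with model "b" requires reassigning all traffic so that "a" carries none when the undeploy executes:

    undeploy_request = {
        "endpoint": "projects/my-project/locations/us-central1/endpoints/123",
        "deployed_model_id": "a",
        "traffic_split": {"b": 100},  # "a" must end up with zero traffic
    }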

UndeployModelResponse

This type has no fields.

Response message for EndpointService.UndeployModel.

UndeploySolverOperationMetadata

Runtime operation information for SolverService.UndeploySolver.

Fields
generic_metadata

GenericOperationMetadata

The generic operation information.

UnmanagedContainerModel

Contains model information necessary to perform batch prediction without requiring a full model import.

Fields
artifact_uri

string

The path to the directory containing the Model artifact and any of its supporting files.

predict_schemata

PredictSchemata

Contains the schemata used in the Model's predictions and explanations.

container_spec

ModelContainerSpec

Input only. The specification of the container that is to be used when deploying this Model.

UpdateArtifactRequest

Request message for MetadataService.UpdateArtifact.

Fields
artifact

Artifact

Required. The Artifact containing updates. The Artifact's Artifact.name field is used to identify the Artifact to be updated. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/artifacts/{artifact}

update_mask

FieldMask

Optional. A FieldMask indicating which fields should be updated.

allow_missing

bool

If set to true, and the Artifact is not found, a new Artifact is created.

UpdateCachedContentRequest

Request message for GenAiCacheService.UpdateCachedContent. Only expire_time or ttl can be updated.

Fields
cached_content

CachedContent

Required. The cached content to update.

update_mask

FieldMask

Required. The list of fields to update.

UpdateContextRequest

Request message for MetadataService.UpdateContext.

Fields
context

Context

Required. The Context containing updates. The Context's Context.name field is used to identify the Context to be updated. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/contexts/{context}

update_mask

FieldMask

Optional. A FieldMask indicating which fields should be updated.

allow_missing

bool

If set to true, and the Context is not found, a new Context is created.

UpdateDatasetRequest

Request message for DatasetService.UpdateDataset.

Fields
dataset

Dataset

Required. The Dataset which replaces the resource on the server.

update_mask

FieldMask

Required. The update mask applies to the resource. For the FieldMask definition, see google.protobuf.FieldMask. Updatable fields:

  • display_name
  • description
  • labels

UpdateDatasetVersionRequest

Request message for DatasetService.UpdateDatasetVersion.

Fields
dataset_version

DatasetVersion

Required. The DatasetVersion which replaces the resource on the server.

update_mask

FieldMask

Required. The update mask applies to the resource. For the FieldMask definition, see google.protobuf.FieldMask. Updatable fields:

  • display_name

UpdateDeploymentResourcePoolOperationMetadata

Runtime operation information for UpdateDeploymentResourcePool method.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

UpdateDeploymentResourcePoolRequest

Request message for UpdateDeploymentResourcePool method.

Fields
deployment_resource_pool

DeploymentResourcePool

Required. The DeploymentResourcePool to update.

The DeploymentResourcePool's name field is used to identify the DeploymentResourcePool to update. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

update_mask

FieldMask

Required. The list of fields to update.

UpdateEndpointLongRunningRequest

Request message for EndpointService.UpdateEndpointLongRunning.

Fields
endpoint

Endpoint

Required. The Endpoint which replaces the resource on the server. Currently, only the client_connection_config field can be updated; updates to all other fields are blocked.

UpdateEndpointOperationMetadata

Runtime operation information for EndpointService.UpdateEndpointLongRunning.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

UpdateEndpointRequest

Request message for EndpointService.UpdateEndpoint.

Fields
endpoint

Endpoint

Required. The Endpoint which replaces the resource on the server.

update_mask

FieldMask

Required. The update mask applies to the resource. See google.protobuf.FieldMask.

UpdateEntityTypeRequest

Request message for FeaturestoreService.UpdateEntityType.

Fields
entity_type

EntityType

Required. The EntityType's name field is used to identify the EntityType to be updated. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}

update_mask

FieldMask

Field mask is used to specify the fields to be overwritten in the EntityType resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to * to override all fields.

Updatable fields:

  • description
  • labels
  • monitoring_config.snapshot_analysis.disabled
  • monitoring_config.snapshot_analysis.monitoring_interval_days
  • monitoring_config.snapshot_analysis.staleness_days
  • monitoring_config.import_features_analysis.state
  • monitoring_config.import_features_analysis.anomaly_detection_baseline
  • monitoring_config.numerical_threshold_config.value
  • monitoring_config.categorical_threshold_config.value
  • offline_storage_ttl_days
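
A minimal sketch of the field-mask semantics described above (resource name hypothetical); only the masked fields are overwritten, and setting the mask to * overwrites everything:

    update_entity_type_request = {
        "entity_type": {
            "name": "projects/my-project/locations/us-central1/"
                    "featurestores/my-fs/entityTypes/users",
            "description": "User entities",
            "labels": {"team": "ml"},
        },
        # A FieldMask rendered as a comma-separated list of paths.
        "update_mask": "description,labels",
    }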

UpdateExecutionRequest

Request message for MetadataService.UpdateExecution.

Fields
execution

Execution

Required. The Execution containing updates. The Execution's Execution.name field is used to identify the Execution to be updated. Format: projects/{project}/locations/{location}/metadataStores/{metadatastore}/executions/{execution}

update_mask

FieldMask

Optional. A FieldMask indicating which fields should be updated.

allow_missing

bool

If set to true, and the Execution is not found, a new Execution is created.

UpdateExplanationDatasetOperationMetadata

Runtime operation information for ModelService.UpdateExplanationDataset.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

UpdateExplanationDatasetRequest

Request message for ModelService.UpdateExplanationDataset.

Fields
model

string

Required. The resource name of the Model to update. Format: projects/{project}/locations/{location}/models/{model}

examples

Examples

The example config containing the location of the dataset.

UpdateExplanationDatasetResponse

This type has no fields.

Response message of ModelService.UpdateExplanationDataset operation.

UpdateExtensionRequest

Request message for ExtensionRegistryService.UpdateExtension.

Fields
extension

Extension

Required. The Extension which replaces the resource on the server.

update_mask

FieldMask

Required. Mask specifying which fields to update. Supported fields:

  • display_name
  • description
  • runtime_config
  • tool_use_examples
  • manifest.description

UpdateFeatureGroupOperationMetadata

Details of operations that perform update FeatureGroup.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for FeatureGroup.

UpdateFeatureGroupRequest

Request message for FeatureRegistryService.UpdateFeatureGroup.

Fields
feature_group

FeatureGroup

Required. The FeatureGroup's name field is used to identify the FeatureGroup to be updated. Format: projects/{project}/locations/{location}/featureGroups/{feature_group}

update_mask

FieldMask

Field mask is used to specify the fields to be overwritten in the FeatureGroup resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to * to override all fields.

Updatable fields:

  • labels
  • description
  • big_query
  • big_query.entity_id_columns

UpdateFeatureOnlineStoreOperationMetadata

Details of operations that perform update FeatureOnlineStore.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for FeatureOnlineStore.

UpdateFeatureOnlineStoreRequest

Request message for FeatureOnlineStoreAdminService.UpdateFeatureOnlineStore.

Fields
feature_online_store

FeatureOnlineStore

Required. The FeatureOnlineStore's name field is used to identify the FeatureOnlineStore to be updated. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}

update_mask

FieldMask

Field mask is used to specify the fields to be overwritten in the FeatureOnlineStore resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to * to override all fields.

Updatable fields:

  • labels
  • description
  • bigtable
  • bigtable.auto_scaling
  • bigtable.enable_multi_region_replica

UpdateFeatureOperationMetadata

Details of operations that perform update Feature.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Feature Update.

UpdateFeatureRequest

Request message for FeaturestoreService.UpdateFeature and FeatureRegistryService.UpdateFeature.

Fields
feature

Feature

Required. The Feature's name field is used to identify the Feature to be updated. Format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entity_type}/features/{feature} or projects/{project}/locations/{location}/featureGroups/{feature_group}/features/{feature}

update_mask

FieldMask

Field mask is used to specify the fields to be overwritten in the Features resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to * to override all fields.

Updatable fields:

  • description
  • labels
  • disable_monitoring (Not supported for FeatureRegistryService Feature)
  • point_of_contact (Not supported for FeaturestoreService FeatureStore)

UpdateFeatureViewOperationMetadata

Details of operations that perform update FeatureView.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for FeatureView Update.

UpdateFeatureViewRequest

Request message for FeatureOnlineStoreAdminService.UpdateFeatureView.

Fields
feature_view

FeatureView

Required. The FeatureView's name field is used to identify the FeatureView to be updated. Format: projects/{project}/locations/{location}/featureOnlineStores/{feature_online_store}/featureViews/{feature_view}

update_mask

FieldMask

Field mask is used to specify the fields to be overwritten in the FeatureView resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to * to override all fields.

Updatable fields:

  • labels
  • service_agent_type
  • big_query_source
  • big_query_source.uri
  • big_query_source.entity_id_columns
  • feature_registry_source
  • feature_registry_source.feature_groups
  • sync_config
  • sync_config.cron
  • optimized_config.automatic_resources

UpdateFeaturestoreOperationMetadata

Details of operations that perform update Featurestore.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Featurestore.

UpdateFeaturestoreRequest

Request message for FeaturestoreService.UpdateFeaturestore.

Fields
featurestore

Featurestore

Required. The Featurestore's name field is used to identify the Featurestore to be updated. Format: projects/{project}/locations/{location}/featurestores/{featurestore}

update_mask

FieldMask

Field mask is used to specify the fields to be overwritten in the Featurestore resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to * to override all fields.

Updatable fields:

  • labels
  • online_serving_config.fixed_node_count
  • online_serving_config.scaling
  • online_storage_ttl_days

UpdateIndexEndpointRequest

Request message for IndexEndpointService.UpdateIndexEndpoint.

Fields
index_endpoint

IndexEndpoint

Required. The IndexEndpoint which replaces the resource on the server.

update_mask

FieldMask

Required. The update mask applies to the resource. See google.protobuf.FieldMask.

UpdateIndexOperationMetadata

Runtime operation information for IndexService.UpdateIndex.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

nearest_neighbor_search_operation_metadata

NearestNeighborSearchOperationMetadata

The operation metadata with regard to Matching Engine Index operation.

UpdateIndexRequest

Request message for IndexService.UpdateIndex.

Fields
index

Index

Required. The Index which updates the resource on the server.

update_mask

FieldMask

The update mask applies to the resource. For the FieldMask definition, see google.protobuf.FieldMask.

UpdateModelDeploymentMonitoringJobOperationMetadata

Runtime operation information for JobService.UpdateModelDeploymentMonitoringJob.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

UpdateModelDeploymentMonitoringJobRequest

Request message for JobService.UpdateModelDeploymentMonitoringJob.

Fields
model_deployment_monitoring_job

ModelDeploymentMonitoringJob

Required. The model monitoring configuration which replaces the resource on the server.

update_mask

FieldMask

Required. The update mask is used to specify the fields to be overwritten in the ModelDeploymentMonitoringJob resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field will be overwritten if it is in the mask. If the user does not provide a mask then only the non-empty fields present in the request will be overwritten. Set the update_mask to * to override all fields. For the objective config, the user can either provide the update mask for model_deployment_monitoring_objective_configs or any combination of its nested fields, such as: model_deployment_monitoring_objective_configs.objective_config.training_dataset.

Updatable fields:

  • display_name
  • model_deployment_monitoring_schedule_config
  • model_monitoring_alert_config
  • logging_sampling_strategy
  • labels
  • log_ttl
  • enable_monitoring_pipeline_logs
  • model_deployment_monitoring_objective_configs, or any of its nested fields:
      • model_deployment_monitoring_objective_configs.objective_config.training_dataset
      • model_deployment_monitoring_objective_configs.objective_config.training_prediction_skew_detection_config
      • model_deployment_monitoring_objective_configs.objective_config.prediction_drift_detection_config

UpdateModelMonitorOperationMetadata

Runtime operation information for ModelMonitoringService.UpdateModelMonitor.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

UpdateModelMonitorRequest

Request message for ModelMonitoringService.UpdateModelMonitor.

Fields
model_monitor

ModelMonitor

Required. The model monitoring configuration which replaces the resource on the server.

update_mask

FieldMask

Required. Mask specifying which fields to update.

UpdateModelRequest

Request message for ModelService.UpdateModel.

Fields
model

Model

Required. The Model which replaces the resource on the server. When Model Versioning is enabled, model.name determines whether to update the model or a model version:

  1. model.name with an @ version suffix, e.g. models/123@1, refers to a version-specific update.
  2. model.name without an @ suffix, e.g. models/123, refers to a model update.
  3. model.name with @-, e.g. models/123@-, also refers to a model update.
  4. Supported model fields: display_name, description; supported version-specific fields: version_description. Labels are supported in both scenarios, and the model labels and the version labels are merged when a model is returned. When updating labels, a model update changes the model labels and a version-specific update changes the version labels.
  5. A mismatch between the model name and model version name fields causes a precondition error.
  6. One request cannot update both the model and the version fields; update them separately.

update_mask

FieldMask

Required. The update mask applies to the resource. For the FieldMask definition, see google.protobuf.FieldMask.
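
A minimal sketch of the versioning rules above, assuming the v1beta1 GAPIC ModelServiceClient (names are placeholders): the @1 suffix makes this a version-specific update, so only version-level fields such as version_description may appear in the mask.

    from google.cloud import aiplatform_v1beta1
    from google.protobuf import field_mask_pb2

    client = aiplatform_v1beta1.ModelServiceClient()

    # "models/123@1" targets version 1; "models/123" or "models/123@-"
    # would target the model itself instead.
    model = aiplatform_v1beta1.Model(
        name="projects/my-project/locations/us-central1/models/123@1",
        version_description="Tuned on the June training snapshot.",
    )

    updated = client.update_model(
        model=model,
        update_mask=field_mask_pb2.FieldMask(paths=["version_description"]),
    )
    print(updated.version_id)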

UpdateNotebookRuntimeTemplateRequest

Request message for NotebookService.UpdateNotebookRuntimeTemplate.

Fields
notebook_runtime_template

NotebookRuntimeTemplate

Required. The NotebookRuntimeTemplate to update.

update_mask

FieldMask

Required. The update mask applies to the resource. For the FieldMask definition, see google.protobuf.FieldMask. Input format: {paths: "${updated_field}"} Updatable fields:

  • encryption_spec.kms_key_name

UpdatePersistentResourceOperationMetadata

Details of operations that update a PersistentResource.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for PersistentResource.

progress_message

string

Progress message for the update LRO.

UpdatePersistentResourceRequest

Request message for UpdatePersistentResource method.

Fields
persistent_resource

PersistentResource

Required. The PersistentResource to update.

The PersistentResource's name field is used to identify the PersistentResource to update. Format: projects/{project}/locations/{location}/persistentResources/{persistent_resource}

update_mask

FieldMask

Required. Specify the fields to be overwritten in the PersistentResource by the update method.

UpdateRagCorpusOperationMetadata

Runtime operation information for VertexRagDataService.UpdateRagCorpus.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

UpdateRagCorpusRequest

Request message for VertexRagDataService.UpdateRagCorpus.

Fields
rag_corpus

RagCorpus

Required. The RagCorpus which replaces the resource on the server.

UpdateReasoningEngineOperationMetadata

Details of ReasoningEngineService.UpdateReasoningEngine operation.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

UpdateReasoningEngineRequest

Request message for ReasoningEngineService.UpdateReasoningEngine.

Fields
reasoning_engine

ReasoningEngine

Required. The ReasoningEngine which replaces the resource on the server.

update_mask

FieldMask

Optional. Mask specifying which fields to update.

UpdateScheduleRequest

Request message for ScheduleService.UpdateSchedule.

Fields
schedule

Schedule

Required. The Schedule which replaces the resource on the server. The following restrictions will be applied:

  • The scheduled request type cannot be changed.
  • The non-empty fields cannot be unset.
  • The output_only fields will be ignored if specified.

update_mask

FieldMask

Required. The update mask applies to the resource. See google.protobuf.FieldMask.

UpdateSpecialistPoolOperationMetadata

Runtime operation metadata for SpecialistPoolService.UpdateSpecialistPool.

Fields
specialist_pool

string

Output only. The name of the SpecialistPool to which the specialists are being added. Format: projects/{project_id}/locations/{location_id}/specialistPools/{specialist_pool}

generic_metadata

GenericOperationMetadata

The operation generic information.

UpdateSpecialistPoolRequest

Request message for SpecialistPoolService.UpdateSpecialistPool.

Fields
specialist_pool

SpecialistPool

Required. The SpecialistPool which replaces the resource on the server.

update_mask

FieldMask

Required. The update mask applies to the resource.

UpdateTensorboardExperimentRequest

Request message for TensorboardService.UpdateTensorboardExperiment.

Fields
update_mask

FieldMask

Required. Field mask is used to specify the fields to be overwritten in the TensorboardExperiment resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field is overwritten if it's in the mask. If the user does not provide a mask then all fields are overwritten if new values are specified.

tensorboard_experiment

TensorboardExperiment

Required. The TensorboardExperiment's name field is used to identify the TensorboardExperiment to be updated. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}

UpdateTensorboardOperationMetadata

Details of operations that update a Tensorboard.

Fields
generic_metadata

GenericOperationMetadata

Operation metadata for Tensorboard.

UpdateTensorboardRequest

Request message for TensorboardService.UpdateTensorboard.

Fields
update_mask

FieldMask

Required. Field mask is used to specify the fields to be overwritten in the Tensorboard resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field is overwritten if it's in the mask. If the user does not provide a mask then all fields are overwritten if new values are specified.

tensorboard

Tensorboard

Required. The Tensorboard's name field is used to identify the Tensorboard to be updated. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

UpdateTensorboardRunRequest

Request message for TensorboardService.UpdateTensorboardRun.

Fields
update_mask

FieldMask

Required. Field mask is used to specify the fields to be overwritten in the TensorboardRun resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field is overwritten if it's in the mask. If the user does not provide a mask then all fields are overwritten if new values are specified.

tensorboard_run

TensorboardRun

Required. The TensorboardRun's name field is used to identify the TensorboardRun to be updated. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}

UpdateTensorboardTimeSeriesRequest

Request message for TensorboardService.UpdateTensorboardTimeSeries.

Fields
update_mask

FieldMask

Required. Field mask is used to specify the fields to be overwritten in the TensorboardTimeSeries resource by the update. The fields specified in the update_mask are relative to the resource, not the full request. A field is overwritten if it's in the mask. If the user does not provide a mask then all fields are overwritten if new values are specified.

tensorboard_time_series

TensorboardTimeSeries

Required. The TensorboardTimeSeries' name field is used to identify the TensorboardTimeSeries to be updated. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}/timeSeries/{time_series}

UpgradeNotebookRuntimeOperationMetadata

Metadata information for NotebookService.UpgradeNotebookRuntime.

Fields
generic_metadata

GenericOperationMetadata

The operation generic information.

progress_message

string

A human-readable message that shows the intermediate progress details of the NotebookRuntime upgrade.

UpgradeNotebookRuntimeRequest

Request message for NotebookService.UpgradeNotebookRuntime.

Fields
name

string

Required. The name of the NotebookRuntime resource to be upgraded. Instead of checking whether the name is in a valid NotebookRuntime resource name format, the service directly throws a NotFound exception if there is no such NotebookRuntime.

UpgradeNotebookRuntimeResponse

This type has no fields.

Response message for NotebookService.UpgradeNotebookRuntime.

UploadModelOperationMetadata

Details of ModelService.UploadModel operation.

Fields
generic_metadata

GenericOperationMetadata

The common part of the operation metadata.

UploadModelRequest

Request message for ModelService.UploadModel.

Fields
parent

string

Required. The resource name of the Location into which to upload the Model. Format: projects/{project}/locations/{location}

parent_model

string

Optional. The resource name of the model into which to upload the version. Only specify this field when uploading a new version.

model_id

string

Optional. The ID to use for the uploaded Model, which will become the final component of the model resource name.

This value may be up to 63 characters, and valid characters are [a-z0-9_-]. The first character cannot be a number or hyphen.

model

Model

Required. The Model to create.

service_account

string

Optional. The user-provided custom service account to use for the model upload. If empty, the Vertex AI Service Agent is used to access the resources needed to upload the model. This account must belong to the target project where the model is uploaded (the project specified in the parent field of this request) and must have the necessary read permissions (to Cloud Storage, Artifact Registry, etc.).
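
A minimal upload sketch, assuming the v1beta1 GAPIC ModelServiceClient; the project, image, and bucket names are placeholders. Setting parent_model (commented out) would upload the same payload as a new version instead of a new Model.

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.ModelServiceClient()

    request = aiplatform_v1beta1.UploadModelRequest(
        parent="projects/my-project/locations/us-central1",
        model=aiplatform_v1beta1.Model(
            display_name="my-model",
            artifact_uri="gs://my-bucket/model/",
            container_spec=aiplatform_v1beta1.ModelContainerSpec(
                image_uri="us-docker.pkg.dev/my-project/my-repo/serve:latest",
            ),
        ),
        # parent_model=".../models/123",  # only when uploading a new version
        # model_id="my-model-id",         # optional custom resource ID
    )

    operation = client.upload_model(request=request)
    response = operation.result()  # UploadModelResponse
    print(response.model, response.model_version_id)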

UploadModelResponse

Response message of ModelService.UploadModel operation.

Fields
model

string

The name of the uploaded Model resource. Format: projects/{project}/locations/{location}/models/{model}

model_version_id

string

Output only. The version ID of the model that is uploaded.

UploadRagFileConfig

Config for uploading RagFile.

Fields
rag_file_chunking_config
(deprecated)

RagFileChunkingConfig

Specifies the size and overlap of chunks after uploading RagFile.

rag_file_transformation_config

RagFileTransformationConfig

Specifies the transformation config for RagFiles.

UpsertDatapointsRequest

Request message for IndexService.UpsertDatapoints.

Fields
index

string

Required. The name of the Index resource to be updated. Format: projects/{project}/locations/{location}/indexes/{index}

datapoints[]

IndexDatapoint

A list of datapoints to be created/updated.

update_mask

FieldMask

Optional. Update mask is used to specify the fields to be overwritten in the datapoints by the update. The fields specified in the update_mask are relative to each IndexDatapoint inside datapoints, not the full request.

Updatable fields:

  • Use all_restricts to update both restricts and numeric_restricts.
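
A minimal sketch, assuming the v1beta1 GAPIC IndexServiceClient (names and vector values are placeholders). With the mask set to all_restricts, only the restrict fields of existing datapoints are overwritten; omit update_mask to fully create or replace datapoints.

    from google.cloud import aiplatform_v1beta1
    from google.protobuf import field_mask_pb2

    client = aiplatform_v1beta1.IndexServiceClient()

    datapoint = aiplatform_v1beta1.IndexDatapoint(
        datapoint_id="dp-1",
        feature_vector=[0.1, 0.2, 0.3],
        restricts=[
            aiplatform_v1beta1.IndexDatapoint.Restriction(
                namespace="color", allow_list=["red"]
            )
        ],
    )

    client.upsert_datapoints(
        request=aiplatform_v1beta1.UpsertDatapointsRequest(
            index="projects/my-project/locations/us-central1/indexes/my-index",
            datapoints=[datapoint],
            # Limit the overwrite to the restrict fields of existing points:
            update_mask=field_mask_pb2.FieldMask(paths=["all_restricts"]),
        )
    )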

UpsertDatapointsResponse

This type has no fields.

Response message for IndexService.UpsertDatapoints.

UserActionReference

References an API call. It contains more information about long-running operations and Jobs that are triggered by the API call.

Fields
method

string

The method name of the API RPC call. For example, "/google.cloud.aiplatform.{apiVersion}.DatasetService.CreateDataset"

Union field reference.

reference can be only one of the following:

operation

string

For API calls that return a long running operation. Resource name of the long running operation. Format: projects/{project}/locations/{location}/operations/{operation}

data_labeling_job

string

For API calls that start a LabelingJob. Resource name of the LabelingJob. Format: projects/{project}/locations/{location}/dataLabelingJobs/{data_labeling_job}

Value

Value is the value of the field.

Fields

Union field value.

value can be only one of the following:

int_value

int64

An integer value.

double_value

double

A double value.

string_value

string

A string value.

VertexAISearch

Retrieve from a Vertex AI Search data store for grounding. See https://cloud.google.com/products/agent-builder

Fields
datastore

string

Required. Fully-qualified Vertex AI Search data store resource ID. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}

VertexAiSearchConfig

Config for the Vertex AI Search.

Fields
serving_config

string

Vertex AI Search Serving Config resource full name. For example, projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config} or projects/{project}/locations/{location}/collections/{collection}/dataStores/{data_store}/servingConfigs/{serving_config}.

VertexRagStore

Retrieve from Vertex RAG Store for grounding.

Fields
rag_corpora[]
(deprecated)

string

Optional. Deprecated. Please use rag_resources instead.

rag_resources[]

RagResource

Optional. The representation of the RAG source. It can be used to specify a corpus only, or specific RagFiles. Currently only one corpus, or multiple files from a single corpus, is supported. In the future we may open up support for multiple corpora.

rag_retrieval_config

RagRetrievalConfig

Optional. The retrieval config for the Rag query.

similarity_top_k
(deprecated)

int32

Optional. Number of top k results to return from the selected corpora.

vector_distance_threshold
(deprecated)

double

Optional. Only return results with vector distance smaller than the threshold.

RagResource

The definition of the Rag resource.

Fields
rag_corpus

string

Optional. RagCorpus resource name. Format: projects/{project}/locations/{location}/ragCorpora/{rag_corpus}

rag_file_ids[]

string

Optional. A list of RagFile IDs. The files must belong to the corpus specified in the rag_corpus field.
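
For illustration, the sketch below builds a grounding Tool that retrieves from a single corpus through VertexRagStore. It is a minimal sketch assuming the v1beta1 proto types; the corpus name and top_k value are placeholders.

    from google.cloud import aiplatform_v1beta1

    rag_store = aiplatform_v1beta1.VertexRagStore(
        rag_resources=[
            aiplatform_v1beta1.VertexRagStore.RagResource(
                rag_corpus=(
                    "projects/my-project/locations/us-central1/"
                    "ragCorpora/my-corpus"
                ),
                # rag_file_ids=["file-1"],  # optional: narrow to specific files
            )
        ],
        rag_retrieval_config=aiplatform_v1beta1.RagRetrievalConfig(top_k=5),
    )

    # Attach the store to a grounding Tool for content generation requests.
    tool = aiplatform_v1beta1.Tool(
        retrieval=aiplatform_v1beta1.Retrieval(vertex_rag_store=rag_store)
    )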

VideoMetadata

Metadata describing the input video content.

Fields
start_offset

Duration

Optional. The start offset of the video.

end_offset

Duration

Optional. The end offset of the video.

WorkerPoolSpec

Represents the spec of a worker pool in a job.

Fields
machine_spec

MachineSpec

Optional. Immutable. The specification of a single machine.

replica_count

int64

Optional. The number of worker replicas to use for this worker pool.

nfs_mounts[]

NfsMount

Optional. List of NFS mount spec.

disk_spec

DiskSpec

Disk spec.

Union field task. The custom task to be executed in this worker pool. task can be only one of the following:
container_spec

ContainerSpec

The custom container task.

python_package_spec

PythonPackageSpec

The Python packaged task.

WriteFeatureValuesPayload

Contains Feature values to be written for a specific entity.

Fields
entity_id

string

Required. The ID of the entity.

feature_values

map<string, FeatureValue>

Required. Feature values to be written, mapping from Feature ID to value. Up to 100,000 feature_values entries may be written across all payloads. The feature generation time, aligned by days, must be no older than five years (1825 days) and no later than one year (366 days) in the future.

WriteFeatureValuesRequest

Request message for FeaturestoreOnlineServingService.WriteFeatureValues.

Fields
entity_type

string

Required. The resource name of the EntityType for the entities being written. Value format: projects/{project}/locations/{location}/featurestores/{featurestore}/entityTypes/{entityType}. For example, for a machine learning model predicting user clicks on a website, an EntityType ID could be user.

payloads[]

WriteFeatureValuesPayload

Required. The entities to be written. Up to 100,000 feature values can be written across all payloads.
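
A minimal write sketch, assuming the v1beta1 GAPIC online-serving client; the entity type, entity ID, and feature IDs are placeholders. Each payload maps feature IDs to FeatureValue messages for one entity.

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.FeaturestoreOnlineServingServiceClient()

    payload = aiplatform_v1beta1.WriteFeatureValuesPayload(
        entity_id="user-123",
        feature_values={
            "age": aiplatform_v1beta1.FeatureValue(int64_value=34),
            "country": aiplatform_v1beta1.FeatureValue(string_value="DE"),
        },
    )

    client.write_feature_values(
        entity_type=(
            "projects/my-project/locations/us-central1/"
            "featurestores/my-store/entityTypes/user"
        ),
        payloads=[payload],
    )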

WriteFeatureValuesResponse

This type has no fields.

Response message for FeaturestoreOnlineServingService.WriteFeatureValues.

WriteTensorboardExperimentDataRequest

Request message for TensorboardService.WriteTensorboardExperimentData.

Fields
tensorboard_experiment

string

Required. The resource name of the TensorboardExperiment to write data to. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}

write_run_data_requests[]

WriteTensorboardRunDataRequest

Required. Requests containing per-run TensorboardTimeSeries data to write.

WriteTensorboardExperimentDataResponse

This type has no fields.

Response message for TensorboardService.WriteTensorboardExperimentData.

WriteTensorboardRunDataRequest

Request message for TensorboardService.WriteTensorboardRunData.

Fields
tensorboard_run

string

Required. The resource name of the TensorboardRun to write data to. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}/experiments/{experiment}/runs/{run}

time_series_data[]

TimeSeriesData

Required. The TensorboardTimeSeries data to write. Values within a time series are indexed by their step value. Repeated writes to the same step overwrite the existing value for that step. The upper limit of data points per write request is 5000.
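
A minimal sketch, assuming the v1beta1 GAPIC TensorboardServiceClient (resource names and values are placeholders): it writes one scalar point at step 100; writing the same step again would overwrite it, per the semantics above.

    from google.cloud import aiplatform_v1beta1

    client = aiplatform_v1beta1.TensorboardServiceClient()

    run = (
        "projects/my-project/locations/us-central1/tensorboards/123/"
        "experiments/my-exp/runs/my-run"
    )

    data = aiplatform_v1beta1.TimeSeriesData(
        tensorboard_time_series_id="loss",
        value_type=aiplatform_v1beta1.TensorboardTimeSeries.ValueType.SCALAR,
        values=[
            aiplatform_v1beta1.TimeSeriesDataPoint(
                step=100,
                scalar=aiplatform_v1beta1.Scalar(value=0.42),
            )
        ],
    )

    client.write_tensorboard_run_data(tensorboard_run=run, time_series_data=[data])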

WriteTensorboardRunDataResponse

This type has no fields.

Response message for TensorboardService.WriteTensorboardRunData.

XraiAttribution

An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825

Supported only by image Models.

Fields
step_count

int32

Required. The number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.

The valid range is [1, 100], inclusive.

smooth_grad_config

SmoothGradConfig

Config for SmoothGrad approximation of gradients.

When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf

blur_baseline_config

BlurBaselineConfig

Config for XRAI with blur baseline.

When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383
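
To show where these knobs plug in, here is a minimal sketch of an ExplanationParameters message selecting XRAI with SmoothGrad, assuming the v1beta1 proto types; the step count and noise settings are illustrative only.

    from google.cloud import aiplatform_v1beta1

    parameters = aiplatform_v1beta1.ExplanationParameters(
        xrai_attribution=aiplatform_v1beta1.XraiAttribution(
            step_count=50,  # start at 50; raise until the sum-to-diff error is acceptable
            smooth_grad_config=aiplatform_v1beta1.SmoothGradConfig(
                noise_sigma=0.1,        # illustrative noise level
                noisy_sample_count=25,  # illustrative sample count
            ),
        )
    )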