Class ModelDeploymentMonitoringJob (1.51.0)

ModelDeploymentMonitoringJob(
    model_deployment_monitoring_job_name: str,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
)

Vertex AI Model Deployment Monitoring Job.

This class should be used with the Endpoint class to configure model monitoring for deployed models.

Properties

create_time

Time this resource was created.

display_name

Display name of this resource.

encryption_spec

Customer-managed encryption key options for this Vertex AI resource.

If this is set, then all resources created by this Vertex AI resource will be encrypted with the provided encryption key.

end_time

Time when the Job resource entered the JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, or JOB_STATE_CANCELLED state.

error

Detailed error info for this Job resource. Only populated when the Job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.

gca_resource

The underlying resource proto representation.

labels

User-defined labels containing metadata about this resource.

Read more about labels at https://goo.gl/xmQnxf

name

Name of this resource.

resource_name

Fully-qualified resource name.

start_time

Time when the Job resource entered the JOB_STATE_RUNNING for the first time.

state

Fetches the Job again and returns the current JobState.

Returns
Type Description
state (job_state.JobState) Enum that describes the state of a Vertex AI job.

update_time

Time this resource was last updated.

Methods

ModelDeploymentMonitoringJob

ModelDeploymentMonitoringJob(
    model_deployment_monitoring_job_name: str,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
)

Initializer for ModelDeploymentMonitoringJob.

Parameters
Name Description
model_deployment_monitoring_job_name str

Required. A fully-qualified ModelDeploymentMonitoringJob resource name or ID. Example: "projects/.../locations/.../modelDeploymentMonitoringJobs/456" or "456" when project and location are initialized or passed.

project typing.Optional[str]

Optional. Project to retrieve the ModelDeploymentMonitoringJob from. If not set, the project set in aiplatform.init will be used.

location typing.Optional[str]

Optional. Location to retrieve the ModelDeploymentMonitoringJob from. If not set, the location set in aiplatform.init will be used.

credentials typing.Optional[google.auth.credentials.Credentials]

Optional. Custom credentials to use. If not set, the credentials set in aiplatform.init will be used.
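
Taken together, these parameters let you retrieve an existing job either by bare ID (relying on an initialized project and location) or by fully-qualified resource name. A minimal sketch; the project, location, and job ID are placeholder values, and the import is deferred so the sketch reads without the SDK installed:

```python
def get_monitoring_job():
    # Deferred import: this sketch is illustrative and not run at module load.
    from google.cloud import aiplatform

    # Initialize once per process; "my-project" / "us-central1" are placeholders.
    aiplatform.init(project="my-project", location="us-central1")

    # Either the bare ID (resolved against the initialized project/location) ...
    job = aiplatform.ModelDeploymentMonitoringJob(
        model_deployment_monitoring_job_name="456"
    )
    # ... or the fully-qualified resource name refers to the same job.
    same_job = aiplatform.ModelDeploymentMonitoringJob(
        model_deployment_monitoring_job_name=(
            "projects/my-project/locations/us-central1"
            "/modelDeploymentMonitoringJobs/456"
        )
    )
    return job, same_job
```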

cancel

cancel()

Cancels this Job.

Success of cancellation is not guaranteed. Use Job.state property to verify if cancellation was successful.

create

create(
    endpoint: typing.Union[str, google.cloud.aiplatform.models.Endpoint],
    objective_configs: typing.Optional[
        typing.Union[
            google.cloud.aiplatform.model_monitoring.objective.ObjectiveConfig,
            typing.Dict[
                str, google.cloud.aiplatform.model_monitoring.objective.ObjectiveConfig
            ],
        ]
    ] = None,
    logging_sampling_strategy: typing.Optional[
        google.cloud.aiplatform.model_monitoring.sampling.RandomSampleConfig
    ] = None,
    schedule_config: typing.Optional[
        google.cloud.aiplatform.model_monitoring.schedule.ScheduleConfig
    ] = None,
    display_name: typing.Optional[str] = None,
    deployed_model_ids: typing.Optional[typing.List[str]] = None,
    alert_config: typing.Optional[
        google.cloud.aiplatform.model_monitoring.alert.EmailAlertConfig
    ] = None,
    predict_instance_schema_uri: typing.Optional[str] = None,
    sample_predict_instance: typing.Optional[str] = None,
    analysis_instance_schema_uri: typing.Optional[str] = None,
    bigquery_tables_log_ttl: typing.Optional[int] = None,
    stats_anomalies_base_directory: typing.Optional[str] = None,
    enable_monitoring_pipeline_logs: typing.Optional[bool] = None,
    labels: typing.Optional[typing.Dict[str, str]] = None,
    encryption_spec_key_name: typing.Optional[str] = None,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    create_request_timeout: typing.Optional[float] = None,
) -> google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob

Creates and launches a model monitoring job.

Parameters
Name Description
endpoint Union[str, "aiplatform.Endpoint"]

Required. Endpoint resource name or an instance of aiplatform.Endpoint. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

objective_configs Union[model_monitoring.ObjectiveConfig, Dict[str, model_monitoring.ObjectiveConfig]]

Optional. A single config applied to all deployed models, or a dictionary mapping deployed model IDs to objective configs. Defines the monitoring objectives (e.g. skew or drift detection) for the job.

logging_sampling_strategy model_monitoring.sampling.RandomSampleConfig

Optional. Sampling strategy for logging prediction requests.

schedule_config model_monitoring.schedule.ScheduleConfig

Optional. Configures model monitoring job scheduling interval in hours. This defines how often the monitoring jobs are triggered.

display_name str

Optional. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

deployed_model_ids List[str]

Optional. Use this argument to specify which deployed models to apply the objective config to. If left unspecified, the same config will be applied to all deployed models.

alert_config model_monitoring.alert.EmailAlertConfig

Optional. Configures how alerts are sent to the user. Right now only email alert is supported.

predict_instance_schema_uri str

Optional. YAML schema file URI describing the format of a single instance, which is given to the Endpoint's prediction (and explanation) requests. If not set, the schema will be generated from collected predict requests.

sample_predict_instance str

Optional. A sample predict instance in the same format as PredictionRequest.instances; this can be set as a replacement for predict_instance_schema_uri. If not set, the schema will be generated from collected predict requests.

analysis_instance_schema_uri str

Optional. YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format as prediction request/response. If there are any data type differences between predict instance and TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set as all the fields in predict instance formatted as string.

bigquery_tables_log_ttl int

Optional. The TTL (time to live), in days, of the BigQuery tables in user projects which store logs. A day is the basic unit of the TTL: the service takes the ceiling of TTL/86400 (one day in seconds), e.g. { second: 3600 } indicates a TTL of 1 day.

stats_anomalies_base_directory str

Optional. Stats anomalies base folder path.

enable_monitoring_pipeline_logs bool

Optional. If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Note that these logs incur a cost, which is subject to Cloud Logging pricing: https://cloud.google.com/logging#pricing

labels Dict[str, str]

Optional. The labels with user-defined metadata to organize the ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

encryption_spec_key_name str

Optional. Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.

create_request_timeout float

Optional. Timeout in seconds for the model monitoring job creation request.
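
Putting the parameters above together, the following is a hedged sketch of a typical create call. The endpoint name, email address, feature name, and thresholds are placeholder values, and the import is deferred so the sketch is not executed at module load:

```python
def launch_monitoring_job():
    # Deferred imports: illustrative sketch only; requires GCP credentials to run.
    from google.cloud import aiplatform
    from google.cloud.aiplatform import model_monitoring

    # Log 80% of prediction requests.
    sampling = model_monitoring.RandomSampleConfig(sample_rate=0.8)
    # Trigger the monitoring pipeline every hour.
    schedule = model_monitoring.ScheduleConfig(monitor_interval=1)
    # Email recipients for anomaly alerts (placeholder address).
    alerts = model_monitoring.EmailAlertConfig(
        user_emails=["ml-team@example.com"]
    )
    # Detect prediction drift; thresholds are per-feature placeholders.
    objective = model_monitoring.ObjectiveConfig(
        drift_detection_config=model_monitoring.DriftDetectionConfig(
            drift_thresholds={"feature_a": 0.05}
        )
    )

    return aiplatform.ModelDeploymentMonitoringJob.create(
        display_name="my-monitoring-job",
        endpoint="projects/my-project/locations/us-central1/endpoints/123",
        objective_configs=objective,
        logging_sampling_strategy=sampling,
        schedule_config=schedule,
        alert_config=alerts,
    )
```

Passing a single ObjectiveConfig applies it to every deployed model on the endpoint; to target specific models, pass deployed_model_ids or a dict keyed by deployed model ID.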

delete

delete() -> None

Deletes an MDM job.

done

done() -> bool

Method indicating whether a job has completed.

list

list(
    filter: typing.Optional[str] = None,
    order_by: typing.Optional[str] = None,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
) -> typing.List[google.cloud.aiplatform.base.VertexAiResourceNoun]

List all instances of this Job Resource.

Example Usage:

aiplatform.ModelDeploymentMonitoringJob.list(
    filter='state="JOB_STATE_SUCCEEDED" AND display_name="my_job"',
)

Parameters
Name Description
filter str

Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

order_by str

Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields: display_name, create_time, update_time

project str

Optional. Project to retrieve list from. If not set, project set in aiplatform.init will be used.

location str

Optional. Location to retrieve list from. If not set, location set in aiplatform.init will be used.

credentials auth_credentials.Credentials

Optional. Custom credentials to use to retrieve list. Overrides credentials set in aiplatform.init.
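
For instance, listing succeeded jobs by display name, newest first (a sketch; the filter values, project, and location are placeholders, and the import is deferred so the function is not executed at module load):

```python
def list_succeeded_jobs():
    # Deferred import: illustrative sketch; requires GCP credentials to run.
    from google.cloud import aiplatform

    return aiplatform.ModelDeploymentMonitoringJob.list(
        filter='state="JOB_STATE_SUCCEEDED" AND display_name="my_job"',
        order_by="create_time desc",  # newest first
        project="my-project",
        location="us-central1",
    )
```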

pause

pause() -> google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob

Pauses a running MDM job.

resume

resume() -> google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob

Resumes a paused MDM job.
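
Pause and resume operate on the live resource, so they pair naturally with a state check afterwards. A sketch, assuming `job` is an existing ModelDeploymentMonitoringJob:

```python
def toggle_job(job):
    """Pause a job (e.g. during planned endpoint maintenance), then resume it.

    `job` is assumed to be an existing
    aiplatform.ModelDeploymentMonitoringJob instance.
    """
    job.pause()
    # ... perform maintenance while no monitoring pipelines are scheduled ...
    job.resume()
    # The state property refetches the job and returns the current JobState.
    return job.state
```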

to_dict

to_dict() -> typing.Dict[str, typing.Any]

Returns the resource proto as a dictionary.

update

update(
    *,
    display_name: typing.Optional[str] = None,
    schedule_config: typing.Optional[
        google.cloud.aiplatform.model_monitoring.schedule.ScheduleConfig
    ] = None,
    alert_config: typing.Optional[
        google.cloud.aiplatform.model_monitoring.alert.EmailAlertConfig
    ] = None,
    logging_sampling_strategy: typing.Optional[
        google.cloud.aiplatform.model_monitoring.sampling.RandomSampleConfig
    ] = None,
    labels: typing.Optional[typing.Dict[str, str]] = None,
    bigquery_tables_log_ttl: typing.Optional[int] = None,
    enable_monitoring_pipeline_logs: typing.Optional[bool] = None,
    objective_configs: typing.Optional[
        typing.Union[
            google.cloud.aiplatform.model_monitoring.objective.ObjectiveConfig,
            typing.Dict[
                str, google.cloud.aiplatform.model_monitoring.objective.ObjectiveConfig
            ],
        ]
    ] = None,
    deployed_model_ids: typing.Optional[typing.List[str]] = None,
    update_request_timeout: typing.Optional[float] = None
) -> google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob

Updates an existing ModelDeploymentMonitoringJob.

Parameters
Name Description
display_name str

Optional. The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

schedule_config model_monitoring.schedule.ScheduleConfig

Optional. Configures the model monitoring job scheduling interval in hours. This defines how often the monitoring jobs are triggered.

alert_config model_monitoring.alert.EmailAlertConfig

Optional. Configures how alerts are sent to the user. Right now only email alert is supported.

logging_sampling_strategy model_monitoring.sampling.RandomSampleConfig

Optional. Sampling strategy for logging prediction requests.

labels Dict[str, str]

Optional. The labels with user-defined metadata to organize the ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

bigquery_tables_log_ttl int

Optional. The number of days for which the logs are stored; the TTL (time to live) of the BigQuery tables in user projects which store logs. A day is the basic unit of the TTL: the service takes the ceiling of TTL/86400 (one day in seconds), e.g. { second: 3600 } indicates a TTL of 1 day.

enable_monitoring_pipeline_logs bool

Optional. If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Note that these logs incur a cost, which is subject to Cloud Logging pricing: https://cloud.google.com/logging#pricing

deployed_model_ids List[str]

Optional. Use this argument to specify which deployed models to apply the updated objective config to. If left unspecified, the same config will be applied to all deployed models.

update_request_timeout float

Optional. Timeout in seconds for the model monitoring job update request.
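
Because update takes keyword-only arguments, only the fields you pass are changed. A sketch that tightens the schedule and alert settings, assuming `job` is an existing ModelDeploymentMonitoringJob (the interval, addresses, and timeout are placeholder values):

```python
def tighten_monitoring(job):
    # Deferred import: illustrative sketch; requires GCP credentials to run.
    from google.cloud.aiplatform import model_monitoring

    return job.update(
        # Trigger the monitoring pipeline every hour.
        schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),
        # Replace the alert recipients (placeholder addresses).
        alert_config=model_monitoring.EmailAlertConfig(
            user_emails=["ml-team@example.com", "oncall@example.com"]
        ),
        update_request_timeout=300.0,
    )
```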

wait

wait()

Helper method that blocks until all futures are complete.

wait_for_completion

wait_for_completion() -> None

Waits for job to complete.

Exceptions
Type Description
RuntimeError If the job failed or was cancelled.
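
Since wait_for_completion raises RuntimeError on failure or cancellation, callers typically wrap it and inspect the error property for details. A sketch, assuming `job` is an existing ModelDeploymentMonitoringJob:

```python
def wait_and_report(job):
    """Block until `job` finishes; report the outcome as a string.

    `job` is assumed to be an existing
    aiplatform.ModelDeploymentMonitoringJob instance.
    """
    try:
        job.wait_for_completion()
        return "succeeded"
    except RuntimeError:
        # The job failed or was cancelled; details are in job.error.
        return f"terminal failure: {job.error}"
```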