Class ExecutionTemplate (1.8.0)

ExecutionTemplate(mapping=None, *, ignore_unknown_fields=False, **kwargs)

The description of a notebook execution workload.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

Attributes

Name | Description
scale_tier google.cloud.notebooks_v1.types.ExecutionTemplate.ScaleTier
Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently, only CUSTOM is supported.
master_type str
Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported:

- n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96
- n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96
- n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96

Alternatively, you can use the following legacy machine types:

- standard, large_model, complex_model_s, complex_model_m, complex_model_l
- standard_gpu, complex_model_m_gpu, complex_model_l_gpu
- standard_p100, complex_model_m_p100
- standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100

Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
accelerator_config google.cloud.notebooks_v1.types.ExecutionTemplate.SchedulerAcceleratorConfig
Configuration (count and accelerator type) for hardware running notebook execution.
labels MutableMapping[str, str]
Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise it is an immediate execution and the labels will include 'nbs-immediate'. Use these labels to efficiently index the various types of executions.
input_notebook_file str
Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
container_image_uri str
Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
output_notebook_folder str
Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
params_yaml_file str
Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
parameters str
Parameters used within the 'input_notebook_file' notebook.
service_account str
The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
job_type google.cloud.notebooks_v1.types.ExecutionTemplate.JobType
The type of Job to be used on this execution.
dataproc_parameters google.cloud.notebooks_v1.types.ExecutionTemplate.DataprocParameters
Parameters used in Dataproc JobType executions. This field is a member of oneof_ job_parameters.
vertex_ai_parameters google.cloud.notebooks_v1.types.ExecutionTemplate.VertexAIParameters
Parameters used in Vertex AI JobType executions. This field is a member of oneof_ job_parameters.
kernel_spec str
Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
tensorboard str
The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}

Classes

DataprocParameters

DataprocParameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Parameters used in Dataproc JobType executions.

JobType

JobType(value)

The backend used for this execution.

Values:

- JOB_TYPE_UNSPECIFIED (0): No type specified.
- VERTEX_AI (1): Custom Job in aiplatform.googleapis.com. Default value for an execution.
- DATAPROC (2): Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs

LabelsEntry

LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

The abstract base class for a message.

Parameters
Name | Description
kwargs dict

Keys and values corresponding to the fields of the message.

mapping Union[dict, .Message]

A dictionary or message to be used to determine the values for this message.

ignore_unknown_fields Optional[bool]

If True, do not raise errors for unknown fields. Only applied if mapping is a mapping type or there are keyword parameters.

ScaleTier

ScaleTier(value)

Required. Specifies the machine types, the number of replicas for workers and parameter servers.

Values:

- SCALE_TIER_UNSPECIFIED (0): Unspecified Scale Tier.
- BASIC (1): A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- STANDARD_1 (2): Many workers and a few parameter servers.
- PREMIUM_1 (3): A large number of workers with many parameter servers.
- BASIC_GPU (4): A single worker instance with a K80 GPU.
- BASIC_TPU (5): A single worker instance with a Cloud TPU.
- CUSTOM (6): The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines:

    -  You *must* set `ExecutionTemplate.masterType` to
       specify the type of machine to use for your master node.
       This is the only required setting.

SchedulerAcceleratorConfig

SchedulerAcceleratorConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Definition of a hardware accelerator. Note that not all combinations of type and core_count are valid. Check GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus) to find a valid combination. TPUs are not supported.

SchedulerAcceleratorType

SchedulerAcceleratorType(value)

Hardware accelerator types for AI Platform Training jobs.

Values:

- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED (0): Unspecified accelerator type. Default to no GPU.
- NVIDIA_TESLA_K80 (1): Nvidia Tesla K80 GPU.
- NVIDIA_TESLA_P100 (2): Nvidia Tesla P100 GPU.
- NVIDIA_TESLA_V100 (3): Nvidia Tesla V100 GPU.
- NVIDIA_TESLA_P4 (4): Nvidia Tesla P4 GPU.
- NVIDIA_TESLA_T4 (5): Nvidia Tesla T4 GPU.
- NVIDIA_TESLA_A100 (10): Nvidia Tesla A100 GPU.
- TPU_V2 (6): TPU v2.
- TPU_V3 (7): TPU v3.

VertexAIParameters

VertexAIParameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Parameters used in Vertex AI JobType executions.