- JSON representation
- ExecutionConfig
- AuthenticationConfig
- AuthenticationType
- PeripheralsConfig
- SparkHistoryServerConfig
EnvironmentConfig
Environment configuration for a workload.
JSON representation

```json
{
  "executionConfig": {
    object (ExecutionConfig)
  },
  "peripheralsConfig": {
    object (PeripheralsConfig)
  }
}
```
| Fields | |
|---|---|
| `executionConfig` | Optional. Execution configuration for a workload. |
| `peripheralsConfig` | Optional. Peripherals configuration that the workload has access to. |
ExecutionConfig
Execution configuration for a workload.
JSON representation

```json
{
  "serviceAccount": string,
  "networkTags": [
    string
  ],
  "kmsKey": string,
  "idleTtl": string,
  "ttl": string,
  "stagingBucket": string,
  "authenticationConfig": {
    object (AuthenticationConfig)
  },

  // Union field network can be only one of the following:
  "networkUri": string,
  "subnetworkUri": string
  // End of list of possible types for union field network.
}
```
| Fields | |
|---|---|
| `serviceAccount` | Optional. Service account that is used to execute the workload. |
| `networkTags[]` | Optional. Tags used for network traffic control. |
| `kmsKey` | Optional. The Cloud KMS key to use for encryption. |
| `idleTtl` | Optional. Applies to sessions only. The duration to keep the session alive while it is idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration). Defaults to 1 hour if not set. If both `ttl` and `idleTtl` are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for `idleTtl` or when `ttl` has been exceeded, whichever occurs first. |
| `ttl` | Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If both `ttl` and `idleTtl` are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for `idleTtl` or when `ttl` has been exceeded, whichever occurs first. |
| `stagingBucket` | Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and to store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a `gs://...` URI to a Cloud Storage bucket. |
| `authenticationConfig` | Optional. Authentication configuration used to set the default identity for the workload execution. The config specifies the type of identity (service account or user) that will be used by workloads to access resources on the project(s). |
| Union field `network`. Network configuration for workload execution. `network` can be only one of the following: | |
| `networkUri` | Optional. Network URI to connect the workload to. |
| `subnetworkUri` | Optional. Subnetwork URI to connect the workload to. |
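To make the shape concrete, here is a sketch of an `ExecutionConfig` for an interactive session; every resource name below is hypothetical, `idleTtl` and `ttl` use the JSON Duration encoding (a decimal number of seconds with an `s` suffix), and only one of the `network` union fields (`subnetworkUri` here) is set:

```json
{
  "serviceAccount": "workload-sa@example-project.iam.gserviceaccount.com",
  "networkTags": ["dataproc-serverless"],
  "stagingBucket": "example-staging-bucket",
  "idleTtl": "3600s",
  "ttl": "28800s",
  "subnetworkUri": "projects/example-project/regions/us-central1/subnetworks/example-subnet"
}
```

With both durations set as above, the session would end when it has been idle for 1 hour or has run for 8 hours in total, whichever occurs first.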
AuthenticationConfig
Authentication configuration for a workload is used to set the default identity for the workload execution. The config specifies the type of identity (service account or user) that will be used by workloads to access resources on the project(s).
JSON representation

```json
{
  "userWorkloadAuthenticationType": enum (AuthenticationType)
}
```
| Fields | |
|---|---|
| `userWorkloadAuthenticationType` | Optional. Authentication type for the user workload running in containers. |
AuthenticationType
Authentication types for workload execution.
| Enums | |
|---|---|
| `AUTHENTICATION_TYPE_UNSPECIFIED` | If AuthenticationType is unspecified, END_USER_CREDENTIALS is used for 3.0 and newer runtimes, and SERVICE_ACCOUNT is used for older runtimes. |
| `SERVICE_ACCOUNT` | Use service account credentials for authenticating to other services. |
| `END_USER_CREDENTIALS` | Use OAuth credentials associated with the workload creator/user for authenticating to other services. |
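For example, a sketch of an `AuthenticationConfig` that explicitly pins a workload to the creator's end-user credentials rather than the default service account:

```json
{
  "userWorkloadAuthenticationType": "END_USER_CREDENTIALS"
}
```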
PeripheralsConfig
Auxiliary services configuration for a workload.
JSON representation

```json
{
  "metastoreService": string,
  "sparkHistoryServerConfig": {
    object (SparkHistoryServerConfig)
  }
}
```
| Fields | |
|---|---|
| `metastoreService` | Optional. Resource name of an existing Dataproc Metastore service. Example: `projects/[project_id]/locations/[region]/services/[service_id]` |
| `sparkHistoryServerConfig` | Optional. The Spark History Server configuration for the workload. |
SparkHistoryServerConfig
Spark History Server configuration for the workload.
JSON representation

```json
{
  "dataprocCluster": string
}
```
| Fields | |
|---|---|
| `dataprocCluster` | Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: `projects/[project_id]/regions/[region]/clusters/[cluster_name]` |
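Putting the sections together, a complete `EnvironmentConfig` might look like the following sketch; every project, region, bucket, and resource name here is hypothetical:

```json
{
  "executionConfig": {
    "serviceAccount": "workload-sa@example-project.iam.gserviceaccount.com",
    "stagingBucket": "example-staging-bucket",
    "ttl": "14400s",
    "subnetworkUri": "projects/example-project/regions/us-central1/subnetworks/example-subnet"
  },
  "peripheralsConfig": {
    "metastoreService": "projects/example-project/locations/us-central1/services/example-metastore",
    "sparkHistoryServerConfig": {
      "dataprocCluster": "projects/example-project/regions/us-central1/clusters/example-phs-cluster"
    }
  }
}
```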