Use custom constraints
Google Cloud Organization Policy gives you centralized, programmatic control over your organization's resources. As the organization policy administrator, you can define an organization policy, which is a set of restrictions called constraints that apply to Google Cloud resources and descendants of those resources in the Google Cloud resource hierarchy. You can enforce organization policies at the organization, folder, or project level.
Organization Policy provides predefined constraints for various Google Cloud services. However, if you want more granular, customizable control over the specific fields that are restricted in your organization policies, you can also create custom constraints and use those custom constraints in an organization policy.
Benefits
You can use custom organization policies to allow or deny specific operations on Serverless for Apache Spark batches and sessions. For example, if a request to create a batch workload fails to satisfy custom constraint validation as set by your organization policy, the request fails and an error is returned to the caller.
Policy inheritance
If you enforce a policy on a resource, descendants of that resource inherit the organization policy by default. For example, if you enforce a policy on a folder, Google Cloud enforces the policy on all projects in the folder. To learn more about this behavior and how to change it, see Hierarchy evaluation rules.
Pricing
The Organization Policy Service, including predefined and custom constraints, is offered at no charge.
Before you begin
- Set up your project
  - Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  - In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
  - Make sure that billing is enabled for your Google Cloud project.
  - Enable the Serverless for Apache Spark API.
  - Install the Google Cloud CLI.
  - If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
  - To initialize the gcloud CLI, run the following command:
    gcloud init
- Make sure that you know your organization ID.
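If you don't have the organization ID handy, you can look it up with the gcloud CLI; the output lists each organization's numeric ID. This lookup is a convenience step, not part of the original procedure:

gcloud organizations list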
Required roles
To get the permissions that you need to manage organization policies, ask your administrator to grant you the Organization Policy Administrator (roles/orgpolicy.policyAdmin) IAM role on the organization resource. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to manage organization policies. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to manage organization policies:
- orgpolicy.constraints.list
- orgpolicy.policies.create
- orgpolicy.policies.delete
- orgpolicy.policies.list
- orgpolicy.policies.update
- orgpolicy.policy.get
- orgpolicy.policy.set
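If you need to grant this role from the command line, one way to do it is with an IAM policy binding at the organization level. This is a sketch; replace the organization ID and email with your own values:

gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
    --member="user:USER_EMAIL" \
    --role="roles/orgpolicy.policyAdmin"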
Create a custom constraint
A custom constraint is defined in a YAML file by the resources, methods, conditions, and actions that it applies to. Serverless for Apache Spark supports custom constraints that are applied to the CREATE method of batch and session resources. For more information about how to create a custom constraint, see Defining custom constraints.
Create a custom constraint for a batch resource
To create a YAML file for a Serverless for Apache Spark custom constraint for a batch resource, use the following format:
name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: CONDITION
actionType: ACTION
displayName: DISPLAY_NAME
description: DESCRIPTION
Replace the following:
- ORGANIZATION_ID: your organization ID, such as 123456789.
- CONSTRAINT_NAME: the name for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.batchMustHaveSpecifiedCategoryLabel. The maximum length of this field is 70 characters, not counting the prefix (for example, organizations/123456789/customConstraints/custom).
- CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1,000 characters. For more information about the resources available to write conditions against, see Serverless for Apache Spark constraints on resources and operations. Sample condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']).
- ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.
- DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce batch 'category' label requirement". This field has a maximum length of 200 characters.
- DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2,000 characters. Sample description: "Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value".
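For illustration, here is what a completed batch constraint file could look like with the sample values above filled in; the organization ID is the placeholder value used throughout this page:

name: organizations/123456789/customConstraints/custom.batchMustHaveSpecifiedCategoryLabel
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service'])
actionType: ALLOW
displayName: Enforce batch "category" label requirement
description: Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value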
Create a custom constraint for a session resource
To create a YAML file for a Serverless for Apache Spark custom constraint for a session resource, use the following format:
name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: CONDITION
actionType: ACTION
displayName: DISPLAY_NAME
description: DESCRIPTION
Replace the following:
- ORGANIZATION_ID: your organization ID, such as 123456789.
- CONSTRAINT_NAME: the name for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.SessionNameMustStartWithTeamName. The maximum length of this field is 70 characters, not counting the prefix organizations/123456789/customConstraints/ (for example, organizations/123456789/customConstraints/custom).
- CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1,000 characters. For more information about the resources available to write conditions against, see Serverless for Apache Spark constraints on resources and operations. Sample condition: resource.name.startsWith("dataproc").
- ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.
- DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce session must have a TTL of less than 2 hours". This field has a maximum length of 200 characters.
- DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2,000 characters. Sample description: "Only allow session creation if it sets an allowable TTL".
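As a comparable sketch for sessions, a completed constraint that only allows session names starting with a given prefix might look like the following; the name and condition reuse the samples above, while the display name and description are illustrative additions:

name: organizations/123456789/customConstraints/custom.SessionNameMustStartWithTeamName
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.name.startsWith("dataproc")
actionType: ALLOW
displayName: Enforce session name prefix
description: Only allow session creation if the session name starts with the required prefix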
Set up the custom constraint
After you have created the YAML file for a new custom constraint, you must set it up to make it available for organization policies in your organization. To set up a custom constraint, use the gcloud org-policies set-custom-constraint command:
gcloud org-policies set-custom-constraint CONSTRAINT_PATH
Replace CONSTRAINT_PATH with the full path to your custom constraint file, for example, /home/user/customconstraint.yaml.
Once completed, your custom constraints are available as organization policies in your list of Google Cloud organization policies. To verify that the custom constraint exists, use the gcloud org-policies list-custom-constraints command:
gcloud org-policies list-custom-constraints --organization=ORGANIZATION_ID
Replace ORGANIZATION_ID with the ID of your organization resource. For more information, see View organization policies.
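For example, with the sample file path and organization ID used earlier on this page, the two commands would look like this:

gcloud org-policies set-custom-constraint /home/user/customconstraint.yaml
gcloud org-policies list-custom-constraints --organization=123456789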
Enforce the custom constraint
To enforce a constraint, create an organization policy that references it, and then apply that organization policy to a Google Cloud resource.
Console
- In the Google Cloud console, go to the Organization policies page.
- From the project picker, select the project for which you want to set the organization policy.
- From the list on the Organization policies page, select your constraint to view the Policy details page for that constraint.
- To configure the organization policy for this resource, click Manage policy.
- On the Edit policy page, select Override parent's policy.
- Click Add a rule.
- In the Enforcement section, select whether enforcement of this organization policy is on or off.
- Optional: To make the organization policy conditional on a tag, click Add condition. Note that if you add a conditional rule to an organization policy, you must also add at least one unconditional rule or the policy can't be saved. For more information, see Setting an organization policy with tags.
- Click Test changes to simulate the effect of the organization policy. Policy simulation isn't available for legacy managed constraints. For more information, see Test organization policy changes with Policy Simulator.
- To finish and apply the organization policy, click Set policy. The policy can take up to 15 minutes to take effect.
gcloud
To create an organization policy with boolean rules that enforces the constraint, create a policy YAML file that references it:
name: projects/PROJECT_ID/policies/CONSTRAINT_NAME
spec:
  rules:
  - enforce: true
Replace the following:
- PROJECT_ID: the project on which you want to enforce your constraint.
- CONSTRAINT_NAME: the name that you defined for your custom constraint, for example, custom.batchMustHaveSpecifiedCategoryLabel.
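For instance, a policy file that enforces the sample batch label constraint on a project might look like the following; my-project is a placeholder project ID:

name: projects/my-project/policies/custom.batchMustHaveSpecifiedCategoryLabel
spec:
  rules:
  - enforce: true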
To enforce the organization policy that contains the constraint, run the following command:
gcloud org-policies set-policy POLICY_PATH
Replace POLICY_PATH with the full path to your organization policy YAML file. The policy can take up to 15 minutes to take effect.
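As an optional check that isn't part of the original procedure, you can view the resulting policy for the project, including the effective policy after inheritance, with the describe command:

gcloud org-policies describe CONSTRAINT_NAME --project=PROJECT_ID --effective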
Test the custom constraint
This section describes how to test custom constraints for batch and session resources.
Test a batch resource custom constraint
The following batch creation example assumes that a custom constraint has been created and enforced on batch creation to require that the batch has an attached "category" label with a value of "retail", "ads", or "service": ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']).
gcloud dataproc batches submit spark \
    --region us-west1 \
    --jars file:///usr/lib/spark/examples/jars/spark-examples.jar \
    --class org.apache.spark.examples.SparkPi \
    --network default \
    --labels category=foo \
    -- 100
Example output:
Operation denied by custom org policies: ["customConstraints/custom.batchMustHaveSpecifiedCategoryLabel": "Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value"]
Test a session resource custom constraint
The following session creation example assumes that a custom constraint has been created and enforced on session creation to require that the session has a name that starts with orgName.
gcloud beta dataproc sessions create spark test-session --location us-central1
Example output:
Operation denied by custom org policy: ["customConstraints/custom.denySessionNameNotStartingWithOrgName": "Deny session creation if its name does not start with 'orgName'"]
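The constraint behind this error message isn't shown on this page, but it could be written along the following lines (a sketch only). With actionType DENY, the action applies when the condition evaluates to true, so the condition matches session names that don't start with the required prefix:

name: organizations/ORGANIZATION_ID/customConstraints/custom.denySessionNameNotStartingWithOrgName
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: "!resource.name.startsWith('orgName')"
actionType: DENY
displayName: Deny session names without the orgName prefix
description: Deny session creation if its name does not start with 'orgName'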
Serverless for Apache Spark constraints on resources and operations
This section lists the Google Cloud Serverless for Apache Spark custom constraint fields that are available for batch and session resources.
Supported Google Cloud Serverless for Apache Spark batch constraints
The following Serverless for Apache Spark custom constraint fields are available to use when you create (submit) a batch workload:
General
- resource.labels
PySparkBatch
- resource.pysparkBatch.mainPythonFileUri
- resource.pysparkBatch.args
- resource.pysparkBatch.pythonFileUris
- resource.pysparkBatch.jarFileUris
- resource.pysparkBatch.fileUris
- resource.pysparkBatch.archiveUris
SparkBatch
- resource.sparkBatch.mainJarFileUri
- resource.sparkBatch.mainClass
- resource.sparkBatch.args
- resource.sparkBatch.jarFileUris
- resource.sparkBatch.fileUris
- resource.sparkBatch.archiveUris
SparkRBatch
- resource.sparkRBatch.mainRFileUri
- resource.sparkRBatch.args
- resource.sparkRBatch.fileUris
- resource.sparkRBatch.archiveUris
SparkSqlBatch
- resource.sparkSqlBatch.queryFileUri
- resource.sparkSqlBatch.queryVariables
- resource.sparkSqlBatch.jarFileUris
RuntimeConfig
- resource.runtimeConfig.version
- resource.runtimeConfig.containerImage
- resource.runtimeConfig.properties
- resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
- resource.runtimeConfig.autotuningConfig.scenarios
- resource.runtimeConfig.cohort
ExecutionConfig
- resource.environmentConfig.executionConfig.serviceAccount
- resource.environmentConfig.executionConfig.networkUri
- resource.environmentConfig.executionConfig.subnetworkUri
- resource.environmentConfig.executionConfig.networkTags
- resource.environmentConfig.executionConfig.kmsKey
- resource.environmentConfig.executionConfig.idleTtl
- resource.environmentConfig.executionConfig.ttl
- resource.environmentConfig.executionConfig.stagingBucket
- resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType
PeripheralsConfig
- resource.environmentConfig.peripheralsConfig.metastoreService
- resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster
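Any of these fields can be referenced from a CEL condition. For example (an illustrative condition, not one of the documented samples; the project ID is a placeholder), a constraint could require batches to run as a service account from a specific project:

resource.environmentConfig.executionConfig.serviceAccount.endsWith("@my-project.iam.gserviceaccount.com")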
Supported Google Cloud Serverless for Apache Spark session constraints
The following Google Cloud Serverless for Apache Spark session fields are available to use when you create a custom constraint on serverless sessions:
General
- resource.name
- resource.sparkConnectSession
- resource.user
- resource.sessionTemplate
JupyterSession
- resource.jupyterSession.kernel
- resource.jupyterSession.displayName
RuntimeConfig
- resource.runtimeConfig.version
- resource.runtimeConfig.containerImage
- resource.runtimeConfig.properties
- resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
- resource.runtimeConfig.autotuningConfig.scenarios
- resource.runtimeConfig.cohort
ExecutionConfig
- resource.environmentConfig.executionConfig.serviceAccount
- resource.environmentConfig.executionConfig.networkUri
- resource.environmentConfig.executionConfig.subnetworkUri
- resource.environmentConfig.executionConfig.networkTags
- resource.environmentConfig.executionConfig.kmsKey
- resource.environmentConfig.executionConfig.idleTtl
- resource.environmentConfig.executionConfig.ttl
- resource.environmentConfig.executionConfig.stagingBucket
- resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType
PeripheralsConfig
- resource.environmentConfig.peripheralsConfig.metastoreService
- resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster
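As with batches, these session fields can be combined in a CEL condition. For example (an illustrative condition that reuses the sample runtime versions from the examples that follow), you could allow only Jupyter sessions that run an approved runtime version:

has(resource.jupyterSession) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"])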
Example custom constraints for common use cases
This section contains example custom constraints for common batch and session resource use cases.
Example custom constraints for batch resources
The following examples show Serverless for Apache Spark batch custom constraints; each description is followed by the constraint syntax.

Batches must attach a "category" label with an allowed value:
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustHaveSpecifiedCategoryLabel
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service'])
actionType: ALLOW
displayName: Enforce batch "category" label requirement.
description: Only allow batch creation if it attaches a "category" label with an allowable value.

Batches must set an allowed runtime version:
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustUseAllowedVersion
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"])
actionType: ALLOW
displayName: Enforce batch runtime version.
description: Only allow batch creation if it sets an allowable runtime version.

Batches must use SparkSQL:
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustUseSparkSQL
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.sparkSqlBatch))
actionType: ALLOW
displayName: Enforce batch only use SparkSQL Batch.
description: Only allow creation of SparkSQL Batch.

Batches must set a TTL of less than 2 hours:
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustSetLessThan2hTtl
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h'))
actionType: ALLOW
displayName: Enforce batch TTL.
description: Only allow batch creation if it sets an allowable TTL.

Batches cannot set more than 20 Spark initial executors:
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances'])>20)
actionType: DENY
displayName: Enforce maximum number of batch Spark executor instances.
description: Deny batch creation if it specifies more than 20 Spark executor instances.

Batches cannot set more than 20 Spark dynamic allocation initial executors:
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchDynamicAllocationInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors'])>20)
actionType: DENY
displayName: Enforce maximum number of batch dynamic allocation initial executors.
description: Deny batch creation if it specifies more than 20 Spark dynamic allocation initial executors.

Batches must not allow more than 20 dynamic allocation executors:
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchDynamicAllocationMaxExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (resource.runtimeConfig.properties['spark.dynamicAllocation.enabled']=='false') || (('spark.dynamicAllocation.maxExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.maxExecutors'])<=20))
actionType: ALLOW
displayName: Enforce batch maximum number of dynamic allocation executors.
description: Only allow batch creation if dynamic allocation is disabled or the maximum number of dynamic allocation executors is set to less than or equal to 20.

Batches must set the KMS key to an allowed pattern:
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchKmsPattern
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$')
actionType: ALLOW
displayName: Enforce batch KMS Key pattern.
description: Only allow batch creation if it sets the KMS key to an allowable pattern.

Batches must set the staging bucket prefix to an allowed value:
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchStagingBucketPrefix
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX)
actionType: ALLOW
displayName: Enforce batch staging bucket prefix.
description: Only allow batch creation if it sets the staging bucket prefix to ALLOWED_PREFIX.

The batch executor memory setting must end with the suffix m and be less than 20000m:
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchExecutorMemoryMax
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0])<20000)
actionType: ALLOW
displayName: Enforce batch executor maximum memory.
description: Only allow batch creation if the executor memory setting ends with a suffix 'm' and is less than 20000 m.
Example custom constraints for session resources
The following examples show Serverless for Apache Spark session custom constraints; each description is followed by the constraint syntax.

Sessions must set sessionTemplate to an empty string:
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustBeEmpty
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.sessionTemplate == ""
actionType: ALLOW
displayName: Enforce empty session templates.
description: Only allow session creation if session template is empty string.

The sessionTemplate must equal an approved template ID:
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateIdMustBeApproved
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.sessionTemplate.startsWith("https://www.googleapis.com/compute/v1/projects/") && resource.sessionTemplate.contains("/locations/") && resource.sessionTemplate.contains("/sessionTemplates/") && ( resource.sessionTemplate.endsWith("/1") || resource.sessionTemplate.endsWith("/2") || resource.sessionTemplate.endsWith("/13") )
actionType: ALLOW
displayName: Enforce templateId must be 1, 2, or 13.
description: Only allow session creation if session template ID is in the approved list, that is, 1, 2 and 13.

Sessions must use end-user credentials to authenticate the workload:
name: organizations/ORGANIZATION_ID/customConstraints/custom.AllowEUCSessions
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType=="END_USER_CREDENTIALS"
actionType: ALLOW
displayName: Require end user credential authenticated sessions.
description: Allow session creation only if the workload is authenticated using end-user credentials.

Sessions must set an allowed runtime version:
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionMustUseAllowedVersion
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"])
actionType: ALLOW
displayName: Enforce session runtime version.
description: Only allow session creation if it sets an allowable runtime version.

Sessions must set a TTL of less than 2 hours:
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionMustSetLessThan2hTtl
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h'))
actionType: ALLOW
displayName: Enforce session TTL.
description: Only allow session creation if it sets an allowable TTL.

Sessions cannot set more than 20 Spark initial executors:
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances'])>20)
actionType: DENY
displayName: Enforce maximum number of session Spark executor instances.
description: Deny session creation if it specifies more than 20 Spark executor instances.

Sessions cannot set more than 20 Spark dynamic allocation initial executors:
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionDynamicAllocationInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors'])>20)
actionType: DENY
displayName: Enforce maximum number of session dynamic allocation initial executors.
description: Deny session creation if it specifies more than 20 Spark dynamic allocation initial executors.

Sessions must set the KMS key to an allowed pattern:
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionKmsPattern
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$')
actionType: ALLOW
displayName: Enforce session KMS Key pattern.
description: Only allow session creation if it sets the KMS key to an allowable pattern.

Sessions must set the staging bucket prefix to an allowed value:
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionStagingBucketPrefix
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX)
actionType: ALLOW
displayName: Enforce session staging bucket prefix.
description: Only allow session creation if it sets the staging bucket prefix to ALLOWED_PREFIX.

The session executor memory setting must end with the suffix m and be less than 20000m:
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionExecutorMemoryMax
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0])<20000)
actionType: ALLOW
displayName: Enforce session executor maximum memory.
description: Only allow session creation if the executor memory setting ends with a suffix 'm' and is less than 20000 m.
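Conditions can also combine several of the fields listed earlier in a single constraint. For example (a sketch composed from the sample conditions above, not an additional documented example), one batch constraint could require both an approved runtime version and an allowed "category" label:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchVersionAndCategoryLabel
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"]) && ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service'])
actionType: ALLOW
displayName: Enforce batch runtime version and "category" label.
description: Only allow batch creation if it sets an approved runtime version and attaches an allowed "category" label.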
What's next