This page lists Dataproc API quota limits, which are enforced at the project and region level. The quotas reset every sixty seconds (one minute).
For cluster optimization strategies to help avoid quota and resource availability issues, see Resource availability and zone strategies.
The following table lists the specific and default per-project Dataproc API quota types, quota limits, and the methods to which they apply.
Quota type | Limit | Applicable API methods |
---|---|---|
AutoscalingOperationRequestsPerMinutePerProjectPerRegion | 400 | CreateAutoscalingPolicy, GetAutoscalingPolicy, ListAutoscalingPolicies, UpdateAutoscalingPolicy, DeleteAutoscalingPolicy |
ClusterOperationRequestsPerMinutePerProjectPerRegion | 200 | CreateCluster, DeleteCluster, UpdateCluster, StopCluster, StartCluster, DiagnoseCluster, RepairCluster |
NodeGroupOperationRequestsPerMinutePerProjectPerRegion | 600 | CreateNodeGroup, DeleteNodeGroup, ResizeNodeGroup, RepairNodeGroup, UpdateLabelsNodeGroup, StartNodeGroup, StopNodeGroup |
GetJobRequestsPerMinutePerProjectPerRegion | 7500 | GetJob |
JobOperationRequestsPerMinutePerProjectPerRegion | 400 | SubmitJob, UpdateJob, CancelJob, DeleteJob |
WorkflowOperationRequestsPerMinutePerProjectPerRegion | 400 | CreateWorkflowTemplate, InstantiateWorkflowTemplate, InstantiateInlineWorkflowTemplate, UpdateWorkflowTemplate, DeleteWorkflowTemplate |
DefaultRequestsPerMinutePerProjectPerRegion | 7500 | All other operations (primarily Get operations) |
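Requests that exceed a per-minute quota fail with a RESOURCE_EXHAUSTED (HTTP 429) error. Because quotas reset every sixty seconds, retrying with exponential backoff usually recovers. The following is a minimal sketch, not an official sample: it assumes the google-cloud-dataproc client library and uses placeholder project, region, and cluster names.

```python
# Sketch: retry Dataproc API calls rejected after a per-minute quota
# is exhausted. Project, region, and cluster names are placeholders.
from google.api_core import exceptions, retry
from google.cloud import dataproc_v1

region = "us-central1"  # quotas are enforced per project and per region
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# Quotas reset every 60 seconds, so a backoff that can wait at least a
# minute gives a throttled request a chance to succeed.
quota_retry = retry.Retry(
    predicate=retry.if_exception_type(exceptions.ResourceExhausted),
    initial=1.0,
    maximum=90.0,
    multiplier=2.0,
    timeout=300.0,
)

cluster = quota_retry(client.get_cluster)(
    project_id="my-project",      # placeholder
    region=region,
    cluster_name="my-cluster",    # placeholder
)
print(cluster.status.state)
```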
The following table lists additional limits on total active operations and jobs at the project and region level.
Quota type | Limit | Description |
---|---|---|
ActiveOperationsPerProjectPerRegion | 5000 | Limit on the total number of concurrent active operations of all types in a single project in a single region |
ActiveJobsPerProjectPerRegion | 5000 | Limit on the total number of active jobs in a NON_TERMINAL state in a single project in a single region |
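When a project approaches the ActiveJobsPerProjectPerRegion limit, new job submissions are rejected. One way to monitor the count is to list jobs in a non-terminal state. This sketch assumes the google-cloud-dataproc client library and a placeholder project ID; the `status.state = ACTIVE` filter matches jobs in non-terminal states.

```python
# Sketch: count jobs still in a NON_TERMINAL state in one region, to
# compare against the ActiveJobsPerProjectPerRegion limit of 5000.
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# "status.state = ACTIVE" matches all non-terminal job states.
active_jobs = client.list_jobs(
    request={
        "project_id": "my-project",  # placeholder
        "region": region,
        "filter": "status.state = ACTIVE",
    }
)
count = sum(1 for _ in active_jobs)
print(f"Active jobs in {region}: {count} / 5000")
```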
Other Google Cloud quotas
Dataproc clusters use other Google Cloud products. These products have project-level quotas, which include quotas that apply to Dataproc use. Some services, such as Compute Engine and Cloud Storage, are required to use Dataproc. Other services, such as BigQuery and Bigtable, can optionally be used with Dataproc.
Required cluster services
The following services, which enforce quota limits, are required to create Dataproc clusters.
Compute Engine
Dataproc clusters use Compute Engine virtual machines. Compute Engine quotas are split into regional and global limits, and these limits apply to the clusters that you create. For example, creating a cluster with one n1-standard-4 master (-m) node and two n1-standard-4 worker (-w) nodes uses 12 virtual CPUs (4 vCPUs × 3 machines). This cluster usage counts against the regional quota limit of 24 virtual CPUs.
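Before creating a cluster, you can check how much regional CPU quota remains. The following sketch assumes the google-cloud-compute client library and placeholder project and region names; it reads the regional CPUS quota reported by the Compute Engine API.

```python
# Sketch: read the regional Compute Engine CPU quota to see whether a
# planned cluster fits. Project and region are placeholders.
from google.cloud import compute_v1

project = "my-project"  # placeholder
region_name = "us-central1"

region = compute_v1.RegionsClient().get(project=project, region=region_name)
for quota in region.quotas:
    if quota.metric == "CPUS":
        headroom = quota.limit - quota.usage
        print(f"Regional CPUs: {quota.usage}/{quota.limit} used, "
              f"{headroom} available")
        # A default Dataproc cluster needs 12 vCPUs (see the default
        # cluster resources table below).
        if headroom < 12:
            print("Not enough regional CPU quota for a default cluster.")
```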
Default cluster resources
When you create a Dataproc cluster with default settings, the following Compute Engine resources are used.
Resource | Usage |
---|---|
Virtual CPUs | 12 |
Virtual Machine (VM) Instances | 3 |
Persistent disk | 1500 GB |
Cloud Logging
Dataproc saves driver output and cluster logs in Cloud Logging. The Logging quota applies to Dataproc clusters.
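To inspect these logs, and the volume that counts against the Logging quota, you can query Cloud Logging for the cluster's monitored resource type. This is a minimal sketch assuming the google-cloud-logging client library and placeholder project and cluster names.

```python
# Sketch: read recent Dataproc cluster log entries from Cloud Logging.
from google.cloud import logging

client = logging.Client(project="my-project")  # placeholder project

log_filter = (
    'resource.type="cloud_dataproc_cluster" '
    'AND resource.labels.cluster_name="my-cluster"'  # placeholder name
)
for entry in client.list_entries(filter_=log_filter, max_results=10):
    print(entry.timestamp, entry.payload)
```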
Optional cluster services
You can optionally use the following services, which have quota limits, with Dataproc clusters.
BigQuery
When you read data from or write data to BigQuery, the BigQuery quota applies.
Bigtable
When you read data from or write data to Bigtable, the Bigtable quota applies.
Resource availability and zone strategies
To optimize clusters for resource availability and mitigate potential stockout errors, consider the following strategies:
Auto zone placement: When creating clusters, use auto zone placement so that Dataproc selects an optimal zone within your specified region, improving the chances of successful cluster creation (see the sketch after this list).
Regional quotas: Verify that your regional Compute Engine quotas are sufficient; auto zone placement does not help if the total regional quota is too small for your requests.
Machine type flexibility: If you experience persistent stockouts with a specific machine type, use a different, more readily available machine type for your cluster.
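Auto zone placement is enabled by leaving the zone unset when you create a cluster. The following is a minimal sketch assuming the google-cloud-dataproc client library and placeholder project, region, and cluster names; omitting gce_cluster_config.zone_uri lets Dataproc choose a zone within the request's region.

```python
# Sketch: create a cluster with auto zone placement by leaving
# zone_uri unset. Project, region, and cluster names are placeholders.
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": "my-project",    # placeholder
    "cluster_name": "my-cluster",  # placeholder
    "config": {
        # No gce_cluster_config.zone_uri: Dataproc picks a zone in the region.
        "master_config": {"num_instances": 1,
                          "machine_type_uri": "n1-standard-4"},
        "worker_config": {"num_instances": 2,
                          "machine_type_uri": "n1-standard-4"},
    },
}

operation = client.create_cluster(
    request={"project_id": "my-project", "region": region, "cluster": cluster}
)
print(operation.result().cluster_name)  # blocks until creation completes
```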