The following table lists additional limits on total active operations and jobs at the project and region level.
| Quota type | Limit | Description |
|---|---|---|
| ActiveOperationsPerProjectPerRegion | 5000 | Limit on the total number of concurrent active operations of all types in a single project in a single regional database |
| ActiveJobsPerProjectPerRegion | 5000 | Limit on the total number of active jobs in `NON_TERMINAL` state in a single project in a single regional database |

To increase these limits on total active operations and jobs, open a Google support ticket.
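To see how close a project is to the ActiveJobsPerProjectPerRegion limit, you can count the jobs that are still in a non-terminal state. The following is a minimal Python sketch using the google-cloud-dataproc client library; the project ID and region are placeholder values, not part of the quota documentation.

```python
# Minimal sketch: count active (non-terminal) Dataproc jobs in one project
# and region, to compare against the 5000-job
# ActiveJobsPerProjectPerRegion limit. "my-project" and "us-central1"
# are placeholder values.
from google.cloud import dataproc_v1

project_id = "my-project"  # placeholder
region = "us-central1"     # placeholder

# Dataproc clients are regional: point the client at the regional endpoint.
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# ACTIVE matches jobs in non-terminal states such as PENDING and RUNNING.
request = dataproc_v1.ListJobsRequest(
    project_id=project_id,
    region=region,
    job_state_matcher=dataproc_v1.ListJobsRequest.JobStateMatcher.ACTIVE,
)

active_jobs = sum(1 for _ in client.list_jobs(request=request))
print(f"Active jobs in {region}: {active_jobs} (limit: 5000)")
```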
Other Google Cloud quotas
Dataproc clusters use other Google Cloud products, and those products have project-level quotas that apply to Dataproc use. Some services, such as Compute Engine and Cloud Storage, are required to use Dataproc. Other services, such as BigQuery and Bigtable, can optionally be used with Dataproc.
Required cluster services
The following services, which enforce quota limits, are required to create
Dataproc clusters.
Compute Engine
Dataproc clusters use Compute Engine virtual machines. Compute Engine quotas are split into regional and global limits, and these limits apply to the clusters that you create. For example, creating a cluster with one `n1-standard-4` master (`-m`) node and two `n1-standard-4` worker (`-w`) nodes uses 12 virtual CPUs (4 vCPUs × 3 nodes). This usage counts against the regional quota limit of 24 virtual CPUs.

Free trial quotas: If you use the Compute Engine free trial, your project has a limit of 8 CPUs. To increase this limit, enable billing on the project.
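As a worked version of that arithmetic, the following sketch computes a cluster's vCPU usage from its node counts. The machine-type-to-vCPU mapping is hard-coded here for illustration; real values come from the Compute Engine machine type definitions.

```python
# Sketch of the quota arithmetic from the example above. The vCPU counts
# per machine type are hard-coded assumptions for illustration only.
MACHINE_TYPE_VCPUS = {"n1-standard-4": 4, "n1-standard-8": 8}

def cluster_vcpus(master_type: str, masters: int,
                  worker_type: str, workers: int) -> int:
    """Total vCPUs a cluster counts against the regional CPU quota."""
    return (MACHINE_TYPE_VCPUS[master_type] * masters
            + MACHINE_TYPE_VCPUS[worker_type] * workers)

# One n1-standard-4 master and two n1-standard-4 workers: 4 * 3 = 12 vCPUs.
usage = cluster_vcpus("n1-standard-4", 1, "n1-standard-4", 2)
print(f"Cluster uses {usage} of a 24-vCPU regional quota")
```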
Default cluster resources
When you create a Dataproc cluster with default settings,
the following Compute Engine resources are used.
| Resource | Usage |
|---|---|
| Virtual CPUs | 12 |
| Virtual Machine (VM) Instances | 3 |
| Persistent disk | 1500 GB |
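As a sketch of how such a default cluster comes about, the following Python example creates a cluster with an empty config, which accepts the Dataproc defaults shown in the table above. The project, region, and cluster name are placeholders.

```python
# Minimal sketch: create a Dataproc cluster with default settings using the
# google-cloud-dataproc client library. An empty config accepts the defaults
# listed in the table above. Because no zone is specified, auto zone
# placement chooses the zone. All names are placeholders.
from google.cloud import dataproc_v1

project_id = "my-project"         # placeholder
region = "us-central1"            # placeholder
cluster_name = "example-cluster"  # placeholder

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

operation = client.create_cluster(
    project_id=project_id,
    region=region,
    cluster={
        "project_id": project_id,
        "cluster_name": cluster_name,
        "config": {},  # empty config: Dataproc applies its defaults
    },
)
cluster = operation.result()  # blocks until the cluster is running
print(f"Created cluster: {cluster.cluster_name}")
```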
Cloud Logging
Dataproc saves driver output and cluster logs in Cloud Logging.
The Logging quota
applies to Dataproc clusters.
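For example, you can read a cluster's logs back from Cloud Logging by filtering on the cloud_dataproc_cluster resource type; reads like this count against the Logging quota. This is a minimal sketch assuming the google-cloud-logging client library, with placeholder project and cluster names.

```python
# Minimal sketch: read recent Dataproc cluster log entries from
# Cloud Logging. Reads count against the Cloud Logging quota.
# "my-project" and "example-cluster" are placeholders.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")  # placeholder project

# Dataproc cluster logs use the cloud_dataproc_cluster resource type.
log_filter = (
    'resource.type="cloud_dataproc_cluster" '
    'resource.labels.cluster_name="example-cluster"'
)

for entry in client.list_entries(filter_=log_filter, max_results=10):
    print(entry.timestamp, entry.payload)
```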
Optional cluster services
You can optionally use the following services, which have quota limits,
with Dataproc clusters.
BigQuery
When reading data from or writing data to BigQuery, the BigQuery quota applies.
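For example, a Spark job on a Dataproc cluster can read a BigQuery table through the spark-bigquery-connector, and those reads count against the BigQuery quota. This is a minimal PySpark sketch, assuming the connector is available on the cluster; the table shown is a public sample dataset.

```python
# Minimal PySpark sketch: read a BigQuery table from a Dataproc cluster.
# Assumes the spark-bigquery-connector is available on the cluster.
# Reads like this count against the BigQuery quota.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bigquery-read-example").getOrCreate()

# A public sample table; replace with your own project.dataset.table.
df = (
    spark.read.format("bigquery")
    .option("table", "bigquery-public-data.samples.shakespeare")
    .load()
)
df.show(5)
```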
Bigtable
When reading data from or writing data to Bigtable, the Bigtable quota applies.
Resource availability and zone strategies
To optimize clusters for resource availability and mitigate potential stockout
errors, consider the following strategies:
- **Auto zone placement:** When creating clusters, use auto zone placement. This lets Dataproc select an optimal zone within your specified region, improving the chances of successful cluster creation.
- **Regional quotas:** Verify that your regional Compute Engine quotas are sufficient, since quotas can be exhausted even with auto zone placement if the total regional capacity is insufficient for your requests. A quota-check sketch follows this list.
- **Machine type flexibility:** If you experience persistent stockouts with a specific machine type, use a different, more readily available machine type for your cluster.
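As a starting point for the regional quota check mentioned above, the following sketch reads the regional CPU quota and its current usage with the google-cloud-compute client library. The project and region are placeholders.

```python
# Minimal sketch: check regional Compute Engine CPU quota headroom before
# creating a cluster. "my-project" and "us-central1" are placeholders.
from google.cloud import compute_v1

project_id = "my-project"  # placeholder
region = "us-central1"     # placeholder

client = compute_v1.RegionsClient()
region_info = client.get(project=project_id, region=region)

# Each quota entry reports a metric name, its limit, and current usage.
for quota in region_info.quotas:
    if quota.metric == "CPUS":
        headroom = quota.limit - quota.usage
        print(f"{region} CPUS: {quota.usage:.0f}/{quota.limit:.0f} used, "
              f"{headroom:.0f} vCPUs available")
```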
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-01 UTC."],[[["\u003cp\u003eDataproc API quota limits are enforced at the project and region level and reset every sixty seconds.\u003c/p\u003e\n"],["\u003cp\u003eExceeding a quota limit results in a \u003ccode\u003eRESOURCE_EXHAUSTED\u003c/code\u003e error, but retrying after one minute is possible due to the quota refresh.\u003c/p\u003e\n"],["\u003cp\u003eThere are specific quota limits for various operations like Autoscaling, Cluster, NodeGroup, GetJob, Job, and Workflow requests, each with their own per-minute-per-project-per-region limit.\u003c/p\u003e\n"],["\u003cp\u003eThe number of concurrent active operations and active jobs within a project and region are limited to 5000 each, and these limits can be increased by opening a Google support ticket.\u003c/p\u003e\n"],["\u003cp\u003eDataproc utilizes other Google Cloud services like Compute Engine and Cloud Storage, which have their own quotas that apply to Dataproc usage, and optional services like BigQuery and Bigtable have quotas that apply when used.\u003c/p\u003e\n"]]],[],null,["This page lists Dataproc API quota limits,\nwhich are enforced at the project and region level. The quotas reset every\nsixty seconds (one-minute).\n\nFor cluster optimization strategies to help avoid quota and resource availability\nissues, see\n[Resource availability and zone strategies](#resource_availability_and_zone_strategies).\n| If you run Dataproc Serverless workloads, see [Dataproc Serverless quotas](/dataproc-serverless/quotas). **Important:** Dataproc Serverless quota exhaustion can result in batch and session creation failures. For more information, see [Troubleshoot\n| Dataproc Serverless batch and session creation failures](/dataproc-serverless/docs/support/batch-session-creation-failures).\n| **Handling Dataproc Quota Errors** : If you exceed a Dataproc quota limit, a [RESOURCE_EXHAUSTED](/apis/design/errors#handling_errors) (HTTP code 429) is generated, and the corresponding Dataproc API request will fail. 
However, since your project's Dataproc quota is refreshed every sixty seconds, you can retry your request after one minute has elapsed following the failure.\n\nThe following table lists the specific and default per-project Dataproc\nAPI quota types, quota limits, and the methods to which they apply.\n\n| Quota Type | Limit | Applicable API Methods |\n|----------------------------------------------------------|-------|----------------------------------------------------------------------------------------------------------------------------------------|\n| AutoscalingOperationRequestsPerMinutePerProjectPerRegion | 400 | CreateAutoscalingPolicy, GetAutoscalingPolicy, ListAutoscalingPolicies, UpdateAutoscalingPolicy, DeleteAutoscalingPolicy |\n| ClusterOperationRequestsPerMinutePerProjectPerRegion | 200 | CreateCluster, DeleteCluster, UpdateCluster, StopCluster, StartCluster, DiagnoseCluster, RepairCluster |\n| NodeGroupOperationRequestsPerMinutePerProjectPerRegion | 600 | CreateNodeGroup, DeleteNodeGroup, ResizeNodeGroup, RepairNodeGroup, UpdateLabelsNodeGroup, StartNodeGroup, StopNodeGroup |\n| GetJobRequestsPerMinutePerProjectPerRegion | 7500 | GetJob |\n| JobOperationRequestsPerMinutePerProjectPerRegion | 400 | SubmitJob, UpdateJob, CancelJob, DeleteJob |\n| WorkflowOperationRequestsPerMinutePerProjectPerRegion | 400 | CreateWorkflowTemplate, InstantiateWorkflowTemplate, InstantiateInlineWorkflowTemplate, UpdateWorkflowTemplate, DeleteWorkflowTemplate |\n| DefaultRequestsPerMinutePerProjectPerRegion | 7500 | All other operations (primarily Get operations) |\n\n| **Increasing Resource Quota Limits** : Open the Google Cloud [IAM \\& admin→Quotas](https://console.cloud.google.com/iam-admin/quotas) page, and select the resources you want to modify. Then click **Edit Quotas** at the top of the page to start the quota increase process. If the resources you are trying to increase aren't displayed on the page and the current filtering is \"Quotas with usage,\" change the filtering to \"All quotas\" by toggling the \"Quota type\" drop-down.\n\nThe following table lists additional limits on total active operations and jobs at the project and region level.\n\n| Quota type | Limit | Description |\n|-------------------------------------|-------|--------------------------------------------------------------------------------------------------------------------------|\n| ActiveOperationsPerProjectPerRegion | 5000 | Limit on the total number of concurrent active operations of all types in a single project in a single regional database |\n| ActiveJobsPerProjectPerRegion | 5000 | Limit on the total number of active jobs in `NON_TERMINAL` state in a single project in a single regional database |\n\n| **Increasing limits on the total active operations and jobs**: Open a Google support ticket.\n\nOther Google Cloud quotas\n\nDataproc clusters use other Google Cloud products. These products\nhave project-level quotas, which include quotas that apply to Dataproc\nuse. Some services are [required](#required_cluster_services) to use Dataproc,\nsuch as [Compute Engine](/compute \"Compute Engine\")\nand [Cloud Storage](/storage \"gcs_name\"). Other services, such as\n[BigQuery](/bigquery \"bigquery_name\") and [Bigtable](/bigtable \"Bigtable\"),\ncan [optionally](#optional_cluster_services) use Dataproc.\n\nRequired cluster services\n\nThe following services, which enforce quota limits, are required to create\nDataproc clusters.\n\nCompute Engine\n\nDataproc clusters use Compute Engine virtual machines. 
The\n[Compute Engine quotas](/compute/docs/resource-quotas \"Compute Engine Quotas\") are split into regional and global limits. These limits apply to\nclusters that you create. For example, the creation of a cluster with one `n1-standard-4`\n*-m* node and two `n1-standard-4` *-w* nodes uses 12 virtual CPUs\n(`4 * 3`). This cluster usage counts against the regional quota limit of 24\nvirtual CPUs.\n| **Free trial quotas:**If you use the Compute Engine free trial, your project will have a limit of 8 CPUs. To increase this limit, you must enable billing on the project.\n\nDefault clusters resources\n\nWhen you create a Dataproc cluster with default settings,\nthe following Compute Engine resources are used.\n\n| **Resource** | **Usage** |\n|--------------------------------|-----------|\n| Virtual CPUs | 12 |\n| Virtual Machine (VM) Instances | 3 |\n| Persistent disk | 1500 GB |\n\nCloud Logging\n\nDataproc saves driver output and cluster logs in [Cloud Logging](/logging \"Cloud Logging\").\nThe [Logging quota](/logging/quota-policy \"Cloud Logging quota\")\napplies to Dataproc clusters.\n\nOptional cluster services\n\nYou can optionally use the following services, which have quota limits,\nwith Dataproc clusters.\n\nBigQuery\n\nWhen reading or writing data into BigQuery,\nthe [BigQuery quota](/bigquery/quota-policy \"BigQuery quota\")\napplies.\n\nBigtable\n\nWhen reading or writing data into Bigtable,\nthe [Bigtable quota](/bigtable/quota \"Bigtable quota\")\napplies.\n\nResource availability and zone strategies\n\nTo optimize clusters for resource availability and mitigate potential stockout\nerrors, consider the following strategies:\n\n- **Auto Zone Placement:** When creating clusters, use\n [auto zone placement](/dataproc/docs/concepts/configuring-clusters/auto-zone). This\n allows Dataproc to select an optimal zone within your specified\n region, improving the chances of successful cluster creation.\n\n- **Regional quotas:** Verify your regional Compute Engine quotas are sufficient\n since quotas can be exhausted even with auto zone placement if the total\n regional capacity is insufficient for your requests.\n\n- **Machine type flexibility:** If you experience persistent stockouts with a\n specific machine type, use a different, more readily available\n machine type for your cluster."]]