This page lists Dataproc API quota limits, which are enforced at the
project and region level. The quotas reset every sixty seconds (one minute).
For cluster optimization strategies that can help you avoid quota and resource availability issues, see Resource availability and zone strategies. If you run Dataproc Serverless workloads, see Dataproc Serverless quotas.
If you exceed a Dataproc quota limit, a `RESOURCE_EXHAUSTED` error (HTTP code 429) is generated and the corresponding Dataproc API request fails. However, because your project's Dataproc quota is refreshed every sixty seconds, you can retry the request after one minute has elapsed.

The following table lists the specific and default per-project Dataproc API quota types, quota limits, and the API methods to which they apply.

| Quota type | Limit | Applicable API methods |
|---|---|---|
| AutoscalingOperationRequestsPerMinutePerProjectPerRegion | 400 | CreateAutoscalingPolicy, GetAutoscalingPolicy, ListAutoscalingPolicies, UpdateAutoscalingPolicy, DeleteAutoscalingPolicy |
| ClusterOperationRequestsPerMinutePerProjectPerRegion | 200 | CreateCluster, DeleteCluster, UpdateCluster, StopCluster, StartCluster, DiagnoseCluster, RepairCluster |
| NodeGroupOperationRequestsPerMinutePerProjectPerRegion | 600 | CreateNodeGroup, DeleteNodeGroup, ResizeNodeGroup, RepairNodeGroup, UpdateLabelsNodeGroup, StartNodeGroup, StopNodeGroup |
| GetJobRequestsPerMinutePerProjectPerRegion | 7500 | GetJob |
| JobOperationRequestsPerMinutePerProjectPerRegion | 400 | SubmitJob, UpdateJob, CancelJob, DeleteJob |
| WorkflowOperationRequestsPerMinutePerProjectPerRegion | 400 | CreateWorkflowTemplate, InstantiateWorkflowTemplate, InstantiateInlineWorkflowTemplate, UpdateWorkflowTemplate, DeleteWorkflowTemplate |
| DefaultRequestsPerMinutePerProjectPerRegion | 7500 | All other operations (primarily Get operations) |

To increase resource quota limits, open the Google Cloud IAM & admin → Quotas page, select the resources you want to modify, and then click Edit Quotas at the top of the page to start the quota increase process. If the resources you want to increase aren't displayed and the current filter is "Quotas with usage", change the "Quota type" filter to "All quotas".
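For illustration, here is a minimal sketch, assuming the `google-cloud-dataproc` Python client library, of retrying a request that fails with `RESOURCE_EXHAUSTED`; the project ID, region, and backoff values are placeholder assumptions, not prescribed settings.

```python
# Minimal sketch: retry a Dataproc request after RESOURCE_EXHAUSTED (HTTP 429).
# Assumes the google-cloud-dataproc client; project/region are placeholders.
from google.api_core import exceptions, retry
from google.cloud import dataproc_v1

# Quota refreshes every sixty seconds, so back off at least one quota window.
quota_retry = retry.Retry(
    predicate=retry.if_exception_type(exceptions.ResourceExhausted),
    initial=60.0,    # wait one quota window before the first retry
    maximum=120.0,   # cap the backoff interval
    timeout=600.0,   # give up after ten minutes
)

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)
for cluster in client.list_clusters(
    project_id="my-project", region="us-central1", retry=quota_retry
):
    print(cluster.cluster_name)
```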
The following table lists additional limits on total active operations and jobs at the project and region level.
| Quota type | Limit | Description |
|---|---|---|
| ActiveOperationsPerProjectPerRegion | 5000 | Limit on the total number of concurrent active operations of all types in a single project in a single regional database |
| ActiveJobsPerProjectPerRegion | 5000 | Limit on the total number of active jobs in the `NON_TERMINAL` state in a single project in a single regional database |

To increase the limits on total active operations and jobs, open a Google support ticket.
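As an illustration of the active-jobs limit, the following sketch, assuming the `google-cloud-dataproc` Python client, counts jobs in non-terminal states before submitting more work; the project ID and region are placeholders.

```python
# Minimal sketch: count active jobs in a project and region to see how much
# of the ActiveJobsPerProjectPerRegion limit is in use.
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# The Jobs API list filter supports matching on status.state; ACTIVE matches
# jobs in NON_TERMINAL states.
active_jobs = client.list_jobs(
    project_id="my-project", region=region, filter="status.state = ACTIVE"
)
count = sum(1 for _ in active_jobs)
print(f"{count} of 5000 allowed active jobs in use")
```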
Other Google Cloud quotas
Dataproc clusters use other Google Cloud products. These products have project-level quotas, which include quotas that apply to Dataproc use. Some services, such as Compute Engine and Cloud Storage, are required to use Dataproc. Other services, such as BigQuery and Bigtable, can optionally be used with Dataproc.
Required cluster services
The following services, which enforce quota limits, are required to create
Dataproc clusters.
Compute Engine
Dataproc clusters use Compute Engine virtual machines. Compute Engine quotas are split into regional and global limits, and these limits apply to the clusters that you create. For example, creating a cluster with one `n1-standard-4` master (-m) node and two `n1-standard-4` worker (-w) nodes uses 12 virtual CPUs (4 * 3). This cluster usage counts against a regional quota limit of 24 virtual CPUs. If you use the Compute Engine free trial, your project has a limit of 8 CPUs; to increase this limit, enable billing on the project.
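The arithmetic above can be made concrete with a toy calculation; the machine shape and quota values below are illustrative assumptions, not values fetched from the API.

```python
# Toy calculation mirroring the example above.
vcpus_per_node = 4        # each n1-standard-4 node has 4 vCPUs
nodes = 1 + 2             # one -m (master) node plus two -w (worker) nodes
regional_cpu_quota = 24   # example regional CPU quota

used = vcpus_per_node * nodes        # 4 * 3 = 12 vCPUs
print(f"Cluster uses {used} vCPUs, "
      f"leaving {regional_cpu_quota - used} under the regional quota.")
```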
Default cluster resources
When you create a Dataproc cluster with default settings,
the following Compute Engine resources are used.
| Resource | Usage |
|---|---|
| Virtual CPUs | 12 |
| Virtual Machine (VM) instances | 3 |
| Persistent disk | 1,500 GB |
Cloud Logging
Dataproc saves driver output and cluster logs in Cloud Logging.
The Logging quota applies to Dataproc clusters.
Optional cluster services
You can optionally use the following services, which have quota limits, with Dataproc clusters.
BigQuery
When you read or write data into BigQuery, the BigQuery quota applies.
Bigtable
When you read or write data into Bigtable, the Bigtable quota applies.
Resource availability and zone strategies
To optimize clusters for resource availability and mitigate potential stockout errors, consider the following strategies:
- **Auto zone placement:** When creating clusters, use auto zone placement. This lets Dataproc select an optimal zone within your specified region, improving the chances of successful cluster creation (a minimal client sketch follows this list).
- **Regional quotas:** Verify that your regional Compute Engine quotas are sufficient, because quotas can be exhausted even with auto zone placement if the total regional capacity is insufficient for your requests.
- **Machine type flexibility:** If you experience persistent stockouts with a specific machine type, use a different, more readily available machine type for your cluster.
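For example, the following minimal sketch, assuming the `google-cloud-dataproc` Python client, requests auto zone placement by leaving the zone unset; the project ID, region, cluster name, and machine shapes are placeholders.

```python
# Minimal sketch: create a cluster with auto zone placement by leaving
# gce_cluster_config.zone_uri unset, so Dataproc picks a zone in the region.
from google.cloud import dataproc_v1

project_id = "my-project"
region = "us-central1"
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": project_id,
    "cluster_name": "example-cluster",
    "config": {
        # No gce_cluster_config.zone_uri here: auto zone placement applies.
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
    },
}

operation = client.create_cluster(
    project_id=project_id, region=region, cluster=cluster
)
print(operation.result().cluster_name)
```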
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-02 UTC."],[[["\u003cp\u003eDataproc API quota limits are enforced at the project and region level and reset every sixty seconds.\u003c/p\u003e\n"],["\u003cp\u003eExceeding a quota limit results in a \u003ccode\u003eRESOURCE_EXHAUSTED\u003c/code\u003e error, but retrying after one minute is possible due to the quota refresh.\u003c/p\u003e\n"],["\u003cp\u003eThere are specific quota limits for various operations like Autoscaling, Cluster, NodeGroup, GetJob, Job, and Workflow requests, each with their own per-minute-per-project-per-region limit.\u003c/p\u003e\n"],["\u003cp\u003eThe number of concurrent active operations and active jobs within a project and region are limited to 5000 each, and these limits can be increased by opening a Google support ticket.\u003c/p\u003e\n"],["\u003cp\u003eDataproc utilizes other Google Cloud services like Compute Engine and Cloud Storage, which have their own quotas that apply to Dataproc usage, and optional services like BigQuery and Bigtable have quotas that apply when used.\u003c/p\u003e\n"]]],[],null,["This page lists Dataproc API quota limits,\nwhich are enforced at the project and region level. The quotas reset every\nsixty seconds (one-minute).\n\nFor cluster optimization strategies to help avoid quota and resource availability\nissues, see\n[Resource availability and zone strategies](#resource_availability_and_zone_strategies).\n| If you run Dataproc Serverless workloads, see [Dataproc Serverless quotas](/dataproc-serverless/quotas). **Important:** Dataproc Serverless quota exhaustion can result in batch and session creation failures. For more information, see [Troubleshoot\n| Dataproc Serverless batch and session creation failures](/dataproc-serverless/docs/support/batch-session-creation-failures).\n| **Handling Dataproc Quota Errors** : If you exceed a Dataproc quota limit, a [RESOURCE_EXHAUSTED](/apis/design/errors#handling_errors) (HTTP code 429) is generated, and the corresponding Dataproc API request will fail. 
However, since your project's Dataproc quota is refreshed every sixty seconds, you can retry your request after one minute has elapsed following the failure.\n\nThe following table lists the specific and default per-project Dataproc\nAPI quota types, quota limits, and the methods to which they apply.\n\n| Quota Type | Limit | Applicable API Methods |\n|----------------------------------------------------------|-------|----------------------------------------------------------------------------------------------------------------------------------------|\n| AutoscalingOperationRequestsPerMinutePerProjectPerRegion | 400 | CreateAutoscalingPolicy, GetAutoscalingPolicy, ListAutoscalingPolicies, UpdateAutoscalingPolicy, DeleteAutoscalingPolicy |\n| ClusterOperationRequestsPerMinutePerProjectPerRegion | 200 | CreateCluster, DeleteCluster, UpdateCluster, StopCluster, StartCluster, DiagnoseCluster, RepairCluster |\n| NodeGroupOperationRequestsPerMinutePerProjectPerRegion | 600 | CreateNodeGroup, DeleteNodeGroup, ResizeNodeGroup, RepairNodeGroup, UpdateLabelsNodeGroup, StartNodeGroup, StopNodeGroup |\n| GetJobRequestsPerMinutePerProjectPerRegion | 7500 | GetJob |\n| JobOperationRequestsPerMinutePerProjectPerRegion | 400 | SubmitJob, UpdateJob, CancelJob, DeleteJob |\n| WorkflowOperationRequestsPerMinutePerProjectPerRegion | 400 | CreateWorkflowTemplate, InstantiateWorkflowTemplate, InstantiateInlineWorkflowTemplate, UpdateWorkflowTemplate, DeleteWorkflowTemplate |\n| DefaultRequestsPerMinutePerProjectPerRegion | 7500 | All other operations (primarily Get operations) |\n\n| **Increasing Resource Quota Limits** : Open the Google Cloud [IAM \\& admin→Quotas](https://console.cloud.google.com/iam-admin/quotas) page, and select the resources you want to modify. Then click **Edit Quotas** at the top of the page to start the quota increase process. If the resources you are trying to increase aren't displayed on the page and the current filtering is \"Quotas with usage,\" change the filtering to \"All quotas\" by toggling the \"Quota type\" drop-down.\n\nThe following table lists additional limits on total active operations and jobs at the project and region level.\n\n| Quota type | Limit | Description |\n|-------------------------------------|-------|--------------------------------------------------------------------------------------------------------------------------|\n| ActiveOperationsPerProjectPerRegion | 5000 | Limit on the total number of concurrent active operations of all types in a single project in a single regional database |\n| ActiveJobsPerProjectPerRegion | 5000 | Limit on the total number of active jobs in `NON_TERMINAL` state in a single project in a single regional database |\n\n| **Increasing limits on the total active operations and jobs**: Open a Google support ticket.\n\nOther Google Cloud quotas\n\nDataproc clusters use other Google Cloud products. These products\nhave project-level quotas, which include quotas that apply to Dataproc\nuse. Some services are [required](#required_cluster_services) to use Dataproc,\nsuch as [Compute Engine](/compute \"Compute Engine\")\nand [Cloud Storage](/storage \"gcs_name\"). Other services, such as\n[BigQuery](/bigquery \"bigquery_name\") and [Bigtable](/bigtable \"Bigtable\"),\ncan [optionally](#optional_cluster_services) use Dataproc.\n\nRequired cluster services\n\nThe following services, which enforce quota limits, are required to create\nDataproc clusters.\n\nCompute Engine\n\nDataproc clusters use Compute Engine virtual machines. 
The\n[Compute Engine quotas](/compute/docs/resource-quotas \"Compute Engine Quotas\") are split into regional and global limits. These limits apply to\nclusters that you create. For example, the creation of a cluster with one `n1-standard-4`\n*-m* node and two `n1-standard-4` *-w* nodes uses 12 virtual CPUs\n(`4 * 3`). This cluster usage counts against the regional quota limit of 24\nvirtual CPUs.\n| **Free trial quotas:**If you use the Compute Engine free trial, your project will have a limit of 8 CPUs. To increase this limit, you must enable billing on the project.\n\nDefault clusters resources\n\nWhen you create a Dataproc cluster with default settings,\nthe following Compute Engine resources are used.\n\n| **Resource** | **Usage** |\n|--------------------------------|-----------|\n| Virtual CPUs | 12 |\n| Virtual Machine (VM) Instances | 3 |\n| Persistent disk | 1500 GB |\n\nCloud Logging\n\nDataproc saves driver output and cluster logs in [Cloud Logging](/logging \"Cloud Logging\").\nThe [Logging quota](/logging/quota-policy \"Cloud Logging quota\")\napplies to Dataproc clusters.\n\nOptional cluster services\n\nYou can optionally use the following services, which have quota limits,\nwith Dataproc clusters.\n\nBigQuery\n\nWhen reading or writing data into BigQuery,\nthe [BigQuery quota](/bigquery/quota-policy \"BigQuery quota\")\napplies.\n\nBigtable\n\nWhen reading or writing data into Bigtable,\nthe [Bigtable quota](/bigtable/quota \"Bigtable quota\")\napplies.\n\nResource availability and zone strategies\n\nTo optimize clusters for resource availability and mitigate potential stockout\nerrors, consider the following strategies:\n\n- **Auto Zone Placement:** When creating clusters, use\n [auto zone placement](/dataproc/docs/concepts/configuring-clusters/auto-zone). This\n allows Dataproc to select an optimal zone within your specified\n region, improving the chances of successful cluster creation.\n\n- **Regional quotas:** Verify your regional Compute Engine quotas are sufficient\n since quotas can be exhausted even with auto zone placement if the total\n regional capacity is insufficient for your requests.\n\n- **Machine type flexibility:** If you experience persistent stockouts with a\n specific machine type, use a different, more readily available\n machine type for your cluster."]]