# Serverless for Apache Spark quotas

Last updated (UTC): 2025-08-25.

Serverless for Apache Spark has API quota limits that are enforced at the project and region level. The quotas reset every sixty seconds (one minute).

| **Handling Serverless for Apache Spark Quota Errors**: If you exceed a Serverless for Apache Spark quota limit, a [RESOURCE_EXHAUSTED](/apis/design/errors#handling_errors) error (HTTP code 429) is generated, and the corresponding Serverless for Apache Spark API request fails.
| However, because your project's Serverless for Apache Spark quota refreshes every sixty seconds, you can retry the request after one minute has elapsed.

The following table lists the specific and default per-project Serverless for Apache Spark API quota types, quota limits, and the methods to which they apply.

| **Increasing Resource Quota Limits**: Open the Google Cloud [IAM & Admin Quotas](https://console.cloud.google.com/iam-admin/quotas) page and select the resources you want to modify, then click **Edit Quotas** at the top of the page to start the quota increase process. If the resources you want to increase aren't displayed and the current filter is "Quotas with usage," change it to "All quotas" in the "Quota type" dropdown.

Other Google Cloud quotas
-------------------------

Serverless for Apache Spark batches use other Google Cloud products. These products have project-level quotas, including quotas that apply to Serverless for Apache Spark use. Some services, such as [Compute Engine](/compute "Compute Engine") and [Cloud Storage](/storage "Cloud Storage"), are [required](#required_services) to use Serverless for Apache Spark. Other services, such as [BigQuery](/bigquery "BigQuery") and [Bigtable](/bigtable "Bigtable"), can [optionally](#optional_services) be used with Serverless for Apache Spark.

### Required services

The following services, which enforce quota limits, are required to create Serverless for Apache Spark batches.

#### Compute Engine

Serverless for Apache Spark batches consume the following [Compute Engine resource quotas](/compute/resource-usage):

The [Compute Engine quotas](/compute/docs/resource-quotas "Compute Engine Quotas") are split into regional and global limits. These limits apply to the batches you create.
For example, to run a Spark batch with 4 driver cores (`spark.driver.cores=4`) and two executors with 4 cores each (`spark.executor.cores=4`), you use 12 virtual CPUs (3 containers × 4 cores each). This batch usage counts against the regional quota limit of 24 virtual CPUs.

| **Free trial quotas:** If you use the Compute Engine free trial, your project has a limit of 8 CPUs. To increase this limit, enable billing on the project.

#### Default batch resources

When you create a batch with default settings, the following Compute Engine resources are used:

#### Cloud Logging

Serverless for Apache Spark saves batch output and logs in [Cloud Logging](/logging "Cloud Logging"). The [Cloud Logging quota](/logging/quota-policy "Cloud Logging quota") applies to your Serverless for Apache Spark batches.

### Optional services

The following services, which have quota limits, can optionally be used with Serverless for Apache Spark batches.

#### BigQuery

When reading or writing data into BigQuery, the [BigQuery quota](/bigquery/quota-policy "BigQuery quota") applies.

#### Bigtable

When reading or writing data into Bigtable, the [Bigtable quota](/bigtable/quota "Bigtable quota") applies.

Identify workloads with quota or IP address limitations
-------------------------------------------------------

You can use the following [Cloud Logging](https://console.cloud.google.com/logs/query) queries to identify Serverless for Apache Spark workloads that reached your quota or were unable to scale due to IP address exhaustion.

Quota query:

    jsonPayload.@type="type.googleapis.com/google.cloud.dataproc.logging.AutoscalerLog"
    jsonPayload.recommendation.outputs.constraintsReached="SCALING_CAPPED_DUE_TO_LACK_OF_QUOTA"

IP address exhaustion query:

    jsonPayload.@type="type.googleapis.com/google.cloud.dataproc.logging.AutoscalerLog"
    jsonPayload.status.details =~".*Insufficient free IP addresses.*"
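Because the per-project quota refreshes every sixty seconds, a request that fails with `RESOURCE_EXHAUSTED` (HTTP 429) can simply be retried after the refresh, as noted at the top of this page. The following is a minimal sketch of that pattern, not part of any official client library: `request_fn` is a hypothetical callable that wraps one Serverless for Apache Spark API call and raises `ResourceExhausted` on an HTTP 429 response.

```python
import random
import time


class ResourceExhausted(Exception):
    """Stands in for the RESOURCE_EXHAUSTED (HTTP 429) quota error."""


def call_with_quota_retry(request_fn, max_attempts=3, wait_seconds=60.0):
    """Call request_fn, retrying after quota errors once the quota refreshes."""
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except ResourceExhausted:
            if attempt == max_attempts:
                raise
            # The quota refreshes every sixty seconds, so wait at least that
            # long, plus a little jitter to avoid synchronized retries.
            time.sleep(wait_seconds + random.uniform(0, wait_seconds * 0.1))
```

Lower `wait_seconds` only for testing; in practice the one-minute default matches the quota refresh interval described above.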