# Pricing

Google Cloud Serverless for Apache Spark pricing
================================================

[Dataproc](/dataproc/pricing "View this page for Dataproc") | Serverless for Apache Spark | [Dataproc Metastore](/dataproc-metastore/pricing "View this page for Dataproc Metastore")

| **Dataproc Serverless** is now **Google Cloud Serverless for Apache Spark**.
Until updated, some documents will refer to the previous name.

Serverless for Apache Spark pricing is based on the number of Data Compute Units (DCUs), the number of accelerators used, and the amount of shuffle storage used. DCUs, accelerators, and shuffle storage are billed per second, with a 1-minute minimum charge for DCUs and shuffle storage, and a 5-minute minimum charge for accelerators.

Each Dataproc vCPU counts as 0.6 DCU. RAM is charged at different rates below and above 8 GB per vCPU: each gigabyte of RAM up to 8 GB per vCPU counts as 0.1 DCU, and each gigabyte of RAM above 8 GB per vCPU counts as 0.2 DCU. Memory used by Spark drivers and executors, as well as system memory usage, counts toward DCU usage.

By default, each Serverless for Apache Spark batch and interactive workload consumes a *minimum* of 12 DCUs for the duration of the workload: the driver uses 4 vCPUs and 16 GB of RAM and consumes 4 DCUs, and each of the 2 executors uses 4 vCPUs and 16 GB of RAM and consumes 4 DCUs. You can customize the number of vCPUs and the amount of memory per vCPU by setting [Spark properties](/dataproc-serverless/docs/concepts/properties#resource_allocation_properties). No additional Compute Engine VM or Persistent Disk charges apply.

Data Compute Unit (DCU) pricing
-------------------------------

The DCU rate shown below is an hourly rate. It is prorated and billed per second, with a 1-minute minimum charge.

If you pay in a currency other than USD, the prices listed in your currency on [Cloud Platform SKUs](https://cloud.google.com/skus/) apply.

Serverless for Apache Spark interactive workloads are charged at the Premium rate.

Shuffle storage pricing
-----------------------

The shuffle storage rate shown below is a monthly rate. It is prorated and billed per second, with a 1-minute minimum charge for standard shuffle storage and a 5-minute minimum charge for Premium shuffle storage.
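The DCU arithmetic and the per-second billing with minimum charges described above can be sketched as follows. This is an illustration, not an official API; the function names and structure are my own, and only the 0.6/0.1/0.2 DCU rates and the 1- and 5-minute minimums come from this page:

```python
def dcus(vcpus: int, ram_gb: float) -> float:
    """DCUs for one container: 0.6 DCU per vCPU, plus 0.1 DCU per GB of RAM
    up to 8 GB per vCPU, and 0.2 DCU per GB of RAM beyond that boundary."""
    threshold = 8 * vcpus  # RAM boundary between the 0.1 and 0.2 DCU rates
    return (0.6 * vcpus
            + 0.1 * min(ram_gb, threshold)
            + 0.2 * max(ram_gb - threshold, 0))

def billed_seconds(run_seconds: int, minimum: int = 60) -> int:
    """Usage is prorated per second, subject to a minimum charge
    (60 s for DCUs and standard shuffle storage, 300 s for accelerators
    and Premium shuffle storage)."""
    return max(run_seconds, minimum)

# Default driver shape (4 vCPUs, 16 GB RAM): 0.6*4 + 0.1*16 = 4.0 DCUs;
# one driver plus two executors gives the 12-DCU default minimum workload.
print(dcus(4, 16))         # 4.0
print(3 * dcus(4, 16))     # 12.0
print(billed_seconds(30))  # 60: a 30-second run is billed as one minute
```

Note how the RAM boundary scales with the vCPU count: a 4-vCPU container pays the higher 0.2 DCU rate only on RAM beyond 32 GB.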
Premium shuffle storage can only be used with the Premium compute tier.

If you pay in a currency other than USD, the prices listed in your currency on [Cloud Platform SKUs](https://cloud.google.com/skus/) apply.

Accelerator pricing
-------------------

The accelerator rate shown below is an hourly rate. It is prorated and billed per second, with a 5-minute minimum charge.

If you pay in a currency other than USD, the prices listed in your currency on [Cloud Platform SKUs](https://cloud.google.com/skus/) apply.

Pricing example
---------------

If a Serverless for Apache Spark batch workload runs with 12 DCUs (`spark.driver.cores=4`, `spark.executor.cores=4`, `spark.executor.instances=2`) for 24 hours in the us-central1 region and consumes 25 GB of shuffle storage, the price calculation is as follows.

```
Total compute cost = 12 * 24 * $0.060000 = $17.28
Total storage cost = 25 * ($0.040/30) = $0.03
------------------------------------------------
Total cost = $17.28 + $0.03 = $17.31
```

Notes:

1. The example assumes a 30-day month. Since the batch workload duration is one day, the monthly shuffle storage rate is divided by 30.

If a Serverless for Apache Spark batch workload runs with 12 DCUs and 2 L4 GPUs (`spark.driver.cores=4`, `spark.executor.cores=4`, `spark.executor.instances=2`, `spark.dataproc.driver.compute.tier=premium`, `spark.dataproc.executor.compute.tier=premium`, `spark.dataproc.executor.disk.tier=premium`, `spark.dataproc.executor.resource.accelerator.type=l4`) for 24 hours in the us-central1 region and consumes 25 GB of shuffle storage, the price calculation is as follows.

```
Total compute cost = 12 * 24 * $0.089000 = $25.632
Total storage cost = 25 * ($0.1/30) = $0.083
Total accelerator cost = 2 * 24 * $0.6720 = $32.256
------------------------------------------------
Total cost = $25.632 + $0.083 + $32.256 = $57.971
```

Notes:

1. The example assumes a 30-day month.
Since the batch workload duration is one day, the monthly shuffle storage rate is divided by 30.

If a Serverless for Apache Spark interactive workload runs with 12 DCUs (`spark.driver.cores=4`, `spark.executor.cores=4`, `spark.executor.instances=2`) for 24 hours in the us-central1 region and consumes 25 GB of shuffle storage, the price calculation is as follows:

```
Total compute cost = 12 * 24 * $0.089000 = $25.632
Total storage cost = 25 * ($0.040/30) = $0.03
------------------------------------------------
Total cost = $25.632 + $0.03 = $25.662
```

Notes:

1. The example assumes a 30-day month. Since the workload duration is one day, the monthly shuffle storage rate is divided by 30.

Pricing estimation example
--------------------------

When a batch workload completes, Serverless for Apache Spark calculates [UsageMetrics](/dataproc-serverless/docs/reference/rest/v1/RuntimeInfo#UsageMetrics), which contain an approximation of the total DCU, accelerator, and shuffle storage resources consumed by the completed workload. After a workload finishes, you can run the [`gcloud dataproc batches describe BATCH_ID`](/sdk/gcloud/reference/dataproc/batches/describe) command to view its usage metrics and estimate the cost of running the workload.

**Example:**

Serverless for Apache Spark runs a workload on an ephemeral cluster with one master and two workers. Each node consumes 4 DCUs (the default driver and executor size is 4 cores; see [`spark.driver.cores`](/dataproc-serverless/docs/concepts/properties#resource_allocation_properties)) and 400 GB of shuffle storage (the default is 100 GB per core; see [`spark.dataproc.driver.disk.size`](/dataproc-serverless/docs/concepts/properties#custom_spark_properties)). Workload run time is 60 seconds.
Also, each worker has 1 GPU, for a total of 2 across the cluster.

The user runs `gcloud dataproc batches describe BATCH_ID --region REGION` to obtain usage metrics. The command output includes the following snippet (`milliDcuSeconds`: 4 DCUs x 3 VMs x 60 seconds x 1000 = `720000`; `milliAcceleratorSeconds`: 1 GPU x 2 VMs x 60 seconds x 1000 = `120000`; and `shuffleStorageGbSeconds`: 400 GB x 3 VMs x 60 seconds = `72000`):

```
runtimeInfo:
  approximateUsage:
    milliDcuSeconds: '720000'
    shuffleStorageGbSeconds: '72000'
    milliAcceleratorSeconds: '120000'
```

Use of other Google Cloud resources
-----------------------------------

Your Serverless for Apache Spark workload can optionally use the following resources, each billed at its own pricing, including but not limited to:

- [Cloud Storage](/storage/pricing)
- [Global Networking](/products/networking)
- [Cloud Monitoring](/monitoring/pricing)
- [BigQuery](/bigquery/pricing)
- [Bigtable](/bigtable/pricing)

What's next
-----------

- Read the [Serverless for Apache Spark documentation](/dataproc-serverless/docs).
- Get started with [Serverless for Apache Spark](/dataproc-serverless/docs/quickstarts/spark-batch).
- Try the [Pricing calculator](/products/calculator).

#### Request a custom quote

With Google Cloud's pay-as-you-go pricing, you only pay for the services you use. Connect with our sales team to get a custom quote for your organization.

[Contact sales](/contact?direct=true)
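As a closing illustration, the `approximateUsage` metrics from the estimation example above can be turned into a rough dollar estimate by converting the milli- and per-second units into the billing units and applying hourly and monthly rates. This sketch uses the standard rates quoted in the pricing examples for simplicity, and it ignores the 1- and 5-minute minimum charges, so treat it as an approximation, not a bill:

```python
# Rates taken from the pricing examples on this page (us-central1);
# check the Cloud Platform SKUs page for current prices in your region.
DCU_HOURLY = 0.060          # standard DCU rate, per DCU-hour
SHUFFLE_GB_MONTHLY = 0.040  # standard shuffle storage, per GB-month
L4_GPU_HOURLY = 0.6720      # L4 accelerator, per GPU-hour

usage = {  # approximateUsage from `gcloud dataproc batches describe`
    "milliDcuSeconds": 720000,
    "shuffleStorageGbSeconds": 72000,
    "milliAcceleratorSeconds": 120000,
}

dcu_hours = usage["milliDcuSeconds"] / 1000 / 3600
shuffle_gb_months = usage["shuffleStorageGbSeconds"] / (30 * 24 * 3600)  # 30-day month
gpu_hours = usage["milliAcceleratorSeconds"] / 1000 / 3600

estimate = (dcu_hours * DCU_HOURLY
            + shuffle_gb_months * SHUFFLE_GB_MONTHLY
            + gpu_hours * L4_GPU_HOURLY)
print(f"Estimated cost: ${estimate:.4f}")  # Estimated cost: $0.0355
```

For this 60-second workload the estimate is a fraction of a cent; the minimum charges (one minute of DCU and shuffle usage, five minutes of accelerator usage) would dominate the actual bill.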