When you create a cluster, HDFS is used as the default filesystem. You can
override this behavior by setting defaultFS to a Cloud Storage bucket. By
default, Dataproc also creates a Cloud Storage staging and a
Cloud Storage temp bucket in your project or reuses existing
Dataproc-created staging and temp buckets from previous cluster
creation requests.
Staging bucket: Used to stage cluster job dependencies, job driver output,
and cluster config files. Also receives output from snapshot diagnostic
data collection.
Temp bucket: Used to store ephemeral cluster and job data,
such as Spark and MapReduce history files. Also stores
checkpoint diagnostic data
collected during the lifecycle of a cluster.
If you do not specify a staging or temp bucket when you create a cluster,
Dataproc sets a Cloud Storage location in US, ASIA,
or EU for your cluster's staging and temp buckets
according to the Compute Engine zone where your cluster is deployed,
and then creates and manages these project-level, per-location buckets.
Dataproc-created staging and temp buckets are
shared among clusters in the same region, and are created with a
Cloud Storage soft delete retention
duration set to 0 seconds.
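The names of these Dataproc-created buckets start with the
"dataproc-staging-" and "dataproc-temp-" prefixes. As an illustrative way to
find them from the command line (the project ID is a placeholder), list your
project's buckets and filter on those prefixes:

    gcloud storage ls --project=project-id | grep -E 'dataproc-(staging|temp)-'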
The temp bucket contains ephemeral data, and has a TTL of 90 days.
The staging bucket, which can contain configuration data
and dependency files needed by multiple clusters,
does not have a TTL. However, you can apply a lifecycle rule to
your dependency files
(files with a ".jar" filename extension located in the staging bucket folder)
to schedule the removal of your dependency files when they are no longer
needed by your clusters.
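For example, a minimal sketch of such a rule, saved in a file such as
lifecycle.json (the 30-day age is an illustrative value, not a
recommendation from this page), deletes objects ending in ".jar" 30 days
after they are created:

    {
      "rule": [
        {
          "action": {"type": "Delete"},
          "condition": {"age": 30, "matchesSuffix": [".jar"]}
        }
      ]
    }

You can then apply the rule to the staging bucket (the bucket name is a
placeholder):

    gcloud storage buckets update gs://dataproc-staging-bucket-name \
        --lifecycle-file=lifecycle.json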
Create your own staging and temp buckets
Instead of relying on the creation of default
staging and temp buckets, you can specify existing Cloud Storage buckets that
Dataproc uses as your cluster's staging and temp buckets.
gcloud command
Run the gcloud dataproc clusters create command with the
--bucket
and/or
--temp-bucket
flags locally in a terminal window or in
Cloud Shell
to specify your cluster's staging and/or temp bucket.
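For example (the cluster name, region, and bucket names are placeholders):

    gcloud dataproc clusters create cluster-name \
        --region=region \
        --bucket=staging-bucket-name \
        --temp-bucket=temp-bucket-name \
        other args ...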
Console
In the Google Cloud console, open the Dataproc
Create a cluster
page. Select the Customize cluster panel, then
use the File storage field to specify or select the cluster's staging
bucket.
Note: Currently, specifying a temp bucket using the Google Cloud console
is not supported.
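REST API
You can also specify these buckets with the Dataproc API by setting the
configBucket and tempBucket fields of the ClusterConfig in a clusters.create
request. The following is a minimal sketch of the relevant part of the
request body; the bucket names are placeholders, and other required cluster
configuration is omitted:

    {
      "clusterName": "cluster-name",
      "config": {
        "configBucket": "staging-bucket-name",
        "tempBucket": "temp-bucket-name"
      }
    }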
Dataproc uses a defined folder structure for Cloud Storage buckets
attached to clusters. Dataproc also supports attaching more than one
cluster to a Cloud Storage bucket. The folder structure used for saving job
driver output in Cloud Storage is:
cloud-storage-bucket-name
  - google-cloud-dataproc-metainfo
    - list of cluster IDs
      - list of job IDs
        - list of output logs for a job
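For example, to browse this structure for a given cluster, you can list the
objects under the metainfo prefix (the bucket name and cluster ID are
placeholders taken from the structure above):

    gcloud storage ls gs://cloud-storage-bucket-name/google-cloud-dataproc-metainfo/cluster-id/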
You can use the gcloud command line tool, Dataproc API, or
Google Cloud console to list the names of a cluster's staging and temp buckets.
Console
View cluster details, which include the name of the cluster's staging bucket, on the
Dataproc Clusters
page in the Google Cloud console.
On the Google Cloud console
Cloud Storage Browser
page, filter results that contain "dataproc-temp-".
gcloud command
Run the
gcloud dataproc clusters describe
command locally in a terminal window or in
Cloud Shell.
The staging and temp buckets associated with your cluster are listed in the
output.
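For example (the cluster name and region are placeholders; the output is
abridged to the relevant fields):

    gcloud dataproc clusters describe cluster-name --region=region

    clusterName: cluster-name
    config:
      configBucket: dataproc-staging-...
      ...
      tempBucket: dataproc-temp-...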
defaultFS
You can set core:fs.defaultFS to a Cloud Storage bucket location (gs://defaultFS-bucket-name) to make Cloud Storage the default filesystem. This also sets core:fs.gs.reported.permissions, the reported permission returned by the Cloud Storage connector for all files, to 777.
If Cloud Storage is not set as the default filesystem, HDFS will be used, and the core:fs.gs.reported.permissions property will return 700, the default value.
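For example, the following command creates a cluster that uses Cloud Storage
as the default filesystem (the cluster name, region, and bucket names are
placeholders):

    gcloud dataproc clusters create cluster-name \
        --properties=core:fs.defaultFS=gs://defaultFS-bucket-name \
        --region=region \
        --bucket=staging-bucket-name \
        --temp-bucket=temp-bucket-name \
        other args ...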
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-25 UTC."],[[["\u003cp\u003eDataproc uses HDFS as the default filesystem when creating a cluster, but you can override this by setting a Cloud Storage bucket as the defaultFS.\u003c/p\u003e\n"],["\u003cp\u003eDataproc creates or reuses Cloud Storage staging and temp buckets for clusters, with the staging bucket storing job dependencies and the temp bucket storing ephemeral data.\u003c/p\u003e\n"],["\u003cp\u003eUsers can specify their own existing Cloud Storage buckets for staging and temp instead of relying on Dataproc's default creation, which will allow for more control over data storage.\u003c/p\u003e\n"],["\u003cp\u003eYou can use the gcloud CLI, REST API, or the Google Cloud console to define and also list the names of the staging and temp buckets associated with your cluster.\u003c/p\u003e\n"],["\u003cp\u003eWhen using Assured Workloads for regulatory compliance, the cluster, VPC network, and Cloud Storage buckets must be contained within that specific environment.\u003c/p\u003e\n"]]],[],null,["# Dataproc staging and temp buckets\n\nWhen you create a cluster, HDFS is used as the default filesystem. You can\noverride this behavior by setting the defaultFS as a Cloud Storage [bucket](/storage/docs/buckets). By\ndefault, Dataproc also creates a Cloud Storage staging and a\nCloud Storage temp bucket in your project or reuses existing\nDataproc-created staging and temp buckets from previous cluster\ncreation requests.\n\n- Staging bucket: Used to stage cluster job dependencies,\n [job driver output](/dataproc/docs/guides/dataproc-job-output),\n and cluster config files. Also receives output from\n [Snapshot diagnostic data collection](/dataproc/docs/support/diagnose-clusters#snapshot_diagnostic_data_collection).\n\n- Temp bucket: Used to store ephemeral cluster and jobs data,\n such as Spark and MapReduce history files. Also stores\n [checkpoint diagnostic data](/dataproc/docs/support/diagnose-clusters#checkpoint_diagnostic_data_collection)\n collected during the lifecycle of a cluster.\n\nIf you do not specify a staging or temp bucket when you create a cluster,\nDataproc sets a [Cloud Storage location in US, ASIA,\nor EU](/storage/docs/locations#location-mr) for your cluster's staging and temp buckets\naccording to the Compute Engine zone where your cluster is deployed,\nand then creates and manages these project-level, per-location buckets.\nDataproc-created staging and temp buckets are\nshared among clusters in the same region, and are created with a\nCloud Storage [soft delete retention](/storage/docs/soft-delete#retention-duration)\nduration set to 0 seconds.\n\nThe temp bucket contains ephemeral data, and has a TTL of 90 days.\nThe staging bucket, which can contain configuration data\nand dependency files needed by multiple clusters,\ndoes not have a TTL. 
However, you can [apply a lifecycle rule to\nyour dependency files](/storage/docs/lifecycle#matchesprefix-suffix)\n(files with a \".jar\" filename extension located in the staging bucket folder)\nto schedule the removal of your dependency files when they are no longer\nneeded by your clusters.\n| To locate the default Dataproc staging and temp buckets using the Google Cloud console **[Cloud Storage Browser](https://console.cloud.google.com/storage/browser)**, filter results using the \"dataproc-staging-\" and \"dataproc-temp-\" prefixes.\n\nCreate your own staging and temp buckets\n----------------------------------------\n\nInstead of relying on the creation of a default\nstaging and temp bucket, you can specify existing Cloud Storage buckets that\nDataproc will use as your cluster's staging and temp bucket.\n**Note:** When you use an [Assured Workloads environment](/assured-workloads/docs/deploy-resource) for regulatory compliance, the cluster, VPC network, and Cloud Storage buckets must be contained within the Assured Workloads environment. \n\n### gcloud command\n\nRun the `gcloud dataproc clusters create` command with the\n[`--bucket`](/sdk/gcloud/reference/dataproc/clusters/create#--bucket)\nand/or\n[`--temp-bucket`](/sdk/gcloud/reference/dataproc/clusters/create#--temp-bucket)\nflags locally in a terminal window or in\n[Cloud Shell](https://console.cloud.google.com/?cloudshell=true)\nto specify your cluster's staging and/or temp bucket. \n\n```\ngcloud dataproc clusters create cluster-name \\\n --region=region \\\n --bucket=bucket-name \\\n --temp-bucket=bucket-name \\\n other args ...\n```\n\n### REST API\n\nUse the [`ClusterConfig.configBucket`](/dataproc/docs/reference/rest/v1/ClusterConfig#FIELDS.config_bucket) and\n[`ClusterConfig.tempBucket`](/dataproc/docs/reference/rest/v1/ClusterConfig#FIELDS.temp_bucket)\nfields\nin a [clusters.create](/dataproc/docs/reference/rest/v1/projects.regions.clusters/create)\nrequest to specify your cluster's staging and temp buckets.\n\n### Console\n\nIn the Google Cloud console, open the Dataproc\n[Create a cluster](https://console.cloud.google.com/dataproc/clustersAdd)\npage. Select the Customize cluster panel, then\nuse the File storage field to specify or select the cluster's staging\nbucket.\n\nNote: Currently, specifying a temp bucket using the Google Cloud console\nis not supported.\n\nDataproc uses a defined folder structure for Cloud Storage buckets\nattached to clusters. Dataproc also supports attaching more than one\ncluster to a Cloud Storage bucket. The folder structure used for saving job\ndriver output in Cloud Storage is: \n\n```\ncloud-storage-bucket-name\n - google-cloud-dataproc-metainfo\n - list of cluster IDs\n - list of job IDs\n - list of output logs for a job\n```\n\nYou can use the `gcloud` command line tool, Dataproc API, or\nGoogle Cloud console to list the name of a cluster's staging and temp buckets. 
\n\n### Console\n\n- \\\\View cluster details, which includeas the name of the cluster's staging bucket, on the Dataproc [Clusters](https://console.cloud.google.com/project/_/dataproc/clusters) page in the Google Cloud console.\n- On the Google Cloud console **[Cloud Storage Browser](https://console.cloud.google.com/storage/browser)** page, filter results that contain \"dataproc-temp-\".\n\n### gcloud command\n\nRun the\n[`gcloud dataproc clusters describe`](/sdk/gcloud/reference/dataproc/clusters/describe)\ncommand locally in a terminal window or in\n[Cloud Shell](https://console.cloud.google.com/?cloudshell=true).\nThe staging and temp buckets associated with your cluster are listed in the\noutput. \n\n```\ngcloud dataproc clusters describe cluster-name \\\n --region=region \\\n...\nclusterName: cluster-name\nclusterUuid: daa40b3f-5ff5-4e89-9bf1-bcbfec ...\nconfig:\n configBucket: dataproc-...\n ...\n tempBucket: dataproc-temp...\n```\n\n### REST API\n\nCall [clusters.get](/dataproc/docs/reference/rest/v1/projects.regions.clusters/get)\nto list the cluster details, including the name of the cluster's staging and temp buckets. \n\n```\n{\n \"projectId\": \"vigilant-sunup-163401\",\n \"clusterName\": \"cluster-name\",\n \"config\": {\n \"configBucket\": \"dataproc-...\",\n...\n \"tempBucket\": \"dataproc-temp-...\",\n}\n```\n\ndefaultFS\n---------\n\nYou can set `core:fs.defaultFS` to a bucket location in Cloud Storage (`gs://`\u003cvar translate=\"no\"\u003edefaultFS-bucket-name\u003c/var\u003e) to set Cloud Storage as the default filesystem. This also sets `core:fs.gs.reported.permissions`, the reported permission returned by the Cloud Storage connector for all files, to `777`.\n| **Note:** When you use an [Assured Workloads environment](/assured-workloads/docs/deploy-resource) for regulatory compliance, the cluster, VPC network, and Cloud Storage buckets must be contained within the Assured Workloads environment.\n\nIf Cloud Storage is not set as the default filesystem, HDFS will be used, and the `core:fs.gs.reported.permissions` property will return `700`, the default value. \n\n```\ngcloud dataproc clusters create cluster-name \\\n --properties=core:fs.defaultFS=gs://defaultFS-bucket-name \\\n --region=region \\\n --bucket=staging-bucket-name \\\n --temp-bucket=temp-bucket-name \\\n other args ...\n```\n\n\u003cbr /\u003e\n\n| **Note:** Currently, console display of the defaultFS bucket is not supported."]]