# Serverless for Apache Spark staging buckets

This document provides information about Serverless for Apache Spark staging buckets.

Serverless for Apache Spark creates a Cloud Storage staging bucket in your project or reuses an existing staging bucket from previous batch creation requests. This is the same default bucket created by Dataproc on Compute Engine clusters. For more information, see [Dataproc staging and temp buckets](/dataproc/docs/concepts/configuring-clusters/staging-bucket).

Serverless for Apache Spark stores workload dependencies, config files, and job driver console output in the staging bucket.

Serverless for Apache Spark sets regional staging buckets in [Cloud Storage locations](/storage/docs/locations#location-r) according to the Compute Engine zone where your workload is deployed, and then creates and manages these project-level, per-location buckets. Staging buckets created by Serverless for Apache Spark are shared among workloads in the same region, and are created with a Cloud Storage [soft delete retention](/storage/docs/soft-delete#retention-duration) duration set to 0 seconds.
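As a quick check, the staging buckets that Serverless for Apache Spark manages in a project can be listed by filtering bucket names on the `dataproc-staging-` prefix. This is a minimal sketch; it assumes the gcloud CLI is installed and authenticated against the target project:

```shell
# List buckets in the active project and keep only the Serverless for
# Apache Spark staging buckets, which use the "dataproc-staging-" prefix.
gcloud storage ls | grep '^gs://dataproc-staging-'
```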
To locate the Dataproc default staging bucket, in the Google Cloud console, go to [Cloud Storage](https://console.cloud.google.com/storage/browser) and filter the results using the `dataproc-staging-` prefix.

Last updated 2025-08-25 UTC.
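The 0-second soft delete retention can be confirmed by describing one of these buckets. The bucket name below is a hypothetical placeholder, and the `soft_delete_policy` field name in the format string is an assumption about the gcloud storage output schema:

```shell
# Show the soft delete policy of a staging bucket (placeholder name).
# Buckets created by Serverless for Apache Spark are expected to report
# a retention duration of 0 seconds.
gcloud storage buckets describe gs://dataproc-staging-us-central1-12345-abcdef \
    --format="yaml(soft_delete_policy)"
```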