XML API multipart uploads
This page discusses XML API multipart uploads in Cloud Storage. This upload method uploads files in parts and then assembles them into a single object with a final request. XML API multipart uploads are compatible with Amazon S3 multipart uploads.

Note: Within the JSON API, there is an unrelated type of single-request upload also called a "multipart upload".
Overview
An XML API multipart upload lets you upload data in multiple parts and then assemble them into a final object. This behavior has several advantages, particularly for large files:
- You can upload parts simultaneously, reducing the time it takes to upload the data in its entirety.
- If one of the upload operations fails, you only have to re-upload a portion of the overall object instead of restarting from the beginning.
- Since the total file size is not specified in advance, you can use XML API multipart uploads for streaming uploads or for compressing data on the fly while uploading.
An XML API multipart upload has three required steps:
1. Initiate the upload using a POST request, which includes specifying any metadata that the completed object should have. The response returns an UploadId that you use in all subsequent requests associated with the upload.
2. Upload the data using one or more PUT requests.
3. Complete the upload using a POST request. This request overwrites any existing object in the bucket with the same name.
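The following is a minimal sketch of these three requests in Python. It assumes you already hold an OAuth 2.0 access token with write access to the bucket; the bucket name, object name, token, and part data are placeholders, and real parts must also satisfy the part size limits described under Considerations below:

# Sketch of an XML API multipart upload with plain HTTP requests.
# The bucket, object, token, and part contents are hypothetical, and the
# tiny parts shown here would be rejected by the real minimum part size.
from concurrent.futures import ThreadPoolExecutor

import requests

BASE = "https://storage.googleapis.com/example-bucket/example-object"
HEADERS = {"Authorization": "Bearer ACCESS_TOKEN"}

# Step 1: initiate the upload; the XML response contains an <UploadId>.
init = requests.post(BASE + "?uploads", headers=HEADERS)
upload_id = init.text.split("<UploadId>")[1].split("</UploadId>")[0]

# Step 2: upload parts with PUT requests; parts may be sent concurrently.
# Keep the ETag response header of each part for the completion request.
def upload_part(part):
    number, data = part
    resp = requests.put(
        f"{BASE}?partNumber={number}&uploadId={upload_id}",
        headers=HEADERS,
        data=data,
    )
    return number, resp.headers["ETag"]

with ThreadPoolExecutor() as pool:
    etags = list(pool.map(upload_part, [(1, b"first part"), (2, b"second part")]))

# Step 3: complete the upload by listing every part in ascending order.
parts_xml = "".join(
    f"<Part><PartNumber>{n}</PartNumber><ETag>{t}</ETag></Part>"
    for n, t in sorted(etags)
)
requests.post(
    f"{BASE}?uploadId={upload_id}",
    headers=HEADERS,
    data=f"<CompleteMultipartUpload>{parts_xml}</CompleteMultipartUpload>",
)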
There is no limit to how long a multipart upload and its uploaded parts can remain unfinished or idle in a bucket.

- Successfully uploaded parts count toward your monthly storage usage.
- You can avoid a buildup of abandoned multipart uploads by using Object Lifecycle Management to automatically remove multipart uploads when they reach a specified age, as sketched below.
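As one possible configuration, here is a minimal sketch using the Python client library, with a hypothetical bucket name and a seven-day cutoff chosen for illustration. The AbortIncompleteMultipartUpload lifecycle action aborts multipart uploads once they exceed the given age; note that assigning lifecycle_rules replaces the bucket's existing rules:

# Sketch: abort incomplete multipart uploads once they are 7 days old.
# The bucket name is a placeholder; pick an age that fits your use case.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-bucket")

# Caution: this assignment replaces any lifecycle rules already set on
# the bucket; read and extend bucket.lifecycle_rules to preserve them.
bucket.lifecycle_rules = [
    {
        "action": {"type": "AbortIncompleteMultipartUpload"},
        "condition": {"age": 7},
    }
]
bucket.patch()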
Considerations
The following limitations apply to using XML API multipart uploads:
- There are limits to the minimum size a part can be, the maximum size a part can be, and the number of parts used to assemble the completed upload.
- Preconditions are not supported in the requests.
- MD5 hashes don't exist for objects uploaded using this method.
- This upload method is not supported in the Google Cloud console or the Google Cloud CLI.
Keep in mind the following when working with XML API multipart uploads:
- XML API multipart uploads have specific IAM permissions. If you use custom IAM roles, you should ensure those roles have the permissions you need.
- While you can initiate an upload and upload parts, the request to complete the upload fails if it would overwrite an object that has a hold on it or an unfulfilled retention period.
- You can list ongoing uploads in a bucket (see the sketch after this list), but only a completed upload appears in the normal list of objects in the bucket.
- An uploaded part can be subject to early deletion charges if it is never used.
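As an illustration, ongoing uploads can be listed with a GET request on the bucket using the uploads query parameter. This sketch reuses the placeholder bucket and access token from the earlier example:

# Sketch: list in-progress multipart uploads in a bucket via the XML API.
# The bucket name and token are placeholders, as in the earlier example.
import requests

resp = requests.get(
    "https://storage.googleapis.com/example-bucket?uploads",
    headers={"Authorization": "Bearer ACCESS_TOKEN"},
)

# The response is a ListMultipartUploadsResult XML document; each <Upload>
# element carries the object key and UploadId of an unfinished upload.
print(resp.text)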
How client libraries use XML API multipart uploads

This section provides information about performing XML API multipart uploads with client libraries that support it.

Java

The Java client library does not support XML API multipart uploads. Instead, use parallel composite uploads.

Node.js

You can perform XML API multipart uploads using the uploadFileInChunks method. For example:
/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The path of file to upload
// const filePath = 'path/to/your/file';

// The size of each chunk to be uploaded
// const chunkSize = 32 * 1024 * 1024;

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadFileInChunksWithTransferManager() {
  // Uploads the files
  await transferManager.uploadFileInChunks(filePath, {
    chunkSizeBytes: chunkSize,
  });

  console.log(`${filePath} uploaded to ${bucketName}.`);
}

uploadFileInChunksWithTransferManager().catch(console.error);
Python

You can perform XML API multipart uploads using the upload_chunks_concurrently method. For example:

def upload_chunks_concurrently(
    bucket_name,
    source_filename,
    destination_blob_name,
    chunk_size=32 * 1024 * 1024,
    workers=8,
):
    """Upload a single file, in chunks, concurrently in a process pool."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # The path to your file to upload
    # source_filename = "local/path/to/file"

    # The ID of your GCS object
    # destination_blob_name = "storage-object-name"

    # The size of each chunk. The performance impact of this value depends on
    # the use case. The remote service has a minimum of 5 MiB and a maximum of
    # 5 GiB.
    # chunk_size = 32 * 1024 * 1024 (32 MiB)

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case. Each additional process
    # occupies some CPU and memory resources until finished. Threads can be used
    # instead of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)

    transfer_manager.upload_chunks_concurrently(
        source_filename, blob, chunk_size=chunk_size, max_workers=workers
    )

    print(f"File {source_filename} uploaded to {destination_blob_name}.")
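As a usage sketch, with placeholder resource names, the sample function can be invoked directly:

# Hypothetical invocation; substitute your own bucket, file, and object names.
upload_chunks_concurrently(
    bucket_name="your-bucket-name",
    source_filename="local/path/to/file",
    destination_blob_name="storage-object-name",
)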
[[["Leicht verständlich","easyToUnderstand","thumb-up"],["Mein Problem wurde gelöst","solvedMyProblem","thumb-up"],["Sonstiges","otherUp","thumb-up"]],[["Schwer verständlich","hardToUnderstand","thumb-down"],["Informationen oder Beispielcode falsch","incorrectInformationOrSampleCode","thumb-down"],["Benötigte Informationen/Beispiele nicht gefunden","missingTheInformationSamplesINeed","thumb-down"],["Problem mit der Übersetzung","translationIssue","thumb-down"],["Sonstiges","otherDown","thumb-down"]],["Zuletzt aktualisiert: 2025-08-18 (UTC)."],[],[],null,["# XML API multipart uploads\n\nThis page discusses XML API multipart uploads in Cloud Storage. This upload\nmethod uploads files in parts and then assembles them into a single object using\na final request. XML API multipart uploads are compatible with Amazon S3\nmultipart uploads.\n| **Note:** Within the [JSON API](/storage/docs/json_api), there is an unrelated [type of single-request upload](/storage/docs/uploads-downloads#uploads) also called a \"multipart upload\".\n\nOverview\n--------\n\nAn *XML API multipart upload* lets you upload data in multiple parts and\nthen assemble them into a final object. This behavior has several advantages,\nparticularly for large files:\n\n- You can upload parts simultaneously, reducing the time it takes to upload the\n data in its entirety.\n\n- If one of the upload operations fails, you only have to re-upload a portion\n of the overall object, instead of restarting from the beginning.\n\n- Since the total file size is not specified in advance, you can use XML API\n multipart uploads for [streaming uploads](/storage/docs/streaming-uploads) or for compressing data\n on-the-fly while uploading.\n\nAn XML API multipart upload has three required steps:\n\n1. [Initiate the upload](/storage/docs/xml-api/post-object-multipart) using a `POST` request, which includes specifying\n any metadata that the completed object should have. The response returns an\n [`UploadId`](/storage/docs/xml-api/reference-headers#uploadid-multipart) that you use in all subsequent requests associated with\n the upload.\n\n2. [Upload the data](/storage/docs/xml-api/put-object-multipart) using one or more `PUT` requests.\n\n3. [Complete the upload](/storage/docs/xml-api/post-object-complete) using a `POST` request. 
This request overwrites\n any existing object in the bucket with the same name.\n\nThere is no limit to how long a multipart upload and its uploaded parts can\nremain unfinished or idle in a bucket.\n\n- Successfully uploaded parts count toward your [monthly storage usage](/storage/pricing#storage-pricing).\n- You can avoid a buildup of abandoned multipart uploads by using [Object Lifecycle Management](/storage/docs/lifecycle#abort-mpu) to automatically remove multipart uploads when they reach a specified age.\n\nConsiderations\n--------------\n\nThe following limitations apply to using XML API multipart uploads:\n\n- There are [limits](/storage/quotas#requests) to the minimum size a part can be, the maximum size a part can be, and the number of parts used to assemble the completed upload.\n- [Preconditions](/storage/docs/request-preconditions) are not supported in the requests.\n- [MD5 hashes](/storage/docs/metadata#md5) don't exist for objects uploaded using this method.\n- This upload method is not supported in the Google Cloud console or the Google Cloud CLI.\n\nKeep in mind the following when working with XML API multipart uploads:\n\n- XML API multipart uploads have [specific IAM permissions](/storage/docs/access-control/iam-permissions#multipart-uploads).\n If you use [custom IAM roles](/iam/docs/creating-custom-roles), you should ensure those\n roles have the permissions you need.\n\n- While you can initiate an upload and upload parts, the request to\n complete the upload fails if it would overwrite an object that has\n a [hold](/storage/docs/object-holds) on it or an unfulfilled [retention period](/storage/docs/bucket-lock).\n\n- You can [list ongoing uploads](/storage/docs/xml-api/get-bucket-uploads) in a bucket, but only a completed upload\n appears in the normal list of objects in the bucket.\n\n- An uploaded part can be subject to [early deletion charges](/storage/pricing#early-delete) if it is\n never used.\n\nHow client libraries use XML API multipart uploads\n--------------------------------------------------\n\nThis section provides information about performing XML API multipart uploads\nwith client libraries that support it. \n\n### Client libraries\n\n\n### Java\n\n\nFor more information, see the\n[Cloud Storage Java API\nreference documentation](https://cloud.google.com/java/docs/reference/google-cloud-storage/latest/overview).\n\n\nTo authenticate to Cloud Storage, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for client libraries](/storage/docs/authentication#client-libs).\n\nThe Java client library does not support XML API multipart uploads. Instead, use\n[parallel composite uploads](/storage/docs/parallel-composite-uploads).\n\n### Node.js\n\n\nFor more information, see the\n[Cloud Storage Node.js API\nreference documentation](https://cloud.google.com/nodejs/docs/reference/storage/latest).\n\n\nTo authenticate to Cloud Storage, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for client libraries](/storage/docs/authentication#client-libs).\n\nYou can perform XML API multipart uploads using the\n[`uploadFileInChunks`](https://googleapis.dev/nodejs/storage/latest/TransferManager.html#uploadFileInChunks)\nmethod. 
For example: \n\n /**\n * TODO(developer): Uncomment the following lines before running the sample.\n */\n // The ID of your GCS bucket\n // const bucketName = 'your-unique-bucket-name';\n\n // The path of file to upload\n // const filePath = 'path/to/your/file';\n\n // The size of each chunk to be uploaded\n // const chunkSize = 32 * 1024 * 1024;\n\n // Imports the Google Cloud client library\n const {Storage, TransferManager} = require('https://cloud.google.com/nodejs/docs/reference/storage/latest/overview.html');\n\n // Creates a client\n const storage = new Storage();\n\n // Creates a transfer manager client\n const transferManager = new https://cloud.google.com/nodejs/docs/reference/storage/latest/storage/transfermanager.html(storage.bucket(bucketName));\n\n async function uploadFileInChunksWithTransferManager() {\n // Uploads the files\n await transferManager.https://cloud.google.com/nodejs/docs/reference/storage/latest/storage/transfermanager.html(filePath, {\n chunkSizeBytes: chunkSize,\n });\n\n console.log(`${filePath} uploaded to ${bucketName}.`);\n }\n\n uploadFileInChunksWithTransferManager().catch(console.error);\n\n### Python\n\n\nFor more information, see the\n[Cloud Storage Python API\nreference documentation](https://cloud.google.com/python/docs/reference/storage/latest).\n\n\nTo authenticate to Cloud Storage, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for client libraries](/storage/docs/authentication#client-libs).\n\nYou can perform XML API multipart uploads using the\n[`upload_chunks_concurrently`](/python/docs/reference/storage/latest/google.cloud.storage.transfer_manager#google_cloud_storage_transfer_manager_upload_chunks_concurrently)\nmethod. For example: \n\n def upload_chunks_concurrently(\n bucket_name,\n source_filename,\n destination_blob_name,\n chunk_size=32 * 1024 * 1024,\n workers=8,\n ):\n \"\"\"Upload a single file, in chunks, concurrently in a process pool.\"\"\"\n # The ID of your GCS bucket\n # bucket_name = \"your-bucket-name\"\n\n # The path to your file to upload\n # source_filename = \"local/path/to/file\"\n\n # The ID of your GCS object\n # destination_blob_name = \"storage-object-name\"\n\n # The size of each chunk. The performance impact of this value depends on\n # the use case. The remote service has a minimum of 5 MiB and a maximum of\n # 5 GiB.\n # chunk_size = 32 * 1024 * 1024 (32 MiB)\n\n # The maximum number of processes to use for the operation. The performance\n # impact of this value depends on the use case. Each additional process\n # occupies some CPU and memory resources until finished. 
Threads can be used\n # instead of processes by passing `worker_type=transfer_manager.THREAD`.\n # workers=8\n\n from google.cloud.storage import https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.client.Client.html, https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.transfer_manager.html\n\n storage_client = Client()\n bucket = storage_client.https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.client.Client.html#google_cloud_storage_client_Client_bucket(bucket_name)\n blob = bucket.blob(destination_blob_name)\n\n https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.transfer_manager.html.https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.transfer_manager.html(\n source_filename, blob, chunk_size=chunk_size, max_workers=workers\n )\n\n print(f\"File {source_filename} uploaded to {destination_blob_name}.\")\n\n\u003cbr /\u003e\n\nWhat's next\n-----------\n\n- Explore additional [uploading methods](/storage/docs/uploads-downloads#uploads) for Cloud Storage.\n- Learn about [truncated exponential backoff](/storage/docs/retry-strategy) and when to retry requests."]]