[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-25。"],[],[],null,["# Storage batch operations\n\n| Storage batch operations is available only if you've configured [Storage Intelligence](/storage/docs/storage-intelligence/overview).\n\nThis document describes storage batch operations, a\nCloud Storage capability that lets you perform operations on billions of\nobjects in a serverless manner. Using\nstorage batch operations, you can automate large-scale API\noperations on billions of objects, reducing the development time required to\nwrite and maintain scripts for each request.\n\nTo learn how to create storage batch operations jobs, see\n[Create and manage storage batch operations jobs](/storage/docs/batch-operations/create-manage-batch-operation-jobs).\n\nOverview\n--------\n\nStorage batch operations let you run one of four transformations on\nmultiple objects at once: placing an object hold, deleting an object,\nupdating object metadata, and rewriting objects. To use\nstorage batch operations, you create a [job configuration](#job-configurations) that\ndefines what transformations should be applied to which objects.\n\nCreating a batch operation returns a long-running operation\n(LRO) that indicates the status of your request: whether the transformation has\nbeen applied to all specified objects in your request.\n\n### Benefits\n\n- **Scalability**: Perform transformations on millions of objects with a single storage batch operations job.\n- **Serverless execution**: Run batch jobs in a serverless environment, eliminating the need to manage infrastructure.\n- **Automation**: Automate complex and repetitive tasks, improving operational efficiency.\n- **Reduced development time**: Avoid writing and maintaining complex custom scripts.\n- **Performance**: Complete time-sensitive operations within the required time. 
Job configurations
------------------

To [create a storage batch operations job](/storage/docs/batch-operations/create-manage-batch-operation-jobs#create-batch-operation-job), you need to set the following job configurations. Job configurations are parameters that control how the job is defined for different processing requirements.

- **Job name**: A unique name to identify the storage batch operations job. This is used for tracking, monitoring, and referencing the job. Job names are alphanumeric, for example, `job-01`.

- **Job description** (Optional): A brief description of the job's purpose. This helps with understanding and documenting the job details. For example, `Deletes all objects in a bucket`.

- **Bucket name**: The name of the storage bucket containing the objects to be processed. This is essential for locating the input data. For example, `my-bucket`. You can specify only one bucket name for a job.

- **Object selection**: The selection criteria that defines which objects to process. You can specify the criteria using any one of the following options:

  - **Manifest**: Create a manifest and specify its location when you create the storage batch operations job. The manifest is a CSV file, uploaded to Google Cloud, that contains one object or a list of objects that you want to process. Each row in the manifest must include the `bucket` and `name` of the object. You can optionally specify the `generation` of the object. If you don't specify the `generation`, the current version of the object is used.

    The file must include a header row of the following format:

    `bucket,name,generation`

    The following is an example of the manifest:

    ```
    bucket,name,generation
    bucket_1,object_1,generation_1
    bucket_1,object_2,generation_2
    bucket_1,object_3,generation_3
    ```

    | **Caution:** Ensure the manifest only includes objects from the bucket provided in the storage batch operations job. Rows referencing other buckets are ignored.

    You can also create a manifest using Storage Insights datasets. For details, see [Create a manifest using Storage Insights datasets](/storage/docs/batch-operations/create-manage-batch-operation-jobs#create-manifest-using-insights-datasets). A sketch of building and uploading a manifest programmatically appears after this list.

  - **Object prefixes**: Specify a list of prefixes to filter objects within the bucket. Only objects with these prefixes are processed. If the list is empty, all objects in the bucket are processed.

- **Job type**: Storage batch operations supports the following job types. Each job runs a single transformation per batch operation.

  - **Object deletion**: You can [delete objects](/storage/docs/deleting-objects) within a bucket. This is crucial for cost optimization, data lifecycle management, and compliance with data deletion policies.

    | **Caution:** By default, Cloud Storage retains soft-deleted objects for a duration of seven days. If you accidentally delete objects, you can restore them during this duration. However, if you have disabled [soft delete](/storage/docs/soft-delete) for your bucket, you cannot recover deleted objects.

  - **Metadata updates**: You can modify [object metadata](/storage/docs/metadata#editable). This includes updating custom metadata, storage class, and other object properties.

  - **Object hold updates**: You can enable or disable [object holds](/storage/docs/object-holds). Object holds prevent objects from being deleted or modified, which is essential for compliance and data retention purposes.

  - **Object encryption key updates**: You can manage the [customer-managed encryption keys](/storage/docs/encryption/customer-managed-keys) for one or more objects. This includes applying or changing encryption keys using the [rewrite object](/storage/docs/json_api/v1/objects/rewrite) method.
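
As a complement to the manifest format shown earlier, the following sketch builds a manifest for all objects under a prefix and uploads it to the same bucket using the Python client library for Cloud Storage (`google-cloud-storage`). The bucket name, prefix, and manifest path are placeholders, and the batch job itself still needs to be created separately, as described in [Create and manage storage batch operations jobs](/storage/docs/batch-operations/create-manage-batch-operation-jobs).

```
import csv
import io

from google.cloud import storage

# Placeholder names; replace with your own bucket, prefix, and manifest path.
BUCKET_NAME = "my-bucket"
PREFIX = "logs/2024/"
MANIFEST_PATH = "manifests/batch-job-manifest.csv"

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# Write the required header row, then one row per object. The generation
# column is optional; including it pins each row to a specific object version.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["bucket", "name", "generation"])
for blob in client.list_blobs(BUCKET_NAME, prefix=PREFIX):
    writer.writerow([BUCKET_NAME, blob.name, blob.generation])

# Upload the manifest to the same bucket so the batch job can reference it.
bucket.blob(MANIFEST_PATH).upload_from_string(
    buffer.getvalue(), content_type="text/csv"
)
```
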
Limitations
-----------

Storage batch operations has the following limitations:

- Storage batch operations jobs have a maximum lifetime of 14 days. Any ongoing job that doesn't complete within 14 days of its creation is automatically cancelled.

- We don't recommend running more than 20 concurrent batch operations jobs on the same bucket.

- Storage batch operations is not supported on the following buckets:

  - Buckets that have [Requester Pays](/storage/docs/requester-pays) enabled.

  - Buckets located in the `us-west8` region.

What's next
-----------

- [Create and manage storage batch operations jobs](/storage/docs/batch-operations/create-manage-batch-operation-jobs)