Cloud Logging for storage batch operations

This page describes how to configure and view storage batch operations logs by using Cloud Logging. A storage batch operations job can be configured to generate Cloud Logging log entries, where each log entry corresponds to the attempted transformation of an object.

Storage batch operations supports logging to both Cloud Logging and Cloud Audit Logs for Cloud Storage. While both options capture storage batch operations actions, we recommend using Cloud Logging. Cloud Logging provides a centralized platform for log analysis, real-time monitoring, and advanced filtering, offering a robust solution for managing and understanding your batch operation activity.

Before you begin

Verify that you have access to Cloud Logging. To view your Cloud Logging data, we recommend granting the Logs Viewer (roles/logging.viewer) Identity and Access Management (IAM) role, which provides the permissions required to view logs. For more information about Logging access permissions, see Access control with IAM.

You can verify and grant IAM roles from the IAM page in the Google Cloud console or by using the gcloud CLI.

Understand logging details

When logging is enabled, storage batch operations captures the following details:

  • Loggable action: The loggable action value is always transform.

  • Loggable states: For each action, you can choose to log one or both of the following states:

    • SUCCEEDED: The action was successful.
    • FAILED: The action failed.

Enable logging

To enable logging, specify the actions and the states to log.

Command line

When creating a storage batch operations job with gcloud storage batch-operations jobs create, use the --log-actions and --log-action-states flags to enable logging.

gcloud storage batch-operations jobs create JOB_NAME \
  --manifest-location=MANIFEST_LOCATION \
  --delete-object \
  --log-actions=transform \
  --log-action-states=LOG_ACTION_STATES

Where:

  • JOB_NAME is the name you want to give your job. For example, my-job.
  • MANIFEST_LOCATION is the location of your manifest. For example, gs://my-bucket/manifest.csv.
  • LOG_ACTION_STATES is a comma-separated list of states to log. For example, succeeded,failed.

REST API

Create a storage batch operations job with a LoggingConfig.

{
  "loggingConfig": {
    "logActions": ["TRANSFORM"],
    "logActionStates": ["LOG_ACTION_STATES"]
  }
}

Where:

LOG_ACTION_STATES is the list of states to log, specified as one or more array values. For example, "SUCCEEDED", "FAILED".
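
If you assemble the request body programmatically, a minimal sketch like the following builds the loggingConfig shown above. The build_logging_config helper and its validation are illustrative assumptions, not part of the API; only the field names and values come from the example.

```python
import json

def build_logging_config(states):
    """Build the loggingConfig portion of a storage batch operations job body."""
    # The loggable states documented on this page.
    allowed = {"SUCCEEDED", "FAILED"}
    normalized = [s.upper() for s in states]
    unknown = set(normalized) - allowed
    if unknown:
        raise ValueError(f"unsupported log action states: {sorted(unknown)}")
    return {
        "loggingConfig": {
            # The only loggable action is the transform action.
            "logActions": ["TRANSFORM"],
            "logActionStates": normalized,
        }
    }

print(json.dumps(build_logging_config(["succeeded", "failed"]), indent=2))
```

You would merge this dict into the rest of the job creation request body before sending it.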

View logs

To view storage batch operations logs, do the following:

Console

  1. Go to the Google Cloud navigation menu and select Logging > Logs Explorer:

    Go to the Logs Explorer

  2. Select a Google Cloud project.

  3. From the Upgrade menu, switch from Legacy Logs Viewer to Logs Explorer.

  4. To filter your logs to show only storage batch operations entries, type storage_batch_operations_job into the query field and click Run query.

  5. In the Query results pane, click Edit time to change the time period for which to return results.

For more information on using the Logs Explorer, see Using the Logs Explorer.

Command line

To use the gcloud CLI to search for storage batch operations logs, use the gcloud logging read command.

Specify a filter to limit your results to storage batch operations logs.

gcloud logging read "resource.type=storage_batch_operations_job"

REST API

Use the entries.list Cloud Logging API method.

To filter your results to include only storage batch operations-related entries, use the filter field. The following is a sample JSON request object:

{
  "resourceNames": [
    "projects/my-project-name"
  ],
  "orderBy": "timestamp desc",
  "filter": "resource.type=\"storage_batch_operations_job\""
}

Where:

my-project-name is the name of your project.
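
When querying programmatically, it can help to assemble the filter string once and reuse it. The following sketch builds a filter for storage batch operations entries and optionally narrows it to a single job using the resource.labels.job_id label described in the log format section below; the helper name and the my-job job ID are placeholders.

```python
def batch_operations_filter(job_id=None):
    """Build a Cloud Logging filter for storage batch operations log entries."""
    # Base clause matches the resource type used throughout this page.
    clauses = ['resource.type="storage_batch_operations_job"']
    if job_id:
        # job_id is a resource label on storage batch operations log entries.
        clauses.append(f'resource.labels.job_id="{job_id}"')
    return " AND ".join(clauses)

print(batch_operations_filter(job_id="my-job"))
```

The resulting string can be passed as the filter field of an entries.list request or to gcloud logging read.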

Storage batch operations log format

All storage batch operations-specific fields are contained within a jsonPayload object. While the exact content of jsonPayload varies based on the job type, there is a common structure shared across all TransformActivityLog entries. This section outlines the common log fields and then details the operation-specific fields.

  • Common log fields

    The following fields appear in all logs:

    jsonPayload: {
      "@type": "type.googleapis.com/google.cloud.storagebatchoperations.logging.TransformActivityLog",
      "completeTime": "YYYY-MM-DDTHH:MM:SS.SSSSSSSSSZ",
      "status": {
        "errorMessage": "String indicating error, or empty on success",
        "errorType": "ENUM_VALUE",
        "statusCode": "ENUM_VALUE"
      },
      "logName": "projects/PROJECT_ID/logs/storagebatchoperations.googleapis.com%2Ftransform_activity",
      "receiveTimestamp": "YYYY-MM-DDTHH:MM:SS.SSSSSSSSSZ",
      "resource": {
        "labels": {
          "location": "us-central1",
          "job_id": "BATCH_JOB_ID",
          "resource_container": "RESOURCE_CONTAINER",
          // ... other labels
        },
        "type": "storagebatchoperations.googleapis.com/Job"
      },
      // Operation-specific details are nested here (for example,
      // "DeleteObject", "PutObjectHold", "RewriteObject", "PutMetadata").
      // Each operation-specific object also contains the following object:
      // "objectMetadataBefore": {
      //   "gcsObject": {
      //     "bucket": "BUCKET_NAME",
      //     "generation": "GENERATION_NUMBER",
      //     "objectKey": "OBJECT_PATH"
      //   }
      // }
    }
    

    The following table describes each of the common log fields:

    Common log fields Type Description
    @type String Specifies the type of the log entry's payload and indicates that the log represents a TransformActivityLog for storage batch operations.
    completeTime Timestamp The ISO 8601-compliant timestamp at which the operation completed.
    status Object Provides information about the result of the batch operation activity.
    status.errorMessage String An error message if the operation fails.
    status.errorType String Indicates the error type.
    status.statusCode String The status code of the operation.
    logName String The full resource name of the log, indicating the project and the log stream.
    receiveTimestamp Timestamp The timestamp when the log entry was received by the logging system.
    resource Object Information about the resource that generated the log entry.
    resource.labels Object Key-value pairs providing additional identifying information about the resource.
    resource.type String The type of resource that generated the log.
    objectMetadataBefore Object Contains metadata of the object before the batch operation was attempted.
    objectMetadataBefore.gcsObject Object Details about the object.
    objectMetadataBefore.gcsObject.bucket String The name of the bucket where the object resides.
    objectMetadataBefore.gcsObject.generation String The generation number of the object before the operation.
    objectMetadataBefore.gcsObject.objectKey String The full path of the object within the bucket.
  • Operation-specific jsonPayload contents

    The difference between log entries for different batch operations lies in the top-level object nested within the jsonPayload. Only one of the following objects is available in a given log entry, corresponding to the specific batch operation performed:

    • Delete object (DeleteObject)

      jsonPayload:
      {
        "DeleteObject": {
          "objectMetadataBefore": {
            "gcsObject": {
              "bucket": "test-bucket",
              "generation": "1678912345678901",
              "objectKey": "test_object.txt"
            }
          }
        }
      }
      
    • Put object hold (PutObjectHold)

      jsonPayload:
      {
        "PutObjectHold": {
          "objectMetadataBefore": {
            "gcsObject": {
              "bucket": "test-bucket",
              "generation": "1678912345678901",
              "objectKey": "test_object.txt"
            }
          },
          "temporaryHoldAfter": true,
          "eventBasedHoldAfter": true
        }
      }
      
    • Rewrite object (RewriteObject)

      jsonPayload:
      {
        "RewriteObject": {
          "objectMetadataBefore": {
            "gcsObject": {
              "bucket": "test-bucket",
              "generation": "1678912345678901",
              "objectKey": "test_object.txt"
            }
          },
          "kmsKeyVersionAfter": "projects/my-gcp-project/locations/us-central1/keyRings/my-keyring-01/cryptoKeys/my-encryption-key/cryptoKeyVersions/1"
        }
      }
      
    • Put metadata (PutMetadata)

      jsonPayload:
      {
        "PutMetadata": {
          "objectMetadataBefore": {
            "gcsObject": {
              "bucket": "test-bucket",
              "generation": "1678912345678901",
              "objectKey": "test_object.txt"
            }
          },
          "content_disposition_after": "attachment; filename=\"report_final.pdf\"",
          "content_encoding_after": "gzip",
          "content_language_after": "en-US",
          "content_type_after": "application/pdf",
          "cache_control_after": "public, max-age=3600",
          "custom_time_after": "2025-06-27T10:00:00Z",
          "custom_metadata_after": {
            "project": "marketing",
            "version": "2.0",
            "approvedBy": "GCP Admin"
          }
        }
      }
      

    The following table describes the operation-specific log fields:

    Operation-specific log fields Type Description
    PutObjectHold Object Indicates a hold operation on an object.
    PutObjectHold.temporaryHoldAfter Boolean If the value is true, a temporary hold was applied to the object after the storage batch operations job completed. Valid values are true or false.
    PutObjectHold.eventBasedHoldAfter Boolean If the value is true, an event-based hold was applied to the object after the storage batch operations job completed. Valid values are true or false.
    RewriteObject Object Indicates a rewrite operation on an object.
    RewriteObject.kmsKeyVersionAfter String The Cloud Key Management Service key version used after the rewrite job. This optional field is populated only if the object's encryption key changed as a result of the rewrite; it's absent if the Cloud KMS key version remained unchanged.
    PutMetadata Object Indicates a metadata update operation on an object.
    PutMetadata.content_disposition_after String Specifies the Content-Disposition header value after the completion of the PutMetadata job. It's an optional field and is only populated if the content disposition was set or modified.
    PutMetadata.content_encoding_after String Specifies the Content-Encoding header value after the completion of the PutMetadata job. It's an optional field and is only populated if the content encoding was set or modified.
    PutMetadata.content_language_after String Specifies the Content-Language header value after the completion of the PutMetadata job. It's an optional field and is only populated if the content language was set or modified.
    PutMetadata.content_type_after String Specifies the Content-Type header value after the completion of the PutMetadata job. It's an optional field and is only populated if the content type was set or modified.
    PutMetadata.cache_control_after String Specifies the Cache-Control header value after the completion of the PutMetadata job. It's an optional field and is only populated if the cache control was set or modified.
    PutMetadata.custom_time_after String Specifies the Custom-Time header value after the completion of the PutMetadata job. It's an optional field and is only populated if the custom time was set or modified.
    PutMetadata.custom_metadata_after Map (key: string, value: string) Contains a map of custom metadata key-value pairs after the transformation. This field includes any user-defined metadata that was set or modified on the object.
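
As a sketch of how these payloads can be consumed, the following helper inspects a jsonPayload dict, identifies which operation-specific object is present, and extracts the objectMetadataBefore fields. The summarize_payload helper and the sample payload are illustrative assumptions; only the key names come from the log format above.

```python
# The four operation-specific objects documented on this page.
OPERATION_KEYS = ("DeleteObject", "PutObjectHold", "RewriteObject", "PutMetadata")

def summarize_payload(payload):
    """Return the operation type and pre-operation object details from a jsonPayload."""
    # Exactly one operation-specific object appears in a given log entry.
    op = next((k for k in OPERATION_KEYS if k in payload), None)
    if op is None:
        raise ValueError("no operation-specific object found in payload")
    gcs = payload[op]["objectMetadataBefore"]["gcsObject"]
    return {
        "operation": op,
        "bucket": gcs["bucket"],
        "objectKey": gcs["objectKey"],
        "generation": gcs["generation"],
    }

# Fabricated sample payload mirroring the DeleteObject example above.
sample = {
    "@type": "type.googleapis.com/google.cloud.storagebatchoperations.logging.TransformActivityLog",
    "DeleteObject": {
        "objectMetadataBefore": {
            "gcsObject": {
                "bucket": "test-bucket",
                "generation": "1678912345678901",
                "objectKey": "test_object.txt",
            }
        }
    },
}

print(summarize_payload(sample))
```

A summary like this is convenient for aggregating results per bucket or per operation type when processing exported logs.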