Class AsyncClient (2.25.0)

A client for Google Cloud Storage offering asynchronous operations.

Example: create a client instance
  auto client = google::cloud::storage_experimental::AsyncClient();
  // Use the client.
Example: read an object range
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name) -> google::cloud::future<std::uint64_t> {
    // Read the first 8 bytes of the object.
    auto payload = (co_await client.ReadObjectRange(
                        gcs_ex::BucketName(std::move(bucket_name)),
                        std::move(object_name), /*offset=*/0, /*limit=*/8))
                       .value();
    auto contents = payload.contents();
    std::uint64_t count = 0;
    for (auto buffer : contents) {
      count += std::count(buffer.begin(), buffer.end(), '\n');
    }
    co_return count;
  };
Per-operation Overrides

In addition to the request options, which are passed on to the service to modify the request, you can specify options that override the local behavior of the library. For example, you can override the local retry policy:

auto pending = client.DeleteObject(
    "my-bucket", "my-object",
     google::cloud::Options{}
         .set<gcs::RetryPolicyOption>(
             gcs::LimitedErrorCountRetryPolicy(5).clone()));
Retry, Backoff, and Idempotency Policies

The library automatically retries requests that fail with transient errors, and follows the recommended practice of backing off between retries.

The default policies are to continue retrying for up to 15 minutes, and to use truncated exponential backoff (capped at 5 minutes), doubling the backoff period between retries. Likewise, the default idempotency policy is configured to retry all operations.

The application can override these policies when constructing objects of this class. The documentation for the constructors shows examples of this in action.
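For example (a sketch; this assumes the `gcs::RetryPolicyOption` and `gcs::BackoffPolicyOption` used for the per-operation override above also apply at construction time):

  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto client = gcs_ex::AsyncClient(
      google::cloud::Options{}
          .set<gcs::RetryPolicyOption>(
              gcs::LimitedTimeRetryPolicy(std::chrono::minutes(5)).clone())
          .set<gcs::BackoffPolicyOption>(
              // initial delay, maximum delay, scaling factor
              gcs::ExponentialBackoffPolicy(std::chrono::seconds(1),
                                            std::chrono::minutes(1), 2.0)
                  .clone()));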

Constructors

AsyncClient(Options)

Create a new client configured with options.

Parameter
Name Description
options Options

AsyncClient(std::shared_ptr< AsyncConnection >)

Create a new client using connection. This is often used for mocking.

Parameter
Name Description
connection std::shared_ptr< AsyncConnection >
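Example: inject a mock connection. This is a sketch; it assumes the library ships a gMock class named google::cloud::storage_mocks::MockAsyncConnection, so verify the header and class name in your library version.

  namespace gcs_ex = google::cloud::storage_experimental;
  auto mock =
      std::make_shared<google::cloud::storage_mocks::MockAsyncConnection>();
  // Set expectations with EXPECT_CALL(*mock, ...) before exercising the
  // client; calls made through the client are forwarded to the mock.
  auto client = gcs_ex::AsyncClient(mock);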

Functions

InsertObject(BucketName const &, std::string, Collection &&, Options)

Creates an object given its name and contents.

This function always uses single-request uploads. As the name implies, these uploads use a single RPC to upload all the data. There is no way to restart or resume these uploads if there is a partial failure. All the data must be sent again in that case.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload.
Example
  namespace gcs_ex = google::cloud::storage_experimental;
  [](gcs_ex::AsyncClient& client, std::string bucket_name,
     std::string object_name) {
    auto object = client.InsertObject(
        gcs_ex::BucketName(std::move(bucket_name)), std::move(object_name),
        std::string("Hello World!\n"));
    // Attach a callback, this is called when the upload completes.
    auto done = object.then([](auto f) {
      auto metadata = f.get();
      if (!metadata) throw std::move(metadata).status();
      std::cerr << "Object successfully inserted " << metadata->DebugString()
                << "\n";
    });
    // To simplify example, block until the operation completes.
    done.get();
  }
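The payload does not have to be a std::string; any of the Collection types listed under Parameters works. A minimal sketch with a byte vector ("my-bucket" and "my-object" are placeholders):

  std::vector<std::uint8_t> contents{0xDE, 0xAD, 0xBE, 0xEF};
  auto object = client.InsertObject(gcs_ex::BucketName("my-bucket"),
                                    "my-object", std::move(contents));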
Idempotency

This operation is only idempotent if restricted by pre-conditions.

Parameters
Name Description
bucket_name BucketName const &

the name of the bucket that will contain the object.

object_name std::string

the name of the object to be created.

contents Collection &&

the contents (media) for the new object.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

typename Collection

the type for the payload. This must be convertible to std::string, std::vector<CharType>, std::vector<std::string>, or std::vector<std::vector<CharType>>, where CharType is char, signed char, unsigned char, or std::uint8_t.

Returns
Type Description
future< StatusOr< google::storage::v2::Object > >

InsertObject(google::storage::v2::WriteObjectRequest, Collection &&, Options)

Creates an object given its name and contents.

This function always uses single-request uploads. As the name implies, these uploads use a single RPC to upload all the data. There is no way to restart or resume these uploads if there is a partial failure. All the data must be sent again in that case.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload.
Example
  namespace gcs_ex = google::cloud::storage_experimental;
  [](gcs_ex::AsyncClient& client, std::string bucket_name,
     std::string object_name) {
    auto object = client.InsertObject(
        gcs_ex::BucketName(std::move(bucket_name)), std::move(object_name),
        std::string("Hello World!\n"));
    // Attach a callback, this is called when the upload completes.
    auto done = object.then([](auto f) {
      auto metadata = f.get();
      if (!metadata) throw std::move(metadata).status();
      std::cerr << "Object successfully inserted " << metadata->DebugString()
                << "\n";
    });
    // To simplify example, block until the operation completes.
    done.get();
  }
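Note that the example above uses the BucketName overload. A minimal sketch of this overload, populating the request directly ("my-bucket" and "my-object" are placeholders):

  auto request = google::storage::v2::WriteObjectRequest{};
  auto& spec = *request.mutable_write_object_spec();
  spec.mutable_resource()->set_bucket(
      gcs_ex::BucketName("my-bucket").FullName());
  spec.mutable_resource()->set_name("my-object");
  auto object = client.InsertObject(std::move(request),
                                    std::string("Hello World!\n"));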
Idempotency

This operation is only idempotent if restricted by pre-conditions.

Parameters
Name Description
request google::storage::v2::WriteObjectRequest

the request contents; it must include the bucket and object names. Many other fields are optional.

contents Collection &&

the contents (media) for the new object.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

typename Collection

the type for the payload. This must be convertible to std::string, std::vector<CharType>, std::vector<std::string>, or std::vector<std::vector<CharType>>, where CharType is char, signed char, unsigned char, or std::uint8_t.

Returns
Type Description
future< StatusOr< google::storage::v2::Object > >

InsertObject(google::storage::v2::WriteObjectRequest, WritePayload, Options)

Creates an object given its name and contents.

This function always uses single-request uploads. As the name implies, these uploads use a single RPC to upload all the data. There is no way to restart or resume these uploads if there is a partial failure. All the data must be sent again in that case.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload.
Example
  namespace gcs_ex = google::cloud::storage_experimental;
  [](gcs_ex::AsyncClient& client, std::string bucket_name,
     std::string object_name) {
    auto object = client.InsertObject(
        gcs_ex::BucketName(std::move(bucket_name)), std::move(object_name),
        std::string("Hello World!\n"));
    // Attach a callback, this is called when the upload completes.
    auto done = object.then([](auto f) {
      auto metadata = f.get();
      if (!metadata) throw std::move(metadata).status();
      std::cerr << "Object successfully inserted " << metadata->DebugString()
                << "\n";
    });
    // To simplify example, block until the operation completes.
    done.get();
  }
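As in the previous overloads, the example above uses the BucketName overload. A minimal sketch passing a WritePayload, which can be built from one or more buffers ("my-bucket" and "my-object" are placeholders):

  auto request = google::storage::v2::WriteObjectRequest{};
  auto& spec = *request.mutable_write_object_spec();
  spec.mutable_resource()->set_bucket(
      gcs_ex::BucketName("my-bucket").FullName());
  spec.mutable_resource()->set_name("my-object");
  auto payload = gcs_ex::WritePayload(
      std::vector<std::string>{"Hello", " ", "World!\n"});
  auto object = client.InsertObject(std::move(request), std::move(payload));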
Idempotency

This operation is only idempotent if restricted by pre-conditions.

Parameters
Name Description
request google::storage::v2::WriteObjectRequest

the request contents; it must include the bucket and object names. Many other fields are optional.

contents WritePayload

the contents (media) for the new object.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< google::storage::v2::Object > >

ReadObject(BucketName const &, std::string, Options)

A streaming download for the contents of an object.

When satisfied, the returned future has a reader to asynchronously download the contents of the given object.

Example
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name) -> google::cloud::future<std::uint64_t> {
    auto [reader, token] =
        (co_await client.ReadObject(gcs_ex::BucketName(std::move(bucket_name)),
                                    std::move(object_name)))
            .value();
    std::uint64_t count = 0;
    while (token.valid()) {
      auto [payload, t] = (co_await reader.Read(std::move(token))).value();
      token = std::move(t);
      for (auto const& buffer : payload.contents()) {
        count += std::count(buffer.begin(), buffer.end(), '\n');
      }
    }
    co_return count;
  };
Idempotency

This is a read-only operation and is always idempotent. Once the download starts, this operation automatically resumes the download if it is interrupted. Use ResumePolicyOption and ResumePolicy to control this behavior.
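A sketch of overriding the resume policy; this assumes LimitedErrorCountResumePolicy is the resume-policy factory shipped with your library version (check the resume policy header before relying on the name):

  auto pending = client.ReadObject(
      gcs_ex::BucketName("my-bucket"), "my-object",
      google::cloud::Options{}.set<gcs_ex::ResumePolicyOption>(
          gcs_ex::LimitedErrorCountResumePolicy(/*maximum_failures=*/3)));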

Parameters
Name Description
bucket_name BucketName const &

the name of the bucket that contains the object.

object_name std::string

the name of the object to be read.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< std::pair< AsyncReader, AsyncToken > > >

ReadObject(google::storage::v2::ReadObjectRequest, Options)

A streaming download for the contents of an object.

When satisfied, the returned future has a reader to asynchronously download the contents of the given object.

Example
  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name,
         std::int64_t generation) -> google::cloud::future<std::uint64_t> {
    auto request = google::storage::v2::ReadObjectRequest{};
    request.set_bucket(gcs_ex::BucketName(std::move(bucket_name)).FullName());
    request.set_object(std::move(object_name));
    request.set_generation(generation);
    auto [reader, token] =
        (co_await client.ReadObject(std::move(request))).value();
    std::uint64_t count = 0;
    while (token.valid()) {
      auto [payload, t] = (co_await reader.Read(std::move(token))).value();
      token = std::move(t);
      for (auto const& buffer : payload.contents()) {
        count += std::count(buffer.begin(), buffer.end(), '\n');
      }
    }
    co_return count;
  };
Idempotency

This is a read-only operation and is always idempotent. Once the download starts, this operation automatically resumes the download if it is interrupted. Use ResumePolicyOption and ResumePolicy to control this behavior.

Parameters
Name Description
request google::storage::v2::ReadObjectRequest

the request contents; it must include the bucket and object names. Many other fields are optional.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< std::pair< AsyncReader, AsyncToken > > >

ReadObjectRange(BucketName const &, std::string, std::int64_t, std::int64_t, Options)

Downloads a range of bytes in an object.

When satisfied, the returned future has the contents of the given object between offset and offset + limit (exclusive).

Be aware that this accumulates all the bytes in memory; consider whether limit is too large for your deployment environment.

Example
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name) -> google::cloud::future<std::uint64_t> {
    // Read the first 8 bytes of the object.
    auto payload = (co_await client.ReadObjectRange(
                        gcs_ex::BucketName(std::move(bucket_name)),
                        std::move(object_name), /*offset=*/0, /*limit=*/8))
                       .value();
    auto contents = payload.contents();
    std::uint64_t count = 0;
    for (auto buffer : contents) {
      count += std::count(buffer.begin(), buffer.end(), '\n');
    }
    co_return count;
  };
Idempotency

This is a read-only operation and is always idempotent.

Parameters
Name Description
bucket_name BucketName const &

the name of the bucket that contains the object.

object_name std::string

the name of the object to be read.

offset std::int64_t

where to begin reading from the object; results in an error if the offset is larger than the size of the object.

limit std::int64_t

the maximum number of bytes to read, starting at offset.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< ReadPayload > >

ReadObjectRange(google::storage::v2::ReadObjectRequest, std::int64_t, std::int64_t, Options)

Downloads a range of bytes in an object.

When satisfied, the returned future has the contents of the given object between offset and offset + limit (exclusive).

Be aware that this accumulates all the bytes in memory; consider whether limit is too large for your deployment environment.

Example
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name) -> google::cloud::future<std::uint64_t> {
    // Read the first 8 bytes of the object.
    auto payload = (co_await client.ReadObjectRange(
                        gcs_ex::BucketName(std::move(bucket_name)),
                        std::move(object_name), /*offset=*/0, /*limit=*/8))
                       .value();
    auto contents = payload.contents();
    std::uint64_t count = 0;
    for (auto buffer : contents) {
      count += std::count(buffer.begin(), buffer.end(), '\n');
    }
    co_return count;
  };
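The example above uses the BucketName overload. A minimal sketch of this overload, blocking on the future for brevity ("my-bucket" and "my-object" are placeholders):

  auto request = google::storage::v2::ReadObjectRequest{};
  request.set_bucket(gcs_ex::BucketName("my-bucket").FullName());
  request.set_object("my-object");
  // The explicit offset and limit arguments override any read_offset() or
  // read_limit() already set in the request.
  auto payload =
      client.ReadObjectRange(std::move(request), /*offset=*/0, /*limit=*/8)
          .get()
          .value();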
Idempotency

This is a read-only operation and is always idempotent.

Parameters
Name Description
request google::storage::v2::ReadObjectRequest

the request contents; it must include the bucket and object names. Many other fields are optional. Any values for read_offset() and read_limit() are overridden by the offset and limit arguments.

offset std::int64_t

where to begin reading from the object; results in an error if the offset is larger than the size of the object.

limit std::int64_t

the maximum number of bytes to read, starting at offset.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< ReadPayload > >

StartBufferedUpload(BucketName const &, std::string, Options)

Starts a new resumable upload session with client-side buffering and automatic recovery from transient failures.

This function always uses resumable uploads. The objects returned by this function buffer data until it is persisted on the service. If the buffer becomes full, they stop accepting new data until the service has persisted enough data.

Because these objects buffer data, they can recover from most transient errors, including an unexpected closure of the streaming RPC used for the upload. The downside is that these objects must periodically flush their buffers, and this may not achieve the highest possible throughput.

Example
  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro = [](gcs_ex::AsyncClient& client, std::string bucket_name,
                 std::string object_name)
      -> google::cloud::future<google::storage::v2::Object> {
    auto [writer, token] = (co_await client.StartBufferedUpload(
                                gcs_ex::BucketName(std::move(bucket_name)),
                                std::move(object_name)))
                               .value();
    for (int i = 0; i != 1000; ++i) {
      auto line = gcs_ex::WritePayload(std::vector<std::string>{
          std::string("line number "), std::to_string(i), std::string("\n")});
      token =
          (co_await writer.Write(std::move(token), std::move(line))).value();
    }
    co_return (co_await writer.Finalize(std::move(token))).value();
  };
Example
  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name) -> google::cloud::future<std::string> {
    // Use the overload consuming
    // `google::storage::v2::StartResumableWriteRequest` and show how to set
    // additional parameters in the request.
    auto request = google::storage::v2::StartResumableWriteRequest{};
    auto& spec = *request.mutable_write_object_spec();
    spec.mutable_resource()->set_bucket(
        gcs_ex::BucketName(bucket_name).FullName());
    spec.mutable_resource()->set_name(std::move(object_name));
    spec.mutable_resource()->mutable_metadata()->emplace("custom-field",
                                                         "example");
    spec.mutable_resource()->set_content_type("text/plain");
    spec.set_if_generation_match(0);
    auto [writer, token] =
        (co_await client.StartBufferedUpload(std::move(request))).value();
    // This example does not finalize the upload, so it can be resumed in a
    // separate example.
    co_return writer.UploadId();
  };
Idempotency

This function is always treated as idempotent, and the library will automatically retry the function on transient errors. Note that this may create multiple upload ids. This is safe as any additional upload ids have no cost and are not visible to any application.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload.
Parameters
Name Description
bucket_name BucketName const &

the name of the bucket that contains the object.

object_name std::string

the name of the object to be created.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< std::pair< AsyncWriter, AsyncToken > > >

StartBufferedUpload(google::storage::v2::StartResumableWriteRequest, Options)

Starts a new resumable upload session with client-side buffering and automatic recovery from transient failures.

This function always uses resumable uploads. The objects returned by this function buffer data until it is persisted on the service. If the buffer becomes full, they stop accepting new data until the service has persisted enough data.

Because these objects buffer data, they can recover from most transient errors, including an unexpected closure of the streaming RPC used for the upload. The downside is that these objects must periodically flush their buffers, and this may not achieve the highest possible throughput.

Example
  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro = [](gcs_ex::AsyncClient& client, std::string bucket_name,
                 std::string object_name)
      -> google::cloud::future<google::storage::v2::Object> {
    auto [writer, token] = (co_await client.StartBufferedUpload(
                                gcs_ex::BucketName(std::move(bucket_name)),
                                std::move(object_name)))
                               .value();
    for (int i = 0; i != 1000; ++i) {
      auto line = gcs_ex::WritePayload(std::vector<std::string>{
          std::string("line number "), std::to_string(i), std::string("\n")});
      token =
          (co_await writer.Write(std::move(token), std::move(line))).value();
    }
    co_return (co_await writer.Finalize(std::move(token))).value();
  };
Example
  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name) -> google::cloud::future<std::string> {
    // Use the overload consuming
    // `google::storage::v2::StartResumableWriteRequest` and show how to set
    // additional parameters in the request.
    auto request = google::storage::v2::StartResumableWriteRequest{};
    auto& spec = *request.mutable_write_object_spec();
    spec.mutable_resource()->set_bucket(
        gcs_ex::BucketName(bucket_name).FullName());
    spec.mutable_resource()->set_name(std::move(object_name));
    spec.mutable_resource()->mutable_metadata()->emplace("custom-field",
                                                         "example");
    spec.mutable_resource()->set_content_type("text/plain");
    spec.set_if_generation_match(0);
    auto [writer, token] =
        (co_await client.StartBufferedUpload(std::move(request))).value();
    // This example does not finalize the upload, so it can be resumed in a
    // separate example.
    co_return writer.UploadId();
  };
Idempotency

This function is always treated as idempotent, and the library will automatically retry the function on transient errors. Note that this may create multiple upload ids. This is safe as any additional upload ids have no cost and are not visible to any application.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload.
Parameters
Name Description
request google::storage::v2::StartResumableWriteRequest

the request contents; it must include the bucket and object names. Many other fields are optional.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< std::pair< AsyncWriter, AsyncToken > > >

ResumeBufferedUpload(std::string, Options)

Resumes an object upload that automatically recovers from transient failures.

Use this function to resume an upload after your application stops uploading data, even after your application restarts.

This function always uses resumable uploads. The objects returned by this function buffer data until it is persisted on the service. If the buffer becomes full, they stop accepting new data until the service has persisted enough data.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload.
Example

First use this example to partially upload an object:

  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name) -> google::cloud::future<std::string> {
    // Use the overload consuming
    // `google::storage::v2::StartResumableWriteRequest` and show how to set
    // additional parameters in the request.
    auto request = google::storage::v2::StartResumableWriteRequest{};
    auto& spec = *request.mutable_write_object_spec();
    spec.mutable_resource()->set_bucket(
        gcs_ex::BucketName(bucket_name).FullName());
    spec.mutable_resource()->set_name(std::move(object_name));
    spec.mutable_resource()->mutable_metadata()->emplace("custom-field",
                                                         "example");
    spec.mutable_resource()->set_content_type("text/plain");
    spec.set_if_generation_match(0);
    auto [writer, token] =
        (co_await client.StartBufferedUpload(std::move(request))).value();
    // This example does not finalize the upload, so it can be resumed in a
    // separate example.
    co_return writer.UploadId();
  };

Then continue the upload using:

  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro = [](gcs_ex::AsyncClient& client, std::string upload_id)
      -> google::cloud::future<google::storage::v2::Object> {
    auto [writer, token] =
        (co_await client.ResumeBufferedUpload(std::move(upload_id))).value();
    auto state = writer.PersistedState();
    if (std::holds_alternative<google::storage::v2::Object>(state)) {
      std::cout << "The upload " << writer.UploadId()
                << " was already finalized\n";
      co_return std::get<google::storage::v2::Object>(std::move(state));
    }
    auto persisted_bytes = std::get<std::int64_t>(state);
    if (persisted_bytes != 0) {
      // This example naively assumes it will resume from the beginning of the
      // object. Applications should be prepared to handle partially uploaded
      // objects.
      throw std::invalid_argument("example cannot resume after partial upload");
    }
    for (int i = 0; i != 1000; ++i) {
      auto line = gcs_ex::WritePayload(std::vector<std::string>{
          std::string("line number "), std::to_string(i), std::string("\n")});
      token =
          (co_await writer.Write(std::move(token), std::move(line))).value();
    }
    co_return (co_await writer.Finalize(std::move(token))).value();
  };
Parameters
Name Description
upload_id std::string

the id of the upload that should be resumed.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< std::pair< AsyncWriter, AsyncToken > > >

ResumeBufferedUpload(google::storage::v2::QueryWriteStatusRequest, Options)

Resumes an object upload that automatically recovers from transient failures.

Use this function to resume an upload after your application stops uploading data, even after your application restarts.

This function always uses resumable uploads. The objects returned by this function buffer data until it is persisted on the service. If the buffer becomes full, they stop accepting new data until the service has persisted enough data.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload.
Example

First use this example to partially upload an object:

  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name) -> google::cloud::future<std::string> {
    // Use the overload consuming
    // `google::storage::v2::StartResumableWriteRequest` and show how to set
    // additional parameters in the request.
    auto request = google::storage::v2::StartResumableWriteRequest{};
    auto& spec = *request.mutable_write_object_spec();
    spec.mutable_resource()->set_bucket(
        gcs_ex::BucketName(bucket_name).FullName());
    spec.mutable_resource()->set_name(std::move(object_name));
    spec.mutable_resource()->mutable_metadata()->emplace("custom-field",
                                                         "example");
    spec.mutable_resource()->set_content_type("text/plain");
    spec.set_if_generation_match(0);
    auto [writer, token] =
        (co_await client.StartBufferedUpload(std::move(request))).value();
    // This example does not finalize the upload, so it can be resumed in a
    // separate example.
    co_return writer.UploadId();
  };

Then continue the upload using:

  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro = [](gcs_ex::AsyncClient& client, std::string upload_id)
      -> google::cloud::future<google::storage::v2::Object> {
    auto [writer, token] =
        (co_await client.ResumeBufferedUpload(std::move(upload_id))).value();
    auto state = writer.PersistedState();
    if (std::holds_alternative<google::storage::v2::Object>(state)) {
      std::cout << "The upload " << writer.UploadId()
                << " was already finalized\n";
      co_return std::get<google::storage::v2::Object>(std::move(state));
    }
    auto persisted_bytes = std::get<std::int64_t>(state);
    if (persisted_bytes != 0) {
      // This example naively assumes it will resume from the beginning of the
      // object. Applications should be prepared to handle partially uploaded
      // objects.
      throw std::invalid_argument("example cannot resume after partial upload");
    }
    for (int i = 0; i != 1000; ++i) {
      auto line = gcs_ex::WritePayload(std::vector<std::string>{
          std::string("line number "), std::to_string(i), std::string("\n")});
      token =
          (co_await writer.Write(std::move(token), std::move(line))).value();
    }
    co_return (co_await writer.Finalize(std::move(token))).value();
  };
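The examples above resume using just the upload id. A minimal sketch of this overload, which takes the full request, blocking on the future for brevity:

  auto request = google::storage::v2::QueryWriteStatusRequest{};
  request.set_upload_id(upload_id);  // the id saved from a previous session
  auto [writer, token] =
      client.ResumeBufferedUpload(std::move(request)).get().value();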
Parameters
Name Description
request google::storage::v2::QueryWriteStatusRequest

the full request to resume the upload; it must include the upload id.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< std::pair< AsyncWriter, AsyncToken > > >

StartUnbufferedUpload(BucketName const &, std::string, Options)

Starts a new resumable upload session without client-side buffering.

This function always uses resumable uploads. The objects returned by this function do not buffer data and, therefore, cannot automatically recover from transient failures. On the other hand, they do not need to periodically flush any buffers, so they can achieve maximum throughput for a single upload stream.

Use AsyncWriter::UploadId() to save the upload id if you are planning to resume the upload.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload.
Example
  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro = [](gcs_ex::AsyncClient& client, std::string bucket_name,
                 std::string object_name, std::string const& filename)
      -> google::cloud::future<google::storage::v2::Object> {
    std::ifstream is(filename);
    if (is.bad()) throw std::runtime_error("Cannot read " + filename);

    auto [writer, token] = (co_await client.StartUnbufferedUpload(
                                gcs_ex::BucketName(std::move(bucket_name)),
                                std::move(object_name)))
                               .value();
    is.seekg(0);  // clear EOF bit
    while (token.valid() && !is.eof()) {
      std::vector<char> buffer(1024 * 1024);
      is.read(buffer.data(), buffer.size());
      buffer.resize(is.gcount());
      token = (co_await writer.Write(std::move(token),
                                     gcs_ex::WritePayload(std::move(buffer))))
                  .value();
    }
    co_return (co_await writer.Finalize(std::move(token))).value();
  };
Example
  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name,
         std::string const& filename) -> google::cloud::future<std::string> {
    std::ifstream is(filename);
    if (is.bad()) throw std::runtime_error("Cannot read " + filename);

    // Use the overload consuming
    // `google::storage::v2::StartResumableWriteRequest` and show how to set
    // additional parameters in the request.
    auto request = google::storage::v2::StartResumableWriteRequest{};
    auto& spec = *request.mutable_write_object_spec();
    spec.mutable_resource()->set_bucket(
        gcs_ex::BucketName(bucket_name).FullName());
    spec.mutable_resource()->set_name(std::move(object_name));
    spec.mutable_resource()->mutable_metadata()->emplace("custom-field",
                                                         "example");
    spec.mutable_resource()->set_content_type("text/plain");
    spec.set_if_generation_match(0);  // Create the object if it does not exist
    auto [writer, token] =
        (co_await client.StartUnbufferedUpload(std::move(request))).value();

    // Write some data and then return. That data may or may not be received
    // and persisted by the service.
    std::vector<char> buffer(1024 * 1024);
    is.read(buffer.data(), buffer.size());
    buffer.resize(is.gcount());
    token = (co_await writer.Write(std::move(token),
                                   gcs_ex::WritePayload(std::move(buffer))))
                .value();

    // This example does not finalize the upload, so it can be resumed in a
    // separate example.
    co_return writer.UploadId();
  };
Idempotency

This function is always treated as idempotent, and the library will automatically retry the function on transient errors. Note that this may create multiple upload ids. This is safe as any additional upload ids have no cost and are not visible to any application.

Parameters
Name Description
bucket_name BucketName const &

the name of the bucket that will contain the object.

object_name std::string

the name of the object to be created.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< std::pair< AsyncWriter, AsyncToken > > >

StartUnbufferedUpload(google::storage::v2::StartResumableWriteRequest, Options)

Starts a new resumable upload session without client-side buffering.

This function always uses resumable uploads. The objects returned by this function do not buffer data and, therefore, cannot automatically recover from transient failures. On the other hand, they do not need to periodically flush any buffers, so they can achieve maximum throughput for a single upload stream.

Use AsyncWriter::UploadId() to save the upload id if you are planning to resume the upload.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload.
Example
  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro = [](gcs_ex::AsyncClient& client, std::string bucket_name,
                 std::string object_name, std::string const& filename)
      -> google::cloud::future<google::storage::v2::Object> {
    std::ifstream is(filename);
    if (is.bad()) throw std::runtime_error("Cannot read " + filename);

    auto [writer, token] = (co_await client.StartUnbufferedUpload(
                                gcs_ex::BucketName(std::move(bucket_name)),
                                std::move(object_name)))
                               .value();
    is.seekg(0);  // clear EOF bit
    while (token.valid() && !is.eof()) {
      std::vector<char> buffer(1024 * 1024);
      is.read(buffer.data(), buffer.size());
      buffer.resize(is.gcount());
      token = (co_await writer.Write(std::move(token),
                                     gcs_ex::WritePayload(std::move(buffer))))
                  .value();
    }
    co_return (co_await writer.Finalize(std::move(token))).value();
  };
Example
  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name,
         std::string const& filename) -> google::cloud::future<std::string> {
    std::ifstream is(filename);
    if (is.bad()) throw std::runtime_error("Cannot read " + filename);

    // Use the overload consuming
    // `google::storage::v2::StartResumableWriteRequest` and show how to set
    // additional parameters in the request.
    auto request = google::storage::v2::StartResumableWriteRequest{};
    auto& spec = *request.mutable_write_object_spec();
    spec.mutable_resource()->set_bucket(
        gcs_ex::BucketName(bucket_name).FullName());
    spec.mutable_resource()->set_name(std::move(object_name));
    spec.mutable_resource()->mutable_metadata()->emplace("custom-field",
                                                         "example");
    spec.mutable_resource()->set_content_type("text/plain");
    spec.set_if_generation_match(0);  // Create the object if it does not exist
    auto [writer, token] =
        (co_await client.StartUnbufferedUpload(std::move(request))).value();

    // Write some data and then return. That data may or may not be received
    // and persisted by the service.
    std::vector<char> buffer(1024 * 1024);
    is.read(buffer.data(), buffer.size());
    buffer.resize(is.gcount());
    token = (co_await writer.Write(std::move(token),
                                   gcs_ex::WritePayload(std::move(buffer))))
                .value();

    // This example does not finalize the upload, so it can be resumed in a
    // separate example.
    co_return writer.UploadId();
  };
Idempotency

This function is always treated as idempotent, and the library will automatically retry the function on transient errors. Note that this may create multiple upload ids. This is safe as any additional upload ids have no cost and are not visible to any application.

Parameters
Name Description
request google::storage::v2::StartResumableWriteRequest

the request contents; it must include the bucket and object names. Many other fields are optional.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< std::pair< AsyncWriter, AsyncToken > > >

ResumeUnbufferedUpload(std::string, Options)

Resumes an upload without buffering or automatic recovery from transient failures.

Use this function to resume an upload after your application stops uploading data, even after your application restarts.

This function always uses resumable uploads. The objects returned by this function do not buffer data and, therefore, cannot automatically recover from transient failures. On the other hand, they do not need to periodically flush any buffers, so they can achieve maximum throughput for a single upload stream.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload.
Example

First use this example to partially upload an object:

  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name,
         std::string const& filename) -> google::cloud::future<std::string> {
    std::ifstream is(filename);
    if (is.bad()) throw std::runtime_error("Cannot read " + filename);

    // Use the overload consuming
    // `google::storage::v2::StartResumableWriteRequest` and show how to set
    // additional parameters in the request.
    auto request = google::storage::v2::StartResumableWriteRequest{};
    auto& spec = *request.mutable_write_object_spec();
    spec.mutable_resource()->set_bucket(
        gcs_ex::BucketName(bucket_name).FullName());
    spec.mutable_resource()->set_name(std::move(object_name));
    spec.mutable_resource()->mutable_metadata()->emplace("custom-field",
                                                         "example");
    spec.mutable_resource()->set_content_type("text/plain");
    spec.set_if_generation_match(0);  // Create the object if it does not exist
    auto [writer, token] =
        (co_await client.StartUnbufferedUpload(std::move(request))).value();

    // Write some data and then return. That data may or may not be received
    // and persisted by the service.
    std::vector<char> buffer(1024 * 1024);
    is.read(buffer.data(), buffer.size());
    buffer.resize(is.gcount());
    token = (co_await writer.Write(std::move(token),
                                   gcs_ex::WritePayload(std::move(buffer))))
                .value();

    // This example does not finalize the upload, so it can be resumed in a
    // separate example.
    co_return writer.UploadId();
  };

Then continue the upload using:

  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro = [](gcs_ex::AsyncClient& client, std::string upload_id,
                 std::string filename)
      -> google::cloud::future<google::storage::v2::Object> {
    std::ifstream is(filename);
    if (is.bad()) throw std::runtime_error("Cannot read " + filename);
    auto [writer, token] =
        (co_await client.ResumeUnbufferedUpload(std::move(upload_id))).value();

    auto state = writer.PersistedState();
    if (std::holds_alternative<google::storage::v2::Object>(state)) {
      std::cout << "The upload " << writer.UploadId()
                << " was already finalized\n";
      co_return std::get<google::storage::v2::Object>(std::move(state));
    }

    auto persisted_bytes = std::get<std::int64_t>(state);
    is.seekg(persisted_bytes);
    while (token.valid() && !is.eof()) {
      std::vector<char> buffer(1024 * 1024);
      is.read(buffer.data(), buffer.size());
      buffer.resize(is.gcount());
      token = (co_await writer.Write(std::move(token),
                                     gcs_ex::WritePayload(std::move(buffer))))
                  .value();
    }
    co_return (co_await writer.Finalize(std::move(token))).value();
  };
Parameters
Name Description
upload_id std::string

the id of the upload that should be resumed.

opts Options

options controlling the behavior of this RPC; for example, the application may change the retry policy.

Returns
Type Description
future< StatusOr< std::pair< AsyncWriter, AsyncToken > > >

ResumeUnbufferedUpload(google::storage::v2::QueryWriteStatusRequest, Options)

Resumes an upload without buffering or automatic recovery from transient failures.

Use this function to resume an upload after your application stops uploading data, even after your application restarts.

This function always uses resumable uploads. The objects returned by this function do not buffer data: after a transient failure the application must resume the upload and resend any data that the service has not persisted.

Selecting an upload function

When choosing an upload method consider the following tradeoffs:

We recommend using InsertObject() for relatively small objects that fit in memory.

  • Pro: Easy to use, a single function call uploads the object.
  • Pro: Lowest latency for small objects. Use <= 4MiB as a rule of thumb. The precise threshold depends on your environment.
  • Con: Recovery from transient errors requires resending all the data.
  • Con: Multiple concurrent calls to InsertObject() will consume as much memory as is needed to hold all the data.

We recommend using StartBufferedUpload() to upload data of unknown or arbitrary size.

  • Pro: Relatively easy to use, the library can automatically resend data under most transient errors.
  • Pro: The application can limit the amount of memory used by each upload, even if the full object is arbitrarily large.
  • Pro: Can be used to upload "streaming" data sources where it is inefficient or impossible to go back and re-read data from an arbitrary point.
  • Con: Throughput is limited as it needs to periodically wait for the service to flush the buffer to persistent storage.
  • Con: Cannot automatically resume uploads after the application restarts.

We recommend using StartUnbufferedUpload() to upload data where the upload can efficiently resume from arbitrary points.

  • Pro: Can achieve the maximum theoretical throughput for a single stream upload. It is possible to use Parallel Composite Uploads to achieve even higher throughput.
  • Pro: It can resume uploads even after the application restarts.
  • Con: Requires manually handling transient errors during the upload (see the sketch after this list).
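The last bullet deserves a short illustration. This is a minimal sketch, not taken from the library's samples: it assumes writer, token, and a gcs_ex::WritePayload named payload are already in scope, as in the examples below, and inspects each result instead of calling .value():

  auto result = co_await writer.Write(std::move(token), std::move(payload));
  if (!result) {
    // result.status() describes the failure. To recover, resume the upload,
    // query its persisted state, and resend data from that offset.
    throw std::move(result).status();
  }
  token = std::move(*result);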
Example

First use this example to partially upload an object:

  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name,
         std::string const& filename) -> google::cloud::future<std::string> {
    std::ifstream is(filename);
    if (!is.is_open()) throw std::runtime_error("Cannot read " + filename);

    // Use the overload consuming
    // `google::storage::v2::StartResumableWriteRequest` and show how to set
    // additional parameters in the request.
    auto request = google::storage::v2::StartResumableWriteRequest{};
    auto& spec = *request.mutable_write_object_spec();
    spec.mutable_resource()->set_bucket(
        gcs_ex::BucketName(bucket_name).FullName());
    spec.mutable_resource()->set_name(std::move(object_name));
    spec.mutable_resource()->mutable_metadata()->emplace("custom-field",
                                                         "example");
    spec.mutable_resource()->set_content_type("text/plain");
    // Only create the object if it does not already exist.
    spec.set_if_generation_match(0);
    auto [writer, token] =
        (co_await client.StartUnbufferedUpload(std::move(request))).value();

    // Write some data and then return. That data may or may not be received
    // and persisted by the service.
    std::vector<char> buffer(1024 * 1024);
    is.read(buffer.data(), buffer.size());
    buffer.resize(is.gcount());
    token = (co_await writer.Write(std::move(token),
                                   gcs_ex::WritePayload(std::move(buffer))))
                .value();

    // This example does not finalize the upload, so it can be resumed in a
    // separate example.
    co_return writer.UploadId();
  };

Then continue the upload using:

  namespace gcs = google::cloud::storage;
  namespace gcs_ex = google::cloud::storage_experimental;
  auto coro = [](gcs_ex::AsyncClient& client, std::string upload_id,
                 std::string filename)
      -> google::cloud::future<google::storage::v2::Object> {
    std::ifstream is(filename);
    if (!is.is_open()) throw std::runtime_error("Cannot read " + filename);
    // Use the overload consuming
    // `google::storage::v2::QueryWriteStatusRequest` to resume the upload.
    auto request = google::storage::v2::QueryWriteStatusRequest{};
    request.set_upload_id(std::move(upload_id));
    auto [writer, token] =
        (co_await client.ResumeUnbufferedUpload(std::move(request))).value();

    auto state = writer.PersistedState();
    if (std::holds_alternative<google::storage::v2::Object>(state)) {
      std::cout << "The upload " << writer.UploadId()
                << " was already finalized\n";
      co_return std::get<google::storage::v2::Object>(std::move(state));
    }

    auto persisted_bytes = std::get<std::int64_t>(state);
    is.seekg(persisted_bytes);
    while (token.valid() && !is.eof()) {
      std::vector<char> buffer(1024 * 1024);
      is.read(buffer.data(), buffer.size());
      buffer.resize(is.gcount());
      token = (co_await writer.Write(std::move(token),
                                     gcs_ex::WritePayload(std::move(buffer))))
                  .value();
    }
    co_return (co_await writer.Finalize(std::move(token))).value();
  };
Parameters
Name Description
request google::storage::v2::QueryWriteStatusRequest

the full request to resume the upload. Must include the upload id.

opts Options

options controlling the behavior of this RPC, for example the application may change the retry policy.

Returns
Type Description
future< StatusOr< std::pair< AsyncWriter, AsyncToken > > >

ComposeObject(BucketName const &, std::string, std::vector< google::storage::v2::ComposeObjectRequest::SourceObject >, Options)

Composes existing objects into a new object in the same bucket.

Example
  namespace g = google::cloud;
  namespace gcs_ex = g::storage_experimental;
  [](gcs_ex::AsyncClient& client, std::string bucket_name,
     std::string object_name, std::string name1, std::string name2) {
    auto make_source = [](std::string name) {
      google::storage::v2::ComposeObjectRequest::SourceObject source;
      source.set_name(std::move(name));
      return source;
    };
    client
        .ComposeObject(
            gcs_ex::BucketName(std::move(bucket_name)), std::move(object_name),
            {make_source(std::move(name1)), make_source(std::move(name2))})
        .then([](auto f) {
          auto metadata = f.get();
          if (!metadata) throw std::move(metadata).status();
          std::cout << "Object successfully composed: "
                    << metadata->DebugString() << "\n";
        })
        .get();
  }
Idempotency

This operation is never idempotent. Use the overload consuming a google::storage::v2::ComposeObjectRequest and set pre-conditions on the destination object to make the request idempotent.
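
For example, a minimal sketch (with hypothetical bucket, object, and source names) that makes the request idempotent by requiring that the destination object does not exist yet:

  google::storage::v2::ComposeObjectRequest request;
  request.mutable_destination()->set_bucket(
      gcs_ex::BucketName("my-bucket").FullName());
  request.mutable_destination()->set_name("my-object");
  request.add_source_objects()->set_name("chunk-1");
  request.add_source_objects()->set_name("chunk-2");
  // With this pre-condition, the compose can only succeed once.
  request.set_if_generation_match(0);
  auto pending = client.ComposeObject(std::move(request));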

Parameters
Name Description
bucket_name BucketName const &

the name of the bucket that contains the source objects and the destination object.

destination_object_name std::string

the composed object name.

source_objects std::vector< google::storage::v2::ComposeObjectRequest::SourceObject >

objects used to compose destination_object_name.

opts Options

options controlling the behavior of this RPC, for example the application may change the retry policy.

Returns
Type Description
future< StatusOr< google::storage::v2::Object > >

ComposeObject(google::storage::v2::ComposeObjectRequest, Options)

Composes existing objects into a new object in the same bucket.

Example
  namespace g = google::cloud;
  namespace gcs_ex = g::storage_experimental;
  [](gcs_ex::AsyncClient& client, std::string bucket_name,
     std::string object_name, std::string name1, std::string name2) {
    google::storage::v2::ComposeObjectRequest request;
    request.mutable_destination()->set_bucket(
        gcs_ex::BucketName(std::move(bucket_name)).FullName());
    request.mutable_destination()->set_name(std::move(object_name));
    // Only create the destination object if it does not already exist.
    request.set_if_generation_match(0);
    request.add_source_objects()->set_name(std::move(name1));
    request.add_source_objects()->set_name(std::move(name2));

    client.ComposeObject(std::move(request))
        .then([](auto f) {
          auto metadata = f.get();
          if (!metadata) throw std::move(metadata).status();
          std::cout << "Object successfully composed: "
                    << metadata->DebugString() << "\n";
        })
        .get();
  }
Idempotency

This operation is idempotent if there are pre-conditions on the destination object. Set the if_generation_match() or if_metageneration_match() fields.

Parameters
Name Description
request google::storage::v2::ComposeObjectRequest

the full request describing what objects to compose, the name of the destination object, and any metadata for this destination object. See the proto documentation for details.

opts Options

options controlling the behavior of this RPC, for example the application may change the retry policy.

Returns
Type Description
future< StatusOr< google::storage::v2::Object > >

DeleteObject(BucketName const &, std::string, Options)

Deletes an object.

Example
  namespace g = google::cloud;
  namespace gcs_ex = google::cloud::storage_experimental;
  [](gcs_ex::AsyncClient& client, std::string bucket_name,
     std::string object_name) {
    client
        .DeleteObject(gcs_ex::BucketName(std::move(bucket_name)),
                      std::move(object_name))
        .then([](auto f) {
          auto status = f.get();
          if (!status.ok()) throw g::Status(std::move(status));
          std::cout << "Object successfully deleted\n";
        })
        .get();
  }
Idempotency

This operation is only idempotent if:

  • the request is restricted by pre-conditions, in this case IfGenerationMatch,
  • or the request applies to only one object version via Generation.
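
For example, a minimal sketch (with a hypothetical bucket, object, and generation) that deletes a single object version and is therefore safe to retry:

  // Deleting one specific generation can only succeed once.
  auto pending = client.DeleteObject(gcs_ex::BucketName("my-bucket"),
                                     "my-object", /*generation=*/1234);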
Parameters
Name Description
bucket_name BucketName const &

the name of the bucket that contains the object.

object_name std::string

the name of the object to delete.

opts Options

options controlling the behavior of this RPC, for example the application may change the retry policy.

Returns
Type Description
future< Status >

DeleteObject(BucketName const &, std::string, std::int64_t, Options)

Deletes an object.

Example
  namespace g = google::cloud;
  namespace gcs_ex = google::cloud::storage_experimental;
  [](gcs_ex::AsyncClient& client, std::string bucket_name,
     std::string object_name, std::int64_t generation) {
    client
        .DeleteObject(gcs_ex::BucketName(std::move(bucket_name)),
                      std::move(object_name), generation)
        .then([](auto f) {
          auto status = f.get();
          if (!status.ok()) throw g::Status(std::move(status));
          std::cout << "Object successfully deleted\n";
        })
        .get();
  }
Idempotency

This operation is only idempotent if:

  • the request is restricted by pre-conditions, in this case IfGenerationMatch,
  • or the request applies to only one object version via Generation.
Parameters
Name Description
bucket_name BucketName const &

the name of the bucket that contains the object.

object_name std::string

the name of the object to delete.

generation std::int64_t

the object generation to delete.

opts Options

options controlling the behavior of this RPC, for example the application may change the retry policy.

Returns
Type Description
future< Status >

DeleteObject(google::storage::v2::DeleteObjectRequest, Options)

Deletes an object.

Example
  namespace g = google::cloud;
  namespace gcs_ex = google::cloud::storage_experimental;
  [](gcs_ex::AsyncClient& client, std::string bucket_name,
     std::string object_name) {
    google::storage::v2::DeleteObjectRequest request;
    request.set_bucket(gcs_ex::BucketName(std::move(bucket_name)).FullName());
    request.set_object(std::move(object_name));
    client.DeleteObject(std::move(request))
        .then([](auto f) {
          auto status = f.get();
          if (!status.ok()) throw g::Status(std::move(status));
          std::cout << "Object successfully deleted\n";
        })
        .get();
  }
Idempotency

This operation is only idempotent if:

  • the request is restricted by pre-conditions, in this case IfGenerationMatch,
  • or the request applies to only one object version via Generation.
Parameters
Name Description
request google::storage::v2::DeleteObjectRequest

the full request describing what object to delete. It may also include any preconditions the object must satisfy, and other parameters that are necessary to complete the RPC.

opts Options

options controlling the behavior of this RPC, for example the application may change the retry policy.

Returns
Type Description
future< Status >

StartRewrite(BucketName const &, std::string, BucketName const &, std::string, Options)

Creates an AsyncRewriter to copy the source object.

Applications use this function to reliably copy objects across location boundaries, and to rewrite objects with different encryption keys. The operation returns an AsyncRewriter, which the application can use to initiate the copy and to iterate if the copy requires more than one call to complete.

Example
  namespace g = google::cloud;
  namespace gcs = g::storage;
  namespace gcs_ex = g::storage_experimental;
  auto coro = [](gcs_ex::AsyncClient& client, std::string bucket_name,
                 std::string object_name, std::string destination_name)
      -> g::future<google::storage::v2::Object> {
    auto [rewriter, token] =
        client.StartRewrite(gcs_ex::BucketName(bucket_name), object_name,
                            gcs_ex::BucketName(bucket_name), destination_name);
    while (token.valid()) {
      auto [progress, t] =
          (co_await rewriter.Iterate(std::move(token))).value();
      token = std::move(t);
      std::cout << progress.total_bytes_rewritten() << " of "
                << progress.object_size() << " bytes rewritten\n";
      if (progress.has_resource()) co_return std::move(progress.resource());
    }
    throw std::runtime_error("rewrite failed before completion");
  };
Idempotency

This operation is purely local, and always succeeds. The Iterate() calls are always treated as idempotent. Their only observable side effect is the creation of the object, and this can only succeed once.

Parameters
Name Description
source_bucket BucketName const &

the name of the bucket containing the source object.

source_object_name std::string

the name of the source object.

destination_bucket BucketName const &

the name of the bucket for the new object.

destination_object_name std::string

what to name the destination object.

opts Options

options controlling the behavior of this RPC, for example the application may change the retry policy.

Returns
Type Description
std::pair< AsyncRewriter, AsyncToken >

StartRewrite(google::storage::v2::RewriteObjectRequest, Options)

Creates an AsyncRewriter to copy the source object.

Applications use this function to reliably copy objects across location boundaries, and to rewrite objects with different encryption keys. The operation returns an AsyncRewriter, which the application can use to initiate the copy and to iterate if the copy requires more than one call to complete.

Example
  namespace g = google::cloud;
  namespace gcs = g::storage;
  namespace gcs_ex = g::storage_experimental;
  auto coro = [](gcs_ex::AsyncClient& client, std::string bucket_name,
                 std::string object_name, std::string destination_name)
      -> g::future<google::storage::v2::Object> {
    auto bucket = gcs_ex::BucketName(std::move(bucket_name));
    auto request = google::storage::v2::RewriteObjectRequest{};
    request.set_destination_name(std::move(destination_name));
    request.set_destination_bucket(bucket.FullName());
    request.set_source_object(std::move(object_name));
    request.set_source_bucket(bucket.FullName());
    auto [rewriter, token] = client.StartRewrite(std::move(request));
    while (token.valid()) {
      auto [progress, t] =
          (co_await rewriter.Iterate(std::move(token))).value();
      token = std::move(t);
      std::cout << progress.total_bytes_rewritten() << " of "
                << progress.object_size() << " bytes rewritten\n";
      if (progress.has_resource()) co_return std::move(progress.resource());
    }
    throw std::runtime_error("rewrite failed before completion");
  };
Idempotency

This operation is purely local, and always succeeds. The Iterate() calls are always treated as idempotent. Their only observable side effect is the creation of the object, and this can only succeed once.
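
A minimal sketch that adds a pre-condition to such a request before starting the rewrite, so the destination object can be created at most once:

  // Only succeed if the destination object does not exist yet.
  request.set_if_generation_match(0);
  auto [rewriter, token] = client.StartRewrite(std::move(request));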

Parameters
Name Description
request google::storage::v2::RewriteObjectRequest

the full specification for the request, including any pre-conditions and overrides for the destination object metadata.

opts Options

options controlling the behavior of this RPC, for example the application may change the retry policy.

Returns
Type Description
std::pair< AsyncRewriter, AsyncToken >

ResumeRewrite(BucketName const &, std::string, BucketName const &, std::string, std::string, Options)

Creates an AsyncRewriter to resume copying the source object.

Applications use this function to reliably copy objects across location boundaries, and to rewrite objects with different encryption keys. The operation returns an AsyncRewriter, which the application can use to continue an existing copy operation until it completes.

Example
  namespace g = google::cloud;
  namespace gcs = g::storage;
  namespace gcs_ex = g::storage_experimental;
  auto start = [](gcs_ex::AsyncClient& client, std::string bucket_name,
                  std::string object_name,
                  std::string destination_name) -> g::future<std::string> {
    // First start a rewrite. In this example we will limit the number of bytes
    // rewritten by each iteration, then capture the token, and then resume the
    // rewrite operation.
    auto bucket = gcs_ex::BucketName(std::move(bucket_name));
    auto request = google::storage::v2::RewriteObjectRequest{};
    request.set_destination_name(destination_name);
    request.set_destination_bucket(bucket.FullName());
    request.set_source_object(std::move(object_name));
    request.set_source_bucket(bucket.FullName());
    request.set_max_bytes_rewritten_per_call(1024 * 1024);
    auto [rewriter, token] = client.StartRewrite(std::move(request));
    auto [progress, t] = (co_await rewriter.Iterate(std::move(token))).value();
    co_return progress.rewrite_token();
  };
  auto resume =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name, std::string destination_name,
         std::string rewrite_token) -> g::future<google::storage::v2::Object> {
    // Continue rewriting, this could happen on a separate process, or even
    // after the application restarts.
    auto bucket = gcs_ex::BucketName(std::move(bucket_name));
    auto request = google::storage::v2::RewriteObjectRequest();
    request.set_destination_bucket(bucket.FullName());
    request.set_destination_name(std::move(destination_name));
    request.set_source_bucket(bucket.FullName());
    request.set_source_object(std::move(object_name));
    request.set_rewrite_token(std::move(rewrite_token));
    request.set_max_bytes_rewritten_per_call(1024 * 1024);
    auto [rewriter, token] = client.ResumeRewrite(std::move(request));
    while (token.valid()) {
      auto [progress, t] =
          (co_await rewriter.Iterate(std::move(token))).value();
      token = std::move(t);
      std::cout << progress.total_bytes_rewritten() << " of "
                << progress.object_size() << " bytes rewritten\n";
      if (progress.has_resource()) co_return progress.resource();
    }
    throw std::runtime_error("rewrite failed before completion");
  };
Idempotency

This operation is purely local, and always succeeds. The Iterate() calls are always treated as idempotent. Their only observable side effect is the creation of the object, and this can only succeed once.

Parameters
Name Description
source_bucket BucketName const &

the name of the bucket containing the source object.

source_object_name std::string

the name of the source object.

destination_bucket BucketName const &

the name of the bucket for the new object.

destination_object_name std::string

what to name the destination object.

rewrite_token std::string

the token from a previous successful rewrite iteration. Can be the empty string, in which case this starts a new rewrite operation.

opts Options

options controlling the behavior of this RPC, for example the application may change the retry policy.

Returns
Type Description
std::pair< AsyncRewriter, AsyncToken >

ResumeRewrite(google::storage::v2::RewriteObjectRequest, Options)

Creates an AsyncRewriter to resume copying the source object.

Applications use this function to reliably copy objects across location boundaries, and to rewrite objects with different encryption keys. The operation returns an AsyncRewriter, which the application can use to continue an existing copy operation until it completes.

Example
  namespace g = google::cloud;
  namespace gcs = g::storage;
  namespace gcs_ex = g::storage_experimental;
  auto start = [](gcs_ex::AsyncClient& client, std::string bucket_name,
                  std::string object_name,
                  std::string destination_name) -> g::future<std::string> {
    // First start a rewrite. In this example we will limit the number of bytes
    // rewritten by each iteration, then capture the token, and then resume the
    // rewrite operation.
    auto bucket = gcs_ex::BucketName(std::move(bucket_name));
    auto request = google::storage::v2::RewriteObjectRequest{};
    request.set_destination_name(destination_name);
    request.set_destination_bucket(bucket.FullName());
    request.set_source_object(std::move(object_name));
    request.set_source_bucket(bucket.FullName());
    request.set_max_bytes_rewritten_per_call(1024 * 1024);
    auto [rewriter, token] = client.StartRewrite(std::move(request));
    auto [progress, t] = (co_await rewriter.Iterate(std::move(token))).value();
    co_return progress.rewrite_token();
  };
  auto resume =
      [](gcs_ex::AsyncClient& client, std::string bucket_name,
         std::string object_name, std::string destination_name,
         std::string rewrite_token) -> g::future<google::storage::v2::Object> {
    // Continue rewriting, this could happen on a separate process, or even
    // after the application restarts.
    auto bucket = gcs_ex::BucketName(std::move(bucket_name));
    auto request = google::storage::v2::RewriteObjectRequest();
    request.set_destination_bucket(bucket.FullName());
    request.set_destination_name(std::move(destination_name));
    request.set_source_bucket(bucket.FullName());
    request.set_source_object(std::move(object_name));
    request.set_rewrite_token(std::move(rewrite_token));
    request.set_max_bytes_rewritten_per_call(1024 * 1024);
    auto [rewriter, token] = client.ResumeRewrite(std::move(request));
    while (token.valid()) {
      auto [progress, t] =
          (co_await rewriter.Iterate(std::move(token))).value();
      token = std::move(t);
      std::cout << progress.total_bytes_rewritten() << " of "
                << progress.object_size() << " bytes rewritten\n";
      if (progress.has_resource()) co_return progress.resource();
    }
    throw std::runtime_error("rewrite failed before completion");
  };
Idempotency

This operation is purely local, and always succeeds. The Iterate() calls are always treated as idempotent. Their only observable side effect is the creation of the object, and this can only succeed once.

Parameters
Name Description
request google::storage::v2::RewriteObjectRequest

the full specification for the request.

opts Options

options controlling the behavior of this RPC, for example the application may change the retry policy.

Returns
Type Description
std::pair< AsyncRewriter, AsyncToken >