The Cloud Storage gRPC API allows applications to read and write data through
the abstractions of buckets and objects. For a description of these
abstractions, see https://cloud.google.com/storage/docs.
Resources are named as follows:

- Projects are referred to as they are defined by the Resource Manager API,
  using strings like projects/123456 or projects/my-string-id.
- Buckets are named using string names of the form
  projects/{project}/buckets/{bucket}. For globally unique buckets, _ may be
  substituted for the project.
- Objects are uniquely identified by their name along with the name of the
  bucket they belong to, as separate strings in this API. For example:

    ReadObjectRequest {
      bucket: 'projects/_/buckets/my-bucket'
      object: 'my-object'
    }

  Note that object names can contain / characters, which are treated as
  any other character (no special directory semantics).
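These naming rules can be sketched in Python. The helper `bucket_resource_name` and the dict-shaped request below are illustrative stand-ins, not part of the generated client:

```python
# Sketch of the resource-name conventions described above.
# bucket_resource_name is a hypothetical helper, not an API call.

def bucket_resource_name(bucket: str, project: str = "_") -> str:
    """Build a bucket resource name; "_" marks a globally unique bucket."""
    return f"projects/{project}/buckets/{bucket}"

# Objects are identified by the bucket resource name plus a separate
# object name; '/' inside an object name has no directory meaning.
read_request = {
    "bucket": bucket_resource_name("my-bucket"),
    "object": "my-object",          # could also be "logs/2025/01/app.log"
}
print(read_request["bucket"])       # projects/_/buckets/my-bucket
```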
GetIamPolicy gets the IAM policy for a specified bucket or object.
The resource field in the request should be
projects/_/buckets/<bucket_name> for a bucket or
projects/_/buckets/<bucket_name>/objects/<object_name> for an object.
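The two resource formats can be built with small helpers; the function names below are hypothetical, only the resulting strings follow the format described above:

```python
# Hypothetical helpers producing the IAM `resource` field formats above.

def bucket_iam_resource(bucket: str) -> str:
    """Resource string for bucket-level IAM requests."""
    return f"projects/_/buckets/{bucket}"

def object_iam_resource(bucket: str, obj: str) -> str:
    """Resource string for object-level IAM requests."""
    return f"projects/_/buckets/{bucket}/objects/{obj}"

print(bucket_iam_resource("my-bucket"))
print(object_iam_resource("my-bucket", "my-object"))
```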
QueryWriteStatus determines the persisted_size for an object that is being
written, which can then be used as the write_offset for the next Write() call.
If the object does not exist (i.e., the object has been deleted, or the
first Write() has not yet reached the service), this method returns the
error NOT_FOUND.
The client may call QueryWriteStatus() at any time to determine how
much data has been processed for this object. This is useful if the
client is buffering data and needs to know which data can be safely
evicted. For any sequence of QueryWriteStatus() calls for a given
object name, the sequence of returned persisted_size values will be
non-decreasing.
SetIamPolicy updates an IAM policy for the specified bucket or object.
The resource field in the request should be
projects/_/buckets/<bucket_name> for a bucket or
projects/_/buckets/<bucket_name>/objects/<object_name> for an object.
StartResumableWrite starts a resumable write. How long the write operation
remains valid, and what happens when the write operation becomes invalid, are
service-dependent.
TestIamPermissions tests a set of permissions on the given bucket or object to
see which, if any, are held by the caller.
The resource field in the request should be
projects/_/buckets/<bucket_name> for a bucket or
projects/_/buckets/<bucket_name>/objects/<object_name> for an object.
WriteObject stores a new object and metadata.
An object can be written either in a single message stream or in a
resumable sequence of message streams. To write using a single stream,
the client should include in the first message of the stream a
WriteObjectSpec describing the destination bucket, object, and any
preconditions. Additionally, the final message must set 'finish_write' to
true, or else it is an error.
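The single-stream message sequence can be sketched with simplified dict-shaped requests; these mirror the fields named above (write_object_spec, finish_write) but are not the generated proto classes:

```python
def single_stream_messages(bucket: str, obj: str, chunks):
    """Yield WriteObjectRequest-shaped dicts for a one-shot upload.

    Per the rules above: the first message carries the WriteObjectSpec,
    and the final message must set finish_write to true.
    (Simplified sketch, not the generated proto classes.)
    """
    offset = 0
    chunks = list(chunks)
    for i, data in enumerate(chunks):
        msg = {"write_offset": offset, "data": data,
               "finish_write": i == len(chunks) - 1}
        if i == 0:
            msg["write_object_spec"] = {
                "resource": {"bucket": f"projects/_/buckets/{bucket}",
                             "name": obj},
            }
        offset += len(data)
        yield msg

msgs = list(single_stream_messages("my-bucket", "my-object",
                                   [b"hello ", b"world"]))
print(msgs[0]["write_object_spec"]["resource"]["name"])  # my-object
print(msgs[-1]["finish_write"])                          # True
```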
For a resumable write, the client should instead call
StartResumableWrite(), populating a WriteObjectSpec into that request.
The client should then attach the returned upload_id to the first message of
each subsequent call to WriteObject. If the stream is closed before
finishing the upload (either explicitly by the client or due to a network
error or an error response from the server), the client should proceed as
follows:
- Check the result Status of the stream, to determine if writing can be
  resumed on this stream or must be restarted from scratch (by calling
  StartResumableWrite()). The resumable errors are DEADLINE_EXCEEDED,
  INTERNAL, and UNAVAILABLE. For each case, the client should use binary
  exponential backoff before retrying. Additionally, writes can be
  resumed after RESOURCE_EXHAUSTED errors, but only after taking
  appropriate measures, which may include reducing aggregate send rate
  across clients and/or requesting a quota increase for your project.
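The classification and backoff described above can be sketched as follows; the helper names are hypothetical, and only the status-code names come from the text:

```python
import random

# Status names treated as resumable per the guidance above.
RESUMABLE = {"DEADLINE_EXCEEDED", "INTERNAL", "UNAVAILABLE"}

def backoff_seconds(attempt: int, base: float = 1.0, cap: float = 64.0) -> float:
    """Binary exponential backoff with full jitter: up to base * 2^attempt."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def should_resume(status: str) -> bool:
    """Whether the stream error allows resuming via QueryWriteStatus().

    RESOURCE_EXHAUSTED is also resumable, but only after reducing send
    rate or obtaining more quota, so callers handle it separately.
    """
    return status in RESUMABLE

print(should_resume("UNAVAILABLE"))   # True
print(should_resume("NOT_FOUND"))     # False
```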
- If the call to WriteObject returns ABORTED, that indicates
  concurrent attempts to update the resumable write, caused either by
  multiple racing clients or by a single client where the previous
  request timed out on the client side but nonetheless reached the
  server. In this case the client should take steps to prevent further
  concurrent writes (e.g., increase the timeouts, stop using more than
  one process to perform the upload, etc.), and then should follow the
  steps below for resuming the upload.
- For resumable errors, the client should call QueryWriteStatus() and
  then continue writing from the returned persisted_size. This may be
  less than the amount of data the client previously sent. Note also that
  it is acceptable to send data starting at an offset earlier than the
  returned persisted_size; in this case, the service will skip data at
  offsets that were already persisted (without checking that it matches
  the previously written data), and write only the data starting from the
  persisted offset. Even though the data isn't written, it may still
  incur a performance cost over resuming at the correct write offset.
  This behavior can make client-side handling simpler in some cases.
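Resuming at exactly persisted_size can be sketched as a small helper; the function name and parameters are hypothetical:

```python
def resume_offset_and_data(data: bytes, persisted_size: int, sent: int):
    """Pick where to resume after an interrupted resumable write.

    Resuming exactly at persisted_size avoids re-sending bytes the
    service would skip (it ignores, without verifying, data below that
    offset), per the description above. `sent` is how much the client
    had sent before the failure; it may exceed persisted_size.
    """
    assert persisted_size <= sent <= len(data)
    return persisted_size, data[persisted_size:]

data = b"x" * 1000
offset, remaining = resume_offset_and_data(data, persisted_size=600, sent=900)
print(offset)           # 600
print(len(remaining))   # 400
```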
The service will not view the object as complete until the client has
sent a WriteObjectRequest with finish_write set to true. Sending any
requests on a stream after sending a request with finish_write set to
true will cause an error. The client should check the response it
receives to determine how much data the service was able to commit and
whether the service views the object as complete.
Attempting to resume an already finalized object will result in an OK
status, with a WriteObjectResponse containing the finalized object's
metadata.
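The full resumable lifecycle described above can be sketched end to end against an in-memory stand-in for the service; `FakeStorageStub` and its snake_case methods mirror the RPC names in this document but are not the generated gRPC client:

```python
# End-to-end sketch of the resumable flow, against a hypothetical
# in-memory stub (not the generated gRPC client).

class FakeStorageStub:
    """Stand-in for the service: persists bytes per upload_id."""
    def __init__(self):
        self.uploads = {}                    # upload_id -> bytearray

    def start_resumable_write(self) -> str:
        upload_id = f"upload-{len(self.uploads)}"
        self.uploads[upload_id] = bytearray()
        return upload_id

    def query_write_status(self, upload_id: str) -> int:
        return len(self.uploads[upload_id])  # persisted_size

    def write_object(self, upload_id: str, offset: int, data: bytes,
                     finish_write: bool) -> int:
        buf = self.uploads[upload_id]
        # Skip bytes at offsets already persisted, as the service does.
        skip = max(0, len(buf) - offset)
        buf.extend(data[skip:])
        return len(buf)

def resumable_upload(stub, data: bytes, chunk: int = 256) -> int:
    """StartResumableWrite, then write chunks, finishing with finish_write."""
    upload_id = stub.start_resumable_write()
    offset = stub.query_write_status(upload_id)
    while True:
        end = min(offset + chunk, len(data))
        persisted = stub.write_object(upload_id, offset, data[offset:end],
                                      finish_write=end == len(data))
        if end == len(data):
            return persisted
        offset = stub.query_write_status(upload_id)

stub = FakeStorageStub()
print(resumable_upload(stub, b"z" * 1000))   # 1000
```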
Last updated 2025-01-28 UTC.