Buckets
Create / interact with Google Cloud Storage buckets.
class google.cloud.storage.bucket.Bucket(client, name=None, user_project=None)
Bases: google.cloud.storage._helpers._PropertyMixin
A class representing a Bucket on Cloud Storage.
Parameters
client (google.cloud.storage.client.Client) – A client which holds credentials and project configuration for the bucket (which requires a project).
name (str) – The name of the bucket. Bucket names must start and end with a number or letter.
user_project (str) – (Optional) the project ID to be billed for API requests made via this instance.
property name
Get the bucket’s name.
STORAGE_CLASSES = ('STANDARD', 'NEARLINE', 'COLDLINE', 'ARCHIVE', 'MULTI_REGIONAL', 'REGIONAL', 'DURABLE_REDUCED_AVAILABILITY')
Allowed values for storage_class.
Default value is STANDARD_STORAGE_CLASS.
See https://cloud.google.com/storage/docs/json_api/v1/buckets#storageClass and https://cloud.google.com/storage/docs/storage-classes
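Because the API rejects unknown storage classes server-side, a client-side membership check against the tuple above can fail fast. A minimal sketch; `require_valid_storage_class` is a hypothetical helper, not part of the library:

```python
# The allowed values, copied from the STORAGE_CLASSES constant above.
STORAGE_CLASSES = (
    "STANDARD", "NEARLINE", "COLDLINE", "ARCHIVE",
    "MULTI_REGIONAL", "REGIONAL", "DURABLE_REDUCED_AVAILABILITY",
)

def require_valid_storage_class(storage_class: str) -> str:
    """Hypothetical helper: raise ValueError for values the API would reject."""
    if storage_class not in STORAGE_CLASSES:
        raise ValueError(
            f"Invalid storage class: {storage_class!r}; "
            f"must be one of {STORAGE_CLASSES}"
        )
    return storage_class

print(require_valid_storage_class("COLDLINE"))  # → COLDLINE
```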
property acl()
Create our ACL on demand.
add_lifecycle_delete_rule(**kw)
Add a “delete” rule to the lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
bucket = client.get_bucket("my-bucket")
bucket.add_lifecycle_delete_rule(age=2)
bucket.patch()
Parameters
kw – arguments passed to LifecycleRuleConditions.
add_lifecycle_set_storage_class_rule(storage_class, **kw)
Add a “set storage class” rule to the lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
bucket = client.get_bucket("my-bucket")
bucket.add_lifecycle_set_storage_class_rule(
    "COLDLINE", matches_storage_class=["NEARLINE"]
)
bucket.patch()
Parameters
storage_class (str, one of STORAGE_CLASSES) – new storage class to assign to matching items.
kw – arguments passed to LifecycleRuleConditions.
blob(blob_name, chunk_size=None, encryption_key=None, kms_key_name=None, generation=None)
Factory constructor for blob object.
NOTE: This will not make an HTTP request; it simply instantiates a blob object owned by this bucket.
Parameters
blob_name (str) – The name of the blob to be instantiated.
chunk_size (int) – The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.
encryption_key (bytes) – (Optional) 32 byte encryption key for customer-supplied encryption.
kms_key_name (str) – (Optional) Resource name of KMS key used to encrypt blob’s content.
generation (long) – (Optional) If present, selects a specific revision of this object.
Return type
google.cloud.storage.blob.Blob
Returns
The blob object created.
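The chunk_size constraint above (a multiple of 256 KB) is easy to get wrong; this sketch mirrors the rule with a hypothetical helper (`validate_chunk_size` is an illustration, not the library’s own validator):

```python
# 256 KB: the required granularity for chunked transfers per the API spec.
_CHUNK_SIZE_MULTIPLE = 256 * 1024  # 262144 bytes

def validate_chunk_size(chunk_size):
    """Hypothetical mirror of the multiple-of-256-KB rule stated above."""
    if chunk_size is not None and chunk_size % _CHUNK_SIZE_MULTIPLE != 0:
        raise ValueError(
            "chunk_size must be a multiple of 256 KB (262144 bytes)"
        )
    return chunk_size

validate_chunk_size(None)        # allowed: the library picks a default
validate_chunk_size(5 * 262144)  # allowed: 1280 KB, an exact multiple
```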
clear_lifecyle_rules()
Clear lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
property client()
The client bound to this bucket.
configure_website(main_page_suffix=None, not_found_page=None)
Configure website-related properties.
See https://cloud.google.com/storage/docs/hosting-static-website
NOTE: This (apparently) only works if your bucket name is a domain name (and to do that, you need to get approved somehow…).
If you want this bucket to host a website, just provide the name of an index page and a page to use when a blob isn’t found:
client = storage.Client()
bucket = client.get_bucket(bucket_name)
bucket.configure_website("index.html", "404.html")
You probably should also make the whole bucket public:
bucket.make_public(recursive=True, future=True)
This says: “Make the bucket public, and all the stuff already in the bucket, and anything else I add to the bucket. Just make it all public.”
Parameters
main_page_suffix (str) – The page to use as the main page of a directory. Typically something like "index.html".
not_found_page (str) – The file to use when a page isn’t found.
copy_blob(blob, destination_bucket, new_name=None, client=None, preserve_acl=True, source_generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Copy the given blob to the given bucket, optionally with a new name.
If user_project
is set, bills the API request to that project.
Parameters
blob (google.cloud.storage.blob.Blob) – The blob to be copied.
destination_bucket (google.cloud.storage.bucket.Bucket) – The bucket into which the blob should be copied.
new_name (str) – (Optional) The new name for the copied file.
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
preserve_acl (bool) – DEPRECATED. This argument is not functional! (Optional) Copies ACL from old blob to new blob. Default: True.
source_generation (long) – (Optional) The generation of the blob to be copied.
if_generation_match (long) – (Optional) See Using if_generation_match. Note that the generation to be matched is that of the destination blob.
if_generation_not_match (long) – (Optional) See Using if_generation_not_match. Note that the generation to be matched is that of the destination blob.
if_metageneration_match (long) – (Optional) See Using if_metageneration_match. Note that the metageneration to be matched is that of the destination blob.
if_metageneration_not_match (long) – (Optional) See Using if_metageneration_not_match. Note that the metageneration to be matched is that of the destination blob.
if_source_generation_match (long) – (Optional) Makes the operation conditional on whether the source object’s generation matches the given value.
if_source_generation_not_match (long) – (Optional) Makes the operation conditional on whether the source object’s generation does not match the given value.
if_source_metageneration_match (long) – (Optional) Makes the operation conditional on whether the source object’s current metageneration matches the given value.
if_source_metageneration_not_match (long) – (Optional) Makes the operation conditional on whether the source object’s current metageneration does not match the given value.
timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. See: Configuring Retries
Return type
google.cloud.storage.blob.Blob
Returns
The new Blob.
Example
Copy a blob including ACL.
>>> from google.cloud import storage
>>> client = storage.Client(project="project")
>>> bucket = client.bucket("bucket")
>>> dst_bucket = client.bucket("destination-bucket")
>>> blob = bucket.blob("file.ext")
>>> new_blob = bucket.copy_blob(blob, dst_bucket)
>>> new_blob.acl.save(blob.acl)
property cors()
Retrieve or set CORS policies configured for this bucket.
See http://www.w3.org/TR/cors/ and
https://cloud.google.com/storage/docs/json_api/v1/buckets
NOTE: The getter for this property returns a list which contains copies of the bucket’s CORS policy mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the dict via the setter. E.g.:
>>> policies = bucket.cors
>>> policies.append({'origin': '/foo', ...})
>>> policies[1]['maxAgeSeconds'] = 3600
>>> del policies[0]
>>> bucket.cors = policies
>>> bucket.update()
Setter
Set CORS policies for this bucket.
Getter
Gets the CORS policies for this bucket.
Return type
list of dictionaries
Returns
A sequence of mappings describing each CORS policy.
create(client=None, project=None, location=None, predefined_acl=None, predefined_default_object_acl=None, timeout=60, retry=<google.api_core.retry.Retry object>)
DEPRECATED. Creates current bucket.
NOTE: Direct use of this method is deprecated. Use Client.create_bucket()
instead.
If the bucket already exists, will raise
google.cloud.exceptions.Conflict
.
This implements “storage.buckets.insert”.
If user_project
is set, bills the API request to that project.
Parameters
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
project (str) – (Optional) The project under which the bucket is to be created. If not passed, uses the project set on the client.
location (str) – (Optional) The location of the bucket. If not passed, the default location, US, will be used. See https://cloud.google.com/storage/docs/bucket-locations
predefined_acl (str) – (Optional) Name of predefined ACL to apply to bucket. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl
predefined_default_object_acl (str) – (Optional) Name of predefined ACL to apply to bucket’s objects. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl
timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. See: Configuring Retries
Raises
ValueError – if project is None and client’s project is also None.
property default_event_based_hold()
Are uploaded objects automatically placed under an event-based hold?
If True, uploaded objects will be placed under an event-based hold to be released at a future time. When released an object will then begin the retention period determined by the policy retention period for the object bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
If the property is not set locally, returns None
.
Return type
bool or
NoneType
property default_kms_key_name()
Retrieve / set default KMS encryption key for objects in the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Setter
Set default KMS encryption key for items in this bucket.
Getter
Get default KMS encryption key for items in this bucket.
Return type
Returns
Default KMS encryption key, or
None
if not set.
property default_object_acl()
Create our defaultObjectACL on demand.
delete(force=False, client=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Delete this bucket.
The bucket must be empty in order to submit a delete request. If
force=True
is passed, this will first attempt to delete all the
objects / blobs in the bucket (i.e. try to empty the bucket).
If the bucket doesn’t exist, this will raise
google.cloud.exceptions.NotFound
. If the bucket is not empty
(and force=False
), will raise google.cloud.exceptions.Conflict
.
If force=True
and the bucket contains more than 256 objects / blobs
this will cowardly refuse to delete the objects (or the bucket). This
is to prevent accidental bucket deletion and to prevent extremely long
runtime of this method.
If user_project
is set, bills the API request to that project.
Parameters
force (bool) – If True, empties the bucket’s objects then deletes it.
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
if_metageneration_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration matches the given value.
if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration does not match the given value.
timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. See: Configuring Retries
Raises
ValueError – if force is True and the bucket contains more than 256 objects / blobs.
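The empty-bucket requirement and the 256-object cutoff described above can be sketched as a plain guard. `check_bucket_deletable` below is a hypothetical illustration of the rule (using RuntimeError as a stand-in for google.cloud.exceptions.Conflict), not the library’s code:

```python
_MAX_OBJECTS_FOR_ITERATION = 256  # the limit described above

def check_bucket_deletable(blob_names, force=False):
    """Hypothetical guard mirroring delete()'s documented preconditions."""
    if not force and blob_names:
        # Stand-in for google.cloud.exceptions.Conflict (bucket not empty).
        raise RuntimeError("bucket not empty; pass force=True to empty it first")
    if force and len(blob_names) > _MAX_OBJECTS_FOR_ITERATION:
        # "Cowardly refuse": avoids accidental deletion and long runtimes.
        raise ValueError(
            "Refusing to delete bucket with more than %d objects"
            % _MAX_OBJECTS_FOR_ITERATION
        )

check_bucket_deletable([], force=False)         # empty bucket: fine
check_bucket_deletable(["a"] * 10, force=True)  # small bucket + force: fine
```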
delete_blob(blob_name, client=None, generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Deletes a blob from the current bucket.
If the blob isn’t found (backend 404), raises a
google.cloud.exceptions.NotFound
.
For example:
from google.cloud.exceptions import NotFound
client = storage.Client()
bucket = client.get_bucket("my-bucket")
blobs = list(client.list_blobs(bucket))
assert len(blobs) > 0
# [<Blob: my-bucket, my-file.txt>]
bucket.delete_blob("my-file.txt")
try:
bucket.delete_blob("doesnt-exist")
except NotFound:
pass
If user_project
is set, bills the API request to that project.
Parameters
blob_name (str) – A blob name to delete.
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
generation (long) – (Optional) If present, permanently deletes a specific revision of this object.
if_generation_match (long) – (Optional) See Using if_generation_match
if_generation_not_match (long) – (Optional) See Using if_generation_not_match
if_metageneration_match (long) – (Optional) See Using if_metageneration_match
if_metageneration_not_match (long) – (Optional) See Using if_metageneration_not_match
timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. See: Configuring Retries
Raises
google.cloud.exceptions.NotFound – if the blob isn’t found. To suppress the exception, call delete_blobs, passing a no-op on_error callback, e.g.:
bucket.delete_blobs([blob], on_error=lambda blob: None)
delete_blobs(blobs, on_error=None, client=None, timeout=60, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Deletes a list of blobs from the current bucket.
Uses delete_blob()
to delete each individual blob.
If user_project
is set, bills the API request to that project.
Parameters
blobs (list) – A list of blob names or Blob objects to delete.
on_error (callable) – (Optional) Takes single argument: blob. Called once for each blob raising NotFound; otherwise, the exception is propagated.
client (Client) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
if_generation_match (list of long) – (Optional) See Using if_generation_match. The list must match blobs item-to-item.
if_generation_not_match (list of long) – (Optional) See Using if_generation_not_match. The list must match blobs item-to-item.
if_metageneration_match (list of long) – (Optional) See Using if_metageneration_match. The list must match blobs item-to-item.
if_metageneration_not_match (list of long) – (Optional) See Using if_metageneration_not_match. The list must match blobs item-to-item.
timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. See: Configuring Retries
Raises
NotFound – if on_error is not passed.
Example
Delete blobs using generation match preconditions.
>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.bucket("bucket-name")
>>> blobs = [bucket.blob("blob-name-1"), bucket.blob("blob-name-2")]
>>> if_generation_match = [None] * len(blobs)
>>> if_generation_match[0] = "123" # precondition for "blob-name-1"
>>> bucket.delete_blobs(blobs, if_generation_match=if_generation_match)
disable_logging()
Disable access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#disabling
disable_website()
Disable the website configuration for this bucket.
This is really just a shortcut for setting the website-related
attributes to None
.
enable_logging(bucket_name, object_prefix='')
Enable access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs
Parameters
bucket_name (str) – name of the bucket in which to store access logs.
object_prefix (str) – prefix for access log filenames.
property etag()
Retrieve the ETag for the bucket.
See https://tools.ietf.org/html/rfc2616#section-3.11 and
https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type
str or
NoneType
Returns
The bucket etag or
None
if the bucket’s resource has not been loaded from the server.
exists(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)
Determines whether or not this bucket exists.
If user_project
is set, bills the API request to that project.
Parameters
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
if_metageneration_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration matches the given value.
if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration does not match the given value.
retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. See: Configuring Retries
Return type
bool
Returns
True if the bucket exists in Cloud Storage.
classmethod from_string(uri, client=None)
Get a constructor for bucket object by URI.
Parameters
uri (str) – The bucket URI, e.g. "gs://bucket".
client (Client or NoneType) – (Optional) The client to use.
Return type
google.cloud.storage.bucket.Bucket
Returns
The bucket object created.
Example
Get a constructor for bucket object by URI.
>>> from google.cloud import storage
>>> from google.cloud.storage.bucket import Bucket
>>> client = storage.Client()
>>> bucket = Bucket.from_string("gs://bucket", client)
generate_signed_url(expiration=None, api_access_endpoint='https://storage.googleapis.com', method='GET', headers=None, query_parameters=None, client=None, credentials=None, version=None, virtual_hosted_style=False, bucket_bound_hostname=None, scheme='http')
Generates a signed URL for this bucket.
NOTE: If you are on Google Compute Engine, you can’t generate a signed URL using GCE service account. Follow Issue 50 for updates on this. If you’d like to be able to generate a signed URL from GCE, you can use a standard service account from a JSON file rather than a GCE service account.
If you have a bucket that you want to allow access to for a set amount of time, you can use this method to generate a URL that is only valid within a certain time period.
If bucket_bound_hostname is set as an argument of api_access_endpoint, https works only if using a CDN.
Example
Generates a signed URL for this bucket using bucket_bound_hostname and scheme.
>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket('my-bucket-name')
>>> url = bucket.generate_signed_url(
...     expiration='url-expiration-time',
...     bucket_bound_hostname='mydomain.tld',
...     version='v4')
>>> url = bucket.generate_signed_url(
...     expiration='url-expiration-time',
...     bucket_bound_hostname='mydomain.tld',
...     version='v4', scheme='https')  # If using CDN
This is particularly useful if you don’t want publicly accessible buckets, but don’t want to require users to explicitly log in.
Parameters
expiration (Union[Integer, datetime.datetime, datetime.timedelta]) – Point in time when the signed URL should expire. If a datetime instance is passed without an explicit tzinfo set, it will be assumed to be UTC.
api_access_endpoint (str) – (Optional) URI base.
method (str) – The HTTP verb that will be used when requesting the URL.
headers (dict) – (Optional) Additional HTTP headers to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers Requests using the signed URL must pass the specified header (name and value) with each request for the URL.
query_parameters (dict) – (Optional) Additional query parameters to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers#query
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the blob’s bucket.
credentials (google.auth.credentials.Credentials or NoneType) – The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment.
version (str) – (Optional) The version of signed credential to create. Must be one of ‘v2’ | ‘v4’.
virtual_hosted_style (bool) – (Optional) If true, then construct the URL relative to the bucket’s virtual hostname.
bucket_bound_hostname (str) – (Optional) If passed, then construct the URL relative to the bucket-bound hostname. Value can be a bare hostname or include a scheme, e.g., ‘example.com’ or ‘http://example.com’. See: https://cloud.google.com/storage/docs/request-endpoints#cname
scheme (str) – (Optional) If bucket_bound_hostname is passed as a bare hostname, use this value as the scheme. https will work only when using a CDN. Defaults to "http".
Raises
ValueError – when version is invalid.
TypeError – when expiration is not a valid type.
AttributeError – if credentials is not an instance of google.auth.credentials.Signing.
Return type
str
Returns
A signed URL you can use to access the resource until expiration.
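The expiration argument accepts an int, a datetime.datetime, or a datetime.timedelta, with naive datetimes assumed to be UTC as stated above. A sketch of how such values might be normalized to a POSIX timestamp (`normalize_expiration` is an illustration of the documented behavior, not the library’s internal code):

```python
import datetime

def normalize_expiration(expiration, now=None):
    """Normalize int / datetime / timedelta to seconds since the epoch."""
    if now is None:
        now = datetime.datetime.now(datetime.timezone.utc)
    if isinstance(expiration, int):
        return expiration  # already a POSIX timestamp
    if isinstance(expiration, datetime.timedelta):
        return int((now + expiration).timestamp())
    if isinstance(expiration, datetime.datetime):
        if expiration.tzinfo is None:
            # Naive datetimes are assumed to be UTC, per the docs above.
            expiration = expiration.replace(tzinfo=datetime.timezone.utc)
        return int(expiration.timestamp())
    raise TypeError("expected int, datetime.datetime, or datetime.timedelta")

one_hour_from_now = normalize_expiration(datetime.timedelta(hours=1))
```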
generate_upload_policy(conditions, expiration=None, client=None)
Create a signed upload policy for uploading objects.
This method generates and signs a policy document. You can use policy documents to allow visitors to a website to upload files to Google Cloud Storage without giving them direct write access.
For example:
bucket = client.bucket("my-bucket")
conditions = [["starts-with", "$key", ""], {"acl": "public-read"}]
policy = bucket.generate_upload_policy(conditions)
# Generate an upload form using the form fields.
policy_fields = "".join(
'<input type="hidden" name="{key}" value="{value}">'.format(
key=key, value=value
)
for key, value in policy.items()
)
upload_form = (
'<form action="http://{bucket_name}.storage.googleapis.com"'
' method="post" enctype="multipart/form-data">'
'<input type="text" name="key" value="my-test-key">'
'<input type="hidden" name="bucket" value="{bucket_name}">'
'<input type="hidden" name="acl" value="public-read">'
'<input name="file" type="file">'
'<input type="submit" value="Upload">'
"{policy_fields}"
"</form>"
).format(bucket_name=bucket.name, policy_fields=policy_fields)
print(upload_form)
Parameters
conditions (list) – A list of conditions as described in the policy documents documentation.
expiration (datetime) – (Optional) Expiration in UTC. If not specified, the policy will expire in 1 hour.
client (Client) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
Return type
Returns
A dictionary of (form field name, form field value) of form fields that should be added to your HTML upload form in order to attach the signature.
get_blob(blob_name, client=None, encryption_key=None, generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.api_core.retry.Retry object>, **kwargs)
Get a blob object by name.
This will return None if the blob doesn’t exist:
client = storage.Client()
bucket = client.get_bucket("my-bucket")
assert isinstance(bucket.get_blob("/path/to/blob.txt"), Blob)
# <Blob: my-bucket, /path/to/blob.txt>
assert not bucket.get_blob("/does-not-exist.txt")
# None
If user_project
is set, bills the API request to that project.
Parameters
blob_name (str) – The name of the blob to retrieve.
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
encryption_key (bytes) – (Optional) 32 byte encryption key for customer-supplied encryption. See https://cloud.google.com/storage/docs/encryption#customer-supplied.
generation (long) – (Optional) If present, selects a specific revision of this object.
if_generation_match (long) – (Optional) See Using if_generation_match
if_generation_not_match (long) – (Optional) See Using if_generation_not_match
if_metageneration_match (long) – (Optional) See Using if_metageneration_match
if_metageneration_not_match (long) – (Optional) See Using if_metageneration_not_match
timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. See: Configuring Retries
kwargs – Keyword arguments to pass to the Blob constructor.
Return type
google.cloud.storage.blob.Blob or None
Returns
The blob object if it exists, otherwise None.
get_iam_policy(client=None, requested_policy_version=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Retrieve the IAM policy for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/getIamPolicy
If user_project
is set, bills the API request to that project.
Parameters
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
requested_policy_version (int or NoneType) – (Optional) The version of IAM policies to request. If a policy with a condition is requested without setting this, the server will return an error. This must be set to a value of 3 to retrieve IAM policies containing conditions. This is to prevent client code that isn’t aware of IAM conditions from interpreting and modifying policies incorrectly. The service might return a policy with version lower than the one that was requested, based on the feature syntax in the policy fetched.
timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. See: Configuring Retries
Return type
Returns
the policy instance, based on the resource returned from the
getIamPolicy
API request.
Example:
from google.cloud.storage.iam import STORAGE_OBJECT_VIEWER_ROLE
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.version = 3
# Add a binding to the policy via its bindings property
policy.bindings.append({
    "role": STORAGE_OBJECT_VIEWER_ROLE,
    "members": {"serviceAccount:account@project.iam.gserviceaccount.com", ...},
    # Optional:
    "condition": {
        "title": "prefix",
        "description": "Objects matching prefix",
        "expression": 'resource.name.startsWith("projects/project-name/buckets/bucket-name/objects/prefix")',
    },
})
bucket.set_iam_policy(policy)
get_logging()
Return info about access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#status
Return type
dict or NoneType
Returns
a dict with keys logBucket and logObjectPrefix (if logging is enabled), or None (if not).
get_notification(notification_id, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Get Pub / Sub notification for this bucket.
See: https://cloud.google.com/storage/docs/json_api/v1/notifications/get
If user_project
is set, bills the API request to that project.
Parameters
notification_id (str) – The notification id to retrieve the notification configuration.
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. See: Configuring Retries
Return type
google.cloud.storage.notification.BucketNotification
Returns
notification instance.
Example
Get notification using notification id.
>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket('my-bucket-name') # API request.
>>> notification = bucket.get_notification(notification_id='id') # API request.
property iam_configuration()
Retrieve IAM configuration for this bucket.
Return type
IAMConfiguration
Returns
an instance for managing the bucket’s IAM configuration.
property id()
Retrieve the ID for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type
str or
NoneType
Returns
The ID of the bucket or
None
if the bucket’s resource has not been loaded from the server.
property labels()
Retrieve or set labels assigned to this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels
NOTE: The getter for this property returns a dict which is a copy of the bucket’s labels. Mutating that dict has no effect unless you then re-assign the dict via the setter. E.g.:
>>> labels = bucket.labels
>>> labels['new_key'] = 'some-label'
>>> del labels['old_key']
>>> bucket.labels = labels
>>> bucket.update()
Setter
Set labels for this bucket.
Getter
Gets the labels for this bucket.
Return type
Returns
Name-value pairs (string->string) labelling the bucket.
property lifecycle_rules()
Retrieve or set lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
NOTE: The getter for this property returns a list which contains copies of the bucket’s lifecycle rules mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the dict via the setter. E.g.:
>>> rules = bucket.lifecycle_rules
>>> rules.append({'origin': '/foo', ...})
>>> rules[1]['rule']['action']['type'] = 'Delete'
>>> del rules[0]
>>> bucket.lifecycle_rules = rules
>>> bucket.update()
Setter
Set lifecycle rules for this bucket.
Getter
Gets the lifecycle rules for this bucket.
Return type
generator(dict)
Returns
A sequence of mappings describing each lifecycle rule.
list_blobs(max_results=None, page_token=None, prefix=None, delimiter=None, start_offset=None, end_offset=None, include_trailing_delimiter=None, versions=None, projection='noAcl', fields=None, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)
DEPRECATED. Return an iterator used to find blobs in the bucket.
NOTE: Direct use of this method is deprecated. Use Client.list_blobs
instead.
If user_project
is set, bills the API request to that project.
Parameters
max_results (int) – (Optional) The maximum number of blobs to return.
page_token (str) – (Optional) If present, return the next batch of blobs, using the value, which must correspond to the nextPageToken value returned in the previous response. Deprecated: use the pages property of the returned iterator instead of manually passing the token.
prefix (str) – (Optional) Prefix used to filter blobs.
delimiter (str) – (Optional) Delimiter, used with prefix to emulate hierarchy.
start_offset (str) – (Optional) Filter results to objects whose names are lexicographically equal to or after startOffset. If endOffset is also set, the objects listed will have names between startOffset (inclusive) and endOffset (exclusive).
end_offset (str) – (Optional) Filter results to objects whose names are lexicographically before endOffset. If startOffset is also set, the objects listed will have names between startOffset (inclusive) and endOffset (exclusive).
include_trailing_delimiter (bool) – (Optional) If true, objects that end in exactly one instance of delimiter will have their metadata included in items in addition to prefixes.
versions (bool) – (Optional) Whether object versions should be returned as separate blobs.
projection (str) – (Optional) If used, must be ‘full’ or ‘noAcl’. Defaults to 'noAcl'. Specifies the set of properties to return.
fields (str) – (Optional) Selector specifying which fields to include in a partial response. Must be a list of fields. For example, to get a partial response with just the next page token and the name and language of each blob returned: 'items(name,contentLanguage),nextPageToken'. See: https://cloud.google.com/storage/docs/json_api/v1/parameters#fields
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or [google.cloud.storage.retry.ConditionalRetryPolicy](retry_timeout.md#google.cloud.storage.retry.ConditionalRetryPolicy)) – (Optional) How to retry the RPC. See: Configuring Retries
Return type
google.api_core.page_iterator.Iterator
Returns
Iterator of all Blob instances in this bucket matching the arguments.
Example
List blobs in the bucket with user_project.
>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = storage.Bucket(client, "my-bucket-name", user_project="my-project")
>>> all_blobs = list(client.list_blobs(bucket))
list_notifications(client=None, timeout=60, retry=<google.api_core.retry.Retry object>)
List Pub/Sub notifications for this bucket.
See: https://cloud.google.com/storage/docs/json_api/v1/notifications/list
If user_project
is set, bills the API request to that project.
Parameters
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or [google.cloud.storage.retry.ConditionalRetryPolicy](retry_timeout.md#google.cloud.storage.retry.ConditionalRetryPolicy)) – (Optional) How to retry the RPC. See: Configuring Retries
Return type
list of
BucketNotification
Returns
notification instances
property location()
Retrieve location configured for this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets and https://cloud.google.com/storage/docs/bucket-locations
Returns None if the property has not been set before creation, or if the bucket’s resource has not been loaded from the server.
Return type
str or NoneType
property location_type()
Retrieve or set the location type for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
Setter
Set the location type for this bucket.
Getter
Gets the location type for this bucket.
Return type
str or
NoneType
Returns
If set, one of MULTI_REGION_LOCATION_TYPE, REGION_LOCATION_TYPE, or DUAL_REGION_LOCATION_TYPE, else None.
lock_retention_policy(client=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Lock the bucket’s retention policy.
Parameters
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or [google.cloud.storage.retry.ConditionalRetryPolicy](retry_timeout.md#google.cloud.storage.retry.ConditionalRetryPolicy)) – (Optional) How to retry the RPC. See: Configuring Retries
Raises
ValueError – if the bucket has no metageneration (i.e., new or never reloaded); if the bucket has no retention policy assigned; if the bucket’s retention policy is already locked.
make_private(recursive=False, future=False, client=None, timeout=60)
Update bucket’s ACL, revoking read access for anonymous users.
Parameters
recursive (bool) – If True, this will make all blobs inside the bucket private as well.
future (bool) – If True, this will make all objects created in the future private as well.
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
Raises
ValueError – If recursive is True and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs() and call make_private() for each blob.
make_public(recursive=False, future=False, client=None, timeout=60)
Update bucket’s ACL, granting read access to anonymous users.
Parameters
recursive (bool) – If True, this will make all blobs inside the bucket public as well.
future (bool) – If True, this will make all objects created in the future public as well.
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
Raises
ValueError – If recursive is True and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs() and call make_public() for each blob.
property metageneration()
Retrieve the metageneration for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type
int or
NoneType
Returns
The metageneration of the bucket or
None
if the bucket’s resource has not been loaded from the server.
notification(topic_name=None, topic_project=None, custom_attributes=None, event_types=None, blob_name_prefix=None, payload_format='NONE', notification_id=None)
Factory: create a notification resource for the bucket.
See: BucketNotification
for parameters.
Return type
BucketNotification
property owner()
Retrieve info about the owner of the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type
dict or
NoneType
Returns
Mapping of owner’s role/ID. Returns
None
if the bucket’s resource has not been loaded from the server.
patch(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Sends all changed properties in a PATCH request.
Updates the _properties
with the response from the backend.
If user_project
is set, bills the API request to that project.
Parameters
client (Client or NoneType) – The client to use. If not passed, falls back to the client stored on the current object.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
if_metageneration_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration matches the given value.
if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration does not match the given value.
retry (google.api_core.retry.Retry or [google.cloud.storage.retry.ConditionalRetryPolicy](retry_timeout.md#google.cloud.storage.retry.ConditionalRetryPolicy)) – (Optional) How to retry the RPC. See: Configuring Retries
property path()
The URL path to this bucket.
static path_helper(bucket_name)
Relative URL path for a bucket.
Parameters
bucket_name (str) – The bucket name in the path.
Return type
str
Returns
The relative URL path for
bucket_name
.
property project_number()
Retrieve the number of the project to which the bucket is assigned.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type
int or
NoneType
Returns
The project number that owns the bucket or
None
if the bucket’s resource has not been loaded from the server.
reload(client=None, projection='noAcl', timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)
Reload properties from Cloud Storage.
If user_project
is set, bills the API request to that project.
Parameters
client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.
projection (str) – (Optional) If used, must be ‘full’ or ‘noAcl’. Defaults to 'noAcl'. Specifies the set of properties to return.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
if_metageneration_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration matches the given value.
if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration does not match the given value.
retry (google.api_core.retry.Retry or [google.cloud.storage.retry.ConditionalRetryPolicy](retry_timeout.md#google.cloud.storage.retry.ConditionalRetryPolicy)) – (Optional) How to retry the RPC. See: Configuring Retries
rename_blob(blob, new_name, client=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Rename the given blob using copy and delete operations.
If user_project
is set, bills the API request to that project.
Effectively, copies blob to the same bucket with a new name, then deletes the blob.
WARNING: This method will first duplicate the data and then delete the old blob. This means that with very large objects renaming can be a (temporarily) costly or slow operation. If you need more control over the copy and deletion, instead use google.cloud.storage.blob.Blob.copy_to and google.cloud.storage.blob.Blob.delete directly.
Parameters
blob (google.cloud.storage.blob.Blob) – The blob to be renamed.
new_name (str) – The new name for this blob.
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
if_generation_match (long) – (Optional) See Using if_generation_match. Note that the generation to be matched is that of the destination blob.
if_generation_not_match (long) – (Optional) See Using if_generation_not_match. Note that the generation to be matched is that of the destination blob.
if_metageneration_match (long) – (Optional) See Using if_metageneration_match. Note that the metageneration to be matched is that of the destination blob.
if_metageneration_not_match (long) – (Optional) See Using if_metageneration_not_match. Note that the metageneration to be matched is that of the destination blob.
if_source_generation_match (long) – (Optional) Makes the operation conditional on whether the source object’s generation matches the given value. Also used in the (implied) delete request.
if_source_generation_not_match (long) – (Optional) Makes the operation conditional on whether the source object’s generation does not match the given value. Also used in the (implied) delete request.
if_source_metageneration_match (long) – (Optional) Makes the operation conditional on whether the source object’s current metageneration matches the given value. Also used in the (implied) delete request.
if_source_metageneration_not_match (long) – (Optional) Makes the operation conditional on whether the source object’s current metageneration does not match the given value. Also used in the (implied) delete request.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or [google.cloud.storage.retry.ConditionalRetryPolicy](retry_timeout.md#google.cloud.storage.retry.ConditionalRetryPolicy)) – (Optional) How to retry the RPC. See: Configuring Retries
Return type
Blob
Returns
The newly-renamed blob.
property requester_pays()
Does the requester pay for API requests for this bucket?
See https://cloud.google.com/storage/docs/requester-pays for details.
Setter
Update whether requester pays for this bucket.
Getter
Query whether requester pays for this bucket.
Return type
bool
Returns
True if requester pays for API requests for the bucket, else False.
property retention_period()
Retrieve or set the retention period for items in the bucket.
Return type
int or
NoneType
Returns
number of seconds to retain items after upload or release from event-based lock, or
None
if the property is not set locally.
property retention_policy_effective_time()
Retrieve the effective time of the bucket’s retention policy.
Return type
datetime.datetime or
NoneType
Returns
point-in time at which the bucket’s retention policy is effective, or
None
if the property is not set locally.
property retention_policy_locked()
Retrieve whether the bucket’s retention policy is locked.
Return type
bool
Returns
True if the bucket’s policy is locked; False if the policy is not locked or the property is not set locally.
property self_link()
Retrieve the URI for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type
str or
NoneType
Returns
The self link for the bucket or
None
if the bucket’s resource has not been loaded from the server.
set_iam_policy(policy, client=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Update the IAM policy for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy
If user_project
is set, bills the API request to that project.
Parameters
policy (google.api_core.iam.Policy) – policy instance used to update bucket’s IAM policy.
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or [google.cloud.storage.retry.ConditionalRetryPolicy](retry_timeout.md#google.cloud.storage.retry.ConditionalRetryPolicy)) – (Optional) How to retry the RPC. See: Configuring Retries
Return type
google.api_core.iam.Policy
Returns
the policy instance, based on the resource returned from the
setIamPolicy
API request.
property storage_class()
Retrieve or set the storage class for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
Setter
Set the storage class for this bucket.
Getter
Gets the storage class for this bucket.
Return type
str or
NoneType
Returns
If set, one of NEARLINE_STORAGE_CLASS, COLDLINE_STORAGE_CLASS, ARCHIVE_STORAGE_CLASS, STANDARD_STORAGE_CLASS, MULTI_REGIONAL_LEGACY_STORAGE_CLASS, REGIONAL_LEGACY_STORAGE_CLASS, or DURABLE_REDUCED_AVAILABILITY_LEGACY_STORAGE_CLASS, else None.
test_iam_permissions(permissions, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)
API call: test permissions
See https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions
If user_project
is set, bills the API request to that project.
Parameters
permissions (list of string) – the permissions to check
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
retry (google.api_core.retry.Retry or [google.cloud.storage.retry.ConditionalRetryPolicy](retry_timeout.md#google.cloud.storage.retry.ConditionalRetryPolicy)) – (Optional) How to retry the RPC. See: Configuring Retries
Return type
list of string
Returns
the permissions returned by the
testIamPermissions
API request.
property time_created()
Retrieve the timestamp at which the bucket was created.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type
datetime.datetime or NoneType
Returns
Datetime object parsed from RFC3339 valid timestamp, or
None
if the bucket’s resource has not been loaded from the server.
update(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Sends all properties in a PUT request.
Updates the _properties
with the response from the backend.
If user_project
is set, bills the API request to that project.
Parameters
client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.
timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. See: Configuring Timeouts
if_metageneration_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration matches the given value.
if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration does not match the given value.
retry (google.api_core.retry.Retry or [google.cloud.storage.retry.ConditionalRetryPolicy](retry_timeout.md#google.cloud.storage.retry.ConditionalRetryPolicy)) – (Optional) How to retry the RPC. See: Configuring Retries
property user_project()
Project ID to be billed for API requests made via this bucket.
If unset, API requests are billed to the bucket owner.
A user project is required for all operations on Requester Pays buckets.
See https://cloud.google.com/storage/docs/requester-pays#requirements for details.
Return type
str or NoneType
property versioning_enabled()
Is versioning enabled for this bucket?
See https://cloud.google.com/storage/docs/object-versioning for details.
Setter
Update whether versioning is enabled for this bucket.
Getter
Query whether versioning is enabled for this bucket.
Return type
bool
Returns
True if enabled, else False.
class google.cloud.storage.bucket.IAMConfiguration(bucket, uniform_bucket_level_access_enabled=None, uniform_bucket_level_access_locked_time=None, bucket_policy_only_enabled=None, bucket_policy_only_locked_time=None)
Bases: dict
Map a bucket’s IAM configuration.
Params bucket
Bucket for which this instance is the policy.
Params uniform_bucket_level_access_enabled
(Optional) Whether the IAM-only policy is enabled for the bucket.
Params uniform_bucket_level_access_locked_time
(Optional) When the bucket’s IAM-only policy was enabled. This value should normally only be set by the back-end API.
Params bucket_policy_only_enabled
Deprecated alias for uniform_bucket_level_access_enabled.
Params bucket_policy_only_locked_time
Deprecated alias for uniform_bucket_level_access_locked_time.
property bucket()
Bucket for which this instance is the policy.
Return type
Bucket
Returns
the instance’s bucket.
property bucket_policy_only_enabled()
Deprecated alias for uniform_bucket_level_access_enabled
.
Return type
bool
Returns
whether the bucket is configured to allow only IAM.
property bucket_policy_only_locked_time()
Deprecated alias for uniform_bucket_level_access_locked_time
.
Return type
Union[datetime.datetime, None]
Returns
(readonly) Time after which bucket_policy_only_enabled will be frozen as true.
clear()
copy()
classmethod from_api_repr(resource, bucket)
Factory: construct instance from resource.
Params bucket
Bucket for which this instance is the policy.
Parameters
resource (dict) – mapping as returned from API call.
Return type
IAMConfiguration
Returns
Instance created from resource.
fromkeys(iterable, value=None, /)
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[, d])
Remove the specified key and return the corresponding value. If the key is not found, d is returned if given, otherwise KeyError is raised.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
property uniform_bucket_level_access_enabled()
If set, access checks only use bucket-level IAM policies or above.
Return type
bool
Returns
whether the bucket is configured to allow only IAM.
property uniform_bucket_level_access_locked_time()
Deadline for changing uniform_bucket_level_access_enabled
from true to false.
If the bucket’s uniform_bucket_level_access_enabled is true, this property is the time after which that setting becomes immutable.
If the bucket’s uniform_bucket_level_access_enabled is false, this property is None.
Return type
Union[datetime.datetime, None]
Returns
(readonly) Time after which uniform_bucket_level_access_enabled will be frozen as true.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
class google.cloud.storage.bucket.LifecycleRuleConditions(age=None, created_before=None, is_live=None, matches_storage_class=None, number_of_newer_versions=None, days_since_custom_time=None, custom_time_before=None, days_since_noncurrent_time=None, noncurrent_time_before=None, _factory=False)
Bases: dict
Map a single lifecycle rule for a bucket.
See: https://cloud.google.com/storage/docs/lifecycle
Parameters
age (int) – (Optional) Apply rule action to items whose age, in days, exceeds this value.
created_before (datetime.date) – (Optional) Apply rule action to items created before this date.
is_live (bool) – (Optional) If true, apply rule action to non-versioned items, or to items with no newer versions. If false, apply rule action to versioned items with at least one newer version.
matches_storage_class (list(str), one or more of Bucket.STORAGE_CLASSES) – (Optional) Apply rule action to items whose storage class matches this value.
number_of_newer_versions (int) – (Optional) Apply rule action to versioned items having N newer versions.
days_since_custom_time (int) – (Optional) Apply rule action to items whose number of days elapsed since the custom timestamp exceeds this value. The value of the field must be a nonnegative integer.
custom_time_before (datetime.date) – (Optional) Date object parsed from RFC3339 valid date; apply rule action to items whose custom time is before this date, e.g., 2019-03-16.
days_since_noncurrent_time (int) – (Optional) Apply rule action to items whose number of days elapsed since the noncurrent timestamp exceeds this value. This condition is relevant only for versioned objects. The value of the field must be a nonnegative integer. If it’s zero, the object version becomes eligible for lifecycle action as soon as it becomes noncurrent.
noncurrent_time_before (datetime.date) – (Optional) Date object parsed from RFC3339 valid date; apply rule action to items whose noncurrent time is before this date. This condition is relevant only for versioned objects, e.g., 2019-03-16.
Raises
ValueError – if no arguments are passed.
property age()
Condition’s age value.
clear()
copy()
property created_before()
Condition’s created_before value.
property custom_time_before()
Condition’s ‘custom_time_before’ value.
property days_since_custom_time()
Condition’s ‘days_since_custom_time’ value.
property days_since_noncurrent_time()
Condition’s ‘days_since_noncurrent_time’ value.
classmethod from_api_repr(resource)
Factory: construct instance from resource.
Parameters
resource (dict) – mapping as returned from API call.
Return type
LifecycleRuleConditions
Returns
Instance created from resource.
fromkeys(iterable, value=None, /)
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
property is_live()
Condition’s ‘is_live’ value.
items()
keys()
property matches_storage_class()
Condition’s ‘matches_storage_class’ value.
property noncurrent_time_before()
Conditon’s ‘noncurrent_time_before’ value.
property number_of_newer_versions()
Condition’s ‘number_of_newer_versions’ value.
pop(k[, d])
Remove the specified key and return the corresponding value. If the key is not found, d is returned if given, otherwise KeyError is raised.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
class google.cloud.storage.bucket.LifecycleRuleDelete(**kw)
Bases: dict
Map a lifecycle rule deleting matching items.
Params kw
arguments passed to
LifecycleRuleConditions
.
clear()
copy()
classmethod from_api_repr(resource)
Factory: construct instance from resource.
Parameters
resource (dict) – mapping as returned from API call.
Return type
LifecycleRuleDelete
Returns
Instance created from resource.
fromkeys(iterable, value=None, /)
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[, d])
Remove the specified key and return the corresponding value. If the key is not found, d is returned if given, otherwise KeyError is raised.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
class google.cloud.storage.bucket.LifecycleRuleSetStorageClass(storage_class, **kw)
Bases: dict
Map a lifecycle rule updating storage class of matching items.
Parameters
storage_class (str, one of Bucket.STORAGE_CLASSES) – new storage class to assign to matching items.
Params kw
arguments passed to LifecycleRuleConditions.
clear()
copy()
classmethod from_api_repr(resource)
Factory: construct instance from resource.
Parameters
resource (dict) – mapping as returned from API call.
Return type
LifecycleRuleSetStorageClass
Returns
Instance created from resource.
fromkeys(iterable, value=None, /)
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[, d])
Remove the specified key and return the corresponding value. If the key is not found, d is returned if given, otherwise KeyError is raised.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]