Bucket(client, name=None, user_project=None, generation=None)
A class representing a Bucket on Cloud Storage.
Parameters
Name | Type | Description
---|---|---
client | Client | A client which holds credentials and project configuration for the bucket (which requires a project).
name | str | The name of the bucket. Bucket names must start and end with a number or letter.
user_project | str | (Optional) The project ID to be billed for API requests made via this instance.
generation | int | (Optional) If present, selects a specific revision of this bucket.
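As a quick orientation, here is a minimal sketch of building a bucket handle; the bucket name "my-bucket" is a placeholder, and constructing the handle does not make any API call:
from google.cloud import storage

client = storage.Client()
bucket = storage.Bucket(client, name="my-bucket")
# The client factory is equivalent; neither call contacts the API yet.
same_bucket = client.bucket("my-bucket")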
Properties
acl
Create our ACL on demand.
autoclass_enabled
Whether Autoclass is enabled for this bucket.
See https://cloud.google.com/storage/docs/using-autoclass for details.
:setter: Update whether autoclass is enabled for this bucket. :getter: Query whether autoclass is enabled for this bucket.
Returns
Type | Description
---|---
bool | True if enabled, else False.
autoclass_terminal_storage_class
The storage class that objects in an Autoclass bucket eventually transition to if they are not read for a certain length of time. Valid values are NEARLINE and ARCHIVE.
See https://cloud.google.com/storage/docs/using-autoclass for details.
:setter: Set the terminal storage class for Autoclass configuration. :getter: Get the terminal storage class for Autoclass configuration.
Returns
Type | Description
---|---
str | The terminal storage class if Autoclass is enabled, else None.
autoclass_terminal_storage_class_update_time
The time at which the Autoclass terminal_storage_class field was last updated for this bucket.
Returns
Type | Description
---|---
datetime.datetime or None | Point-in-time at which the bucket's terminal_storage_class was last updated, or None if the property is not set locally.
autoclass_toggle_time
Retrieve the toggle time when Autoclass was last enabled or disabled for the bucket.
Returns
Type | Description
---|---
datetime.datetime or None | Point-in-time at which the bucket's Autoclass setting was toggled, or None if the property is not set locally.
client
The client bound to this bucket.
cors
Retrieve or set CORS policies configured for this bucket.
See http://www.w3.org/TR/cors/ and https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns
Type | Description
---|---
list of dictionaries | A sequence of mappings describing each CORS policy.
data_locations
Retrieve the list of regional locations for custom dual-region buckets.
See https://cloud.google.com/storage/docs/json_api/v1/buckets and https://cloud.google.com/storage/docs/locations
Returns None if the property has not been set before creation, if the bucket's resource has not been loaded from the server, or if the bucket is not a dual-region bucket.
default_event_based_hold
Scalar property getter.
default_kms_key_name
Retrieve / set default KMS encryption key for objects in the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
:setter: Set default KMS encryption key for items in this bucket. :getter: Get default KMS encryption key for items in this bucket.
Returns
Type | Description
---|---
str | Default KMS encryption key, or None if not set.
default_object_acl
Create our defaultObjectACL on demand.
etag
Retrieve the ETag for the bucket.
See https://tools.ietf.org/html/rfc2616#section-3.11 and https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns
Type | Description
---|---
str or None | The bucket etag, or None if the bucket's resource has not been loaded from the server.
generation
Retrieve the generation for the bucket.
Returns
Type | Description
---|---
int or None | The generation of the bucket, or None if the bucket's resource has not been loaded from the server.
hard_delete_time
If this bucket has been soft-deleted, returns the time at which it will be permanently deleted.
Returns
Type | Description
---|---
datetime.datetime or None | (readonly) The time that the bucket will be permanently deleted. Note this property is only set for soft-deleted buckets.
hierarchical_namespace_enabled
Whether hierarchical namespace is enabled for this bucket.
:setter: Update whether hierarchical namespace is enabled for this bucket. :getter: Query whether hierarchical namespace is enabled for this bucket.
Returns
Type | Description
---|---
bool | True if enabled, else False.
iam_configuration
Retrieve IAM configuration for this bucket.
Returns
Type | Description
---|---
IAMConfiguration | An instance for managing the bucket's IAM configuration.
id
Retrieve the ID for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns
Type | Description
---|---
str or None | The ID of the bucket, or None if the bucket's resource has not been loaded from the server.
labels
Retrieve or set labels assigned to this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels
Returns
Type | Description
---|---
dict | Name-value pairs (string->string) labelling the bucket.
lifecycle_rules
Retrieve or set lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns
Type | Description
---|---
generator(dict) | A sequence of mappings describing each lifecycle rule.
location
Retrieve location configured for this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets and https://cloud.google.com/storage/docs/locations
Returns None if the property has not been set before creation, or if the bucket's resource has not been loaded from the server.
location_type
Retrieve the location type for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
:getter: Gets the location type for this bucket.
Returns
Type | Description
---|---
str or None | If set, one of MULTI_REGION_LOCATION_TYPE, REGION_LOCATION_TYPE, or DUAL_REGION_LOCATION_TYPE, else None.
metageneration
Retrieve the metageneration for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns
Type | Description
---|---
int or None | The metageneration of the bucket, or None if the bucket's resource has not been loaded from the server.
object_retention_mode
Retrieve the object retention mode set on the bucket.
Returns
Type | Description
---|---
str | When set to Enabled, retention configurations can be set on objects in the bucket.
owner
Retrieve info about the owner of the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns
Type | Description
---|---
dict or None | Mapping of owner's role/ID, or None if the bucket's resource has not been loaded from the server.
path
The URL path to this bucket.
project_number
Retrieve the number of the project to which the bucket is assigned.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns
Type | Description
---|---
int or None | The project number that owns the bucket, or None if the bucket's resource has not been loaded from the server.
requester_pays
Does the requester pay for API requests for this bucket?
See https://cloud.google.com/storage/docs/requester-pays for details.
:setter: Update whether requester pays for this bucket. :getter: Query whether requester pays for this bucket.
Returns
Type | Description
---|---
bool | True if requester pays for API requests for the bucket, else False.
retention_period
Retrieve or set the retention period for items in the bucket.
Returns
Type | Description
---|---
int or None | Number of seconds to retain items after upload or release from event-based lock, or None if the property is not set locally.
retention_policy_effective_time
Retrieve the effective time of the bucket's retention policy.
Returns
Type | Description
---|---
datetime.datetime or None | Point-in-time at which the bucket's retention policy is effective, or None if the property is not set locally.
retention_policy_locked
Retrieve whether the bucket's retention policy is locked.
Returns
Type | Description
---|---
bool | True if the bucket's policy is locked; otherwise False, if the policy is not locked or the property is not set locally.
rpo
Get the RPO (Recovery Point Objective) of this bucket.
See: https://cloud.google.com/storage/docs/managing-turbo-replication
Returns "ASYNC_TURBO" or "DEFAULT".
self_link
Retrieve the URI for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns
Type | Description
---|---
str or None | The self link for the bucket, or None if the bucket's resource has not been loaded from the server.
soft_delete_policy
Retrieve the soft delete policy for this bucket.
Returns
Type | Description
---|---
SoftDeletePolicy | An instance for managing the bucket's soft delete policy.
soft_delete_time
If this bucket has been soft-deleted, returns the time at which it became soft-deleted.
Returns
Type | Description
---|---
datetime.datetime or None | (readonly) The time that the bucket became soft-deleted. Note this property is only set for soft-deleted buckets.
storage_class
Retrieve or set the storage class for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
:setter: Set the storage class for this bucket. :getter: Gets the storage class for this bucket.
Returns
Type | Description
---|---
str or None | If set, one of NEARLINE_STORAGE_CLASS, COLDLINE_STORAGE_CLASS, ARCHIVE_STORAGE_CLASS, STANDARD_STORAGE_CLASS, MULTI_REGIONAL_LEGACY_STORAGE_CLASS, REGIONAL_LEGACY_STORAGE_CLASS, or DURABLE_REDUCED_AVAILABILITY_LEGACY_STORAGE_CLASS, else None.
time_created
Retrieve the timestamp at which the bucket was created.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns
Type | Description
---|---
datetime.datetime or None | Datetime object parsed from RFC3339 valid timestamp, or None if the bucket's resource has not been loaded from the server.
updated
Retrieve the timestamp at which the bucket was last updated.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns
Type | Description
---|---
datetime.datetime or None | Datetime object parsed from RFC3339 valid timestamp, or None if the bucket's resource has not been loaded from the server.
user_project
Project ID to be billed for API requests made via this bucket.
If unset, API requests are billed to the bucket owner.
A user project is required for all operations on Requester Pays buckets.
See https://cloud.google.com/storage/docs/requester-pays#requirements for details.
versioning_enabled
Is versioning enabled for this bucket?
See https://cloud.google.com/storage/docs/object-versioning for details.
:setter: Update whether versioning is enabled for this bucket. :getter: Query whether versioning is enabled for this bucket.
Returns
Type | Description
---|---
bool | True if enabled, else False.
Methods
Bucket
Bucket(client, name=None, user_project=None, generation=None)
property name
Get the bucket's name.
add_lifecycle_abort_incomplete_multipart_upload_rule
add_lifecycle_abort_incomplete_multipart_upload_rule(**kw)
Add a "abort incomplete multipart upload" rule to lifecycle rules.
This defines a lifecycle configuration, which is set on the bucket. For the general format of a lifecycle configuration, see the bucket resource representation for JSON.add_lifecycle_delete_rule
add_lifecycle_delete_rule(**kw)
Add a "delete" rule to lifecycle rules configured for this bucket.
This defines a lifecycle configuration, which is set on the bucket. For the general format of a lifecycle configuration, see the bucket resource representation for JSON. See also a code sample.
add_lifecycle_set_storage_class_rule
add_lifecycle_set_storage_class_rule(storage_class, **kw)
Add a "set storage class" rule to lifecycle rules.
This defines a lifecycle configuration, which is set on the bucket. For the general format of a lifecycle configuration, see the bucket resource representation for JSON.
Parameter
Name | Type | Description
---|---|---
storage_class | str | New storage class to assign to matching items.
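A hedged sketch of combining the lifecycle helpers and persisting them with patch(); the bucket name and age thresholds are placeholders, and the client is assumed to be constructed as in the earlier sketch:
bucket = client.get_bucket("my-bucket")
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)  # transition objects older than 90 days
bucket.add_lifecycle_delete_rule(age=365)                        # delete objects older than a year
bucket.patch()                                                   # persist the lifecycle configuration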
blob
blob(
blob_name, chunk_size=None, encryption_key=None, kms_key_name=None, generation=None
)
Factory constructor for blob object.
Parameters
Name | Type | Description
---|---|---
blob_name | str | The name of the blob to be instantiated.
chunk_size | int | The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.
encryption_key | bytes | (Optional) 32 byte encryption key for customer-supplied encryption.
kms_key_name | str | (Optional) Resource name of KMS key used to encrypt blob's content.
generation | long | (Optional) If present, selects a specific revision of this object.
Returns
Type | Description
---|---
Blob | The blob object created.
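For example, a short sketch of the factory in use, given a bucket handle as above (the object name and payload are placeholders); blob() only builds a local Blob object and does not fetch or verify anything on the server:
blob = bucket.blob("reports/2024/summary.txt")  # no API call yet
blob.upload_from_string("hello world")          # the upload is the first request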
clear_lifecycle_rules
clear_lifecycle_rules()
Clear lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets
clear_lifecyle_rules
clear_lifecyle_rules()
Deprecated alias for clear_lifecycle_rules.
configure_website
configure_website(main_page_suffix=None, not_found_page=None)
Configure website-related properties.
Parameters
Name | Type | Description
---|---|---
main_page_suffix | str | The page to use as the main page of a directory. Typically something like index.html.
not_found_page | str | The file to use when a page isn't found.
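A brief sketch of the usual flow, given a bucket handle as above (the page names are placeholders); configure_website only updates local properties, so a follow-up patch() is needed to persist them:
bucket.configure_website(main_page_suffix="index.html", not_found_page="404.html")
bucket.patch()  # send the website configuration to the server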
copy_blob
copy_blob(blob, destination_bucket, new_name=None, client=None, preserve_acl=True, source_generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Copy the given blob to the given bucket, optionally with a new name.
If user_project
is set, bills the API request to that project.
See API reference docs and a code sample.
Returns
Type | Description
---|---
Blob | The new Blob.
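As an illustration, a minimal sketch of copying an object into another bucket, reusing the client from the earlier sketch (all names are placeholders):
source_bucket = client.bucket("source-bucket")
destination_bucket = client.bucket("destination-bucket")
blob = source_bucket.blob("data/file.txt")
new_blob = source_bucket.copy_blob(blob, destination_bucket, new_name="copies/file.txt")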
create
create(client=None, project=None, location=None, predefined_acl=None, predefined_default_object_acl=None, enable_object_retention=False, timeout=60, retry=<google.api_core.retry.retry_unary.Retry object>)
Creates current bucket.
If the bucket already exists, will raise Conflict.
This implements "storage.buckets.insert".
If user_project
is set, bills the API request to that project.
Exceptions
Type | Description
---|---
ValueError | If project is None and the client's project is also None.
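A minimal sketch, assuming the project is taken from the client constructed earlier and the bucket name is a placeholder:
bucket = storage.Bucket(client, name="my-new-bucket")
bucket.storage_class = "STANDARD"   # optional: set properties before creation
bucket.create(location="US")        # raises Conflict if the bucket already exists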
delete
delete(force=False, client=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.api_core.retry.retry_unary.Retry object>)
Delete this bucket.
The bucket must be empty in order to submit a delete request. If force=True is passed, this will first attempt to delete all the objects / blobs in the bucket (i.e. try to empty the bucket).
If the bucket doesn't exist, this will raise NotFound. If the bucket is not empty (and force=False), will raise Conflict.
If force=True and the bucket contains more than 256 objects / blobs this will cowardly refuse to delete the objects (or the bucket). This is to prevent accidental bucket deletion and to prevent extremely long runtime of this method. Also note that force=True is not supported in a Batch context.
If user_project is set, bills the API request to that project.
Exceptions
Type | Description
---|---
ValueError | If force is True and the bucket contains more than 256 objects / blobs.
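A hedged sketch of deleting a small bucket, emptying it first (the name is a placeholder; the client is assumed from the earlier sketch):
bucket = client.get_bucket("my-old-bucket")
bucket.delete(force=True)  # only allowed when the bucket holds at most 256 objects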
delete_blob
delete_blob(blob_name, client=None, generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Deletes a blob from the current bucket.
If user_project
is set, bills the API request to that project.
Exceptions
Type | Description
---|---
NotFound | Raised if the blob isn't found. To suppress the exception, use delete_blobs by passing a no-op on_error callback.
delete_blobs
delete_blobs(blobs, on_error=None, client=None, preserve_generation=False, timeout=60, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Deletes a list of blobs from the current bucket.
Uses delete_blob
to delete each individual blob.
By default, any generation information in the list of blobs is ignored, and the
live versions of all blobs are deleted. Set preserve_generation
to True
if blob generation should instead be propagated from the list of blobs.
If user_project
is set, bills the API request to that project.
Exceptions
Type | Description
---|---
NotFound | Raised for a missing blob if on_error is not passed.
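For example, a sketch that deletes a batch of objects under a placeholder prefix, given a bucket handle as above, and suppresses NotFound for objects that are already gone:
blobs = list(bucket.list_blobs(prefix="tmp/"))
bucket.delete_blobs(blobs, on_error=lambda blob: None)  # no-op callback swallows NotFound per blob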
disable_logging
disable_logging()
Disable access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#disabling
disable_website
disable_website()
Disable the website configuration for this bucket.
This is really just a shortcut for setting the website-related attributes to None.
enable_logging
enable_logging(bucket_name, object_prefix="")
Enable access logging for this bucket.
Parameters
Name | Type | Description
---|---|---
bucket_name | str | Name of the bucket in which to store access logs.
object_prefix | str | Prefix for access log filenames.
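A brief sketch, given a bucket handle as above (the log bucket name and prefix are placeholders); as with other property helpers, the change is persisted with patch():
bucket.enable_logging("my-log-bucket", object_prefix="my-bucket-access")
bucket.patch()  # persist the logging configuration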
exists
exists(client=None, timeout=60, if_etag_match=None, if_etag_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.retry_unary.Retry object>)
Determines whether or not this bucket exists.
If user_project
is set, bills the API request to that project.
Returns
Type | Description
---|---
bool | True if the bucket exists in Cloud Storage.
from_string
from_string(uri, client=None)
Get a constructor for bucket object by URI.
from google.cloud import storage
from google.cloud.storage.bucket import Bucket
client = storage.Client()
bucket = Bucket.from_string("gs://bucket", client=client)
Parameters
Name | Type | Description
---|---|---
uri | str | The bucket URI to pass to get the bucket object.
client | Client or None | (Optional) The client to use. Application code should always pass client.
Returns
Type | Description
---|---
Bucket | The bucket object created.
from_uri
from_uri(uri, client=None)
Get a constructor for bucket object by URI.
from google.cloud import storage
from google.cloud.storage.bucket import Bucket
client = storage.Client()
bucket = Bucket.from_uri("gs://bucket", client=client)
Parameters
Name | Type | Description
---|---|---
uri | str | The bucket URI to pass to get the bucket object.
client | Client or None | (Optional) The client to use. Application code should always pass client.
Returns
Type | Description
---|---
Bucket | The bucket object created.
generate_signed_url
generate_signed_url(
expiration=None,
api_access_endpoint=None,
method="GET",
headers=None,
query_parameters=None,
client=None,
credentials=None,
version=None,
virtual_hosted_style=False,
bucket_bound_hostname=None,
scheme="http",
)
Generates a signed URL for this bucket.
If bucket_bound_hostname is set as an argument of api_access_endpoint, https works only if using a CDN.
Parameters
Name | Type | Description
---|---|---
expiration | Union[Integer, datetime.datetime, datetime.timedelta] | Point in time when the signed URL should expire.
api_access_endpoint | str | (Optional) URI base, for instance "https://storage.googleapis.com". If not specified, the client's api_endpoint will be used. Incompatible with bucket_bound_hostname.
method | str | The HTTP verb that will be used when requesting the URL.
headers | dict | (Optional) Additional HTTP headers to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers Requests using the signed URL must pass the specified header (name and value) with each request for the URL.
query_parameters | dict | (Optional) Additional query parameters to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers#query
client | Client or None | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
credentials | google.auth.credentials.Credentials | The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment.
version | str | (Optional) The version of signed credential to create. Must be one of 'v2' or 'v4'.
virtual_hosted_style | bool | (Optional) If true, then construct the URL relative to the bucket's virtual hostname.
bucket_bound_hostname | str | (Optional) If passed, then construct the URL relative to the bucket-bound hostname. Value can be bare or with a scheme, e.g., 'example.com' or 'http://example.com'. Incompatible with api_access_endpoint and virtual_hosted_style. See: https://cloud.google.com/storage/docs/request-endpoints#cname
scheme | str | (Optional) The scheme to use when bucket_bound_hostname is passed as a bare hostname; defaults to "http".
Exceptions
Type | Description
---|---
ValueError | When version is invalid or mutually exclusive arguments are used.
TypeError | When expiration is not a valid type.
AttributeError | If credentials is not an instance of google.auth.credentials.Signing.
Returns
Type | Description
---|---
str | A signed URL you can use to access the resource until expiration.
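As an illustration, a hedged sketch of creating a v4 signed URL for a bucket handle obtained as above; this assumes the client's credentials are able to sign (for example, a service account key):
import datetime

url = bucket.generate_signed_url(
    expiration=datetime.timedelta(hours=1),
    method="GET",
    version="v4",
)
print(url)  # anyone holding this URL can issue the GET until it expires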
generate_upload_policy
generate_upload_policy(conditions, expiration=None, client=None)
Create a signed upload policy for uploading objects.
This method generates and signs a policy document. You can use
policy documents
to allow visitors to a website to upload files to
Google Cloud Storage without giving them direct write access.
See a code sample.
Parameters
Name | Type | Description
---|---|---
expiration | datetime | (Optional) Expiration in UTC. If not specified, the policy will expire in 1 hour.
conditions | list | A list of conditions as described in the policy documents reference.
client | Client | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
Returns
Type | Description
---|---
dict | A dictionary of (form field name, form field value) of form fields that should be added to your HTML upload form in order to attach the signature.
get_blob
get_blob(blob_name, client=None, encryption_key=None, generation=None, if_etag_match=None, if_etag_not_match=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.api_core.retry.retry_unary.Retry object>, soft_deleted=None, **kwargs)
Get a blob object by name.
See a code sample on how to retrieve metadata of an object.
If user_project
is set, bills the API request to that project.
Returns
Type | Description
---|---
Blob or None | The blob object if it exists, otherwise None.
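A small sketch, given a bucket handle as above (the object name is a placeholder):
blob = bucket.get_blob("data/file.txt")
if blob is None:
    print("object not found")
else:
    print(blob.size, blob.updated)  # metadata is populated by get_blob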
get_iam_policy
get_iam_policy(client=None, requested_policy_version=None, timeout=60, retry=<google.api_core.retry.retry_unary.Retry object>)
Retrieve the IAM policy for the bucket.
See API reference docs and a code sample.
If user_project
is set, bills the API request to that project.
Returns
Type | Description
---|---
google.api_core.iam.Policy | The policy instance, based on the resource returned from the getIamPolicy API request.
get_logging
get_logging()
Return info about access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#status
Returns
Type | Description
---|---
dict or None | A dict with keys logBucket and logObjectPrefix (if logging is enabled), or None (if not).
get_notification
get_notification(notification_id, client=None, timeout=60, retry=<google.api_core.retry.retry_unary.Retry object>)
Get Pub / Sub notification for this bucket.
See API reference docs and a code sample.
If user_project
is set, bills the API request to that project.
Returns
Type | Description
---|---
BucketNotification | The notification instance.
list_blobs
list_blobs(max_results=None, page_token=None, prefix=None, delimiter=None, start_offset=None, end_offset=None, include_trailing_delimiter=None, versions=None, projection='noAcl', fields=None, client=None, timeout=60, retry=<google.api_core.retry.retry_unary.Retry object>, match_glob=None, include_folders_as_prefixes=None, soft_deleted=None, page_size=None)
Return an iterator used to find blobs in the bucket.
If user_project
is set, bills the API request to that project.
Returns
Type | Description
---|---
Iterator | Iterator of all Blob objects in this bucket matching the arguments.
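For example, a sketch that emulates a directory listing under a placeholder prefix, given a bucket handle as above; when a delimiter is used, the "subdirectory" prefixes are available on the iterator after its pages are consumed:
iterator = bucket.list_blobs(prefix="logs/2024/", delimiter="/")
for blob in iterator:
    print(blob.name)
print(iterator.prefixes)  # set of sub-prefixes, populated once iteration finishes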
list_notifications
list_notifications(client=None, timeout=60, retry=<google.api_core.retry.retry_unary.Retry object>)
List Pub / Sub notifications for this bucket.
See: https://cloud.google.com/storage/docs/json_api/v1/notifications/list
If user_project
is set, bills the API request to that project.
Returns
Type | Description
---|---
list of BucketNotification | Notification instances.
lock_retention_policy
lock_retention_policy(client=None, timeout=60, retry=<google.api_core.retry.retry_unary.Retry object>)
Lock the bucket's retention policy.
Exceptions
Type | Description
---|---
ValueError | If the bucket has no metageneration (i.e., new or never reloaded); if the bucket has no retention policy assigned; if the bucket's retention policy is already locked.
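A hedged sketch of the expected sequence, given a bucket handle as above: set a retention period, persist it, reload so the metageneration is known, then lock (the period is a placeholder; locking is irreversible):
bucket.retention_period = 30 * 24 * 60 * 60  # e.g. 30 days, in seconds
bucket.patch()                               # persist the retention policy
bucket.reload()                              # make sure metageneration is populated
bucket.lock_retention_policy()               # permanent; the policy cannot be unlocked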
make_private
make_private(recursive=False, future=False, client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Update bucket's ACL, revoking read access for anonymous users.
Exceptions
Type | Description
---|---
ValueError | If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs and call make_private for each blob.
make_public
make_public(recursive=False, future=False, client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Update bucket's ACL, granting read access to anonymous users.
Exceptions
Type | Description
---|---
ValueError | If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs and call make_public for each blob.
notification
notification(
topic_name=None,
topic_project=None,
custom_attributes=None,
event_types=None,
blob_name_prefix=None,
payload_format="NONE",
notification_id=None,
)
Factory: create a notification resource for the bucket.
See BucketNotification for parameters.
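A brief sketch, given a bucket handle as above (the topic name is a placeholder); the factory only builds the resource locally, and the notification is created server-side by calling its create() method:
notification = bucket.notification(topic_name="my-topic", event_types=["OBJECT_FINALIZE"])
notification.create()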
patch
patch(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Sends all changed properties in a PATCH request.
Updates the _properties
with the response from the backend.
If user_project
is set, bills the API request to that project.
path_helper
path_helper(bucket_name)
Relative URL path for a bucket.
Parameter
Name | Type | Description
---|---|---
bucket_name | str | The bucket name in the path.
Returns
Type | Description
---|---
str | The relative URL path for bucket_name.
reload
reload(client=None, projection='noAcl', timeout=60, if_etag_match=None, if_etag_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.retry_unary.Retry object>, soft_deleted=None)
Reload properties from Cloud Storage.
If user_project
is set, bills the API request to that project.
rename_blob
rename_blob(blob, new_name, client=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Rename the given blob using copy and delete operations.
If user_project
is set, bills the API request to that project.
Effectively, copies blob to the same bucket with a new name, then deletes the blob.
Returns
Type | Description
---|---
Blob | The newly-renamed blob.
restore_blob
restore_blob(blob_name, client=None, generation=None, copy_source_acl=None, projection=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Restores a soft-deleted object.
If user_project
is set on the bucket, bills the API request to that project.
Returns
Type | Description
---|---
Blob | The restored Blob.
set_iam_policy
set_iam_policy(policy, client=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Update the IAM policy for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy
If user_project
is set, bills the API request to that project.
Returns
Type | Description
---|---
google.api_core.iam.Policy | The policy instance, based on the resource returned from the setIamPolicy API request.
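A hedged sketch of the usual read-modify-write cycle, given a bucket handle as above (the role and member are placeholders):
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"user:example@example.com"},
})
bucket.set_iam_policy(policy)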
test_iam_permissions
test_iam_permissions(permissions, client=None, timeout=60, retry=<google.api_core.retry.retry_unary.Retry object>)
API call: test permissions
See https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions
If user_project
is set, bills the API request to that project.
Returns
Type | Description
---|---
list of string | The permissions returned by the testIamPermissions API request.
update
update(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Sends all properties in a PUT request.
Updates the _properties
with the response from the backend.
If user_project
is set, bills the API request to that project.