Buckets#

Create / interact with Google Cloud Storage buckets.

class google.cloud.storage.bucket.Bucket(client, name=None, user_project=None)[source]#

Bases: google.cloud.storage._helpers._PropertyMixin

A class representing a Bucket on Cloud Storage.

Parameters
  • client (google.cloud.storage.client.Client) – A client which holds credentials and project configuration for the bucket (which requires a project).

  • name (str) – The name of the bucket. Bucket names must start and end with a number or letter.

  • user_project (str) – (Optional) the project ID to be billed for API requests made via this instance.

COLDLINE_STORAGE_CLASS = 'COLDLINE'#

Storage class for objects accessed at most once per year.

DUAL_REGION_LOCATION_TYPE = 'dual-region'#

Data will be stored within two primary regions.

Provides high availability and low latency across two regions.

Type

Location type

DURABLE_REDUCED_AVAILABILITY_LEGACY_STORAGE_CLASS = 'DURABLE_REDUCED_AVAILABILITY'#

Legacy storage class.

Similar to NEARLINE_STORAGE_CLASS.

MULTI_REGIONAL_LEGACY_STORAGE_CLASS = 'MULTI_REGIONAL'#

Legacy storage class.

Alias for STANDARD_STORAGE_CLASS.

Implies MULTI_REGION_LOCATION_TYPE for location_type.

MULTI_REGION_LOCATION_TYPE = 'multi-region'#

Data will be replicated across regions in a multi-region.

Provides the highest availability across the largest area.

Type

Location type

NEARLINE_STORAGE_CLASS = 'NEARLINE'#

Storage class for objects accessed at most once per month.

REGIONAL_LEGACY_STORAGE_CLASS = 'REGIONAL'#

Legacy storage class.

Alias for STANDARD_STORAGE_CLASS.

Implies REGION_LOCATION_TYPE for location_type.

REGION_LOCATION_TYPE = 'region'#

Data will be stored within a single region.

Provides lowest latency within a single region.

Type

Location type

STANDARD_STORAGE_CLASS = 'STANDARD'#

Storage class for objects accessed more than once per month.

property acl#

Create our ACL on demand.

add_lifecycle_delete_rule(**kw)[source]#

Add a “delete” rule to the lifecycle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and

https://cloud.google.com/storage/docs/json_api/v1/buckets

    bucket = client.get_bucket("my-bucket")
    bucket.add_lifecycle_delete_rule(age=2)
    bucket.patch()
Params kw

arguments passed to LifecycleRuleConditions.

add_lifecycle_set_storage_class_rule(storage_class, **kw)[source]#

Add a “set storage class” rule to the lifecycle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and

https://cloud.google.com/storage/docs/json_api/v1/buckets

    bucket = client.get_bucket("my-bucket")
    bucket.add_lifecycle_set_storage_class_rule(
        "COLDLINE", matches_storage_class=["NEARLINE"]
    )
    bucket.patch()
Parameters

storage_class (str, one of _STORAGE_CLASSES.) – new storage class to assign to matching items.

Params kw

arguments passed to LifecycleRuleConditions.

blob(blob_name, chunk_size=None, encryption_key=None, kms_key_name=None, generation=None)[source]#

Factory constructor for blob object.

Note

This will not make an HTTP request; it simply instantiates a blob object owned by this bucket.

Parameters
  • blob_name (str) – The name of the blob to be instantiated.

  • chunk_size (int) – The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

  • encryption_key (bytes) – Optional 32 byte encryption key for customer-supplied encryption.

  • kms_key_name (str) – Optional resource name of KMS key used to encrypt blob’s content.

  • generation (long) – Optional. If present, selects a specific revision of this object.

Return type

google.cloud.storage.blob.Blob

Returns

The blob object created.

clear_lifecyle_rules()[source]#

Clear lifecycle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and

https://cloud.google.com/storage/docs/json_api/v1/buckets

property client#

The client bound to this bucket.

configure_website(main_page_suffix=None, not_found_page=None)[source]#

Configure website-related properties.

See https://cloud.google.com/storage/docs/hosting-static-website

Note

This (apparently) only works if your bucket name is a domain name (and to do that, you need to get approved somehow…).

If you want this bucket to host a website, just provide the name of an index page and a page to use when a blob isn’t found:

    client = storage.Client()
    bucket = client.get_bucket(bucket_name)
    bucket.configure_website("index.html", "404.html")

You probably should also make the whole bucket public:

    bucket.make_public(recursive=True, future=True)

This says: “Make the bucket public, and all the stuff already in the bucket, and anything else I add to the bucket. Just make it all public.”

Parameters
  • main_page_suffix (str) – The page to use as the main page of a directory. Typically something like index.html.

  • not_found_page (str) – The file to use when a page isn’t found.

copy_blob(blob, destination_bucket, new_name=None, client=None, preserve_acl=True, source_generation=None)[source]#

Copy the given blob to the given bucket, optionally with a new name.

If user_project is set, bills the API request to that project.

Parameters
  • blob (google.cloud.storage.blob.Blob) – The blob to be copied.

  • destination_bucket (google.cloud.storage.bucket.Bucket) – The bucket into which the blob should be copied.

  • new_name (str) – (optional) the new name for the copied file.

  • client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

  • preserve_acl (bool) – Optional. Copies ACL from old blob to new blob. Default: True.

  • source_generation (long) – Optional. The generation of the blob to be copied.

Return type

google.cloud.storage.blob.Blob

Returns

The new Blob.

property cors#

Retrieve or set CORS policies configured for this bucket.

See http://www.w3.org/TR/cors/ and

https://cloud.google.com/storage/docs/json_api/v1/buckets

Note

The getter for this property returns a list which contains copies of the bucket’s CORS policy mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the dict via the setter. E.g.:

>>> policies = bucket.cors
>>> policies.append({'origin': '/foo', ...})
>>> policies[1]['maxAgeSeconds'] = 3600
>>> del policies[0]
>>> bucket.cors = policies
>>> bucket.update()
Setter

Set CORS policies for this bucket.

Getter

Gets the CORS policies for this bucket.

Return type

list of dictionaries

Returns

A sequence of mappings describing each CORS policy.

create(client=None, project=None, location=None)[source]#

Creates current bucket.

If the bucket already exists, will raise google.cloud.exceptions.Conflict.

This implements “storage.buckets.insert”.

If user_project is set, bills the API request to that project.

Parameters
  • client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

  • project (str) – Optional. The project under which the bucket is to be created. If not passed, uses the project set on the client.

  • location (str) – Optional. The location of the bucket. If not passed, the default location, US, will be used. See https://cloud.google.com/storage/docs/bucket-locations

Raises

google.cloud.exceptions.Conflict if the bucket already exists.

property default_event_based_hold#

Are uploaded objects automatically placed under an event-based hold?

If True, uploaded objects will be placed under an event-based hold, to be released at a future time. When released, the object will then begin the retention period determined by the bucket’s retention policy.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

If the property is not set locally, returns None.

Return type

bool or NoneType

property default_kms_key_name#

Retrieve / set default KMS encryption key for objects in the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Setter

Set default KMS encryption key for items in this bucket.

Getter

Get default KMS encryption key for items in this bucket.

Return type

str

Returns

Default KMS encryption key, or None if not set.

property default_object_acl#

Create our defaultObjectACL on demand.

delete(force=False, client=None)[source]#

Delete this bucket.

The bucket must be empty in order to submit a delete request. If force=True is passed, this will first attempt to delete all the objects / blobs in the bucket (i.e. try to empty the bucket).

If the bucket doesn’t exist, this will raise google.cloud.exceptions.NotFound. If the bucket is not empty (and force=False), will raise google.cloud.exceptions.Conflict.

If force=True and the bucket contains more than 256 objects / blobs this will cowardly refuse to delete the objects (or the bucket). This is to prevent accidental bucket deletion and to prevent extremely long runtime of this method.

If user_project is set, bills the API request to that project.

Parameters
  • force (bool) – If True, empties the bucket’s objects then deletes it.

  • client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

Raises

ValueError if force is True and the bucket contains more than 256 objects / blobs.

delete_blob(blob_name, client=None, generation=None)[source]#

Deletes a blob from the current bucket.

If the blob isn’t found (backend 404), raises a google.cloud.exceptions.NotFound.

For example:

    from google.cloud.exceptions import NotFound

    client = storage.Client()
    bucket = client.get_bucket("my-bucket")
    blobs = list(bucket.list_blobs())
    assert len(blobs) > 0
    # [<Blob: my-bucket, my-file.txt>]
    bucket.delete_blob("my-file.txt")
    try:
        bucket.delete_blob("doesnt-exist")
    except NotFound:
        pass

If user_project is set, bills the API request to that project.

Parameters
  • blob_name (str) – A blob name to delete.

  • client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

  • generation (long) – Optional. If present, permanently deletes a specific revision of this object.

Raises

google.cloud.exceptions.NotFound (to suppress the exception, call delete_blobs, passing a no-op on_error callback, e.g.:

    bucket.delete_blobs([blob], on_error=lambda blob: None)
delete_blobs(blobs, on_error=None, client=None)[source]#

Deletes a list of blobs from the current bucket.

Uses delete_blob() to delete each individual blob.

If user_project is set, bills the API request to that project.

Parameters
  • blobs (list) – A list of Blob-s or blob names to delete.

  • on_error (callable) – (Optional) Takes a single argument: blob. Called once for each blob that raises NotFound; if not passed, the exception is propagated.

  • client (Client) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

Raises

NotFound (if on_error is not passed).

disable_logging()[source]#

Disable access logging for this bucket.

See https://cloud.google.com/storage/docs/access-logs#disabling

disable_website()[source]#

Disable the website configuration for this bucket.

This is really just a shortcut for setting the website-related attributes to None.

enable_logging(bucket_name, object_prefix='')[source]#

Enable access logging for this bucket.

See https://cloud.google.com/storage/docs/access-logs

Parameters
  • bucket_name (str) – name of bucket in which to store access logs

  • object_prefix (str) – prefix for access log filenames

property etag#

Retrieve the ETag for the bucket.

See https://tools.ietf.org/html/rfc2616#section-3.11 and

https://cloud.google.com/storage/docs/json_api/v1/buckets

Return type

str or NoneType

Returns

The bucket etag or None if the bucket’s resource has not been loaded from the server.

exists(client=None)[source]#

Determines whether or not this bucket exists.

If user_project is set, bills the API request to that project.

Parameters

client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

Return type

bool

Returns

True if the bucket exists in Cloud Storage.

generate_signed_url(expiration=None, api_access_endpoint='https://storage.googleapis.com', method='GET', headers=None, query_parameters=None, client=None, credentials=None, version=None)[source]#

Generates a signed URL for this bucket.

Note

If you are on Google Compute Engine, you can’t generate a signed URL using a GCE service account. Follow Issue 50 for updates on this. If you’d like to be able to generate a signed URL from GCE, you can use a standard service account from a JSON file rather than a GCE service account.

If you have a bucket that you want to allow access to for a set amount of time, you can use this method to generate a URL that is only valid within a certain time period.

This is particularly useful if you don’t want publicly accessible buckets, but don’t want to require users to explicitly log in.

Parameters
  • expiration (Union[Integer, datetime.datetime, datetime.timedelta]) – Point in time when the signed URL should expire.

  • api_access_endpoint (str) – Optional URI base.

  • method (str) – The HTTP verb that will be used when requesting the URL.

  • headers (dict) – (Optional) Additional HTTP headers to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers Requests using the signed URL must pass the specified header (name and value) with each request for the URL.

  • query_parameters (dict) – (Optional) Additional query parameters to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers#query

  • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the blob’s bucket.

  • credentials (oauth2client.client.OAuth2Credentials or NoneType) – (Optional) The OAuth2 credentials to use to sign the URL. Defaults to the credentials stored on the client used.

  • version (str) – (Optional) The version of signed credential to create. Must be one of ‘v2’ | ‘v4’.

Raises

ValueError when version is invalid.

Raises

TypeError when expiration is not a valid type.

Raises

AttributeError if credentials is not an instance of google.auth.credentials.Signing.

Return type

str

Returns

A signed URL you can use to access the resource until expiration.

generate_upload_policy(conditions, expiration=None, client=None)[source]#

Create a signed upload policy for uploading objects.

This method generates and signs a policy document. You can use policy documents to allow visitors to a website to upload files to Google Cloud Storage without giving them direct write access.

For example:

    bucket = client.bucket("my-bucket")
    conditions = [["starts-with", "$key", ""], {"acl": "public-read"}]

    policy = bucket.generate_upload_policy(conditions)

    # Generate an upload form using the form fields.
    policy_fields = "".join(
        '<input type="hidden" name="{key}" value="{value}">'.format(
            key=key, value=value
        )
        for key, value in policy.items()
    )

    upload_form = (
        '<form action="http://{bucket_name}.storage.googleapis.com"'
        '   method="post" enctype="multipart/form-data">'
        '<input type="text" name="key" value="my-test-key">'
        '<input type="hidden" name="bucket" value="{bucket_name}">'
        '<input type="hidden" name="acl" value="public-read">'
        '<input name="file" type="file">'
        '<input type="submit" value="Upload">'
        "{policy_fields}"
        "</form>"
    ).format(bucket_name=bucket.name, policy_fields=policy_fields)

    print(upload_form)
Parameters
  • expiration (datetime) – Optional expiration in UTC. If not specified, the policy will expire in 1 hour.

  • conditions (list) – A list of conditions as described in the policy documents documentation.

  • client (Client) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

Return type

dict

Returns

A dictionary of (form field name, form field value) of form fields that should be added to your HTML upload form in order to attach the signature.

get_blob(blob_name, client=None, encryption_key=None, generation=None, **kwargs)[source]#

Get a blob object by name.

This will return None if the blob doesn’t exist:

    client = storage.Client()
    bucket = client.get_bucket("my-bucket")
    assert isinstance(bucket.get_blob("/path/to/blob.txt"), Blob)
    # <Blob: my-bucket, /path/to/blob.txt>
    assert not bucket.get_blob("/does-not-exist.txt")
    # None

If user_project is set, bills the API request to that project.

Parameters
  • blob_name (str) – The name of the blob to retrieve.

  • client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

  • encryption_key (bytes) – Optional 32 byte encryption key for customer-supplied encryption. See https://cloud.google.com/storage/docs/encryption#customer-supplied.

  • generation (long) – Optional. If present, selects a specific revision of this object.

  • kwargs – Keyword arguments to pass to the Blob constructor.

Return type

google.cloud.storage.blob.Blob or None

Returns

The blob object if it exists, otherwise None.

get_iam_policy(client=None)[source]#

Retrieve the IAM policy for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets/getIamPolicy

If user_project is set, bills the API request to that project.

Parameters

client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

Return type

google.api_core.iam.Policy

Returns

the policy instance, based on the resource returned from the getIamPolicy API request.

get_logging()[source]#

Return info about access logging for this bucket.

See https://cloud.google.com/storage/docs/access-logs#status

Return type

dict or None

Returns

a dict with keys logBucket and logObjectPrefix (if logging is enabled), or None (if not).

property iam_configuration#

Retrieve IAM configuration for this bucket.

Return type

IAMConfiguration

Returns

an instance for managing the bucket’s IAM configuration.

property id#

Retrieve the ID for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Return type

str or NoneType

Returns

The ID of the bucket or None if the bucket’s resource has not been loaded from the server.

property labels#

Retrieve or set labels assigned to this bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels

Note

The getter for this property returns a dict which is a copy of the bucket’s labels. Mutating that dict has no effect unless you then re-assign the dict via the setter. E.g.:

>>> labels = bucket.labels
>>> labels['new_key'] = 'some-label'
>>> del labels['old_key']
>>> bucket.labels = labels
>>> bucket.update()
Setter

Set labels for this bucket.

Getter

Gets the labels for this bucket.

Return type

dict

Returns

Name-value pairs (string->string) labelling the bucket.

property lifecycle_rules#

Retrieve or set lifecycle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and

https://cloud.google.com/storage/docs/json_api/v1/buckets

Note

The getter for this property returns a list which contains copies of the bucket’s lifecycle rules mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the dict via the setter. E.g.:

>>> rules = bucket.lifecycle_rules
>>> rules.append({'origin': '/foo', ...})
>>> rules[1]['rule']['action']['type'] = 'Delete'
>>> del rules[0]
>>> bucket.lifecycle_rules = rules
>>> bucket.update()
Setter

Set lifecycle rules for this bucket.

Getter

Gets the lifecycle rules for this bucket.

Return type

generator(dict)

Returns

A sequence of mappings describing each lifecycle rule.

list_blobs(max_results=None, page_token=None, prefix=None, delimiter=None, versions=None, projection='noAcl', fields=None, client=None)[source]#

Return an iterator used to find blobs in the bucket.

Note

Direct use of this method is deprecated. Use Client.list_blobs instead.

If user_project is set, bills the API request to that project.

Parameters
  • max_results (int) – (Optional) The maximum number of blobs in each page of results from this request. Non-positive values are ignored. Defaults to a sensible value set by the API.

  • page_token (str) – (Optional) If present, return the next batch of blobs, using the value, which must correspond to the nextPageToken value returned in the previous response. Deprecated: use the pages property of the returned iterator instead of manually passing the token.

  • prefix (str) – (Optional) prefix used to filter blobs.

  • delimiter (str) – (Optional) Delimiter, used with prefix to emulate hierarchy.

  • versions (bool) – (Optional) Whether object versions should be returned as separate blobs.

  • projection (str) – (Optional) If used, must be ‘full’ or ‘noAcl’. Defaults to 'noAcl'. Specifies the set of properties to return.

  • fields (str) – (Optional) Selector specifying which fields to include in a partial response. Must be a list of fields. For example to get a partial response with just the next page token and the name and language of each blob returned: 'items(name,contentLanguage),nextPageToken'. See: https://cloud.google.com/storage/docs/json_api/v1/parameters#fields

  • client (Client) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

Return type

Iterator

Returns

Iterator of all Blob in this bucket matching the arguments.

list_notifications(client=None)[source]#

List Pub / Sub notifications for this bucket.

See: https://cloud.google.com/storage/docs/json_api/v1/notifications/list

If user_project is set, bills the API request to that project.

Parameters

client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

Return type

list of BucketNotification

Returns

notification instances

property location#

Retrieve location configured for this bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets and https://cloud.google.com/storage/docs/bucket-locations

Returns None if the property has not been set before creation, or if the bucket’s resource has not been loaded from the server.

Return type

str or NoneType

property location_type#

Retrieve or set the location type for the bucket.

See https://cloud.google.com/storage/docs/storage-classes

Setter

Set the location type for this bucket.

Getter

Gets the location type for this bucket.

Return type

str or NoneType

Returns

If set, one of MULTI_REGION_LOCATION_TYPE, REGION_LOCATION_TYPE, or DUAL_REGION_LOCATION_TYPE, else None.

lock_retention_policy(client=None)[source]#

Lock the bucket’s retention policy.

Raises

ValueError – if the bucket has no metageneration (i.e., new or never reloaded); if the bucket has no retention policy assigned; if the bucket’s retention policy is already locked.

make_private(recursive=False, future=False, client=None)[source]#

Update bucket’s ACL, revoking read access for anonymous users.

Parameters
  • recursive (bool) – If True, this will make all blobs inside the bucket private as well.

  • future (bool) – If True, this will make all objects created in the future private as well.

  • client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

Raises

ValueError – If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs() and call make_private() for each blob.

make_public(recursive=False, future=False, client=None)[source]#

Update bucket’s ACL, granting read access to anonymous users.

Parameters
  • recursive (bool) – If True, this will make all blobs inside the bucket public as well.

  • future (bool) – If True, this will make all objects created in the future public as well.

  • client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

Raises

ValueError – If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs() and call make_public() for each blob.

property metageneration#

Retrieve the metageneration for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Return type

int or NoneType

Returns

The metageneration of the bucket or None if the bucket’s resource has not been loaded from the server.

notification(topic_name, topic_project=None, custom_attributes=None, event_types=None, blob_name_prefix=None, payload_format='NONE')[source]#

Factory: create a notification resource for the bucket.

See: BucketNotification for parameters.

Return type

BucketNotification

property owner#

Retrieve info about the owner of the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Return type

dict or NoneType

Returns

Mapping of owner’s role/ID. Returns None if the bucket’s resource has not been loaded from the server.

patch(client=None)[source]#

Sends all changed properties in a PATCH request.

Updates the _properties with the response from the backend.

If user_project is set, bills the API request to that project.

Parameters

client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.

property path#

The URL path to this bucket.

static path_helper(bucket_name)[source]#

Relative URL path for a bucket.

Parameters

bucket_name (str) – The bucket name in the path.

Return type

str

Returns

The relative URL path for bucket_name.

property project_number#

Retrieve the number of the project to which the bucket is assigned.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Return type

int or NoneType

Returns

The project number that owns the bucket or None if the bucket’s resource has not been loaded from the server.

reload(client=None)#

Reload properties from Cloud Storage.

If user_project is set, bills the API request to that project.

Parameters

client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.

rename_blob(blob, new_name, client=None)[source]#

Rename the given blob using copy and delete operations.

If user_project is set, bills the API request to that project.

Effectively, copies blob to the same bucket with a new name, then deletes the blob.

Warning

This method will first duplicate the data and then delete the old blob. For very large objects, renaming can therefore be a (temporarily) costly or slow operation.

Parameters
  • blob (google.cloud.storage.blob.Blob) – The blob to be renamed.

  • new_name (str) – The new name for this blob.

  • client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

Return type

Blob

Returns

The newly-renamed blob.

property requester_pays#

Does the requester pay for API requests for this bucket?

See https://cloud.google.com/storage/docs/requester-pays for details.

Setter

Update whether requester pays for this bucket.

Getter

Query whether requester pays for this bucket.

Return type

bool

Returns

True if requester pays for API requests for the bucket, else False.

property retention_period#

Retrieve or set the retention period for items in the bucket.

Return type

int or NoneType

Returns

number of seconds to retain items after upload or release from event-based lock, or None if the property is not set locally.

property retention_policy_effective_time#

Retrieve the effective time of the bucket’s retention policy.

Return type

datetime.datetime or NoneType

Returns

point-in time at which the bucket’s retention policy is effective, or None if the property is not set locally.

property retention_policy_locked#

Retrieve whether the bucket’s retention policy is locked.

Return type

bool

Returns

True if the bucket’s retention policy is locked; False if it is not locked or if the property is not set locally.

property self_link#

Retrieve the URI for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Return type

str or NoneType

Returns

The self link for the bucket or None if the bucket’s resource has not been loaded from the server.

set_iam_policy(policy, client=None)[source]#

Update the IAM policy for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy

If user_project is set, bills the API request to that project.

Parameters
  • policy (google.api_core.iam.Policy) – policy instance used to update bucket’s IAM policy.

  • client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

Return type

google.api_core.iam.Policy

Returns

the policy instance, based on the resource returned from the setIamPolicy API request.

property storage_class#

Retrieve or set the storage class for the bucket.

See https://cloud.google.com/storage/docs/storage-classes

Setter

Set the storage class for this bucket.

Getter

Gets the storage class for this bucket.

Return type

str or NoneType

Returns

If set, one of NEARLINE_STORAGE_CLASS, COLDLINE_STORAGE_CLASS, STANDARD_STORAGE_CLASS, MULTI_REGIONAL_LEGACY_STORAGE_CLASS, REGIONAL_LEGACY_STORAGE_CLASS, or DURABLE_REDUCED_AVAILABILITY_LEGACY_STORAGE_CLASS, else None.

test_iam_permissions(permissions, client=None)[source]#

API call: test permissions

See https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions

If user_project is set, bills the API request to that project.

Parameters
  • permissions (list of string) – the permissions to check

  • client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.

Return type

list of string

Returns

the permissions returned by the testIamPermissions API request.

property time_created#

Retrieve the timestamp at which the bucket was created.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Return type

datetime.datetime or NoneType

Returns

Datetime object parsed from RFC3339 valid timestamp, or None if the bucket’s resource has not been loaded from the server.

update(client=None)#

Sends all properties in a PUT request.

Updates the _properties with the response from the backend.

If user_project is set, bills the API request to that project.

Parameters

client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.

property user_project#

Project ID to be billed for API requests made via this bucket.

If unset, API requests are billed to the bucket owner.

Return type

str

property versioning_enabled#

Is versioning enabled for this bucket?

See https://cloud.google.com/storage/docs/object-versioning for details.

Setter

Update whether versioning is enabled for this bucket.

Getter

Query whether versioning is enabled for this bucket.

Return type

bool

Returns

True if enabled, else False.

class google.cloud.storage.bucket.IAMConfiguration(bucket, bucket_policy_only_enabled=False, bucket_policy_only_locked_time=None)[source]#

Bases: dict

Map a bucket’s IAM configuration.

Parameters
  • bucket (Bucket) – Bucket for which this instance is the policy.

  • bucket_policy_only_enabled (bool) – (Optional) whether the IAM-only policy is enabled for the bucket.

  • bucket_policy_only_locked_time (datetime.datetime) – (Optional) when the bucket’s IAM-only policy was enabled. This value should normally only be set by the back-end API.
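This class maps the `iamConfiguration` sub-resource of the bucket; a rough sketch of that shape and of reading the locked time (key names follow the JSON API's `bucketPolicyOnly` mapping; the values here are illustrative):

```python
import datetime

# Illustrative iamConfiguration payload as returned by the JSON API.
iam_configuration = {
    "bucketPolicyOnly": {
        "enabled": True,
        # Past this time the setting can no longer be switched back to False.
        "lockedTime": "2025-01-01T00:00:00Z",
    }
}

bpo = iam_configuration["bucketPolicyOnly"]
locked_time = None
if bpo.get("enabled"):
    # Parse the RFC 3339 timestamp into a datetime.
    locked_time = datetime.datetime.strptime(
        bpo["lockedTime"], "%Y-%m-%dT%H:%M:%SZ"
    )
print(locked_time)
```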

property bucket#

Bucket for which this instance is the policy.

Return type

Bucket

Returns

the instance’s bucket.

property bucket_policy_only_enabled#

If set, access checks only use bucket-level IAM policies or above.

Return type

bool

Returns

whether the bucket is configured to allow only IAM.

property bucket_policy_only_locked_time#

Deadline for changing bucket_policy_only_enabled from true to false.

If the bucket’s bucket_policy_only_enabled is true, this property is the time after which that setting becomes immutable.

If the bucket’s bucket_policy_only_enabled is false, this property is None.

Return type

Union[datetime.datetime, None]

Returns

(readonly) Time after which bucket_policy_only_enabled will be frozen as true.

clear() → None. Remove all items from D.#
copy() → a shallow copy of D#
classmethod from_api_repr(resource, bucket)[source]#

Factory: construct instance from resource.

Parameters
  • resource (dict) – mapping as returned from API call.

  • bucket (Bucket) – Bucket for which this instance is the policy.

Return type

IAMConfiguration

Returns

Instance created from resource.

fromkeys()#

Returns a new dict with keys from iterable and values equal to value.

get(k[, d]) → D[k] if k in D, else d. d defaults to None.#
items() → a set-like object providing a view on D's items#
keys() → a set-like object providing a view on D's keys#
pop(k[, d]) → v, remove specified key and return the corresponding value.#

If key is not found, d is returned if given, otherwise KeyError is raised

popitem() → (k, v), remove and return some (key, value) pair as a#

2-tuple; but raise KeyError if D is empty.

setdefault(k[, d]) → D.get(k,d), also set D[k]=d if k not in D#
update([E, ]**F) → None. Update D from dict/iterable E and F.#

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

values() → an object providing a view on D's values#
class google.cloud.storage.bucket.LifecycleRuleConditions(age=None, created_before=None, is_live=None, matches_storage_class=None, number_of_newer_versions=None, _factory=False)[source]#

Bases: dict

Map a single lifecycle rule for a bucket.

See: https://cloud.google.com/storage/docs/lifecycle

Parameters
  • age (int) – (optional) apply rule action to items whose age, in days, exceeds this value.

  • created_before (datetime.date) – (optional) apply rule action to items created before this date.

  • is_live (bool) – (optional) if true, apply rule action to non-versioned items, or to items with no newer versions. If false, apply rule action to versioned items with at least one newer version.

  • matches_storage_class (list(str), one or more of Bucket._STORAGE_CLASSES.) – (optional) apply rule action to items whose storage class matches this value.

  • number_of_newer_versions (int) – (optional) apply rule action to versioned items having N newer versions.

Raises

ValueError – if no arguments are passed.
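The instance is itself a dict holding the camelCase keys used by the JSON API's lifecycle `condition` mapping; a hand-built sketch of that mapping, including the no-arguments check (an approximation of the class, not its exact code):

```python
def make_conditions(age=None, is_live=None, matches_storage_class=None):
    """Build the JSON-API 'condition' mapping for a lifecycle rule."""
    conditions = {}
    if age is not None:
        conditions["age"] = age
    if is_live is not None:
        conditions["isLive"] = is_live
    if matches_storage_class is not None:
        conditions["matchesStorageClass"] = matches_storage_class
    if not conditions:
        # Mirrors the documented ValueError when no arguments are passed.
        raise ValueError("at least one condition must be supplied")
    return conditions

# Match non-current NEARLINE objects older than 60 days.
condition = make_conditions(age=60, is_live=False,
                            matches_storage_class=["NEARLINE"])
print(condition)
```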

property age#

Condition’s age value.

clear() → None. Remove all items from D.#
copy() → a shallow copy of D#
property created_before#

Condition’s created_before value.

classmethod from_api_repr(resource)[source]#

Factory: construct instance from resource.

Parameters

resource (dict) – mapping as returned from API call.

Return type

LifecycleRuleConditions

Returns

Instance created from resource.

fromkeys()#

Returns a new dict with keys from iterable and values equal to value.

get(k[, d]) → D[k] if k in D, else d. d defaults to None.#
property is_live#

Condition’s ‘is_live’ value.

items() → a set-like object providing a view on D's items#
keys() → a set-like object providing a view on D's keys#
property matches_storage_class#

Condition’s ‘matches_storage_class’ value.

property number_of_newer_versions#

Condition’s ‘number_of_newer_versions’ value.

pop(k[, d]) → v, remove specified key and return the corresponding value.#

If key is not found, d is returned if given, otherwise KeyError is raised

popitem() → (k, v), remove and return some (key, value) pair as a#

2-tuple; but raise KeyError if D is empty.

setdefault(k[, d]) → D.get(k,d), also set D[k]=d if k not in D#
update([E, ]**F) → None. Update D from dict/iterable E and F.#

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

values() → an object providing a view on D's values#
class google.cloud.storage.bucket.LifecycleRuleDelete(**kw)[source]#

Bases: dict

Map a lifecycle rule deleting matching items.

Parameters

kw – arguments passed to LifecycleRuleConditions.
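In the bucket resource, such a rule serializes as an action of type `Delete` plus a condition mapping; a sketch of the payload, with hand-built dicts standing in for the class:

```python
# Lifecycle rule payload: delete objects older than 365 days.
rule = {
    "action": {"type": "Delete"},
    "condition": {"age": 365},
}
print(rule)
```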

clear() → None. Remove all items from D.#
copy() → a shallow copy of D#
classmethod from_api_repr(resource)[source]#

Factory: construct instance from resource.

Parameters

resource (dict) – mapping as returned from API call.

Return type

LifecycleRuleDelete

Returns

Instance created from resource.

fromkeys()#

Returns a new dict with keys from iterable and values equal to value.

get(k[, d]) → D[k] if k in D, else d. d defaults to None.#
items() → a set-like object providing a view on D's items#
keys() → a set-like object providing a view on D's keys#
pop(k[, d]) → v, remove specified key and return the corresponding value.#

If key is not found, d is returned if given, otherwise KeyError is raised

popitem() → (k, v), remove and return some (key, value) pair as a#

2-tuple; but raise KeyError if D is empty.

setdefault(k[, d]) → D.get(k,d), also set D[k]=d if k not in D#
update([E, ]**F) → None. Update D from dict/iterable E and F.#

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

values() → an object providing a view on D's values#
class google.cloud.storage.bucket.LifecycleRuleSetStorageClass(storage_class, **kw)[source]#

Bases: dict

Map a lifecycle rule updating the storage class of matching items.

Parameters

storage_class (str, one of Bucket._STORAGE_CLASSES.) – new storage class to assign to matching items.

Parameters

kw – arguments passed to LifecycleRuleConditions.
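The serialized rule carries the target class inside the action mapping alongside the type; a sketch of the payload, again with hand-built dicts standing in for the class:

```python
# Lifecycle rule payload: move 30-day-old STANDARD objects to NEARLINE.
rule = {
    "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
    "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]},
}
print(rule)
```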

clear() → None. Remove all items from D.#
copy() → a shallow copy of D#
classmethod from_api_repr(resource)[source]#

Factory: construct instance from resource.

Parameters

resource (dict) – mapping as returned from API call.

Return type

LifecycleRuleSetStorageClass

Returns

Instance created from resource.

fromkeys()#

Returns a new dict with keys from iterable and values equal to value.

get(k[, d]) → D[k] if k in D, else d. d defaults to None.#
items() → a set-like object providing a view on D's items#
keys() → a set-like object providing a view on D's keys#
pop(k[, d]) → v, remove specified key and return the corresponding value.#

If key is not found, d is returned if given, otherwise KeyError is raised

popitem() → (k, v), remove and return some (key, value) pair as a#

2-tuple; but raise KeyError if D is empty.

setdefault(k[, d]) → D.get(k,d), also set D[k]=d if k not in D#
update([E, ]**F) → None. Update D from dict/iterable E and F.#

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

values() → an object providing a view on D's values#