Package Methods (2.26.0)

Summary of the method entries for bigtable.

google.cloud.bigtable.app_profile.AppProfile.create

create(ignore_warnings=None)

Create this AppProfile.

See more: google.cloud.bigtable.app_profile.AppProfile.create

google.cloud.bigtable.app_profile.AppProfile.delete

delete(ignore_warnings=None)

Delete this AppProfile.

See more: google.cloud.bigtable.app_profile.AppProfile.delete

google.cloud.bigtable.app_profile.AppProfile.exists

exists()

Check whether the AppProfile already exists.

See more: google.cloud.bigtable.app_profile.AppProfile.exists

google.cloud.bigtable.app_profile.AppProfile.from_pb

from_pb(app_profile_pb, instance)

Creates an AppProfile instance from a protobuf.

See more: google.cloud.bigtable.app_profile.AppProfile.from_pb

google.cloud.bigtable.app_profile.AppProfile.reload

reload()

Reload the metadata for this app profile.

See more: google.cloud.bigtable.app_profile.AppProfile.reload

google.cloud.bigtable.app_profile.AppProfile.update

update(ignore_warnings=None)

Update this app_profile.

See more: google.cloud.bigtable.app_profile.AppProfile.update

google.cloud.bigtable.backup.Backup.create

create(cluster_id=None)

Creates this backup within its instance.

See more: google.cloud.bigtable.backup.Backup.create

google.cloud.bigtable.backup.Backup.delete

delete()

Delete this Backup.

See more: google.cloud.bigtable.backup.Backup.delete

google.cloud.bigtable.backup.Backup.exists

exists()

Tests whether this Backup exists.

See more: google.cloud.bigtable.backup.Backup.exists

google.cloud.bigtable.backup.Backup.from_pb

from_pb(backup_pb, instance)

Creates a Backup instance from a protobuf message.

See more: google.cloud.bigtable.backup.Backup.from_pb

google.cloud.bigtable.backup.Backup.get

get()

Retrieves metadata of a pending or completed Backup.

See more: google.cloud.bigtable.backup.Backup.get

google.cloud.bigtable.backup.Backup.get_iam_policy

get_iam_policy()

Gets the IAM access control policy for this backup.

See more: google.cloud.bigtable.backup.Backup.get_iam_policy

google.cloud.bigtable.backup.Backup.reload

reload()

Refreshes the stored backup properties.

See more: google.cloud.bigtable.backup.Backup.reload

google.cloud.bigtable.backup.Backup.restore

restore(table_id, instance_id=None)

Creates a new Table by restoring from this Backup.

See more: google.cloud.bigtable.backup.Backup.restore

google.cloud.bigtable.backup.Backup.set_iam_policy

set_iam_policy(policy)

Sets the IAM access control policy for this backup.

See more: google.cloud.bigtable.backup.Backup.set_iam_policy

google.cloud.bigtable.backup.Backup.test_iam_permissions

test_iam_permissions(permissions)

Tests whether the caller has the given permissions for this backup.

See more: google.cloud.bigtable.backup.Backup.test_iam_permissions

google.cloud.bigtable.backup.Backup.update_expire_time

update_expire_time(new_expire_time)

Update the expire time of this Backup.

See more: google.cloud.bigtable.backup.Backup.update_expire_time

google.cloud.bigtable.batcher.MutationsBatcher.__enter__

__enter__()

Start the MutationsBatcher as a context manager.

See more: google.cloud.bigtable.batcher.MutationsBatcher.__enter__

google.cloud.bigtable.batcher.MutationsBatcher.__exit__

__exit__(exc_type, exc_value, exc_traceback)

google.cloud.bigtable.batcher.MutationsBatcher.close

close()

google.cloud.bigtable.batcher.MutationsBatcher.flush

flush()

Sends the current batch to Cloud Bigtable synchronously.

See more: google.cloud.bigtable.batcher.MutationsBatcher.flush

google.cloud.bigtable.batcher.MutationsBatcher.mutate

mutate(row)

google.cloud.bigtable.batcher.MutationsBatcher.mutate_rows

mutate_rows(rows)

Add multiple rows to the batch.

See more: google.cloud.bigtable.batcher.MutationsBatcher.mutate_rows

google.cloud.bigtable.client.Client.instance

instance(instance_id, display_name=None, instance_type=None, labels=None)

Factory to create an instance associated with this client.

See more: google.cloud.bigtable.client.Client.instance

google.cloud.bigtable.client.Client.list_clusters

list_clusters()

List the clusters in the project.

See more: google.cloud.bigtable.client.Client.list_clusters

google.cloud.bigtable.client.Client.list_instances

list_instances()

List instances owned by the project.

See more: google.cloud.bigtable.client.Client.list_instances

google.cloud.bigtable.cluster.Cluster.create

create()

Create this cluster.

See more: google.cloud.bigtable.cluster.Cluster.create

google.cloud.bigtable.cluster.Cluster.delete

delete()

Delete this cluster.

See more: google.cloud.bigtable.cluster.Cluster.delete

google.cloud.bigtable.cluster.Cluster.disable_autoscaling

disable_autoscaling(serve_nodes)

Disable autoscaling by specifying the number of nodes.

See more: google.cloud.bigtable.cluster.Cluster.disable_autoscaling

google.cloud.bigtable.cluster.Cluster.exists

exists()

Check whether the cluster already exists.

See more: google.cloud.bigtable.cluster.Cluster.exists

google.cloud.bigtable.cluster.Cluster.from_pb

from_pb(cluster_pb, instance)

Creates a Cluster instance from a protobuf.

See more: google.cloud.bigtable.cluster.Cluster.from_pb

google.cloud.bigtable.cluster.Cluster.reload

reload()

Reload the metadata for this cluster.

See more: google.cloud.bigtable.cluster.Cluster.reload

google.cloud.bigtable.cluster.Cluster.update

update()

Update this cluster.

See more: google.cloud.bigtable.cluster.Cluster.update

google.cloud.bigtable.column_family.ColumnFamily.create

create()

Create this column family.

See more: google.cloud.bigtable.column_family.ColumnFamily.create

google.cloud.bigtable.column_family.ColumnFamily.delete

delete()

Delete this column family.

See more: google.cloud.bigtable.column_family.ColumnFamily.delete

google.cloud.bigtable.column_family.ColumnFamily.to_pb

to_pb()

Converts the column family to a protobuf.

See more: google.cloud.bigtable.column_family.ColumnFamily.to_pb

google.cloud.bigtable.column_family.ColumnFamily.update

update()

Update this column family.

See more: google.cloud.bigtable.column_family.ColumnFamily.update

google.cloud.bigtable.column_family.GCRuleIntersection.to_pb

to_pb()

Converts the intersection into a single GC rule as a protobuf.

See more: google.cloud.bigtable.column_family.GCRuleIntersection.to_pb

google.cloud.bigtable.column_family.GCRuleUnion.to_pb

to_pb()

Converts the union into a single GC rule as a protobuf.

See more: google.cloud.bigtable.column_family.GCRuleUnion.to_pb

google.cloud.bigtable.column_family.MaxAgeGCRule.to_pb

to_pb()

Converts the garbage collection rule to a protobuf.

See more: google.cloud.bigtable.column_family.MaxAgeGCRule.to_pb

google.cloud.bigtable.column_family.MaxVersionsGCRule.to_pb

to_pb()

Converts the garbage collection rule to a protobuf.

See more: google.cloud.bigtable.column_family.MaxVersionsGCRule.to_pb

google.cloud.bigtable.data._async.client.BigtableDataClientAsync.close

close(timeout: float = 2.0)

google.cloud.bigtable.data._async.client.BigtableDataClientAsync.execute_query

execute_query(query: str, instance_id: str, *, parameters: Dict[str, ExecuteQueryValueType] | None = None, parameter_types: Dict[str, SqlType.Type] | None = None, app_profile_id: str | None = None, operation_timeout: float = 600, attempt_timeout: float | None = 20, retryable_errors: Sequence[type[Exception]] = (

google.cloud.bigtable.data._async.client.BigtableDataClientAsync.get_table

get_table(
    instance_id: str, table_id: str, *args, **kwargs
) -> google.cloud.bigtable.data._async.client.TableAsync

Returns a table instance for making data API requests.

See more: google.cloud.bigtable.data._async.client.BigtableDataClientAsync.get_table

google.cloud.bigtable.data._async.client.TableAsync

TableAsync(client: google.cloud.bigtable.data._async.client.BigtableDataClientAsync, instance_id: str, table_id: str, app_profile_id: typing.Optional[str] = None, *, default_read_rows_operation_timeout: float = 600, default_read_rows_attempt_timeout: float | None = 20, default_mutate_rows_operation_timeout: float = 600, default_mutate_rows_attempt_timeout: float | None = 60, default_operation_timeout: float = 60, default_attempt_timeout: float | None = 20, default_read_rows_retryable_errors: typing.Sequence[type[Exception]] = (

Initialize a Table instance.

See more: google.cloud.bigtable.data._async.client.TableAsync

google.cloud.bigtable.data._async.client.TableAsync.__aenter__

__aenter__()

Implement async context manager protocol.

See more: google.cloud.bigtable.data._async.client.TableAsync.__aenter__

google.cloud.bigtable.data._async.client.TableAsync.__aexit__

__aexit__(exc_type, exc_val, exc_tb)

Implement async context manager protocol.

See more: google.cloud.bigtable.data._async.client.TableAsync.__aexit__

google.cloud.bigtable.data._async.client.TableAsync.bulk_mutate_rows

bulk_mutate_rows(
    mutation_entries: list[google.cloud.bigtable.data.mutations.RowMutationEntry],
    *,
    operation_timeout: (
        float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.MUTATE_ROWS,
    attempt_timeout: (
        float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.MUTATE_ROWS,
    retryable_errors: typing.Union[
        typing.Sequence[type[Exception]],
        google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
    ] = TABLE_DEFAULT.MUTATE_ROWS
)

Applies mutations for multiple rows in a single batched request.

See more: google.cloud.bigtable.data._async.client.TableAsync.bulk_mutate_rows

google.cloud.bigtable.data._async.client.TableAsync.check_and_mutate_row

check_and_mutate_row(
    row_key: str | bytes,
    predicate: google.cloud.bigtable.data.row_filters.RowFilter | None,
    *,
    true_case_mutations: typing.Optional[
        typing.Union[
            google.cloud.bigtable.data.mutations.Mutation,
            list[google.cloud.bigtable.data.mutations.Mutation],
        ]
    ] = None,
    false_case_mutations: typing.Optional[
        typing.Union[
            google.cloud.bigtable.data.mutations.Mutation,
            list[google.cloud.bigtable.data.mutations.Mutation],
        ]
    ] = None,
    operation_timeout: (
        float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.DEFAULT
) -> bool

Mutates a row atomically based on the output of a predicate filter.

See more: google.cloud.bigtable.data._async.client.TableAsync.check_and_mutate_row

google.cloud.bigtable.data._async.client.TableAsync.close

close()

Called to close the Table instance and release any resources held by it.

See more: google.cloud.bigtable.data._async.client.TableAsync.close

google.cloud.bigtable.data._async.client.TableAsync.mutate_row

mutate_row(
    row_key: str | bytes,
    mutations: (
        list[google.cloud.bigtable.data.mutations.Mutation]
        | google.cloud.bigtable.data.mutations.Mutation
    ),
    *,
    operation_timeout: (
        float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.DEFAULT,
    attempt_timeout: (
        float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.DEFAULT,
    retryable_errors: typing.Union[
        typing.Sequence[type[Exception]],
        google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
    ] = TABLE_DEFAULT.DEFAULT
)

google.cloud.bigtable.data._async.client.TableAsync.mutations_batcher

mutations_batcher(
    *,
    flush_interval: float | None = 5,
    flush_limit_mutation_count: int | None = 1000,
    flush_limit_bytes: int = 20971520,
    flow_control_max_mutation_count: int = 100000,
    flow_control_max_bytes: int = 104857600,
    batch_operation_timeout: (
        float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.MUTATE_ROWS,
    batch_attempt_timeout: (
        float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.MUTATE_ROWS,
    batch_retryable_errors: typing.Union[
        typing.Sequence[type[Exception]],
        google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
    ] = TABLE_DEFAULT.MUTATE_ROWS
) -> google.cloud.bigtable.data._async.mutations_batcher.MutationsBatcherAsync

Returns a new mutations batcher instance.

See more: google.cloud.bigtable.data._async.client.TableAsync.mutations_batcher

google.cloud.bigtable.data._async.client.TableAsync.read_modify_write_row

read_modify_write_row(
    row_key: str | bytes,
    rules: (
        google.cloud.bigtable.data.read_modify_write_rules.ReadModifyWriteRule
        | list[google.cloud.bigtable.data.read_modify_write_rules.ReadModifyWriteRule]
    ),
    *,
    operation_timeout: (
        float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.DEFAULT
) -> google.cloud.bigtable.data.row.Row

Reads and modifies a row atomically according to input ReadModifyWriteRules, and returns the contents of all modified cells.

See more: google.cloud.bigtable.data._async.client.TableAsync.read_modify_write_row

google.cloud.bigtable.data._async.client.TableAsync.read_row

read_row(
    row_key: str | bytes,
    *,
    row_filter: typing.Optional[
        google.cloud.bigtable.data.row_filters.RowFilter
    ] = None,
    operation_timeout: (
        float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.READ_ROWS,
    attempt_timeout: (
        float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.READ_ROWS,
    retryable_errors: typing.Union[
        typing.Sequence[type[Exception]],
        google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
    ] = TABLE_DEFAULT.READ_ROWS
) -> google.cloud.bigtable.data.row.Row | None

Read a single row from the table, based on the specified key.

See more: google.cloud.bigtable.data._async.client.TableAsync.read_row

google.cloud.bigtable.data._async.client.TableAsync.read_rows

read_rows(
    query: google.cloud.bigtable.data.read_rows_query.ReadRowsQuery,
    *,
    operation_timeout: (
        float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.READ_ROWS,
    attempt_timeout: (
        float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.READ_ROWS,
    retryable_errors: typing.Union[
        typing.Sequence[type[Exception]],
        google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
    ] = TABLE_DEFAULT.READ_ROWS
) -> list[google.cloud.bigtable.data.row.Row]

Read a set of rows from the table, based on the specified query.

See more: google.cloud.bigtable.data._async.client.TableAsync.read_rows

google.cloud.bigtable.data._async.client.TableAsync.read_rows_sharded

read_rows_sharded(
    sharded_query: ShardedQuery,
    *,
    operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS,
    attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.READ_ROWS,
    retryable_errors: (
        Sequence[type[Exception]] | TABLE_DEFAULT
    ) = TABLE_DEFAULT.READ_ROWS
) -> list[Row]

Runs a sharded query in parallel, then returns the results in a single list.

See more: google.cloud.bigtable.data._async.client.TableAsync.read_rows_sharded

google.cloud.bigtable.data._async.client.TableAsync.read_rows_stream

read_rows_stream(
    query: google.cloud.bigtable.data.read_rows_query.ReadRowsQuery,
    *,
    operation_timeout: (
        float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.READ_ROWS,
    attempt_timeout: (
        float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.READ_ROWS,
    retryable_errors: typing.Union[
        typing.Sequence[type[Exception]],
        google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
    ] = TABLE_DEFAULT.READ_ROWS
) -> typing.AsyncIterable[google.cloud.bigtable.data.row.Row]

Read a set of rows from the table, based on the specified query.

See more: google.cloud.bigtable.data._async.client.TableAsync.read_rows_stream

google.cloud.bigtable.data._async.client.TableAsync.row_exists

row_exists(
    row_key: str | bytes,
    *,
    operation_timeout: (
        float | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.READ_ROWS,
    attempt_timeout: (
        float | None | google.cloud.bigtable.data._helpers.TABLE_DEFAULT
    ) = TABLE_DEFAULT.READ_ROWS,
    retryable_errors: typing.Union[
        typing.Sequence[type[Exception]],
        google.cloud.bigtable.data._helpers.TABLE_DEFAULT,
    ] = TABLE_DEFAULT.READ_ROWS
) -> bool

Return a boolean indicating whether the specified row exists in the table.

See more: google.cloud.bigtable.data._async.client.TableAsync.row_exists

google.cloud.bigtable.data._async.client.TableAsync.sample_row_keys

sample_row_keys(
    *,
    operation_timeout: float | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT,
    attempt_timeout: float | None | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT,
    retryable_errors: Sequence[type[Exception]] | TABLE_DEFAULT = TABLE_DEFAULT.DEFAULT
) -> RowKeySamples

Return a set of RowKeySamples that delimit contiguous sections of the table of approximately equal size.

See more: google.cloud.bigtable.data._async.client.TableAsync.sample_row_keys

google.cloud.bigtable.data._async.mutations_batcher.MutationsBatcherAsync.__aenter__

__aenter__()

google.cloud.bigtable.data._async.mutations_batcher.MutationsBatcherAsync.__aexit__

__aexit__(exc_type, exc, tb)

google.cloud.bigtable.data._async.mutations_batcher.MutationsBatcherAsync.append

append(mutation_entry: google.cloud.bigtable.data.mutations.RowMutationEntry)

Add a new set of mutations to the internal queue.

See more: google.cloud.bigtable.data._async.mutations_batcher.MutationsBatcherAsync.append

google.cloud.bigtable.data._async.mutations_batcher.MutationsBatcherAsync.close

close()

google.cloud.bigtable.data.exceptions.MutationsExceptionGroup.__new__

__new__(
    cls, excs: list[Exception], total_entries: int, message: typing.Optional[str] = None
)

google.cloud.bigtable.data.exceptions.MutationsExceptionGroup.from_truncated_lists

from_truncated_lists(
    first_list: list[Exception],
    last_list: list[Exception],
    total_excs: int,
    entry_count: int,
) -> google.cloud.bigtable.data.exceptions.MutationsExceptionGroup

Create a MutationsExceptionGroup from two lists of exceptions, representing a larger set that has been truncated.

See more: google.cloud.bigtable.data.exceptions.MutationsExceptionGroup.from_truncated_lists

google.cloud.bigtable.data.execute_query.ExecuteQueryIteratorAsync.close

close() -> None

google.cloud.bigtable.data.execute_query.ExecuteQueryIteratorAsync.metadata

metadata() -> (
    typing.Optional[google.cloud.bigtable.data.execute_query.metadata.Metadata]
)

Returns query metadata from the server or None if the iterator was explicitly closed.

See more: google.cloud.bigtable.data.execute_query.ExecuteQueryIteratorAsync.metadata

google.cloud.bigtable.data.mutations.Mutation.__str__

__str__() -> str

Return a string representation of the mutation.

See more: google.cloud.bigtable.data.mutations.Mutation.__str__

google.cloud.bigtable.data.mutations.Mutation.is_idempotent

is_idempotent() -> bool

Check if the mutation is idempotent.

See more: google.cloud.bigtable.data.mutations.Mutation.is_idempotent

google.cloud.bigtable.data.mutations.Mutation.size

size() -> int

Get the size of the mutation in bytes.

See more: google.cloud.bigtable.data.mutations.Mutation.size

google.cloud.bigtable.data.mutations.RowMutationEntry.is_idempotent

is_idempotent() -> bool

Check if all mutations in the entry are idempotent.

See more: google.cloud.bigtable.data.mutations.RowMutationEntry.is_idempotent

google.cloud.bigtable.data.mutations.RowMutationEntry.size

size() -> int

Get the size of the mutation entry in bytes.

See more: google.cloud.bigtable.data.mutations.RowMutationEntry.size

google.cloud.bigtable.data.mutations.SetCell.is_idempotent

is_idempotent() -> bool

Check if the mutation is idempotent.

See more: google.cloud.bigtable.data.mutations.SetCell.is_idempotent

google.cloud.bigtable.data.read_rows_query.ReadRowsQuery.__eq__

__eq__(other)

Queries are equal if they have the same row keys, row ranges, filter, and limit, or if they both represent a full scan with the same filter and limit.

See more: google.cloud.bigtable.data.read_rows_query.ReadRowsQuery.__eq__

google.cloud.bigtable.data.read_rows_query.ReadRowsQuery.add_key

add_key(row_key: str | bytes)

google.cloud.bigtable.data.read_rows_query.ReadRowsQuery.add_range

add_range(row_range: google.cloud.bigtable.data.read_rows_query.RowRange)

Add a range of row keys to this query.

See more: google.cloud.bigtable.data.read_rows_query.ReadRowsQuery.add_range

google.cloud.bigtable.data.read_rows_query.ReadRowsQuery.shard

shard(shard_keys: RowKeySamples) -> ShardedQuery

Split this query into multiple queries that can be evenly distributed across nodes and run in parallel.

See more: google.cloud.bigtable.data.read_rows_query.ReadRowsQuery.shard

google.cloud.bigtable.data.read_rows_query.RowRange.__bool__

__bool__() -> bool

Empty RowRanges (representing a full table scan) are falsy, because they can be substituted with None.

See more: google.cloud.bigtable.data.read_rows_query.RowRange.__bool__

google.cloud.bigtable.data.read_rows_query.RowRange.__str__

__str__() -> str

Represent the range as a string.

See more: google.cloud.bigtable.data.read_rows_query.RowRange.__str__

google.cloud.bigtable.data.row.Cell.__eq__

__eq__(other) -> bool

Implements the == operator.

See more: google.cloud.bigtable.data.row.Cell.__eq__

google.cloud.bigtable.data.row.Cell.__hash__

__hash__()

Implements the hash() function to fingerprint the cell.

See more: google.cloud.bigtable.data.row.Cell.__hash__

google.cloud.bigtable.data.row.Cell.__int__

__int__() -> int

Allows casting the cell to int, interpreting the value as a 64-bit big-endian signed integer, as expected by the ReadModifyWrite increment rule.

See more: google.cloud.bigtable.data.row.Cell.__int__

google.cloud.bigtable.data.row.Cell.__lt__

__lt__(other) -> bool

Implements the < operator.

See more: google.cloud.bigtable.data.row.Cell.__lt__

google.cloud.bigtable.data.row.Cell.__ne__

__ne__(other) -> bool

Implements the != operator.

See more: google.cloud.bigtable.data.row.Cell.__ne__

google.cloud.bigtable.data.row.Cell.__repr__

__repr__()

Returns a string representation of the cell.

See more: google.cloud.bigtable.data.row.Cell.__repr__

google.cloud.bigtable.data.row.Cell.__str__

__str__() -> str

Allows casting the cell to str, printing the encoded byte string, the same as printing the value directly.

See more: google.cloud.bigtable.data.row.Cell.__str__

google.cloud.bigtable.data.row.Row

Row(key: bytes, cells: list[google.cloud.bigtable.data.row.Cell])

Row objects are not intended to be created by users.

See more: google.cloud.bigtable.data.row.Row

google.cloud.bigtable.data.row.Row.__contains__

__contains__(item)

Implements the in operator.

See more: google.cloud.bigtable.data.row.Row.__contains__

google.cloud.bigtable.data.row.Row.__eq__

__eq__(other)

Implements the == operator.

See more: google.cloud.bigtable.data.row.Row.__eq__

google.cloud.bigtable.data.row.Row.__getitem__

Implements [] indexing.

See more: google.cloud.bigtable.data.row.Row.__getitem__

google.cloud.bigtable.data.row.Row.__iter__

__iter__()

Allow iterating over all cells in the row.

See more: google.cloud.bigtable.data.row.Row.__iter__

google.cloud.bigtable.data.row.Row.__len__

__len__()

Returns the number of cells in the row.

See more: google.cloud.bigtable.data.row.Row.__len__

google.cloud.bigtable.data.row.Row.__ne__

__ne__(other) -> bool

Implements the != operator.

See more: google.cloud.bigtable.data.row.Row.__ne__

google.cloud.bigtable.data.row.Row.__str__

__str__() -> str

Human-readable string representation.

See more: google.cloud.bigtable.data.row.Row.__str__

google.cloud.bigtable.data.row.Row.get_cells

get_cells(
    family: typing.Optional[str] = None,
    qualifier: typing.Optional[typing.Union[str, bytes]] = None,
) -> list[google.cloud.bigtable.data.row.Cell]

Returns cells sorted in Bigtable native order:

  • Family lexicographically ascending
  • Qualifier ascending
  • Timestamp in reverse chronological order.

See more: google.cloud.bigtable.data.row.Row.get_cells

google.cloud.bigtable.instance.Instance.app_profile

app_profile(
    app_profile_id,
    routing_policy_type=None,
    description=None,
    cluster_id=None,
    multi_cluster_ids=None,
    allow_transactional_writes=None,
)

Factory to create AppProfile associated with this instance.

See more: google.cloud.bigtable.instance.Instance.app_profile

google.cloud.bigtable.instance.Instance.cluster

cluster(
    cluster_id,
    location_id=None,
    serve_nodes=None,
    default_storage_type=None,
    kms_key_name=None,
    min_serve_nodes=None,
    max_serve_nodes=None,
    cpu_utilization_percent=None,
)

Factory to create a cluster associated with this instance.

See more: google.cloud.bigtable.instance.Instance.cluster

google.cloud.bigtable.instance.Instance.create

create(
    location_id=None,
    serve_nodes=None,
    default_storage_type=None,
    clusters=None,
    min_serve_nodes=None,
    max_serve_nodes=None,
    cpu_utilization_percent=None,
)

Create this instance.

See more: google.cloud.bigtable.instance.Instance.create

google.cloud.bigtable.instance.Instance.delete

delete()

Delete this instance.

See more: google.cloud.bigtable.instance.Instance.delete

google.cloud.bigtable.instance.Instance.exists

exists()

Check whether the instance already exists.

See more: google.cloud.bigtable.instance.Instance.exists

google.cloud.bigtable.instance.Instance.from_pb

from_pb(instance_pb, client)

Creates an Instance object from a protobuf.

See more: google.cloud.bigtable.instance.Instance.from_pb

google.cloud.bigtable.instance.Instance.get_iam_policy

get_iam_policy(requested_policy_version=None)

Gets the access control policy for an instance resource.

See more: google.cloud.bigtable.instance.Instance.get_iam_policy

google.cloud.bigtable.instance.Instance.list_app_profiles

list_app_profiles()

Lists information about AppProfiles in an instance.

See more: google.cloud.bigtable.instance.Instance.list_app_profiles

google.cloud.bigtable.instance.Instance.list_clusters

list_clusters()

List the clusters in this instance.

See more: google.cloud.bigtable.instance.Instance.list_clusters

google.cloud.bigtable.instance.Instance.list_tables

list_tables()

List the tables in this instance.

See more: google.cloud.bigtable.instance.Instance.list_tables

google.cloud.bigtable.instance.Instance.reload

reload()

Reload the metadata for this instance.

See more: google.cloud.bigtable.instance.Instance.reload

google.cloud.bigtable.instance.Instance.set_iam_policy

set_iam_policy(policy)

Sets the access control policy on an instance resource.

See more: google.cloud.bigtable.instance.Instance.set_iam_policy

google.cloud.bigtable.instance.Instance.table

table(table_id, mutation_timeout=None, app_profile_id=None)

Factory to create a table associated with this instance.

See more: google.cloud.bigtable.instance.Instance.table

google.cloud.bigtable.instance.Instance.test_iam_permissions

test_iam_permissions(permissions)

Returns permissions that the caller has on the specified instance resource.

See more: google.cloud.bigtable.instance.Instance.test_iam_permissions

google.cloud.bigtable.instance.Instance.update

update()

Updates an instance within a project.

See more: google.cloud.bigtable.instance.Instance.update

google.cloud.bigtable.row.AppendRow.append_cell_value

append_cell_value(column_family_id, column, value)

Appends a value to an existing cell.

See more: google.cloud.bigtable.row.AppendRow.append_cell_value

google.cloud.bigtable.row.AppendRow.clear

clear()

Removes all currently accumulated modifications on the current row.

See more: google.cloud.bigtable.row.AppendRow.clear

google.cloud.bigtable.row.AppendRow.commit

commit()

Makes a ReadModifyWriteRow API request.

See more: google.cloud.bigtable.row.AppendRow.commit

google.cloud.bigtable.row.AppendRow.increment_cell_value

increment_cell_value(column_family_id, column, int_value)

Increments a value in an existing cell.

See more: google.cloud.bigtable.row.AppendRow.increment_cell_value

google.cloud.bigtable.row.Cell.from_pb

from_pb(cell_pb)

Create a new cell from a Cell protobuf.

See more: google.cloud.bigtable.row.Cell.from_pb

google.cloud.bigtable.row.ConditionalRow.clear

clear()

Removes all currently accumulated mutations on the current row.

See more: google.cloud.bigtable.row.ConditionalRow.clear

google.cloud.bigtable.row.ConditionalRow.commit

commit()

Makes a CheckAndMutateRow API request.

See more: google.cloud.bigtable.row.ConditionalRow.commit

google.cloud.bigtable.row.ConditionalRow.delete

delete(state=True)

Deletes this row from the table.

See more: google.cloud.bigtable.row.ConditionalRow.delete

google.cloud.bigtable.row.ConditionalRow.delete_cell

delete_cell(column_family_id, column, time_range=None, state=True)

Deletes a cell in this row.

See more: google.cloud.bigtable.row.ConditionalRow.delete_cell

google.cloud.bigtable.row.ConditionalRow.delete_cells

delete_cells(column_family_id, columns, time_range=None, state=True)

Deletes cells in this row.

See more: google.cloud.bigtable.row.ConditionalRow.delete_cells

google.cloud.bigtable.row.ConditionalRow.set_cell

set_cell(column_family_id, column, value, timestamp=None, state=True)

Sets a value in this row.

See more: google.cloud.bigtable.row.ConditionalRow.set_cell

google.cloud.bigtable.row.DirectRow.clear

clear()

Removes all currently accumulated mutations on the current row.

See more: google.cloud.bigtable.row.DirectRow.clear

google.cloud.bigtable.row.DirectRow.commit

commit()

Makes a MutateRow API request.

See more: google.cloud.bigtable.row.DirectRow.commit

google.cloud.bigtable.row.DirectRow.delete

delete()

Deletes this row from the table.

See more: google.cloud.bigtable.row.DirectRow.delete

google.cloud.bigtable.row.DirectRow.delete_cell

delete_cell(column_family_id, column, time_range=None)

Deletes a cell in this row.

See more: google.cloud.bigtable.row.DirectRow.delete_cell

google.cloud.bigtable.row.DirectRow.delete_cells

delete_cells(column_family_id, columns, time_range=None)

Deletes cells in this row.

See more: google.cloud.bigtable.row.DirectRow.delete_cells

google.cloud.bigtable.row.DirectRow.get_mutations_size

get_mutations_size()

Gets the total mutations size for the current row.

See more: google.cloud.bigtable.row.DirectRow.get_mutations_size

google.cloud.bigtable.row.DirectRow.set_cell

set_cell(column_family_id, column, value, timestamp=None)

Sets a value in this row.

See more: google.cloud.bigtable.row.DirectRow.set_cell

google.cloud.bigtable.row.InvalidChunk.with_traceback

Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.

See more: google.cloud.bigtable.row.InvalidChunk.with_traceback

google.cloud.bigtable.row.PartialRowData.cell_value

cell_value(column_family_id, column, index=0)

Get a single cell value stored on this instance.

See more: google.cloud.bigtable.row.PartialRowData.cell_value

google.cloud.bigtable.row.PartialRowData.cell_values

cell_values(column_family_id, column, max_count=None)

Get a time series of cells stored on this instance.

See more: google.cloud.bigtable.row.PartialRowData.cell_values

google.cloud.bigtable.row.PartialRowData.find_cells

find_cells(column_family_id, column)

Get a time series of cells stored on this instance.

See more: google.cloud.bigtable.row.PartialRowData.find_cells

google.cloud.bigtable.row.PartialRowData.to_dict

to_dict()

Convert the cells to a dictionary.

See more: google.cloud.bigtable.row.PartialRowData.to_dict

google.cloud.bigtable.row_data.PartialRowsData.__iter__

__iter__()

Consume the ReadRowsResponse messages from the stream.

See more: google.cloud.bigtable.row_data.PartialRowsData.__iter__

google.cloud.bigtable.row_data.PartialRowsData.cancel

cancel()

Cancels the iterator, closing the stream.

See more: google.cloud.bigtable.row_data.PartialRowsData.cancel

google.cloud.bigtable.row_data.PartialRowsData.consume_all

consume_all(max_loops=None)

Consume the streamed responses until there are no more.

See more: google.cloud.bigtable.row_data.PartialRowsData.consume_all

google.cloud.bigtable.row_filters.ApplyLabelFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.ApplyLabelFilter.to_pb

google.cloud.bigtable.row_filters.BlockAllFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.BlockAllFilter.to_pb

google.cloud.bigtable.row_filters.CellsColumnLimitFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.CellsColumnLimitFilter.to_pb

google.cloud.bigtable.row_filters.CellsRowLimitFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.CellsRowLimitFilter.to_pb

google.cloud.bigtable.row_filters.CellsRowOffsetFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.CellsRowOffsetFilter.to_pb

google.cloud.bigtable.row_filters.ColumnQualifierRegexFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.ColumnQualifierRegexFilter.to_pb

google.cloud.bigtable.row_filters.ColumnRangeFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.ColumnRangeFilter.to_pb

google.cloud.bigtable.row_filters.ConditionalRowFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.ConditionalRowFilter.to_pb

google.cloud.bigtable.row_filters.FamilyNameRegexFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.FamilyNameRegexFilter.to_pb

google.cloud.bigtable.row_filters.PassAllFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.PassAllFilter.to_pb

google.cloud.bigtable.row_filters.RowFilterChain.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.RowFilterChain.to_pb

google.cloud.bigtable.row_filters.RowFilterUnion.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.RowFilterUnion.to_pb

google.cloud.bigtable.row_filters.RowKeyRegexFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.RowKeyRegexFilter.to_pb

google.cloud.bigtable.row_filters.RowSampleFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.RowSampleFilter.to_pb

google.cloud.bigtable.row_filters.SinkFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.SinkFilter.to_pb

google.cloud.bigtable.row_filters.StripValueTransformerFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.StripValueTransformerFilter.to_pb

google.cloud.bigtable.row_filters.TimestampRange.to_pb

to_pb()

Converts the TimestampRange to a protobuf.

See more: google.cloud.bigtable.row_filters.TimestampRange.to_pb

google.cloud.bigtable.row_filters.TimestampRangeFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.TimestampRangeFilter.to_pb

google.cloud.bigtable.row_filters.ValueRangeFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.ValueRangeFilter.to_pb

google.cloud.bigtable.row_filters.ValueRegexFilter.to_pb

to_pb()

Converts the row filter to a protobuf.

See more: google.cloud.bigtable.row_filters.ValueRegexFilter.to_pb

google.cloud.bigtable.row_set.RowRange.get_range_kwargs

get_range_kwargs()

Convert the row range object to a dict that can be passed to the google.bigtable.v2.RowRange add method.

See more: google.cloud.bigtable.row_set.RowRange.get_range_kwargs

google.cloud.bigtable.row_set.RowSet.add_row_key

add_row_key(row_key)

Add a row key to the row_keys list.

See more: google.cloud.bigtable.row_set.RowSet.add_row_key

google.cloud.bigtable.row_set.RowSet.add_row_range

add_row_range(row_range)

Add a row_range to the row_ranges list.

See more: google.cloud.bigtable.row_set.RowSet.add_row_range

google.cloud.bigtable.row_set.RowSet.add_row_range_from_keys

add_row_range_from_keys(
    start_key=None, end_key=None, start_inclusive=True, end_inclusive=False
)

Add a row range, built from the given start and end keys, to the row_ranges list.

See more: google.cloud.bigtable.row_set.RowSet.add_row_range_from_keys

google.cloud.bigtable.row_set.RowSet.add_row_range_with_prefix

add_row_range_with_prefix(row_key_prefix)

Add a row range to the row_ranges list covering all rows whose keys start with row_key_prefix.

See more: google.cloud.bigtable.row_set.RowSet.add_row_range_with_prefix

google.cloud.bigtable.table.ClusterState.__eq__

__eq__(other)

Checks if two ClusterState instances (self and other) are equal, based on the instance variable 'replication_state'.

See more: google.cloud.bigtable.table.ClusterState.__eq__

google.cloud.bigtable.table.ClusterState.__ne__

__ne__(other)

Checks if two ClusterState instances (self and other) are not equal.

See more: google.cloud.bigtable.table.ClusterState.__ne__

google.cloud.bigtable.table.ClusterState.__repr__

__repr__()

Represent the cluster state instance as a string.

See more: google.cloud.bigtable.table.ClusterState.__repr__

google.cloud.bigtable.table.Table.append_row

append_row(row_key)

Create an AppendRow associated with this table.

See more: google.cloud.bigtable.table.Table.append_row

google.cloud.bigtable.table.Table.backup

backup(backup_id, cluster_id=None, expire_time=None)

Factory to create a Backup linked to this Table.

See more: google.cloud.bigtable.table.Table.backup

google.cloud.bigtable.table.Table.column_family

column_family(column_family_id, gc_rule=None)

Factory to create a column family associated with this table.

See more: google.cloud.bigtable.table.Table.column_family

google.cloud.bigtable.table.Table.conditional_row

conditional_row(row_key, filter_)

Create a ConditionalRow associated with this table.

See more: google.cloud.bigtable.table.Table.conditional_row

google.cloud.bigtable.table.Table.create

create(initial_split_keys=[], column_families={})

Creates this table.

See more: google.cloud.bigtable.table.Table.create

google.cloud.bigtable.table.Table.delete

delete()

Delete this table.

See more: google.cloud.bigtable.table.Table.delete

google.cloud.bigtable.table.Table.direct_row

direct_row(row_key)

Create a DirectRow associated with this table.

See more: google.cloud.bigtable.table.Table.direct_row

google.cloud.bigtable.table.Table.drop_by_prefix

drop_by_prefix(row_key_prefix, timeout=None)

Drop all rows whose row keys start with the given prefix.

See more: google.cloud.bigtable.table.Table.drop_by_prefix

google.cloud.bigtable.table.Table.exists

exists()

Check whether the table exists.

See more: google.cloud.bigtable.table.Table.exists

google.cloud.bigtable.table.Table.get_cluster_states

get_cluster_states()

List the cluster states owned by this table.

See more: google.cloud.bigtable.table.Table.get_cluster_states

google.cloud.bigtable.table.Table.get_encryption_info

get_encryption_info()

List the encryption info for each cluster owned by this table.

See more: google.cloud.bigtable.table.Table.get_encryption_info

google.cloud.bigtable.table.Table.get_iam_policy

get_iam_policy()

Gets the IAM access control policy for this table.

See more: google.cloud.bigtable.table.Table.get_iam_policy

google.cloud.bigtable.table.Table.list_backups

list_backups(cluster_id=None, filter_=None, order_by=None, page_size=0)

List Backups for this Table.

See more: google.cloud.bigtable.table.Table.list_backups

google.cloud.bigtable.table.Table.list_column_families

list_column_families()

List the column families owned by this table.

See more: google.cloud.bigtable.table.Table.list_column_families

google.cloud.bigtable.table.Table.mutate_rows

mutate_rows(rows, retry=DEFAULT_RETRY)

Mutates multiple rows in bulk.

See more: google.cloud.bigtable.table.Table.mutate_rows

google.cloud.bigtable.table.Table.mutations_batcher

mutations_batcher(flush_count=100, max_row_bytes=20971520)

Factory to create a mutation batcher associated with this instance.

See more: google.cloud.bigtable.table.Table.mutations_batcher

google.cloud.bigtable.table.Table.read_row

read_row(row_key, filter_=None)

Read a single row from this table.

See more: google.cloud.bigtable.table.Table.read_row

google.cloud.bigtable.table.Table.read_rows

read_rows(start_key=None, end_key=None, limit=None, filter_=None, end_inclusive=False, row_set=None, retry=DEFAULT_RETRY)

Read rows from this table.

See more: google.cloud.bigtable.table.Table.read_rows

google.cloud.bigtable.table.Table.restore

restore(new_table_id, cluster_id=None, backup_id=None, backup_name=None)

Creates a new Table by restoring from the Backup specified by either backup_id or backup_name.

See more: google.cloud.bigtable.table.Table.restore

google.cloud.bigtable.table.Table.row

row(row_key, filter_=None, append=False)

Factory to create a row associated with this table.

See more: google.cloud.bigtable.table.Table.row

google.cloud.bigtable.table.Table.sample_row_keys

sample_row_keys()

Read a sample of row keys in the table.

See more: google.cloud.bigtable.table.Table.sample_row_keys

google.cloud.bigtable.table.Table.set_iam_policy

set_iam_policy(policy)

Sets the IAM access control policy for this table.

See more: google.cloud.bigtable.table.Table.set_iam_policy

google.cloud.bigtable.table.Table.test_iam_permissions

test_iam_permissions(permissions)

Tests whether the caller has the given permissions for this table.

See more: google.cloud.bigtable.table.Table.test_iam_permissions

google.cloud.bigtable.table.Table.truncate

truncate(timeout=None)

Truncate the table.

See more: google.cloud.bigtable.table.Table.truncate

google.cloud.bigtable.table.Table.yield_rows

yield_rows(**kwargs)

Read rows from this table (deprecated alias for read_rows).

See more: google.cloud.bigtable.table.Table.yield_rows