Cloud Bigtable Admin V2 API - Class Google::Cloud::Bigtable::Admin::V2::Table (v1.8.0)

Reference documentation and code samples for the Cloud Bigtable Admin V2 API class Google::Cloud::Bigtable::Admin::V2::Table.

A collection of user data indexed by row, column, and timestamp. Each table is served using the resources of its parent cluster.
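For orientation, the sketch below constructs a Table message directly. It is illustrative only: it assumes the google-cloud-bigtable-admin-v2 gem is available, and the project, instance, table, and column family IDs are made-up values. Later sketches on this page reuse the table variable defined here.

  require "google/cloud/bigtable/admin/v2"

  # Build a Table message with one column family and deletion protection enabled.
  # The resource name and the "cf1" family ID are illustrative values.
  table = Google::Cloud::Bigtable::Admin::V2::Table.new(
    name: "projects/my-project/instances/my-instance/tables/my-table",
    column_families: {
      "cf1" => Google::Cloud::Bigtable::Admin::V2::ColumnFamily.new(
        gc_rule: Google::Cloud::Bigtable::Admin::V2::GcRule.new(max_num_versions: 3)
      )
    },
    deletion_protection: true
  )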

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#automated_backup_policy

def automated_backup_policy() -> ::Google::Cloud::Bigtable::Admin::V2::Table::AutomatedBackupPolicy
Returns
  • (::Google::Cloud::Bigtable::Admin::V2::Table::AutomatedBackupPolicy)

#automated_backup_policy=

def automated_backup_policy=(value) -> ::Google::Cloud::Bigtable::Admin::V2::Table::AutomatedBackupPolicy
Parameter
  • value (::Google::Cloud::Bigtable::Admin::V2::Table::AutomatedBackupPolicy)
Returns
  • (::Google::Cloud::Bigtable::Admin::V2::Table::AutomatedBackupPolicy)
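As a hedged sketch (reusing the table message built in the overview example), the policy can be assigned like any other message field. The retention_period and frequency fields are assumptions not shown in this excerpt; verify them against the AutomatedBackupPolicy reference.

  # Sketch: enable automated backups. Assumes AutomatedBackupPolicy exposes
  # retention_period and frequency Duration fields (not documented here).
  table.automated_backup_policy =
    Google::Cloud::Bigtable::Admin::V2::Table::AutomatedBackupPolicy.new(
      retention_period: { seconds: 7 * 24 * 3600 }, # keep backups for 7 days
      frequency:        { seconds: 24 * 3600 }      # take a backup once a day
    )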

#change_stream_config

def change_stream_config() -> ::Google::Cloud::Bigtable::Admin::V2::ChangeStreamConfig
Returns
  • (::Google::Cloud::Bigtable::Admin::V2::ChangeStreamConfig)

#change_stream_config=

def change_stream_config=(value) -> ::Google::Cloud::Bigtable::Admin::V2::ChangeStreamConfig
Parameter
  • value (::Google::Cloud::Bigtable::Admin::V2::ChangeStreamConfig)
Returns
  • (::Google::Cloud::Bigtable::Admin::V2::ChangeStreamConfig)
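A similarly hedged sketch for enabling a change stream on the table message built above; the retention_period field is an assumption about the ChangeStreamConfig message and is not taken from this excerpt.

  # Sketch: enable the change stream with a one-day retention period.
  # Assumes ChangeStreamConfig exposes a retention_period Duration field.
  table.change_stream_config =
    Google::Cloud::Bigtable::Admin::V2::ChangeStreamConfig.new(
      retention_period: { seconds: 24 * 3600 }
    )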

#cluster_states

def cluster_states() -> ::Google::Protobuf::Map{::String => ::Google::Cloud::Bigtable::Admin::V2::Table::ClusterState}
Returns
  • (::Google::Protobuf::Map{::String => ::Google::Cloud::Bigtable::Admin::V2::Table::ClusterState}) — Output only. Map from cluster ID to per-cluster table state. If it could not be determined whether or not the table has data in a particular cluster (for example, if its zone is unavailable), then there will be an entry for the cluster with UNKNOWN replication_status. Views: REPLICATION_VIEW, ENCRYPTION_VIEW, FULL
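Because cluster_states is output only, it is read from tables returned by the admin API rather than set. A minimal sketch, assuming table holds a Table fetched with a view that includes replication state and that ClusterState exposes a replication_state field:

  # Sketch: print per-cluster replication state for a table returned by the
  # admin API (for example, from get_table with view: :REPLICATION_VIEW).
  table.cluster_states.each do |cluster_id, state|
    puts "#{cluster_id}: #{state.replication_state}"
  end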

#column_families

def column_families() -> ::Google::Protobuf::Map{::String => ::Google::Cloud::Bigtable::Admin::V2::ColumnFamily}
Returns
  • (::Google::Protobuf::Map{::String => ::Google::Cloud::Bigtable::Admin::V2::ColumnFamily}) — The column families configured for this table, mapped by column family ID. Views: SCHEMA_VIEW, STATS_VIEW, FULL

#column_families=

def column_families=(value) -> ::Google::Protobuf::Map{::String => ::Google::Cloud::Bigtable::Admin::V2::ColumnFamily}
Parameter
  • value (::Google::Protobuf::Map{::String => ::Google::Cloud::Bigtable::Admin::V2::ColumnFamily}) — The column families configured for this table, mapped by column family ID. Views: SCHEMA_VIEW, STATS_VIEW, FULL
Returns
  • (::Google::Protobuf::Map{::String => ::Google::Cloud::Bigtable::Admin::V2::ColumnFamily}) — The column families configured for this table, mapped by column family ID. Views: SCHEMA_VIEW, STATS_VIEW, FULL
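The map can also be modified in place through the getter, which is often simpler than assigning a whole new map. A small sketch, continuing with the table built in the overview example; the family ID and GC rule are illustrative.

  # Sketch: add or replace a column family through the map accessor.
  table.column_families["stats"] =
    Google::Cloud::Bigtable::Admin::V2::ColumnFamily.new(
      gc_rule: Google::Cloud::Bigtable::Admin::V2::GcRule.new(max_num_versions: 1)
    )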

#deletion_protection

def deletion_protection() -> ::Boolean
Returns
  • (::Boolean) — Set to true to make the table protected against data loss; that is, deleting the following resources through the Admin APIs is prohibited:

    • The table.
    • The column families in the table.
    • The instance containing the table.

    Note that the data stored in the table can still be deleted through the Data APIs.

#deletion_protection=

def deletion_protection=(value) -> ::Boolean
Parameter
  • value (::Boolean) — Set to true to make the table protected against data loss; that is, deleting the following resources through the Admin APIs is prohibited:

    • The table.
    • The column families in the table.
    • The instance containing the table.

    Note that the data stored in the table can still be deleted through the Data APIs.

Returns
  • (::Boolean) — Set to true to make the table protected against data loss; that is, deleting the following resources through the Admin APIs is prohibited:

    • The table.
    • The column families in the table.
    • The instance containing the table.

    Note that the data stored in the table can still be deleted through the Data APIs.

#granularity

def granularity() -> ::Google::Cloud::Bigtable::Admin::V2::Table::TimestampGranularity
Returns
  • (::Google::Cloud::Bigtable::Admin::V2::Table::TimestampGranularity)

#granularity=

def granularity=(value) -> ::Google::Cloud::Bigtable::Admin::V2::Table::TimestampGranularity
Parameter
  • value (::Google::Cloud::Bigtable::Admin::V2::Table::TimestampGranularity)
Returns
  • (::Google::Cloud::Bigtable::Admin::V2::Table::TimestampGranularity)

#name

def name() -> ::String
Returns
  • (::String) — The unique name of the table. Values are of the form projects/{project}/instances/{instance}/tables/[_a-zA-Z0-9][-_.a-zA-Z0-9]*. Views: NAME_ONLY, SCHEMA_VIEW, REPLICATION_VIEW, FULL

#name=

def name=(value) -> ::String
Parameter
  • value (::String) — The unique name of the table. Values are of the form projects/{project}/instances/{instance}/tables/[_a-zA-Z0-9][-_.a-zA-Z0-9]*. Views: NAME_ONLY, SCHEMA_VIEW, REPLICATION_VIEW, FULL
Returns
  • (::String) — The unique name of the table. Values are of the form projects/{project}/instances/{instance}/tables/[_a-zA-Z0-9][-_.a-zA-Z0-9]*. Views: NAME_ONLY, SCHEMA_VIEW, REPLICATION_VIEW, FULL
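Rather than formatting the resource name by hand, the generated path helper on the table admin client can be used. A sketch, assuming the helper is exposed as table_path (as with other generated Ruby clients) and reusing the client from the deletion-protection sketch above:

  # Sketch: build the fully qualified table name with the generated path helper.
  table.name = client.table_path project:  "my-project",
                                 instance: "my-instance",
                                 table:    "my-table"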

#restore_info

def restore_info() -> ::Google::Cloud::Bigtable::Admin::V2::RestoreInfo
Returns
  • (::Google::Cloud::Bigtable::Admin::V2::RestoreInfo)

#row_key_schema

def row_key_schema() -> ::Google::Cloud::Bigtable::Admin::V2::Type::Struct
Returns
  • (::Google::Cloud::Bigtable::Admin::V2::Type::Struct) —

    The row key schema for this table. The schema is used to decode the raw row key bytes into a structured format. The order of field declarations in this schema is important, as it reflects how the raw row key bytes are structured. Currently, this only affects how the key is read via a GoogleSQL query from the ExecuteQuery API.

    For a SQL query, the _key column is still read as raw bytes, but queries can reference the key fields by name; those fields are decoded from _key using the provided type and encoding. Queries that reference key fields will fail if they encounter an invalid row key.

    For example, if _key = "some_id#2024-04-30#\x00\x13\x00\xf3" with the following schema:

      {
        fields { field_name: "id" type { string { encoding: utf8_bytes {} } } }
        fields { field_name: "date" type { string { encoding: utf8_bytes {} } } }
        fields { field_name: "product_code" type { int64 { encoding: big_endian_bytes {} } } }
        encoding { delimited_bytes { delimiter: "#" } }
      }

    then the decoded key parts would be id = "some_id", date = "2024-04-30", and product_code = 1245427. The query "SELECT _key, product_code FROM table" will return two columns:

      _key                                   | product_code
      ---------------------------------------|-------------
      "some_id#2024-04-30#\x00\x13\x00\xf3"  | 1245427

    The schema has the following invariants:

    1. The decoded field values are order-preserved. For reads, the field values will be decoded in sorted mode from the raw bytes.
    2. Every field in the schema must specify a non-empty name.
    3. Every field must specify a type with an associated encoding. The type is limited to scalar types only: Array, Map, Aggregate, and Struct are not allowed.
    4. The field names must not collide with existing column family names or the reserved keywords "_key" and "_timestamp".

    The following update operations are allowed for row_key_schema:

    • Update from an empty schema to a new schema.
    • Remove the existing schema. This operation requires setting the ignore_warnings flag to true, since it might be a backward-incompatible change; without the flag, the update request will fail with an INVALID_ARGUMENT error.

    Any other row key schema update operation (for example, updating the names or types of existing schema columns) is currently unsupported.
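The documented example schema could be expressed in Ruby roughly as follows. This is a hedged sketch: the nested field names (string_type, int64_type, utf8_bytes, big_endian_bytes, delimited_bytes) are assumed from the Type proto definition and should be checked against the Type reference.

  # Sketch: build the row key schema from the documented example, reusing the
  # table message from the overview sketch.
  v2 = Google::Cloud::Bigtable::Admin::V2

  table.row_key_schema = v2::Type::Struct.new(
    fields: [
      { field_name: "id",
        type: { string_type: { encoding: { utf8_bytes: {} } } } },
      { field_name: "date",
        type: { string_type: { encoding: { utf8_bytes: {} } } } },
      { field_name: "product_code",
        type: { int64_type: { encoding: { big_endian_bytes: {} } } } }
    ],
    encoding: { delimited_bytes: { delimiter: "#" } }
  )

With a schema like this in place, GoogleSQL queries issued through ExecuteQuery can reference id, date, and product_code by name, while _key continues to return the raw bytes.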

#row_key_schema=

def row_key_schema=(value) -> ::Google::Cloud::Bigtable::Admin::V2::Type::Struct
Parameter
  • value (::Google::Cloud::Bigtable::Admin::V2::Type::Struct) —

    The row key schema for this table. The schema is used to decode the raw row key bytes into a structured format. The order of field declarations in this schema is important, as it reflects how the raw row key bytes are structured. Currently, this only affects how the key is read via a GoogleSQL query from the ExecuteQuery API.

    For a SQL query, the _key column is still read as raw bytes, but queries can reference the key fields by name; those fields are decoded from _key using the provided type and encoding. Queries that reference key fields will fail if they encounter an invalid row key.

    For example, if _key = "some_id#2024-04-30#\x00\x13\x00\xf3" with the following schema:

      {
        fields { field_name: "id" type { string { encoding: utf8_bytes {} } } }
        fields { field_name: "date" type { string { encoding: utf8_bytes {} } } }
        fields { field_name: "product_code" type { int64 { encoding: big_endian_bytes {} } } }
        encoding { delimited_bytes { delimiter: "#" } }
      }

    then the decoded key parts would be id = "some_id", date = "2024-04-30", and product_code = 1245427. The query "SELECT _key, product_code FROM table" will return two columns:

      _key                                   | product_code
      ---------------------------------------|-------------
      "some_id#2024-04-30#\x00\x13\x00\xf3"  | 1245427

    The schema has the following invariants:

    1. The decoded field values are order-preserved. For reads, the field values will be decoded in sorted mode from the raw bytes.
    2. Every field in the schema must specify a non-empty name.
    3. Every field must specify a type with an associated encoding. The type is limited to scalar types only: Array, Map, Aggregate, and Struct are not allowed.
    4. The field names must not collide with existing column family names or the reserved keywords "_key" and "_timestamp".

    The following update operations are allowed for row_key_schema:

    • Update from an empty schema to a new schema.
    • Remove the existing schema. This operation requires setting the ignore_warnings flag to true, since it might be a backward-incompatible change; without the flag, the update request will fail with an INVALID_ARGUMENT error.

    Any other row key schema update operation (for example, updating the names or types of existing schema columns) is currently unsupported.
Returns
  • (::Google::Cloud::Bigtable::Admin::V2::Type::Struct) —

    The row key schema for this table. The schema is used to decode the raw row key bytes into a structured format. The order of field declarations in this schema is important, as it reflects how the raw row key bytes are structured. Currently, this only affects how the key is read via a GoogleSQL query from the ExecuteQuery API.

    For a SQL query, the _key column is still read as raw bytes, but queries can reference the key fields by name; those fields are decoded from _key using the provided type and encoding. Queries that reference key fields will fail if they encounter an invalid row key.

    For example, if _key = "some_id#2024-04-30#\x00\x13\x00\xf3" with the following schema:

      {
        fields { field_name: "id" type { string { encoding: utf8_bytes {} } } }
        fields { field_name: "date" type { string { encoding: utf8_bytes {} } } }
        fields { field_name: "product_code" type { int64 { encoding: big_endian_bytes {} } } }
        encoding { delimited_bytes { delimiter: "#" } }
      }

    then the decoded key parts would be id = "some_id", date = "2024-04-30", and product_code = 1245427. The query "SELECT _key, product_code FROM table" will return two columns:

      _key                                   | product_code
      ---------------------------------------|-------------
      "some_id#2024-04-30#\x00\x13\x00\xf3"  | 1245427

    The schema has the following invariants:

    1. The decoded field values are order-preserved. For reads, the field values will be decoded in sorted mode from the raw bytes.
    2. Every field in the schema must specify a non-empty name.
    3. Every field must specify a type with an associated encoding. The type is limited to scalar types only: Array, Map, Aggregate, and Struct are not allowed.
    4. The field names must not collide with existing column family names or the reserved keywords "_key" and "_timestamp".

    The following update operations are allowed for row_key_schema:

    • Update from an empty schema to a new schema.
    • Remove the existing schema. This operation requires setting the ignore_warnings flag to true, since it might be a backward-incompatible change; without the flag, the update request will fail with an INVALID_ARGUMENT error.

    Any other row key schema update operation (for example, updating the names or types of existing schema columns) is currently unsupported.