Types overview

AdaptMessageRequest

Message sent by the client to the adapter.
Fields
attachments

map (key: string, value: string)

Optional. Opaque request state passed by the client to the server.

payload

string (bytes format)

Optional. Uninterpreted bytes from the underlying wire protocol.

protocol

string

Required. Identifier for the underlying wire protocol.
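
Putting the three fields together, a minimal AdaptMessageRequest body might look like the following sketch. The protocol identifier and payload are hypothetical values; "bytes format" fields are carried as base64 strings in JSON.

```python
import base64
import json

# Sketch of an AdaptMessageRequest JSON body. The protocol name and
# payload below are hypothetical; bytes fields are base64-encoded in JSON.
adapt_message_request = {
    "protocol": "example-wire-protocol",                # Required; hypothetical value
    "payload": base64.b64encode(b"\x01\x02").decode(),  # Optional raw wire bytes
    "attachments": {"client-state": "opaque-value"},    # Optional opaque request state
}
print(json.dumps(adapt_message_request, indent=2))
```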

AdaptMessageResponse

Message sent by the adapter to the client.
Fields
payload

string (bytes format)

Optional. Uninterpreted bytes from the underlying wire protocol.

stateUpdates

map (key: string, value: string)

Optional. Opaque state updates to be applied by the client.

AdapterSession

A session in the Cloud Spanner Adapter API.
Fields
name

string

Identifier. The name of the session. This is always system-assigned.

AddSplitPointsRequest

The request for AddSplitPoints.
Fields
initiator

string

Optional. A user-supplied tag associated with the split points. For example, "initial_data_load", "special_event_1". Defaults to "CloudAddSplitPointsAPI" if not specified. The tag must not exceed 50 characters; longer tags are trimmed. Only valid UTF-8 characters are allowed.

splitPoints[]

object (SplitPoints)

Required. The split points to add.

AsymmetricAutoscalingOption

AsymmetricAutoscalingOption specifies the scaling of replicas identified by the given selection.
Fields
overrides

object (AutoscalingConfigOverrides)

Optional. Overrides applied to the top-level autoscaling configuration for the selected replicas.

replicaSelection

object (InstanceReplicaSelection)

Required. Selects the replicas to which this AsymmetricAutoscalingOption applies. Only read-only replicas are supported.

AutoscalingConfig

Autoscaling configuration for an instance.
Fields
asymmetricAutoscalingOptions[]

object (AsymmetricAutoscalingOption)

Optional. Asymmetric autoscaling options. Replicas matching the replica selection criteria are autoscaled independently from other replicas. The autoscaler scales the replicas based on the utilization of replicas identified by the replica selection. Replica selections should not overlap with each other. Other replicas (those that do not match any replica selection) are autoscaled together and have the same compute capacity allocated to them.

autoscalingLimits

object (AutoscalingLimits)

Required. Autoscaling limits for an instance.

autoscalingTargets

object (AutoscalingTargets)

Required. The autoscaling targets for an instance.

AutoscalingConfigOverrides

Overrides the top-level autoscaling configuration for the replicas identified by replica_selection. All fields in this message are optional. Any unspecified fields will use the corresponding values from the top-level autoscaling configuration.
Fields
autoscalingLimits

object (AutoscalingLimits)

Optional. If specified, overrides the min/max limit in the top-level autoscaling configuration for the selected replicas.

autoscalingTargetHighPriorityCpuUtilizationPercent

integer (int32 format)

Optional. If specified, overrides the autoscaling target high_priority_cpu_utilization_percent in the top-level autoscaling configuration for the selected replicas.

AutoscalingLimits

The autoscaling limits for the instance. Users can define the minimum and maximum compute capacity allocated to the instance, and the autoscaler will only scale within that range. Users can either use nodes or processing units to specify the limits, but should use the same unit to set both the min_limit and max_limit.
Fields
maxNodes

integer (int32 format)

Maximum number of nodes allocated to the instance. If set, this number should be greater than or equal to min_nodes.

maxProcessingUnits

integer (int32 format)

Maximum number of processing units allocated to the instance. If set, this number should be a multiple of 1000 and be greater than or equal to min_processing_units.

minNodes

integer (int32 format)

Minimum number of nodes allocated to the instance. If set, this number should be greater than or equal to 1.

minProcessingUnits

integer (int32 format)

Minimum number of processing units allocated to the instance. If set, this number should be a multiple of 1000.

AutoscalingTargets

The autoscaling targets for an instance.
Fields
highPriorityCpuUtilizationPercent

integer (int32 format)

Required. The target high priority cpu utilization percentage that the autoscaler should be trying to achieve for the instance. This number is on a scale from 0 (no utilization) to 100 (full utilization). The valid range is [10, 90] inclusive.

storageUtilizationPercent

integer (int32 format)

Required. The target storage utilization percentage that the autoscaler should be trying to achieve for the instance. This number is on a scale from 0 (no utilization) to 100 (full utilization). The valid range is [10, 99] inclusive.
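
As a sketch, here is how the three messages above compose into one AutoscalingConfig body. Both limits and targets are required; the node counts and percentages below are hypothetical values chosen inside the documented ranges.

```python
# Sketch of an AutoscalingConfig combining AutoscalingLimits and
# AutoscalingTargets. All numbers are hypothetical but respect the
# documented constraints.
autoscaling_config = {
    "autoscalingLimits": {
        "minNodes": 1,   # must be >= 1
        "maxNodes": 10,  # must be >= minNodes
    },
    "autoscalingTargets": {
        "highPriorityCpuUtilizationPercent": 65,  # valid range [10, 90]
        "storageUtilizationPercent": 90,          # valid range [10, 99]
    },
}
```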

Backup

A backup of a Cloud Spanner database.
Fields
backupSchedules[]

string

Output only. List of backup schedule URIs that are associated with creating this backup. This is only applicable for scheduled backups, and is empty for on-demand backups. To optimize for storage, whenever possible, multiple schedules are collapsed together to create one backup. In such cases, this field captures the list of all backup schedule URIs that are associated with creating this backup. If collapsing is not done, then this field captures the single backup schedule URI associated with creating this backup.

createTime

string (Timestamp format)

Output only. The time the CreateBackup request is received. If the request does not specify version_time, the version_time of the backup will be equivalent to the create_time.

database

string

Required for the CreateBackup operation. Name of the database from which this backup was created. This needs to be in the same instance as the backup. Values are of the form projects//instances//databases/.

databaseDialect

enum

Output only. The database dialect information for the backup.

Enum type. Can be one of the following:
DATABASE_DIALECT_UNSPECIFIED Default value. This value will create a database with the GOOGLE_STANDARD_SQL dialect.
GOOGLE_STANDARD_SQL GoogleSQL supported SQL.
POSTGRESQL PostgreSQL supported SQL.
encryptionInfo

object (EncryptionInfo)

Output only. The encryption information for the backup.

encryptionInformation[]

object (EncryptionInfo)

Output only. The encryption information for the backup, whether it is protected by one or more KMS keys. The information includes all Cloud KMS key versions used to encrypt the backup. The encryption_status field inside of each EncryptionInfo is not populated. At least one of the key versions must be available for the backup to be restored. If a key version is revoked in the middle of a restore, the restore behavior is undefined.

exclusiveSizeBytes

string (int64 format)

Output only. For a backup in an incremental backup chain, this is the storage space needed to keep the data that has changed since the previous backup. For all other backups, this is always the size of the backup. This value may change if backups on the same chain get deleted or expired. This field can be used to calculate the total storage space used by a set of backups. For example, the total space used by all backups of a database can be computed by summing up this field.

expireTime

string (Timestamp format)

Required for the CreateBackup operation. The expiration time of the backup, with microseconds granularity that must be at least 6 hours and at most 366 days from the time the CreateBackup request is processed. Once the expire_time has passed, the backup is eligible to be automatically deleted by Cloud Spanner to free the resources used by the backup.

freeableSizeBytes

string (int64 format)

Output only. The number of bytes that will be freed by deleting this backup. This value will be zero if, for example, this backup is part of an incremental backup chain and younger backups in the chain require that we keep its data. For backups not in an incremental backup chain, this is always the size of the backup. This value may change if backups on the same chain get created, deleted or expired.

incrementalBackupChainId

string

Output only. Populated only for backups in an incremental backup chain. Backups share the same chain id if and only if they belong to the same incremental backup chain. Use this field to determine which backups are part of the same incremental backup chain. The ordering of backups in the chain can be determined by ordering the backup version_time.

instancePartitions[]

object (BackupInstancePartition)

Output only. The instance partition(s) storing the backup. This is the same as the list of the instance partition(s) that the database had footprint in at the backup's version_time.

maxExpireTime

string (Timestamp format)

Output only. The max allowed expiration time of the backup, with microseconds granularity. A backup's expiration time can be configured in multiple APIs: CreateBackup, UpdateBackup, CopyBackup. When updating or copying an existing backup, the expiration time specified must be less than Backup.max_expire_time.

name

string

Output only for the CreateBackup operation. Required for the UpdateBackup operation. A globally unique identifier for the backup which cannot be changed. Values are of the form projects//instances//backups/a-z*[a-z0-9] The final segment of the name must be between 2 and 60 characters in length. The backup is stored in the location(s) specified in the instance configuration of the instance containing the backup, identified by the prefix of the backup name of the form projects//instances/.

oldestVersionTime

string (Timestamp format)

Output only. Data deleted at a time older than this is guaranteed not to be retained in order to support this backup. For a backup in an incremental backup chain, this is the version time of the oldest backup that exists or ever existed in the chain. For all other backups, this is the version time of the backup. This field can be used to understand what data is being retained by the backup system.

referencingBackups[]

string

Output only. The names of the destination backups being created by copying this source backup. The backup names are of the form projects//instances//backups/. Referencing backups may exist in different instances. The existence of any referencing backup prevents the backup from being deleted. When the copy operation is done (either successfully completed or cancelled or the destination backup is deleted), the reference to the backup is removed.

referencingDatabases[]

string

Output only. The names of the restored databases that reference the backup. The database names are of the form projects//instances//databases/. Referencing databases may exist in different instances. The existence of any referencing database prevents the backup from being deleted. When a restored database from the backup enters the READY state, the reference to the backup is removed.

sizeBytes

string (int64 format)

Output only. Size of the backup in bytes. For a backup in an incremental backup chain, this is the sum of the exclusive_size_bytes of itself and all older backups in the chain.

state

enum

Output only. The current state of the backup.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Not specified.
CREATING The pending backup is still being created. Operations on the backup may fail with FAILED_PRECONDITION in this state.
READY The backup is complete and ready for use.
versionTime

string (Timestamp format)

The backup will contain an externally consistent copy of the database at the timestamp specified by version_time. If version_time is not specified, the system will set version_time to the create_time of the backup.

BackupInfo

Information about a backup.
Fields
backup

string

Name of the backup.

createTime

string (Timestamp format)

The time the CreateBackup request was received.

sourceDatabase

string

Name of the database the backup was created from.

versionTime

string (Timestamp format)

The backup contains an externally consistent copy of source_database at the timestamp specified by version_time. If the CreateBackup request did not specify version_time, the version_time of the backup is equivalent to the create_time.

BackupInstancePartition

Instance partition information for the backup.
Fields
instancePartition

string

A unique identifier for the instance partition. Values are of the form projects//instances//instancePartitions/

BackupSchedule

BackupSchedule expresses the automated backup creation specification for a Spanner database.
Fields
encryptionConfig

object (CreateBackupEncryptionConfig)

Optional. The encryption configuration that is used to encrypt the backup. If this field is not specified, the backup uses the same encryption configuration as the database.

fullBackupSpec

object (FullBackupSpec)

The schedule creates only full backups.

incrementalBackupSpec

object (IncrementalBackupSpec)

The schedule creates incremental backup chains.

name

string

Identifier. Output only for the CreateBackupSchedule operation. Required for the UpdateBackupSchedule operation. A globally unique identifier for the backup schedule which cannot be changed. Values are of the form projects//instances//databases//backupSchedules/a-z*[a-z0-9] The final segment of the name must be between 2 and 60 characters in length.

retentionDuration

string (Duration format)

Optional. The retention duration of a backup that must be at least 6 hours and at most 366 days. The backup is eligible to be automatically deleted once the retention period has elapsed.

spec

object (BackupScheduleSpec)

Optional. The schedule specification based on which the backup creations are triggered.

updateTime

string (Timestamp format)

Output only. The timestamp at which the schedule was last updated. If the schedule has never been updated, this field contains the timestamp when the schedule was first created.

BackupScheduleSpec

Defines specifications of the backup schedule.
Fields
cronSpec

object (CrontabSpec)

Cron style schedule specification.

BatchCreateSessionsRequest

The request for BatchCreateSessions.
Fields
sessionCount

integer (int32 format)

Required. The number of sessions to be created in this batch call. The API can return fewer than the requested number of sessions. If a specific number of sessions are desired, the client can make additional calls to BatchCreateSessions (adjusting session_count as necessary).

sessionTemplate

object (Session)

Parameters to apply to each created session.
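
A minimal BatchCreateSessionsRequest sketch follows. The Session message is documented elsewhere, so the empty sessionTemplate here is only a placeholder.

```python
# Sketch of a BatchCreateSessionsRequest. The server may return fewer
# sessions than requested, so callers loop until they have enough,
# adjusting sessionCount on each call.
batch_create_sessions_request = {
    "sessionCount": 25,     # Required; hypothetical value
    "sessionTemplate": {},  # Session fields are documented elsewhere
}
```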

BatchCreateSessionsResponse

The response for BatchCreateSessions.
Fields
session[]

object (Session)

The freshly created sessions.

BatchWriteRequest

The request for BatchWrite.
Fields
excludeTxnFromChangeStreams

boolean

Optional. When exclude_txn_from_change_streams is set to true: * Modifications from all transactions in this batch write operation are not recorded in change streams with DDL option allow_txn_exclusion=true that are tracking columns modified by these transactions. * Modifications from all transactions in this batch write operation are recorded in change streams with DDL option allow_txn_exclusion=false or not set that are tracking columns modified by these transactions. When exclude_txn_from_change_streams is set to false or not set, modifications from all transactions in this batch write operation are recorded in all change streams that are tracking columns modified by these transactions.

mutationGroups[]

object (MutationGroup)

Required. The groups of mutations to be applied.

requestOptions

object (RequestOptions)

Common options for this request.

BatchWriteResponse

The result of applying a batch of mutations.
Fields
commitTimestamp

string (Timestamp format)

The commit timestamp of the transaction that applied this batch. Present if status is OK, absent otherwise.

indexes[]

integer (int32 format)

The mutation groups applied in this batch. The values index into the mutation_groups field in the corresponding BatchWriteRequest.

status

object (Status)

An OK status indicates success. Any other status indicates a failure.

BeginTransactionRequest

The request for BeginTransaction.
Fields
mutationKey

object (Mutation)

Optional. Required for read-write transactions on a multiplexed session that commit mutations but don't perform any reads or queries. You must randomly select one of the mutations from the mutation set and send it as a part of this request.

options

object (TransactionOptions)

Required. Options for the new transaction.

requestOptions

object (RequestOptions)

Common options for this request. Priority is ignored for this request. Setting the priority in this request_options struct doesn't do anything. To set the priority for a transaction, set it on the reads and writes that are part of this transaction instead.
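
For illustration, a BeginTransactionRequest for a read-write transaction might look like the sketch below. The TransactionOptions message is documented elsewhere; the readWrite variant shown is an assumption based on the public Spanner API.

```python
# Sketch of a BeginTransactionRequest. The readWrite option variant is
# an assumption; TransactionOptions is documented elsewhere.
begin_transaction_request = {
    "options": {"readWrite": {}},  # Required: options for the new transaction
    # "mutationKey" is only required for read-write transactions on a
    # multiplexed session that commit mutations without reads or queries.
}
```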

Binding

Associates members, or principals, with a role.
Fields
condition

object (Expr)

The condition that is associated with this binding. If the condition evaluates to true, then this binding applies to the current request. If the condition evaluates to false, then this binding does not apply to the current request. However, a different role binding might grant the same role to one or more of the principals in this binding. To learn which resources support conditions in their IAM policies, see the IAM documentation.

members[]

string

Specifies the principals requesting access for a Google Cloud resource. members can have the following values:
* allUsers: A special identifier that represents anyone who is on the internet; with or without a Google account.
* allAuthenticatedUsers: A special identifier that represents anyone who is authenticated with a Google account or a service account. Does not include identities that come from external identity providers (IdPs) through identity federation.
* user:{emailid}: An email address that represents a specific Google account. For example, alice@example.com.
* serviceAccount:{emailid}: An email address that represents a Google service account. For example, my-other-app@appspot.gserviceaccount.com.
* serviceAccount:{projectid}.svc.id.goog[{namespace}/{kubernetes-sa}]: An identifier for a Kubernetes service account. For example, my-project.svc.id.goog[my-namespace/my-kubernetes-sa].
* group:{emailid}: An email address that represents a Google group. For example, admins@example.com.
* domain:{domain}: The G Suite domain (primary) that represents all the users of that domain. For example, google.com or example.com.
* principal://iam.googleapis.com/locations/global/workforcePools/{pool_id}/subject/{subject_attribute_value}: A single identity in a workforce identity pool.
* principalSet://iam.googleapis.com/locations/global/workforcePools/{pool_id}/group/{group_id}: All workforce identities in a group.
* principalSet://iam.googleapis.com/locations/global/workforcePools/{pool_id}/attribute.{attribute_name}/{attribute_value}: All workforce identities with a specific attribute value.
* principalSet://iam.googleapis.com/locations/global/workforcePools/{pool_id}/*: All identities in a workforce identity pool.
* principal://iam.googleapis.com/projects/{project_number}/locations/global/workloadIdentityPools/{pool_id}/subject/{subject_attribute_value}: A single identity in a workload identity pool.
* principalSet://iam.googleapis.com/projects/{project_number}/locations/global/workloadIdentityPools/{pool_id}/group/{group_id}: A workload identity pool group.
* principalSet://iam.googleapis.com/projects/{project_number}/locations/global/workloadIdentityPools/{pool_id}/attribute.{attribute_name}/{attribute_value}: All identities in a workload identity pool with a certain attribute.
* principalSet://iam.googleapis.com/projects/{project_number}/locations/global/workloadIdentityPools/{pool_id}/*: All identities in a workload identity pool.
* deleted:user:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a user that has been recently deleted. For example, alice@example.com?uid=123456789012345678901. If the user is recovered, this value reverts to user:{emailid} and the recovered user retains the role in the binding.
* deleted:serviceAccount:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a service account that has been recently deleted. For example, my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901. If the service account is undeleted, this value reverts to serviceAccount:{emailid} and the undeleted service account retains the role in the binding.
* deleted:group:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a Google group that has been recently deleted. For example, admins@example.com?uid=123456789012345678901. If the group is recovered, this value reverts to group:{emailid} and the recovered group retains the role in the binding.
* deleted:principal://iam.googleapis.com/locations/global/workforcePools/{pool_id}/subject/{subject_attribute_value}: Deleted single identity in a workforce identity pool. For example, deleted:principal://iam.googleapis.com/locations/global/workforcePools/my-pool-id/subject/my-subject-attribute-value.

role

string

Role that is assigned to the list of members, or principals. For example, roles/viewer, roles/editor, or roles/owner. For an overview of the IAM roles and permissions, and a list of the available predefined roles, see the IAM documentation.
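
Taken together, a Binding combining the fields above might look like the following sketch. The role and members are examples drawn from the descriptions above; the CEL condition is hypothetical.

```python
# Sketch of an IAM Binding with an optional condition. All values are
# hypothetical examples taken from the field descriptions.
binding = {
    "role": "roles/viewer",
    "members": [
        "user:alice@example.com",
        "serviceAccount:my-other-app@appspot.gserviceaccount.com",
    ],
    "condition": {  # Expr message; see the Expr type below
        "title": "Expires at midnight",
        "expression": "request.time < timestamp('2025-01-01T00:00:00Z')",
    },
}
```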

ChangeQuorumMetadata

Metadata type for the long-running operation returned by ChangeQuorum.
Fields
endTime

string (Timestamp format)

If set, the time at which this operation failed or was completed successfully.

request

object (ChangeQuorumRequest)

The request for ChangeQuorum.

startTime

string (Timestamp format)

Time the request was received.

ChangeQuorumRequest

The request for ChangeQuorum.
Fields
etag

string

Optional. The etag is the hash of the QuorumInfo. The ChangeQuorum operation is only performed if the etag matches that of the QuorumInfo in the current database resource. Otherwise the API returns an ABORTED error. The etag is used for optimistic concurrency control as a way to help prevent simultaneous change quorum requests that could create a race condition.

name

string

Required. Name of the database in which to apply ChangeQuorum. Values are of the form projects//instances//databases/.

quorumType

object (QuorumType)

Required. The type of this quorum.

CommitRequest

The request for Commit.
Fields
maxCommitDelay

string (Duration format)

Optional. The amount of latency this request is configured to incur in order to improve throughput. If this field isn't set, Spanner assumes requests are relatively latency sensitive and automatically determines an appropriate delay time. You can specify a commit delay value between 0 and 500 ms.

mutations[]

object (Mutation)

The mutations to be executed when this transaction commits. All mutations are applied atomically, in the order they appear in this list.

precommitToken

object (MultiplexedSessionPrecommitToken)

Optional. If the read-write transaction was executed on a multiplexed session, then you must include the precommit token with the highest sequence number received in this transaction attempt. Failing to do so results in a FailedPrecondition error.

requestOptions

object (RequestOptions)

Common options for this request.

returnCommitStats

boolean

If true, then statistics related to the transaction are included in the CommitResponse. Default value is false.

singleUseTransaction

object (TransactionOptions)

Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. That is, if the CommitRequest is sent to Cloud Spanner more than once (for instance, due to retries in the application, or in the transport library), it's possible that the mutations are executed more than once. If this is undesirable, use BeginTransaction and Commit instead.

transactionId

string (bytes format)

Commit a previously-started transaction.
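
As a sketch, committing a previously-started transaction might look like this. The transaction ID is a hypothetical base64 value, the mutations are left empty because the Mutation message is documented elsewhere, and Duration fields are encoded as decimal seconds with an "s" suffix.

```python
# Sketch of a CommitRequest for a previously-started transaction.
# The transaction ID is hypothetical; bytes fields are base64 in JSON.
commit_request = {
    "transactionId": "dHhuLWlkLWV4YW1wbGU=",   # hypothetical transaction ID
    "mutations": [],             # Mutation messages are documented elsewhere
    "returnCommitStats": True,   # ask for CommitStats in the response
    "maxCommitDelay": "0.100s",  # Duration format; must be within 0-500 ms
}
```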

CommitResponse

The response for Commit.
Fields
commitStats

object (CommitStats)

The statistics about this Commit. Not returned by default. For more information, see CommitRequest.return_commit_stats.

commitTimestamp

string (Timestamp format)

The Cloud Spanner timestamp at which the transaction committed.

precommitToken

object (MultiplexedSessionPrecommitToken)

If specified, the transaction has not committed yet. You must retry the commit with the new precommit token.

CommitStats

Additional statistics about a commit.
Fields
mutationCount

string (int64 format)

The total number of mutations for the transaction. Knowing the mutation_count value can help you maximize the number of mutations in a transaction and minimize the number of API round trips. You can also monitor this value to prevent transactions from exceeding the system limit. If the number of mutations exceeds the limit, the server returns INVALID_ARGUMENT.

ContextValue

A message representing context for a KeyRangeInfo, including a label, value, unit, and severity.
Fields
label

object (LocalizedString)

The label for the context value. e.g. "latency".

severity

enum

The severity of this context.

Enum type. Can be one of the following:
SEVERITY_UNSPECIFIED Required default value.
INFO Lowest severity level "Info".
WARNING Middle severity level "Warning".
ERROR Severity level signaling an error "Error"
FATAL Severity level signaling a non recoverable error "Fatal"
unit

string

The unit of the context value.

value

number (float format)

The value for the context.

CopyBackupEncryptionConfig

Encryption configuration for the copied backup.
Fields
encryptionType

enum

Required. The encryption type of the backup.

Enum type. Can be one of the following:
ENCRYPTION_TYPE_UNSPECIFIED Unspecified. Do not use.
USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION This is the default option for CopyBackup when encryption_config is not specified. For example, if the source backup is using Customer_Managed_Encryption, the backup will be using the same Cloud KMS key as the source backup.
GOOGLE_DEFAULT_ENCRYPTION Use Google default encryption.
CUSTOMER_MANAGED_ENCRYPTION Use customer managed encryption. If specified, either kms_key_name or kms_key_names must contain valid Cloud KMS key(s).
kmsKeyName

string

Optional. The Cloud KMS key that will be used to protect the backup. This field should be set only when encryption_type is CUSTOMER_MANAGED_ENCRYPTION. Values are of the form projects//locations//keyRings//cryptoKeys/.

kmsKeyNames[]

string

Optional. Specifies the KMS configuration for the one or more keys used to protect the backup. Values are of the form projects//locations//keyRings//cryptoKeys/. KMS keys specified can be in any order. The keys referenced by kms_key_names must fully cover all regions of the backup's instance configuration. Some examples: * For regional (single-region) instance configurations, specify a regional location KMS key. * For multi-region instance configurations of type GOOGLE_MANAGED, either specify a multi-region location KMS key or multiple regional location KMS keys that cover all regions in the instance configuration. * For an instance configuration of type USER_MANAGED, specify only regional location KMS keys to cover each region in the instance configuration. Multi-region location KMS keys aren't supported for USER_MANAGED type instance configurations.

CopyBackupMetadata

Metadata type for the operation returned by CopyBackup.
Fields
cancelTime

string (Timestamp format)

The time at which cancellation of CopyBackup operation was received. Operations.CancelOperation starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED.

name

string

The name of the backup being created through the copy operation. Values are of the form projects//instances//backups/.

progress

object (OperationProgress)

The progress of the CopyBackup operation.

sourceBackup

string

The name of the source backup that is being copied. Values are of the form projects//instances//backups/.

CopyBackupRequest

The request for CopyBackup.
Fields
backupId

string

Required. The id of the backup copy. The backup_id appended to parent forms the full backup_uri of the form projects//instances//backups/.

encryptionConfig

object (CopyBackupEncryptionConfig)

Optional. The encryption configuration used to encrypt the backup. If this field is not specified, the backup will use the same encryption configuration as the source backup by default, namely encryption_type = USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION.

expireTime

string (Timestamp format)

Required. The expiration time of the backup in microsecond granularity. The expiration time must be at least 6 hours and at most 366 days from the create_time of the source backup. Once the expire_time has passed, the backup is eligible to be automatically deleted by Cloud Spanner to free the resources used by the backup.

sourceBackup

string

Required. The source backup to be copied. The source backup needs to be in READY state for it to be copied. Once CopyBackup is in progress, the source backup cannot be deleted or cleaned up on expiration until CopyBackup is finished. Values are of the form: projects//instances//backups/.
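
A CopyBackupRequest sketch follows; the resource names and timestamp are hypothetical, and expire_time must fall between 6 hours and 366 days from the source backup's create_time.

```python
# Sketch of a CopyBackupRequest. Names and timestamp are hypothetical.
copy_backup_request = {
    "backupId": "backup-copy",  # appended to the parent to form the backup URI
    "sourceBackup": "projects/my-project/instances/my-instance/backups/source-backup",
    "expireTime": "2025-06-01T00:00:00Z",  # Timestamp format
}
```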

CreateBackupEncryptionConfig

Encryption configuration for the backup to create.
Fields
encryptionType

enum

Required. The encryption type of the backup.

Enum type. Can be one of the following:
ENCRYPTION_TYPE_UNSPECIFIED Unspecified. Do not use.
USE_DATABASE_ENCRYPTION Use the same encryption configuration as the database. This is the default option when encryption_config is empty. For example, if the database is using Customer_Managed_Encryption, the backup will be using the same Cloud KMS key as the database.
GOOGLE_DEFAULT_ENCRYPTION Use Google default encryption.
CUSTOMER_MANAGED_ENCRYPTION Use customer managed encryption. If specified, kms_key_name must contain a valid Cloud KMS key.
kmsKeyName

string

Optional. The Cloud KMS key that will be used to protect the backup. This field should be set only when encryption_type is CUSTOMER_MANAGED_ENCRYPTION. Values are of the form projects//locations//keyRings//cryptoKeys/.

kmsKeyNames[]

string

Optional. Specifies the KMS configuration for the one or more keys used to protect the backup. Values are of the form projects//locations//keyRings//cryptoKeys/. The keys referenced by kms_key_names must fully cover all regions of the backup's instance configuration. Some examples: * For regional (single-region) instance configurations, specify a regional location KMS key. * For multi-region instance configurations of type GOOGLE_MANAGED, either specify a multi-region location KMS key or multiple regional location KMS keys that cover all regions in the instance configuration. * For an instance configuration of type USER_MANAGED, specify only regional location KMS keys to cover each region in the instance configuration. Multi-region location KMS keys aren't supported for USER_MANAGED type instance configurations.

CreateBackupMetadata

Metadata type for the operation returned by CreateBackup.
Fields
cancelTime

string (Timestamp format)

The time at which cancellation of this operation was received. Operations.CancelOperation starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED.

database

string

The name of the database the backup is created from.

name

string

The name of the backup being created.

progress

object (OperationProgress)

The progress of the CreateBackup operation.

CreateDatabaseMetadata

Metadata type for the operation returned by CreateDatabase.
Fields
database

string

The database being created.

CreateDatabaseRequest

The request for CreateDatabase.
Fields
createStatement

string

Required. A CREATE DATABASE statement, which specifies the ID of the new database. The database ID must conform to the regular expression a-z*[a-z0-9] and be between 2 and 30 characters in length. If the database ID is a reserved word or if it contains a hyphen, the database ID must be enclosed in backticks (`).

databaseDialect

enum

Optional. The dialect of the Cloud Spanner Database.

Enum type. Can be one of the following:
DATABASE_DIALECT_UNSPECIFIED Default value. This value will create a database with the GOOGLE_STANDARD_SQL dialect.
GOOGLE_STANDARD_SQL GoogleSQL supported SQL.
POSTGRESQL PostgreSQL supported SQL.
encryptionConfig

object (EncryptionConfig)

Optional. The encryption configuration for the database. If this field is not specified, Cloud Spanner will encrypt/decrypt all data at rest using Google default encryption.

extraStatements[]

string

Optional. A list of DDL statements to run inside the newly created database. Statements can create tables, indexes, etc. These statements execute atomically with the creation of the database: if there is an error in any statement, the database is not created.

protoDescriptors

string (bytes format)

Optional. Proto descriptors used by CREATE/ALTER PROTO BUNDLE statements in 'extra_statements'. Contains a protobuf-serialized google.protobuf.FileDescriptorSet descriptor set. To generate it, install and run protoc with --include_imports and --descriptor_set_out. For example, to generate for moon/shot/app.proto, run:
protoc --proto_path=/app_path --proto_path=/lib_path --include_imports --descriptor_set_out=descriptors.data moon/shot/app.proto
For more details, see protobuf self-description.
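
A CreateDatabaseRequest sketch follows. The database ID contains a hyphen, so it is enclosed in backticks per the createStatement description; the extra DDL statement is a hypothetical schema.

```python
# Sketch of a CreateDatabaseRequest. The extra statement runs atomically
# with database creation; the table shown is hypothetical.
create_database_request = {
    "createStatement": "CREATE DATABASE `example-db`",
    "extraStatements": [
        "CREATE TABLE Singers (SingerId INT64 NOT NULL) PRIMARY KEY (SingerId)",
    ],
}
```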

CreateInstanceConfigMetadata

Metadata type for the operation returned by CreateInstanceConfig.
Fields
cancelTime

string (Timestamp format)

The time at which this operation was cancelled.

instanceConfig

object (InstanceConfig)

The target instance configuration end state.

progress

object (InstanceOperationProgress)

The progress of the CreateInstanceConfig operation.

CreateInstanceConfigRequest

The request for CreateInstanceConfig.
Fields
instanceConfig

object (InstanceConfig)

Required. The InstanceConfig proto of the configuration to create. instance_config.name must be /instanceConfigs/. instance_config.base_config must be a Google-managed configuration name, e.g. /instanceConfigs/us-east1, /instanceConfigs/nam3.

instanceConfigId

string

Required. The ID of the instance configuration to create. Valid identifiers are of the form custom-[-a-z0-9]*[a-z0-9] and must be between 2 and 64 characters in length. The custom- prefix is required to avoid name conflicts with Google-managed configurations.

validateOnly

boolean

An option to validate the request without actually executing it, while still providing the same response.

CreateInstanceMetadata

Metadata type for the operation returned by CreateInstance.
Fields
cancelTime

string (Timestamp format)

The time at which this operation was cancelled. If set, this operation is in the process of undoing itself (which is guaranteed to succeed) and cannot be cancelled again.

endTime

string (Timestamp format)

The time at which this operation failed or was completed successfully.

expectedFulfillmentPeriod

enum

The expected fulfillment period of this create operation.

Enum type. Can be one of the following:
FULFILLMENT_PERIOD_UNSPECIFIED Not specified.
FULFILLMENT_PERIOD_NORMAL Normal fulfillment period. The operation is expected to complete within minutes.
FULFILLMENT_PERIOD_EXTENDED Extended fulfillment period. It can take up to an hour for the operation to complete.
instance

object (Instance)

The instance being created.

startTime

string (Timestamp format)

The time at which the CreateInstance request was received.

CreateInstancePartitionMetadata

Metadata type for the operation returned by CreateInstancePartition.
Fields
cancelTime

string (Timestamp format)

The time at which this operation was cancelled. If set, this operation is in the process of undoing itself (which is guaranteed to succeed) and cannot be cancelled again.

endTime

string (Timestamp format)

The time at which this operation failed or was completed successfully.

instancePartition

object (InstancePartition)

The instance partition being created.

startTime

string (Timestamp format)

The time at which the CreateInstancePartition request was received.

CreateInstancePartitionRequest

The request for CreateInstancePartition.
Fields
instancePartition

object (InstancePartition)

Required. The instance partition to create. The instance_partition.name may be omitted, but if specified must be /instancePartitions/.

instancePartitionId

string

Required. The ID of the instance partition to create. Valid identifiers are of the form a-z*[a-z0-9] and must be between 2 and 64 characters in length.

CreateInstanceRequest

The request for CreateInstance.
Fields
instance

object (Instance)

Required. The instance to create. The name may be omitted, but if specified must be /instances/.

instanceId

string

Required. The ID of the instance to create. Valid identifiers are of the form a-z*[a-z0-9] and must be between 2 and 64 characters in length.
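
For illustration only, a CreateInstanceRequest might look like the sketch below. The Instance message is documented elsewhere; the config, displayName, and nodeCount fields shown here are assumptions based on the public Spanner API.

```python
# Sketch of a CreateInstanceRequest. The Instance fields are assumptions;
# the Instance message is documented elsewhere.
create_instance_request = {
    "instanceId": "example-instance",  # must match a-z*[a-z0-9], 2-64 chars
    "instance": {
        "config": "projects/my-project/instanceConfigs/regional-us-east1",
        "displayName": "Example instance",
        "nodeCount": 1,
    },
}
```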

CreateSessionRequest

The request for CreateSession.
Fields
session

object (Session)

Required. The session to create.

CrontabSpec

CrontabSpec can be used to specify the version time and frequency at which the backup is created.
Fields
creationWindow

string (Duration format)

Output only. Scheduled backups contain an externally consistent copy of the database at the version time specified in schedule_spec.cron_spec. However, Spanner might not initiate the creation of the scheduled backups at that version time. Spanner initiates the creation of scheduled backups within the time window bounded by the version_time specified in schedule_spec.cron_spec and version_time + creation_window.

text

string

Required. Textual representation of the crontab. Users can customize the backup frequency and the backup version time using the cron expression. The version time must be in UTC timezone. The backup will contain an externally consistent copy of the database at the version time. Full backups must be scheduled a minimum of 12 hours apart and incremental backups must be scheduled a minimum of 4 hours apart. Examples of valid cron specifications:
* 0 2/12 * * * : every 12 hours at (2, 14) hours past midnight in UTC.
* 0 2,14 * * * : every 12 hours at (2, 14) hours past midnight in UTC.
* 0 */4 * * * : (incremental backups only) every 4 hours at (0, 4, 8, 12, 16, 20) hours past midnight in UTC.
* 0 2 * * * : once a day at 2 past midnight in UTC.
* 0 2 * * 0 : once a week every Sunday at 2 past midnight in UTC.
* 0 2 8 * * : once a month on the 8th day at 2 past midnight in UTC.

timeZone

string

Output only. The time zone of the times in CrontabSpec.text. Currently, only UTC is supported.
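
Composing BackupSchedule, BackupScheduleSpec, and CrontabSpec from above, a daily full-backup schedule might look like the sketch below. The cron text is one of the documented examples; FullBackupSpec is shown as an empty object on the assumption that it carries no fields.

```python
# Sketch of a BackupSchedule that creates a full backup once a day at
# 02:00 UTC and retains each backup for 30 days.
backup_schedule = {
    "spec": {
        "cronSpec": {"text": "0 2 * * *"},  # once a day at 2 past midnight UTC
    },
    "fullBackupSpec": {},             # assumed empty marker message
    "retentionDuration": "2592000s",  # 30 days; must be 6 hours to 366 days
}
```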

Database

A Cloud Spanner database.
Fields
createTime

string (Timestamp format)

Output only. If present, the time at which the database creation started.

databaseDialect

enum

Output only. The dialect of the Cloud Spanner Database.

Enum type. Can be one of the following:
DATABASE_DIALECT_UNSPECIFIED Default value. This value will create a database with the GOOGLE_STANDARD_SQL dialect.
GOOGLE_STANDARD_SQL GoogleSQL supported SQL.
POSTGRESQL PostgreSQL supported SQL.
defaultLeader

string

Output only. The read-write region which contains the database's leader replicas. This is the same as the value of default_leader database option set using DatabaseAdmin.CreateDatabase or DatabaseAdmin.UpdateDatabaseDdl. If not explicitly set, this is empty.

earliestVersionTime

string (Timestamp format)

Output only. Earliest timestamp at which older versions of the data can be read. This value is continuously updated by Cloud Spanner and becomes stale the moment it is queried. If you are using this value to recover data, make sure to account for the time from the moment when the value is queried to the moment when you initiate the recovery.

enableDropProtection

boolean

Optional. Whether drop protection is enabled for this database. Defaults to false, if not set. For more details, please see how to prevent accidental database deletion.

encryptionConfig

object (EncryptionConfig)

Output only. For databases that are using customer managed encryption, this field contains the encryption configuration for the database. For databases that are using Google default or other types of encryption, this field is empty.

encryptionInfo[]

object (EncryptionInfo)

Output only. For databases that are using customer managed encryption, this field contains the encryption information for the database, such as all Cloud KMS key versions that are in use. The encryption_status field inside of each EncryptionInfo is not populated. For databases that are using Google default or other types of encryption, this field is empty. This field is propagated lazily from the backend. There might be a delay from when a key version is being used and when it appears in this field.

name

string

Required. The name of the database. Values are of the form projects//instances//databases/, where the database ID is as specified in the CREATE DATABASE statement. This name can be passed to other API methods to identify the database.

quorumInfo

object (QuorumInfo)

Output only. Applicable only for databases that use dual-region instance configurations. Contains information about the quorum.

reconciling

boolean

Output only. If true, the database is being updated. If false, there are no ongoing update operations for the database.

restoreInfo

object (RestoreInfo)

Output only. Applicable only for restored databases. Contains information about the restore source.

state

enum

Output only. The current database state.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Not specified.
CREATING The database is still being created. Operations on the database may fail with FAILED_PRECONDITION in this state.
READY The database is fully created and ready for use.
READY_OPTIMIZING The database is fully created and ready for use, but is still being optimized for performance and cannot handle full load. In this state, the database still references the backup it was restored from, preventing the backup from being deleted. When optimizations are complete, the full performance of the database will be restored, and the database will transition to the READY state.
versionRetentionPeriod

string

Output only. The period in which Cloud Spanner retains all versions of data for the database. This is the same as the value of version_retention_period database option set using UpdateDatabaseDdl. Defaults to 1 hour, if not set.

DatabaseRole

A Cloud Spanner database role.
Fields
name

string

Required. The name of the database role. Values are of the form projects//instances//databases//databaseRoles/, where the role name is as specified in the CREATE ROLE DDL statement.

DdlStatementActionInfo

Action information extracted from a DDL statement. This proto is used to display the brief info of the DDL statement for the operation UpdateDatabaseDdl.
Fields
action

string

The action for the DDL statement, e.g. CREATE, ALTER, DROP, GRANT, etc. This field is a non-empty string.

entityNames[]

string

The entity name(s) being operated on the DDL statement. E.g. 1. For statement "CREATE TABLE t1(...)", entity_names = ["t1"]. 2. For statement "GRANT ROLE r1, r2 ...", entity_names = ["r1", "r2"]. 3. For statement "ANALYZE", entity_names = [].

entityType

string

The entity type for the DDL statement, e.g. TABLE, INDEX, VIEW, etc. This field can be an empty string for some DDL statements, e.g. for the statement "ANALYZE", entity_type = "".

Delete

Arguments to delete operations.
Fields
keySet

object (KeySet)

Required. The primary keys of the rows within table to delete. The primary keys must be specified in the order in which they appear in the PRIMARY KEY() clause of the table's equivalent DDL statement (the DDL statement used to create the table). Delete is idempotent. The transaction will succeed even if some or all rows do not exist.

table

string

Required. The table whose rows will be deleted.
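
A Delete mutation sketch follows. The KeySet message is documented elsewhere, so the "keys" list-of-lists shape shown is an assumption; the table and key values are hypothetical.

```python
# Sketch of a Delete mutation. The KeySet shape is an assumption.
delete = {
    "table": "Singers",
    "keySet": {
        # Each entry lists the primary-key parts of one row to delete,
        # in PRIMARY KEY() order. Deleting absent rows still succeeds.
        "keys": [["1"], ["2"]],
    },
}
```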

DerivedMetric

A message representing a derived metric.
Fields
denominator

object (LocalizedString)

The name of the denominator metric. e.g. "rows".

numerator

object (LocalizedString)

The name of the numerator metric. e.g. "latency".

DiagnosticMessage

A message representing the key visualizer diagnostic messages.
Fields
info

object (LocalizedString)

Information about this diagnostic information.

metric

object (LocalizedString)

The metric.

metricSpecific

boolean

Whether this message is specific only to the current metric. By default, diagnostics are shown for all metrics, regardless of which metric is currently selected in the UI. However, occasionally a metric generates so many messages that the resulting visual clutter becomes overwhelming. In this case, setting this field to true shows the diagnostic messages for that metric only when it is the currently selected metric.

severity

enum

The severity of the diagnostic message.

Enum type. Can be one of the following:
SEVERITY_UNSPECIFIED Required default value.
INFO Lowest severity level "Info".
WARNING Middle severity level "Warning".
ERROR Severity level signaling an error "Error"
FATAL Severity level signaling a non recoverable error "Fatal"
shortMessage

object (LocalizedString)

The short message.

DirectedReadOptions

The DirectedReadOptions can be used to indicate which replicas or regions should be used for non-transactional reads or queries. DirectedReadOptions can only be specified for a read-only transaction, otherwise the API returns an INVALID_ARGUMENT error.
Fields
excludeReplicas

object (ExcludeReplicas)

Exclude_replicas indicates that specified replicas should be excluded from serving requests. Spanner doesn't route requests to the replicas in this list.

includeReplicas

object (IncludeReplicas)

Include_replicas indicates the order of replicas (as they appear in this list) to process the request. If auto_failover_disabled is set to true and all replicas are exhausted without finding a healthy replica, Spanner waits for a replica in the list to become available; requests might fail due to DEADLINE_EXCEEDED errors.
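
As a sketch, DirectedReadOptions preferring a specific region could look like the following. The ReplicaSelection fields (location, type) and the autoFailoverDisabled flag are assumptions based on the public Spanner API; these options are only valid on read-only transactions.

```python
# Sketch of DirectedReadOptions routing reads to a read-only replica.
# The ReplicaSelection fields shown are assumptions.
directed_read_options = {
    "includeReplicas": {
        "replicaSelections": [
            {"location": "us-east1", "type": "READ_ONLY"},
        ],
        "autoFailoverDisabled": False,  # allow fallback to other replicas
    },
}
```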

EncryptionConfig

Encryption configuration for a Cloud Spanner database.
Fields
kmsKeyName

string

The Cloud KMS key to be used for encrypting and decrypting the database. Values are of the form projects//locations//keyRings//cryptoKeys/.

kmsKeyNames[]

string

Specifies the KMS configuration for one or more keys used to encrypt the database. Values are of the form projects//locations//keyRings//cryptoKeys/. The keys referenced by kms_key_names must fully cover all regions of the database's instance configuration. Some examples: * For regional (single-region) instance configurations, specify a regional location KMS key. * For multi-region instance configurations of type GOOGLE_MANAGED, either specify a multi-region location KMS key or multiple regional location KMS keys that cover all regions in the instance configuration. * For an instance configuration of type USER_MANAGED, specify only regional location KMS keys to cover each region in the instance configuration. Multi-region location KMS keys aren't supported for USER_MANAGED type instance configurations.

EncryptionInfo

Encryption information for a Cloud Spanner database or backup.
Fields
encryptionStatus

object (Status)

Output only. If present, the status of a recent encrypt/decrypt call on underlying data for this database or backup. Regardless of status, data is always encrypted at rest.

encryptionType

enum

Output only. The type of encryption.

Enum type. Can be one of the following:
TYPE_UNSPECIFIED Encryption type was not specified, though data at rest remains encrypted.
GOOGLE_DEFAULT_ENCRYPTION The data is encrypted at rest with a key that is fully managed by Google. No key version or status will be populated. This is the default state.
CUSTOMER_MANAGED_ENCRYPTION The data is encrypted at rest with a key that is managed by the customer. The active version of the key, kms_key_version, will be populated, and encryption_status may be populated.
kmsKeyVersion

string

Output only. A Cloud KMS key version that is being used to protect the database or backup.

ExcludeReplicas

An ExcludeReplicas contains a repeated set of ReplicaSelection that should be excluded from serving requests.
Fields
replicaSelections[]

object (ReplicaSelection)

The directed read replica selector.

ExecuteBatchDmlRequest

The request for ExecuteBatchDml.
Fields
lastStatements

boolean

Optional. If set to true, this request marks the end of the transaction. After these statements execute, you must commit or abort the transaction. Attempts to execute any other requests against this transaction (including reads and queries) are rejected. Setting this option might cause some error reporting to be deferred until commit time (for example, validation of unique constraints). Given this, successful execution of statements shouldn't be assumed until a subsequent Commit call completes successfully.

requestOptions

object (RequestOptions)

Common options for this request.

seqno

string (int64 format)

Required. A per-transaction sequence number used to identify this request. This field makes each request idempotent such that if the request is received multiple times, at most one succeeds. The sequence number must be monotonically increasing within the transaction. If a request arrives for the first time with an out-of-order sequence number, the transaction might be aborted. Replays of previously handled requests yield the same response as the first execution.

statements[]

object (Statement)

Required. The list of statements to execute in this batch. Statements are executed serially, such that the effects of statement i are visible to statement i+1. Each statement must be a DML statement. Execution stops at the first failed statement; the remaining statements are not executed. Callers must provide at least one statement.

transaction

object (TransactionSelector)

Required. The transaction to use. Must be a read-write transaction. To protect against replays, single-use transactions are not supported. The caller must either supply an existing transaction ID or begin a new transaction.
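
An ExecuteBatchDmlRequest sketch with two DML statements follows. The Statement and TransactionSelector messages are documented elsewhere, so the "id" and "sql" shapes shown are assumptions; the SQL itself is hypothetical.

```python
# Sketch of an ExecuteBatchDmlRequest. Statements run serially and stop
# at the first failure.
execute_batch_dml_request = {
    "transaction": {"id": "dHhuLWlkLWV4YW1wbGU="},  # existing read-write txn
    "statements": [
        {"sql": "UPDATE Singers SET Active = TRUE WHERE SingerId = 1"},
        {"sql": "DELETE FROM Concerts WHERE SingerId = 1"},
    ],
    "seqno": "1",  # int64 fields are JSON strings; must increase per request
}
```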

ExecuteBatchDmlResponse

The response for ExecuteBatchDml. Contains a list of ResultSet messages, one for each DML statement that has successfully executed, in the same order as the statements in the request. If a statement fails, the status in the response body identifies the cause of the failure. To check for DML statements that failed, use the following approach: 1. Check the status in the response message. The google.rpc.Code enum value OK indicates that all statements were executed successfully. 2. If the status was not OK, check the number of result sets in the response. If the response contains N ResultSet messages, then statement N+1 in the request failed. Example 1: * Request: 5 DML statements, all executed successfully. * Response: 5 ResultSet messages, with the status OK. Example 2: * Request: 5 DML statements. The third statement has a syntax error. * Response: 2 ResultSet messages, and a syntax error (INVALID_ARGUMENT) status. The number of ResultSet messages indicates that the third statement failed, and the fourth and fifth statements were not executed.
Fields
precommitToken

object (MultiplexedSessionPrecommitToken)

Optional. A precommit token is included if the read-write transaction is on a multiplexed session. The precommit token with the highest sequence number from this transaction attempt should be passed to the Commit request for this transaction.

resultSets[]

object (ResultSet)

One ResultSet for each statement in the request that ran successfully, in the same order as the statements in the request. Each ResultSet does not contain any rows. The ResultSetStats in each ResultSet contain the number of rows modified by the statement. Only the first ResultSet in the response contains valid ResultSetMetadata.

status

object (Status)

If all DML statements are executed successfully, the status is OK. Otherwise, the error status of the first failed statement.
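
The failure-detection procedure described above translates directly into code. A minimal sketch, assuming the response has already been parsed into a dict:

```python
# Sketch of the documented check: if status is not OK, the first failed
# statement is the one after the last returned ResultSet.
def check_batch_dml(response: dict) -> None:
    status = response.get("status", {})
    if status.get("code", 0) == 0:  # google.rpc.Code OK
        print("all statements executed successfully")
        return
    failed_index = len(response.get("resultSets", []))  # zero-based
    print(f"statement {failed_index + 1} failed: {status.get('message')}")

# Example 2 from the description: five statements, third has a syntax error.
check_batch_dml({"resultSets": [{}, {}],
                 "status": {"code": 3, "message": "syntax error"}})  # INVALID_ARGUMENT
```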

ExecuteSqlRequest

The request for ExecuteSql and ExecuteStreamingSql.
Fields
dataBoostEnabled

boolean

If this is for a partitioned query and this field is set to true, the request is executed with Spanner Data Boost independent compute resources. If the field is set to true but the request doesn't set partition_token, the API returns an INVALID_ARGUMENT error.

directedReadOptions

object (DirectedReadOptions)

Directed read options for this request.

lastStatement

boolean

Optional. If set to true, this statement marks the end of the transaction. After this statement executes, you must commit or abort the transaction. Attempts to execute any other requests against this transaction (including reads and queries) are rejected. For DML statements, setting this option might cause some error reporting to be deferred until commit time (for example, validation of unique constraints). Given this, successful execution of a DML statement shouldn't be assumed until a subsequent Commit call completes successfully.

paramTypes

map (key: string, value: object (Type))

It isn't always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type BYTES and values of type STRING both appear in params as JSON strings. In these cases, you can use param_types to specify the exact SQL type for some or all of the SQL statement parameters. See the definition of Type for more information about SQL types.

params

map (key: string, value: any)

Parameter names and values that bind to placeholders in the SQL string. A parameter placeholder consists of the @ character followed by the parameter name (for example, @firstName). Parameter names must conform to the naming requirements of identifiers as specified at https://cloud.google.com/spanner/docs/lexical#identifiers. Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: "WHERE id > @msg_id AND id < @msg_id + 100" It's an error to execute a SQL statement with unbound parameters.

partitionToken

string (bytes format)

If present, results are restricted to the specified partition previously created using PartitionQuery. There must be an exact match for the values of fields common to this message and the PartitionQueryRequest message used to create this partition_token.

queryMode

enum

Used to control the amount of debugging information returned in ResultSetStats. If partition_token is set, query_mode can only be set to QueryMode.NORMAL.

Enum type. Can be one of the following:
NORMAL The default mode. Only the statement results are returned.
PLAN This mode returns only the query plan, without any results or execution statistics information.
PROFILE This mode returns the query plan, overall execution statistics, operator level execution statistics along with the results. This has a performance overhead compared to the other modes. It isn't recommended to use this mode for production traffic.
WITH_STATS This mode returns the overall (but not operator-level) execution statistics along with the results.
WITH_PLAN_AND_STATS This mode returns the query plan, overall (but not operator-level) execution statistics along with the results.
queryOptions

object (QueryOptions)

Query optimizer configuration to use for the given query.

requestOptions

object (RequestOptions)

Common options for this request.

resumeToken

string (bytes format)

If this request is resuming a previously interrupted SQL statement execution, resume_token should be copied from the last PartialResultSet yielded before the interruption. Doing this enables the new SQL statement execution to resume where the last one left off. The rest of the request parameters must exactly match the request that yielded this token.

seqno

string (int64 format)

A per-transaction sequence number used to identify this request. This field makes each request idempotent such that if the request is received multiple times, at most one succeeds. The sequence number must be monotonically increasing within the transaction. If a request arrives for the first time with an out-of-order sequence number, the transaction can be aborted. Replays of previously handled requests yield the same response as the first execution. Required for DML statements. Ignored for queries.

sql

string

Required. The SQL string.

transaction

object (TransactionSelector)

The transaction to use. For queries, if none is provided, the default is a temporary read-only transaction with strong concurrency. Standard DML statements require a read-write transaction. To protect against replays, single-use transactions are not supported. The caller must either supply an existing transaction ID or begin a new transaction. Partitioned DML requires an existing Partitioned DML transaction ID.
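
As a hedged sketch of the TransactionSelector shapes this field accepts, a DML request might begin a new read-write transaction inline:

    "transaction": { "begin": { "readWrite": {} } }

or reference an existing transaction by ID (shown here as a placeholder):

    "transaction": { "id": "..." }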

Expr

Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec.

Example (Comparison):

    title: "Summary size limit"
    description: "Determines if a summary is less than 100 chars"
    expression: "document.summary.size() < 100"

Example (Equality):

    title: "Requestor is owner"
    description: "Determines if requestor is the document owner"
    expression: "document.owner == request.auth.claims.email"

Example (Logic):

    title: "Public documents"
    description: "Determine whether the document should be publicly visible"
    expression: "document.type != 'private' && document.type != 'internal'"

Example (Data Manipulation):

    title: "Notification string"
    description: "Create a notification string with a timestamp."
    expression: "'New message received at ' + string(document.create_time)"

The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
Fields
description

string

Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.

expression

string

Textual representation of an expression in Common Expression Language syntax.

location

string

Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.

title

string

Optional. Title for the expression, i.e. a short string describing its purpose. This can be used, for example, in UIs which allow entering the expression.
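
Assembled as a JSON object, one of the examples above might look like the following sketch:

    {
      "title": "Requestor is owner",
      "description": "Determines if requestor is the document owner",
      "expression": "document.owner == request.auth.claims.email"
    }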

Field

Message representing a single field of a struct.
Fields
name

string

The name of the field. For reads, this is the column name. For SQL queries, it is the column alias (e.g., "Word" in the query "SELECT 'hello' AS Word"), or the column name (e.g., "ColName" in the query "SELECT ColName FROM Table"). Some columns might have an empty name (e.g., "SELECT UPPER(ColName)"). Note that a query result can contain multiple fields with the same name.

type

object (Type)

The type of the field.

FreeInstanceMetadata

Free instance specific metadata that is kept for tracking purposes even after an instance has been upgraded.
Fields
expireBehavior

enum

Specifies the expiration behavior of a free instance. The default of ExpireBehavior is REMOVE_AFTER_GRACE_PERIOD. This can be modified during or after creation, and before expiration.

Enum type. Can be one of the following:
EXPIRE_BEHAVIOR_UNSPECIFIED Not specified.
FREE_TO_PROVISIONED When the free instance expires, upgrade the instance to a provisioned instance.
REMOVE_AFTER_GRACE_PERIOD When the free instance expires, disable the instance, and delete it after the grace period passes if it has not been upgraded.
expireTime

string (Timestamp format)

Output only. Timestamp after which the instance will either be upgraded or scheduled for deletion after a grace period. ExpireBehavior is used to choose between upgrading or scheduling the free instance for deletion. This timestamp is set during the creation of a free instance.

upgradeTime

string (Timestamp format)

Output only. If present, the timestamp at which the free instance was upgraded to a provisioned instance.

GetDatabaseDdlResponse

The response for GetDatabaseDdl.
Fields
protoDescriptors

string (bytes format)

Proto descriptors stored in the database. Contains a protobuf-serialized google.protobuf.FileDescriptorSet. For more details, see protocol buffer self-description.

statements[]

string

A list of formatted DDL statements defining the schema of the database specified in the request.

GetIamPolicyRequest

Request message for GetIamPolicy method.
Fields
options

object (GetPolicyOptions)

OPTIONAL: A GetPolicyOptions object for specifying options to GetIamPolicy.

GetPolicyOptions

Encapsulates settings provided to GetIamPolicy.
Fields
requestedPolicyVersion

integer (int32 format)

Optional. The maximum policy version that will be used to format the policy. Valid values are 0, 1, and 3. Requests specifying an invalid value will be rejected. Requests for policies with any conditional role bindings must specify version 3. Policies with no conditional role bindings may specify any valid value or leave the field unset. The policy in the response might use the policy version that you specified, or it might use a lower policy version. For example, if you specify version 3, but the policy has no conditional role bindings, the response uses version 1. To learn which resources support conditions in their IAM policies, see the IAM documentation.
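
For example, a GetIamPolicy request body asking for the version 3 policy format might look like this minimal sketch:

    {
      "options": {
        "requestedPolicyVersion": 3
      }
    }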

IncludeReplicas

An IncludeReplicas contains a repeated set of ReplicaSelection which indicates the order in which replicas should be considered.
Fields
autoFailoverDisabled

boolean

If true, Spanner doesn't route requests to a replica outside the include_replicas list when all of the specified replicas are unavailable or unhealthy. Default value is false.

replicaSelections[]

object (ReplicaSelection)

The directed read replica selector.
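
A minimal sketch of an IncludeReplicas object, assuming a ReplicaSelection that names a location (the location value is hypothetical):

    {
      "replicaSelections": [
        { "location": "us-east4" }
      ],
      "autoFailoverDisabled": true
    }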

IndexAdvice

Recommendation to add new indexes to run queries more efficiently.
Fields
ddl[]

string

Optional. DDL statements to add new indexes that will improve the query.

improvementFactor

number (double format)

Optional. Estimated latency improvement factor. For example, if the query currently takes 500 ms to run and the estimated latency with new indexes is 100 ms, this field will be 5.

IndexedHotKey

A message representing a (sparse) collection of hot keys for specific key buckets.
Fields
sparseHotKeys

map (key: string, value: integer (int32 format))

A (sparse) mapping from key bucket index to the index of the specific hot row key for that key bucket. The index of the hot row key can be translated to the actual row key via the ScanData.VisualizationData.indexed_keys repeated field.

IndexedKeyRangeInfos

A message representing a (sparse) collection of KeyRangeInfos for specific key buckets.
Fields
keyRangeInfos

map (key: string, value: object (KeyRangeInfos))

A (sparse) mapping from key bucket index to the KeyRangeInfos for that key bucket.

Instance

An isolated set of Cloud Spanner resources on which databases can be hosted.
Fields
autoscalingConfig

object (AutoscalingConfig)

Optional. The autoscaling configuration. Autoscaling is enabled if this field is set. When autoscaling is enabled, node_count and processing_units are treated as OUTPUT_ONLY fields and reflect the current compute capacity allocated to the instance.
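
A hedged sketch of what this field might look like on an instance, assuming node-based limits in AutoscalingLimits and the utilization targets defined by AutoscalingTargets (all numbers are illustrative):

    "autoscalingConfig": {
      "autoscalingLimits": { "minNodes": 1, "maxNodes": 10 },
      "autoscalingTargets": {
        "highPriorityCpuUtilizationPercent": 65,
        "storageUtilizationPercent": 90
      }
    }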

config

string

Required. The name of the instance's configuration. Values are of the form projects//instanceConfigs/. See also InstanceConfig and ListInstanceConfigs.

createTime

string (Timestamp format)

Output only. The time at which the instance was created.

defaultBackupScheduleType

enum

Optional. Controls the default backup schedule behavior for new databases within the instance. By default, a backup schedule is created automatically when a new database is created in a new instance. Note that the AUTOMATIC value isn't permitted for free instances, as backups and backup schedules aren't supported for free instances. In the GetInstance or ListInstances response, if the value of default_backup_schedule_type isn't set, or set to NONE, Spanner doesn't create a default backup schedule for new databases in the instance.

Enum type. Can be one of the following:
DEFAULT_BACKUP_SCHEDULE_TYPE_UNSPECIFIED Not specified.
NONE A default backup schedule isn't created automatically when a new database is created in the instance.
AUTOMATIC A default backup schedule is created automatically when a new database is created in the instance. The default backup schedule creates a full backup every 24 hours. These full backups are retained for 7 days. You can edit or delete the default backup schedule once it's created.
displayName

string

Required. The descriptive name for this instance as it appears in UIs. Must be unique per project and between 4 and 30 characters in length.

edition

enum

Optional. The Edition of the current instance.

Enum type. Can be one of the following:
EDITION_UNSPECIFIED Edition not specified.
STANDARD Standard edition.
ENTERPRISE Enterprise edition.
ENTERPRISE_PLUS Enterprise Plus edition.
endpointUris[]

string

Deprecated. This field is not populated.

freeInstanceMetadata

object (FreeInstanceMetadata)

Free instance metadata. Only populated for free instances.

instanceType

enum

The InstanceType of the current instance.

Enum type. Can be one of the following:
INSTANCE_TYPE_UNSPECIFIED Not specified.
PROVISIONED Provisioned instances have dedicated resources, standard usage limits and support.
FREE_INSTANCE Free instances provide no guarantee for dedicated resources, [node_count, processing_units] should be 0. They come with stricter usage limits and limited support.
labels

map (key: string, value: string)

Cloud Labels are a flexible and lightweight mechanism for organizing cloud resources into groups that reflect a customer's organizational needs and deployment strategies. Cloud Labels can be used to filter collections of resources. They can be used to control how resource metrics are aggregated. And they can be used as arguments to policy management rules (e.g. route, firewall, load balancing, etc.).

* Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z][a-z0-9_-]{0,62}.
* Label values must be between 0 and 63 characters long and must conform to the regular expression [a-z0-9_-]{0,63}.
* No more than 64 labels can be associated with a given resource.

See https://goo.gl/xmQnxf for more information on and examples of labels. If you plan to use labels in your own code, please note that additional characters may be allowed in the future. Therefore, you are advised to use an internal label representation, such as JSON, that doesn't rely upon specific characters being disallowed. For example, representing labels as the string name + "_" + value would prove problematic if we were to allow "_" in a future release.

name

string

Required. A unique identifier for the instance, which cannot be changed after the instance is created. Values are of the form projects//instances/a-z*[a-z0-9]. The final segment of the name must be between 2 and 64 characters in length.

nodeCount

integer (int32 format)

The number of nodes allocated to this instance. At most, one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. If autoscaling is enabled, node_count is treated as an OUTPUT_ONLY field and reflects the current number of nodes allocated to the instance. This might be zero in API responses for instances that are not yet in the READY state. If the instance has varying node count across replicas (achieved by setting asymmetric_autoscaling_options in the autoscaling configuration), the node_count set here is the maximum node count across all replicas. For more information, see Compute capacity, nodes, and processing units.

processingUnits

integer (int32 format)

The number of processing units allocated to this instance. At most, one of either processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. If autoscaling is enabled, processing_units is treated as an OUTPUT_ONLY field and reflects the current number of processing units allocated to the instance. This might be zero in API responses for instances that are not yet in the READY state. If the instance has varying processing units per replica (achieved by setting asymmetric_autoscaling_options in the autoscaling configuration), the processing_units set here is the maximum processing units across all replicas. For more information, see Compute capacity, nodes and processing units.

replicaComputeCapacity[]

object (ReplicaComputeCapacity)

Output only. Lists the compute capacity per ReplicaSelection. A replica selection identifies a set of replicas with common properties. Replicas identified by a ReplicaSelection are scaled with the same compute capacity.

state

enum

Output only. The current instance state. For CreateInstance, the state must be either omitted or set to CREATING. For UpdateInstance, the state must be either omitted or set to READY.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Not specified.
CREATING The instance is still being created. Resources may not be available yet, and operations such as database creation may not work.
READY The instance is fully created and ready to do work such as creating databases.
updateTime

string (Timestamp format)

Output only. The time at which the instance was most recently updated.

InstanceConfig

A possible configuration for a Cloud Spanner instance. Configurations define the geographic placement of nodes and their replication.
Fields
baseConfig

string

Base configuration name, e.g. projects//instanceConfigs/nam3, based on which this configuration is created. Only set for user-managed configurations. base_config must refer to a configuration of type GOOGLE_MANAGED in the same project as this configuration.

configType

enum

Output only. Whether this instance configuration is a Google-managed or user-managed configuration.

Enum type. Can be one of the following:
TYPE_UNSPECIFIED Unspecified.
GOOGLE_MANAGED Google-managed configuration.
USER_MANAGED User-managed configuration.
displayName

string

The name of this instance configuration as it appears in UIs.

etag

string

etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of an instance configuration from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform instance configuration updates in order to avoid race conditions: an etag is returned in the response that contains instance configurations, and systems are expected to put that etag in the request to update the instance configuration to ensure that their change is applied to the same version of the instance configuration. If no etag is provided in the call to update the instance configuration, then the existing instance configuration is overwritten blindly.

freeInstanceAvailability

enum

Output only. Describes whether free instances are available to be created in this instance configuration.

Enum type. Can be one of the following:
FREE_INSTANCE_AVAILABILITY_UNSPECIFIED Not specified.
AVAILABLE Indicates that free instances are available to be created in this instance configuration.
UNSUPPORTED Indicates that free instances are not supported in this instance configuration.
DISABLED Indicates that free instances are currently not available to be created in this instance configuration.
QUOTA_EXCEEDED Indicates that additional free instances cannot be created in this instance configuration because the project has reached its limit of free instances.
labels

map (key: string, value: string)

Cloud Labels are a flexible and lightweight mechanism for organizing cloud resources into groups that reflect a customer's organizational needs and deployment strategies. Cloud Labels can be used to filter collections of resources. They can be used to control how resource metrics are aggregated. And they can be used as arguments to policy management rules (e.g. route, firewall, load balancing, etc.).

* Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z][a-z0-9_-]{0,62}.
* Label values must be between 0 and 63 characters long and must conform to the regular expression [a-z0-9_-]{0,63}.
* No more than 64 labels can be associated with a given resource.

See https://goo.gl/xmQnxf for more information on and examples of labels. If you plan to use labels in your own code, please note that additional characters may be allowed in the future. Therefore, you are advised to use an internal label representation, such as JSON, that doesn't rely upon specific characters being disallowed. For example, representing labels as the string name + "_" + value would prove problematic if we were to allow "_" in a future release.

leaderOptions[]

string

Allowed values of the "default_leader" schema option for databases in instances that use this instance configuration.

name

string

A unique identifier for the instance configuration. Values are of the form projects//instanceConfigs/a-z*. User instance configurations must start with custom-.

optionalReplicas[]

object (ReplicaInfo)

Output only. The available optional replicas to choose from for user-managed configurations. Populated for Google-managed configurations.

quorumType

enum

Output only. The QuorumType of the instance configuration.

Enum type. Can be one of the following:
QUORUM_TYPE_UNSPECIFIED Quorum type not specified.
REGION An instance configuration tagged with REGION quorum type forms a write quorum in a single region.
DUAL_REGION An instance configuration tagged with the DUAL_REGION quorum type forms a write quorum with exactly two read-write regions in a multi-region configuration. This instance configuration requires failover in the event of regional failures.
MULTI_REGION An instance configuration tagged with the MULTI_REGION quorum type forms a write quorum from replicas that are spread across more than one region in a multi-region configuration.
reconciling

boolean

Output only. If true, the instance configuration is being created or updated. If false, there are no ongoing operations for the instance configuration.

replicas[]

object (ReplicaInfo)

The geographic placement of nodes in this instance configuration and their replication properties. To create user-managed configurations, input replicas must include all replicas in replicas of the base_config and include one or more replicas in the optional_replicas of the base_config.

state

enum

Output only. The current instance configuration state. Applicable only for USER_MANAGED configurations.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Not specified.
CREATING The instance configuration is still being created.
READY The instance configuration is fully created and ready to be used to create instances.
storageLimitPerProcessingUnit

string (int64 format)

Output only. The storage limit in bytes per processing unit.

InstanceOperationProgress

Encapsulates progress-related information for Cloud Spanner long-running instance operations.
Fields
endTime

string (Timestamp format)

If set, the time at which this operation failed or was completed successfully.

progressPercent

integer (int32 format)

Percent completion of the operation. Values are between 0 and 100 inclusive.

startTime

string (Timestamp format)

Time the request was received.

InstancePartition

An isolated set of Cloud Spanner resources that databases can define placements on.
Fields
config

string

Required. The name of the instance partition's configuration. Values are of the form projects//instanceConfigs/. See also InstanceConfig and ListInstanceConfigs.

createTime

string (Timestamp format)

Output only. The time at which the instance partition was created.

displayName

string

Required. The descriptive name for this instance partition as it appears in UIs. Must be unique per project and between 4 and 30 characters in length.

etag

string

Used for optimistic concurrency control as a way to help prevent simultaneous updates of an instance partition from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform instance partition updates in order to avoid race conditions: an etag is returned in the response that contains instance partitions, and systems are expected to put that etag in the request to update instance partitions to ensure that their change will be applied to the same version of the instance partition. If no etag is provided in the call to update the instance partition, then the existing instance partition is overwritten blindly.

name

string

Required. A unique identifier for the instance partition. Values are of the form projects//instances//instancePartitions/a-z*[a-z0-9]. The final segment of the name must be between 2 and 64 characters in length. An instance partition's name cannot be changed after the instance partition is created.

nodeCount

integer (int32 format)

The number of nodes allocated to this instance partition. Users can set the node_count field to specify the target number of nodes allocated to the instance partition. This may be zero in API responses for instance partitions that are not yet in state READY.

processingUnits

integer (int32 format)

The number of processing units allocated to this instance partition. Users can set the processing_units field to specify the target number of processing units allocated to the instance partition. This might be zero in API responses for instance partitions that are not yet in the READY state.

referencingBackups[]

string

Output only. Deprecated: This field is not populated. The names of the backups that reference this instance partition. Referencing backups should share the parent instance. The existence of any referencing backup prevents the instance partition from being deleted.

referencingDatabases[]

string

Output only. The names of the databases that reference this instance partition. Referencing databases should share the parent instance. The existence of any referencing database prevents the instance partition from being deleted.

state

enum

Output only. The current instance partition state.

Enum type. Can be one of the following:
STATE_UNSPECIFIED Not specified.
CREATING The instance partition is still being created. Resources may not be available yet, and operations such as creating placements using this instance partition may not work.
READY The instance partition is fully created and ready to do work such as creating placements and being used in databases.
updateTime

string (Timestamp format)

Output only. The time at which the instance partition was most recently updated.

InstanceReplicaSelection

ReplicaSelection identifies replicas with common properties.
Fields
location

string

Required. Name of the location of the replicas (e.g., "us-central1").

Key

A split key.
Fields
keyParts[]

any

Required. The column values making up the split key.

KeyRange

KeyRange represents a range of rows in a table or index. A range has a start key and an end key. These keys can be open or closed, indicating if the range includes rows with that key. Keys are represented by lists, where the ith value in the list corresponds to the ith component of the table or index primary key. Individual values are encoded as described here.

For example, consider the following table definition:

    CREATE TABLE UserEvents (
      UserName STRING(MAX),
      EventDate STRING(10)
    ) PRIMARY KEY(UserName, EventDate);

The following keys name rows in this table:

    "Bob", "2014-09-23"

Since the UserEvents table's PRIMARY KEY clause names two columns, each UserEvents key has two elements; the first is the UserName, and the second is the EventDate.

Key ranges with multiple components are interpreted lexicographically by component using the table or index key's declared sort order. For example, the following range returns all events for user "Bob" that occurred in the year 2015:

    "start_closed": ["Bob", "2015-01-01"]
    "end_closed": ["Bob", "2015-12-31"]

Start and end keys can omit trailing key components. This affects the inclusion and exclusion of rows that exactly match the provided key components: if the key is closed, then rows that exactly match the provided components are included; if the key is open, then rows that exactly match are not included.

For example, the following range includes all events for "Bob" that occurred during and after the year 2000:

    "start_closed": ["Bob", "2000-01-01"]
    "end_closed": ["Bob"]

The next example retrieves all events for "Bob":

    "start_closed": ["Bob"]
    "end_closed": ["Bob"]

To retrieve events before the year 2000:

    "start_closed": ["Bob"]
    "end_open": ["Bob", "2000-01-01"]

The following range includes all rows in the table:

    "start_closed": []
    "end_closed": []

This range returns all users whose UserName begins with any character from A to C:

    "start_closed": ["A"]
    "end_open": ["D"]

This range returns all users whose UserName begins with B:

    "start_closed": ["B"]
    "end_open": ["C"]

Key ranges honor column sort order. For example, suppose a table is defined as follows:

    CREATE TABLE DescendingSortedTable (
      Key INT64,
      ...
    ) PRIMARY KEY(Key DESC);

The following range retrieves all rows with key values between 1 and 100 inclusive:

    "start_closed": ["100"]
    "end_closed": ["1"]

Note that 100 is passed as the start, and 1 is passed as the end, because Key is a descending column in the schema.
Fields
endClosed[]

any

If the end is closed, then the range includes all rows whose first len(end_closed) key columns exactly match end_closed.

endOpen[]

any

If the end is open, then the range excludes rows whose first len(end_open) key columns exactly match end_open.

startClosed[]

any

If the start is closed, then the range includes all rows whose first len(start_closed) key columns exactly match start_closed.

startOpen[]

any

If the start is open, then the range excludes rows whose first len(start_open) key columns exactly match start_open.

KeyRangeInfo

A message representing information for a key range (possibly one key).
Fields
contextValues[]

object (ContextValue)

The list of context values for this key range.

endKeyIndex

integer (int32 format)

The index of the end key in indexed_keys.

info

object (LocalizedString)

Information about this key range, for all metrics.

keysCount

string (int64 format)

The number of keys this range covers.

metric

object (LocalizedString)

The name of the metric, e.g. "latency".

startKeyIndex

integer (int32 format)

The index of the start key in indexed_keys.

timeOffset

string (Duration format)

The time offset. This is the time since the start of the time interval.

unit

object (LocalizedString)

The unit of the metric. This is an unstructured field and is passed as-is to the user.

value

number (float format)

The value of the metric.

KeyRangeInfos

A message representing a list of specific information for multiple key ranges.
Fields
infos[]

object (KeyRangeInfo)

The list of individual KeyRangeInfos.

totalSize

integer (int32 format)

The total size of the list of all KeyRangeInfos. This may be larger than the number of repeated messages above. If that is the case, this number may be used to determine how many are not being shown.

KeySet

KeySet defines a collection of Cloud Spanner keys and/or key ranges. All the keys are expected to be in the same table or index. The keys need not be sorted in any particular way. If the same key is specified multiple times in the set (for example if two ranges, two keys, or a key and a range overlap), Cloud Spanner behaves as if the key were only specified once.
Fields
all

boolean

For convenience all can be set to true to indicate that this KeySet matches all keys in the table or index. Note that any keys specified in keys or ranges are only yielded once.

keys[]

array

A list of specific keys. Entries in keys should have exactly as many elements as there are columns in the primary or index key with which this KeySet is used. Individual key values are encoded as described here.

ranges[]

object (KeyRange)

A list of key ranges. See KeyRange for more information about key range specifications.
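
Reusing the UserEvents example from KeyRange, a KeySet naming one specific row plus a range of rows might look like this sketch:

    {
      "keys": [
        ["Bob", "2014-09-23"]
      ],
      "ranges": [
        { "startClosed": ["Alice"], "endClosed": ["Alice"] }
      ]
    }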

ListBackupOperationsResponse

The response for ListBackupOperations.
Fields
nextPageToken

string

next_page_token can be sent in a subsequent ListBackupOperations call to fetch more of the matching metadata.

operations[]

object (Operation)

The list of matching backup long-running operations. Each operation's name will be prefixed by the backup's name. The operation's metadata field type metadata.type_url describes the type of the metadata. Operations returned include those that are pending or have completed/failed/canceled within the last 7 days. Operations returned are ordered by operation.metadata.value.progress.start_time in descending order starting from the most recently started operation.

ListBackupSchedulesResponse

The response for ListBackupSchedules.
Fields
backupSchedules[]

object (BackupSchedule)

The list of backup schedules for a database.

nextPageToken

string

next_page_token can be sent in a subsequent ListBackupSchedules call to fetch more of the schedules.

ListBackupsResponse

The response for ListBackups.
Fields
backups[]

object (Backup)

The list of matching backups. Backups returned are ordered by create_time in descending order, starting from the most recent create_time.

nextPageToken

string

next_page_token can be sent in a subsequent ListBackups call to fetch more of the matching backups.

ListDatabaseOperationsResponse

The response for ListDatabaseOperations.
Fields
nextPageToken

string

next_page_token can be sent in a subsequent ListDatabaseOperations call to fetch more of the matching metadata.

operations[]

object (Operation)

The list of matching database long-running operations. Each operation's name will be prefixed by the database's name. The operation's metadata field type metadata.type_url describes the type of the metadata.

ListDatabaseRolesResponse

The response for ListDatabaseRoles.
Fields
databaseRoles[]

object (DatabaseRole)

Database roles that matched the request.

nextPageToken

string

next_page_token can be sent in a subsequent ListDatabaseRoles call to fetch more of the matching roles.

ListDatabasesResponse

The response for ListDatabases.
Fields
databases[]

object (Database)

Databases that matched the request.

nextPageToken

string

next_page_token can be sent in a subsequent ListDatabases call to fetch more of the matching databases.

ListInstanceConfigOperationsResponse

The response for ListInstanceConfigOperations.
Fields
nextPageToken

string

next_page_token can be sent in a subsequent ListInstanceConfigOperations call to fetch more of the matching metadata.

operations[]

object (Operation)

The list of matching instance configuration long-running operations. Each operation's name will be prefixed by the name of the instance configuration. The operation's metadata field type metadata.type_url describes the type of the metadata.

ListInstanceConfigsResponse

The response for ListInstanceConfigs.
Fields
instanceConfigs[]

object (InstanceConfig)

The list of requested instance configurations.

nextPageToken

string

next_page_token can be sent in a subsequent ListInstanceConfigs call to fetch more of the matching instance configurations.

ListInstancePartitionOperationsResponse

The response for ListInstancePartitionOperations.
Fields
nextPageToken

string

next_page_token can be sent in a subsequent ListInstancePartitionOperations call to fetch more of the matching metadata.

operations[]

object (Operation)

The list of matching instance partition long-running operations. Each operation's name will be prefixed by the instance partition's name. The operation's metadata field type metadata.type_url describes the type of the metadata.

unreachableInstancePartitions[]

string

The list of unreachable instance partitions. It includes the names of instance partitions whose operation metadata could not be retrieved within instance_partition_deadline.

ListInstancePartitionsResponse

The response for ListInstancePartitions.
Fields
instancePartitions[]

object (InstancePartition)

The list of requested instance partitions.

nextPageToken

string

next_page_token can be sent in a subsequent ListInstancePartitions call to fetch more of the matching instance partitions.

unreachable[]

string

The list of unreachable instances or instance partitions. It includes the names of instances or instance partitions whose metadata could not be retrieved within instance_partition_deadline.

ListInstancesResponse

The response for ListInstances.
Fields
instances[]

object (Instance)

The list of requested instances.

nextPageToken

string

next_page_token can be sent in a subsequent ListInstances call to fetch more of the matching instances.

unreachable[]

string

The list of unreachable instances. It includes the names of instances whose metadata could not be retrieved within instance_deadline.

ListOperationsResponse

The response message for Operations.ListOperations.
Fields
nextPageToken

string

The standard List next-page token.

operations[]

object (Operation)

A list of operations that match the specified filter in the request.

ListScansResponse

The response for the ListScans method.
Fields
nextPageToken

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

scans[]

object (Scan)

Available scans based on the list query parameters.

ListSessionsResponse

The response for ListSessions.
Fields
nextPageToken

string

next_page_token can be sent in a subsequent ListSessions call to fetch more of the matching sessions.

sessions[]

object (Session)

The list of requested sessions.

LocalizedString

A message representing a user-facing string whose value may need to be translated before being displayed.
Fields
args

map (key: string, value: string)

A map of arguments used when creating the localized message. Keys represent parameter names which may be used by the localized version when substituting dynamic values.

message

string

The canonical English version of this message. If no token is provided or the front-end has no message associated with the token, this text will be displayed as-is.

token

string

The token identifying the message, e.g. 'METRIC_READ_CPU'. This should be unique within the service.

Metric

A message representing the actual monitoring data of a metric: values for each key bucket over time.
Fields
aggregation

enum

The aggregation function used to aggregate each key bucket.

Enum type. Can be one of the following:
AGGREGATION_UNSPECIFIED Required default value.
MAX Use the maximum of all values.
SUM Use the sum of all values.
category

object (LocalizedString)

The category of the metric, e.g. "Activity", "Alerts", "Reads", etc.

derived

object (DerivedMetric)

The references to numerator and denominator metrics for a derived metric.

displayLabel

object (LocalizedString)

The displayed label of the metric.

hasNonzeroData

boolean

Whether the metric has any non-zero data.

hotValue

number (float format)

The value that is considered hot for the metric. On a per metric basis hotness signals high utilization and something that might potentially be a cause for concern by the end user. hot_value is used to calibrate and scale visual color scales.

indexedHotKeys

map (key: string, value: object (IndexedHotKey))

The (sparse) mapping from time index to an IndexedHotKey message, representing those time intervals for which there are hot keys.

indexedKeyRangeInfos

map (key: string, value: object (IndexedKeyRangeInfos))

The (sparse) mapping from time interval index to an IndexedKeyRangeInfos message, representing those time intervals for which there are informational messages concerning key ranges.

info

object (LocalizedString)

Information about the metric.

matrix

object (MetricMatrix)

The data for the metric as a matrix.

unit

object (LocalizedString)

The unit of the metric.

visible

boolean

Whether the metric is visible to the end user.

MetricMatrix

A message representing a matrix of floats.
Fields
rows[]

object (MetricMatrixRow)

The rows of the matrix.

MetricMatrixRow

A message representing a row of a matrix of floats.
Fields
cols[]

number (float format)

The columns of the row.

MoveInstanceRequest

The request for MoveInstance.
Fields
targetConfig

string

Required. The target instance configuration to move the instance to. Values are of the form projects//instanceConfigs/.

MultiplexedSessionPrecommitToken

When a read-write transaction is executed on a multiplexed session, this precommit token is sent back to the client as a part of the Transaction message in the BeginTransaction response and also as a part of the ResultSet and PartialResultSet responses.
Fields
precommitToken

string (bytes format)

Opaque precommit token.

seqNum

integer (int32 format)

An incrementing sequence number is generated for each precommit token returned. Clients should remember the precommit token with the highest sequence number from the current transaction attempt.

Mutation

A modification to one or more Cloud Spanner rows. Mutations can be applied to a Cloud Spanner database by sending them in a Commit call.
Fields
delete

object (Delete)

Delete rows from a table. Succeeds whether or not the named rows were present.

insert

object (Write)

Insert new rows in a table. If any of the rows already exist, the write or transaction fails with error ALREADY_EXISTS.

insertOrUpdate

object (Write)

Like insert, except that if the row already exists, then its column values are overwritten with the ones provided. Any column values not explicitly written are preserved. When using insert_or_update, just as when using insert, all NOT NULL columns in the table must be given a value. This holds true even when the row already exists and will therefore actually be updated.

replace

object (Write)

Like insert, except that if the row already exists, it is deleted, and the column values provided are inserted instead. Unlike insert_or_update, this means any values not explicitly written become NULL. In an interleaved table, if you create the child table with the ON DELETE CASCADE annotation, then replacing a parent row also deletes the child rows. Otherwise, you must delete the child rows before you replace the parent row.

update

object (Write)

Update existing rows in a table. If any of the rows does not already exist, the transaction fails with error NOT_FOUND.
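
As an illustrative sketch (the table and column names are hypothetical), a single insert_or_update mutation in JSON form might look like the following; note that INT64 values are encoded as JSON strings:

    {
      "insertOrUpdate": {
        "table": "Singers",
        "columns": ["SingerId", "FirstName"],
        "values": [
          ["1", "Marc"]
        ]
      }
    }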

MutationGroup

A group of mutations to be committed together. Related mutations should be placed in a group. For example, two mutations inserting rows with the same primary key prefix in both parent and child tables are related.
Fields
mutations[]

object (Mutation)

Required. The mutations in this group.

Operation

This resource represents a long-running operation that is the result of a network API call.
Fields
done

boolean

If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.

error

object (Status)

The error result of the operation in case of failure or cancellation.

metadata

map (key: string, value: any)

Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.

name

string

The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}.

response

map (key: string, value: any)

The normal, successful response of the operation. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse.
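
Putting the fields together: clients typically poll an operation until done is true, at which point exactly one of error or response is set. A sketch of a completed operation (the name and response payload are placeholders):

    {
      "name": "operations/some-unique-id",
      "done": true,
      "response": {
        "@type": "type.googleapis.com/google.protobuf.Empty"
      }
    }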

OperationProgress

Encapsulates progress-related information for a Cloud Spanner long-running operation.
Fields
endTime

string (Timestamp format)

If set, the time at which this operation failed or was completed successfully.

progressPercent

integer (int32 format)

Percent completion of the operation. Values are between 0 and 100 inclusive.

startTime

string (Timestamp format)

Time the request was received.

OptimizeRestoredDatabaseMetadata

Metadata type for the long-running operation used to track the progress of optimizations performed on a newly restored database. This long-running operation is automatically created by the system after the successful completion of a database restore, and cannot be cancelled.
Fields
name

string

Name of the restored database being optimized.

progress

object (OperationProgress)

The progress of the post-restore optimizations.

PartialResultSet

Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result sets, large rows, and large values, but are a little trickier to consume.
Fields
chunkedValue

boolean

If true, then the final value in values is chunked, and must be combined with more values from subsequent PartialResultSets to obtain a complete field value.

last

boolean

Optional. Indicates whether this is the last PartialResultSet in the stream. The server might optionally set this field. Clients shouldn't rely on this field being set in all cases.

metadata

object (ResultSetMetadata)

Metadata about the result set, such as row type information. Only present in the first response.

precommitToken

object (MultiplexedSessionPrecommitToken)

Optional. A precommit token is included if the read-write transaction has multiplexed sessions enabled. Pass the precommit token with the highest sequence number from this transaction attempt to the Commit request for this transaction.

resumeToken

string (bytes format)

Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including resume_token. Note that executing any other transaction in the same session invalidates the token.

stats

object (ResultSetStats)

Query plan and execution statistics for the statement that produced this streaming result set. These can be requested by setting ExecuteSqlRequest.query_mode and are sent only once with the last response in the stream. This field is also present in the last response for DML statements.

values[]

any

A streamed result set consists of a stream of values, which might be split into many PartialResultSet messages to accommodate large rows and/or large values. Every N complete values defines a row, where N is equal to the number of entries in metadata.row_type.fields. Most values are encoded based on type as described here.

It's possible that the last value in values is "chunked", meaning that the rest of the value is sent in subsequent PartialResultSet(s). This is denoted by the chunked_value field. Two or more chunked values can be merged to form a complete value as follows:

* bool/number/null: can't be chunked
* string: concatenate the strings
* list: concatenate the lists. If the last element in a list is a string, list, or object, merge it with the first element in the next list by applying these rules recursively.
* object: concatenate the (field name, field value) pairs. If a field name is duplicated, then apply these rules recursively to merge the field values.

Some examples of merging:

    Strings are concatenated.
    "foo", "bar" => "foobar"

    Lists of non-strings are concatenated.
    [2, 3], [4] => [2, 3, 4]

    Lists are concatenated, but the last and first elements are merged
    because they are strings.
    ["a", "b"], ["c", "d"] => ["a", "bc", "d"]

    Lists are concatenated, but the last and first elements are merged
    because they are lists. Recursively, the last and first elements
    of the inner lists are merged because they are strings.
    ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]

    Non-overlapping object fields are combined.
    {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}

    Overlapping object fields are merged.
    {"a": "1"}, {"a": "2"} => {"a": "12"}

    Examples of merging objects containing lists of strings.
    {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}

For a more complete example, suppose a streaming SQL query is yielding a result set whose rows contain a single string field. The following PartialResultSets might be yielded:

    {
      "metadata": { ... }
      "values": ["Hello", "W"]
      "chunked_value": true
      "resume_token": "Af65..."
    }
    {
      "values": ["orl"]
      "chunked_value": true
    }
    {
      "values": ["d"]
      "resume_token": "Zx1B..."
    }

This sequence of PartialResultSets encodes two rows, one containing the field value "Hello", and a second containing the field value "World" = "W" + "orl" + "d".

Not all PartialResultSets contain a resume_token. Execution can only be resumed from a previously yielded resume_token. For the above sequence of PartialResultSets, resuming the query with "resume_token": "Af65..." yields results from the PartialResultSet with value "orl".

Partition

Information returned for each partition returned in a PartitionResponse.
Fields
partitionToken

string (bytes format)

This token can be passed to Read, StreamingRead, ExecuteSql, or ExecuteStreamingSql requests to restrict the results to those identified by this partition token.

PartitionOptions

Options for a PartitionQueryRequest and PartitionReadRequest.
Fields
maxPartitions

string (int64 format)

Note: This hint is currently ignored by PartitionQuery and PartitionRead requests. The desired maximum number of partitions to return. For example, this might be set to the number of workers available. The default for this option is currently 10,000. The maximum value is currently 200,000. This is only a hint. The actual number of partitions returned can be smaller or larger than this maximum count request.

partitionSizeBytes

string (int64 format)

Note: This hint is currently ignored by PartitionQuery and PartitionRead requests. The desired data size for each partition generated. The default for this option is currently 1 GiB. This is only a hint. The actual size of each partition can be smaller or larger than this size request.

PartitionQueryRequest

The request for PartitionQuery.
Fields
paramTypes

map (key: string, value: object (Type))

It isn't always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type BYTES and values of type STRING both appear in params as JSON strings. In these cases, param_types can be used to specify the exact SQL type for some or all of the SQL query parameters. See the definition of Type for more information about SQL types.

params

map (key: string, value: any)

Parameter names and values that bind to placeholders in the SQL string. A parameter placeholder consists of the @ character followed by the parameter name (for example, @firstName). Parameter names can contain letters, numbers, and underscores. Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: "WHERE id > @msg_id AND id < @msg_id + 100". It's an error to execute a SQL statement with unbound parameters.

partitionOptions

object (PartitionOptions)

Additional options that affect how many partitions are created.

sql

string

Required. The query request to generate partitions for. The request fails if the query isn't root partitionable. For a query to be root partitionable, it needs to satisfy a few conditions. For example, if the query execution plan contains a distributed union operator, then it must be the first operator in the plan. For more information about other conditions, see Read data in parallel. The query request must not contain DML commands, such as INSERT, UPDATE, or DELETE. Use ExecuteStreamingSql with a PartitionedDml transaction for large, partition-friendly DML operations.

transaction

object (TransactionSelector)

Read-only snapshot transactions are supported; read-write and single-use transactions are not.
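
A hedged sketch of a PartitionQuery request body combining the fields above (the SQL text is hypothetical, and maxPartitions is an int64 encoded as a JSON string):

    {
      "sql": "SELECT SingerId, FirstName FROM Singers",
      "transaction": { "begin": { "readOnly": { "strong": true } } },
      "partitionOptions": { "maxPartitions": "100" }
    }

Each returned partitionToken can then be passed to ExecuteStreamingSql, along with the same sql and transaction, to read that partition's rows in parallel.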

PartitionReadRequest

The request for PartitionRead.
Fields
columns[]

string

The columns of table to be returned for each row matching this request.

index

string

If non-empty, the name of an index on table. This index is used instead of the table primary key when interpreting key_set and sorting result rows. See key_set for further information.

keySet

object (KeySet)

Required. key_set identifies the rows to be yielded. key_set names the primary keys of the rows in table to be yielded, unless index is present. If index is present, then key_set instead names index keys in index. It isn't an error for the key_set to name rows that don't exist in the database. Read yields nothing for nonexistent rows.

partitionOptions

object (PartitionOptions)

Additional options that affect how many partitions are created.

table

string

Required. The name of the table in the database to be read.

transaction

object (TransactionSelector)

Read-only snapshot transactions are supported; read-write and single-use transactions are not.

PartitionResponse

The response for PartitionQuery or PartitionRead.
Fields
partitions[]

object (Partition)

Partitions created by this request.

transaction

object (Transaction)

Transaction created by this request.

PlanNode

Node information for nodes appearing in a QueryPlan.plan_nodes.
Fields
childLinks[]

object (ChildLink)

List of child node indexes and their relationship to this parent.

displayName

string

The display name for the node.

executionStats

map (key: string, value: any)

The execution statistics associated with the node, contained in a group of key-value pairs. Only present if the plan was returned as a result of a profile query. For example, number of executions, number of rows/time per execution etc.

index

integer (int32 format)

The PlanNode's index in node list.

kind

enum

Used to determine the type of node. May be needed for visualizing different kinds of nodes differently. For example, if the node is a SCALAR node, it will have a condensed representation which can be used to directly embed a description of the node in its parent.

Enum type. Can be one of the following:
KIND_UNSPECIFIED Not specified.
RELATIONAL Denotes a Relational operator node in the expression tree. Relational operators represent iterative processing of rows during query execution. For example, a TableScan operation that reads rows from a table.
SCALAR Denotes a Scalar node in the expression tree. Scalar nodes represent non-iterable entities in the query plan. For example, constants or arithmetic operators appearing inside predicate expressions or references to column names.
metadata

map (key: string, value: any)

Attributes relevant to the node contained in a group of key-value pairs. For example, a Parameter Reference node could have the following information in its metadata: { "parameter_reference": "param1", "parameter_type": "array" }

shortRepresentation

object (ShortRepresentation)

Condensed representation for SCALAR nodes.

Policy

An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources.

A Policy is a collection of bindings. A binding binds one or more members, or principals, to a single role. Principals can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role.

For some types of Google Cloud resources, a binding can also specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both. To learn which resources support conditions in their IAM policies, see the IAM documentation.

JSON example:

    {
      "bindings": [
        {
          "role": "roles/resourcemanager.organizationAdmin",
          "members": [
            "user:mike@example.com",
            "group:admins@example.com",
            "domain:google.com",
            "serviceAccount:my-project-id@appspot.gserviceaccount.com"
          ]
        },
        {
          "role": "roles/resourcemanager.organizationViewer",
          "members": [
            "user:eve@example.com"
          ],
          "condition": {
            "title": "expirable access",
            "description": "Does not grant access after Sep 2020",
            "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')",
          }
        }
      ],
      "etag": "BwWWja0YfJA=",
      "version": 3
    }

YAML example:

    bindings:
    - members:
      - user:mike@example.com
      - group:admins@example.com
      - domain:google.com
      - serviceAccount:my-project-id@appspot.gserviceaccount.com
      role: roles/resourcemanager.organizationAdmin
    - members:
      - user:eve@example.com
      role: roles/resourcemanager.organizationViewer
      condition:
        title: expirable access
        description: Does not grant access after Sep 2020
        expression: request.time < timestamp('2020-10-01T00:00:00.000Z')
    etag: BwWWja0YfJA=
    version: 3

For a description of IAM and its features, see the IAM documentation.

Fields
bindings[]

object (Binding)

Associates a list of members, or principals, with a role. Optionally, may specify a condition that determines how and when the bindings are applied. Each of the bindings must contain at least one principal. The bindings in a Policy can refer to up to 1,500 principals; up to 250 of these principals can be Google groups. Each occurrence of a principal counts towards these limits. For example, if the bindings grant 50 different roles to user:alice@example.com, and not to any other principal, then you can add another 1,450 principals to the bindings in the Policy.

etag

string (bytes format)

etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy. Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.

version

integer (int32 format)

Specifies the format of the policy. Valid values are 0, 1, and 3. Requests that specify an invalid value are rejected. Any operation that affects conditional role bindings must specify version 3. This requirement applies to the following operations: * Getting a policy that includes a conditional role binding * Adding a conditional role binding to a policy * Changing a conditional role binding in a policy * Removing any role binding, with or without a condition, from a policy that includes conditions Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost. If a policy does not include any conditions, operations on that policy may specify any valid version or leave the field unset. To learn which resources support conditions in their IAM policies, see the IAM documentation.

PrefixNode

A message representing a key prefix node in the key prefix hierarchy. For example, Bigtable keyspaces are lexicographically ordered mappings of keys to values. Keys often have a shared prefix structure where users use the keys to organize data. For example, ///employee. In this case, Key Visualizer may use one node for a company and reuse it for all employees that fall under the company. Doing so improves legibility in the UI.
Fields
dataSourceNode

boolean

Whether this corresponds to a data_source name.

depth

integer (int32 format)

The depth in the prefix hierarchy.

endIndex

integer (int32 format)

The index of the end key bucket of the range that this node spans.

startIndex

integer (int32 format)

The index of the start key bucket of the range that this node spans.

word

string

The string represented by the prefix node.

QueryAdvisorResult

Output of query advisor analysis.
Fields
indexAdvice[]

object (IndexAdvice)

Optional. Index Recommendation for a query. This is an optional field and the recommendation will only be available when the recommendation guarantees significant improvement in query performance.

QueryOptions

Query optimizer configuration.
Fields
optimizerStatisticsPackage

string

An option to control the selection of the optimizer statistics package. This parameter allows individual queries to use a different query optimizer statistics package. Specifying latest as a value instructs Cloud Spanner to use the latest generated statistics package. If not specified, Cloud Spanner uses the statistics package set in the database-level options, or the latest package if the database option isn't set. The statistics package requested by the query has to be exempt from garbage collection. This can be achieved with the following DDL statement: ALTER STATISTICS SET OPTIONS (allow_gc=false). The list of available statistics packages can be queried from INFORMATION_SCHEMA.SPANNER_STATISTICS. Executing a SQL statement with an invalid optimizer statistics package, or with a statistics package that allows garbage collection, fails with an INVALID_ARGUMENT error.

optimizerVersion

string

An option to control the selection of optimizer version. This parameter allows individual queries to pick different query optimizer versions. Specifying latest as a value instructs Cloud Spanner to use the latest supported query optimizer version. If not specified, Cloud Spanner uses the optimizer version set at the database level options. Any other positive integer (from the list of supported optimizer versions) overrides the default optimizer version for query execution. The list of supported optimizer versions can be queried from SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement with an invalid optimizer version fails with an INVALID_ARGUMENT error. See https://cloud.google.com/spanner/docs/query-optimizer/manage-query-optimizer for more information on managing the query optimizer. The optimizer_version statement hint has precedence over this setting.
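
As an illustration, a query request could pin both options in its queryOptions field. This is a minimal sketch; the statistics package name below is a hypothetical example of the auto-generated naming style, and real names come from INFORMATION_SCHEMA.SPANNER_STATISTICS:

    {
      "queryOptions": {
        "optimizerVersion": "latest",
        "optimizerStatisticsPackage": "auto_20240618_16_14_25UTC"
      }
    }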

QueryPlan

Contains an ordered list of nodes appearing in the query plan.
Fields
planNodes[]

object (PlanNode)

The nodes in the query plan. Plan nodes are returned in pre-order starting with the plan root. Each PlanNode's id corresponds to its index in plan_nodes.

queryAdvice

object (QueryAdvisorResult)

Optional. The advice/recommendations for a query. Currently, this field serves index recommendations for a query.

QuorumInfo

Information about the dual-region quorum.
Fields
etag

string

Output only. The etag is used for optimistic concurrency control as a way to help prevent simultaneous ChangeQuorum requests that might create a race condition.

initiator

enum

Output only. Whether this ChangeQuorum was initiated by Google or by the user.

Enum type. Can be one of the following:
INITIATOR_UNSPECIFIED Unspecified.
GOOGLE ChangeQuorum initiated by Google.
USER ChangeQuorum initiated by User.
quorumType

object (QuorumType)

Output only. The type of this quorum. See QuorumType for more information about quorum type specifications.

startTime

string (Timestamp format)

Output only. The timestamp when the request was triggered.

QuorumType

Information about the database quorum type. This only applies to dual-region instance configs.
Fields
dualRegion

object (DualRegionQuorum)

Dual-region quorum type.

singleRegion

object (SingleRegionQuorum)

Single-region quorum type.

ReadOnly

Message type to initiate a read-only transaction.
Fields
exactStaleness

string (Duration format)

Executes all reads at a timestamp that is exact_staleness old. The timestamp is chosen soon after the read is started. Guarantees that all writes that have committed more than the specified number of seconds ago are visible. Because Cloud Spanner chooses the exact timestamp, this mode works even if the client's local clock is substantially skewed from Cloud Spanner commit timestamps. Useful for reading at nearby replicas without the distributed timestamp negotiation overhead of max_staleness.

maxStaleness

string (Duration format)

Read data at a timestamp >= NOW - max_staleness seconds. Guarantees that all writes that have committed more than the specified number of seconds ago are visible. Because Cloud Spanner chooses the exact timestamp, this mode works even if the client's local clock is substantially skewed from Cloud Spanner commit timestamps. Useful for reading the freshest data available at a nearby replica, while bounding the possible staleness if the local replica has fallen behind. Note that this option can only be used in single-use transactions.

minReadTimestamp

string (Timestamp format)

Executes all reads at a timestamp >= min_read_timestamp. This is useful for requesting fresher data than some previous read, or data that is fresh enough to observe the effects of some previously committed transaction whose timestamp is known. Note that this option can only be used in single-use transactions. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

readTimestamp

string (Timestamp format)

Executes all reads at the given timestamp. Unlike other modes, reads at a specific timestamp are repeatable; the same read at the same timestamp always returns the same data. If the timestamp is in the future, the read will block until the specified timestamp, modulo the read's deadline. Useful for large scale consistent reads such as mapreduces, or for coordinating many reads against a consistent snapshot of the data. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

returnReadTimestamp

boolean

If true, the Cloud Spanner-selected read timestamp is included in the Transaction message that describes the transaction.

strong

boolean

Read at a timestamp where all previously committed transactions are visible.
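As a minimal sketch, a read-only transaction that reads at an exact staleness of ten seconds and returns the chosen read timestamp could be specified as follows (the values are illustrative):

    {
      "readOnly": {
        "exactStaleness": "10s",
        "returnReadTimestamp": true
      }
    }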

ReadRequest

The request for Read and StreamingRead.
Fields
columns[]

string

Required. The columns of table to be returned for each row matching this request.

dataBoostEnabled

boolean

If this is for a partitioned read and this field is set to true, the request is executed with Spanner Data Boost independent compute resources. If the field is set to true but the request doesn't set partition_token, the API returns an INVALID_ARGUMENT error.

directedReadOptions

object (DirectedReadOptions)

Directed read options for this request.

index

string

If non-empty, the name of an index on table. This index is used instead of the table primary key when interpreting key_set and sorting result rows. See key_set for further information.

keySet

object (KeySet)

Required. key_set identifies the rows to be yielded. key_set names the primary keys of the rows in table to be yielded, unless index is present. If index is present, then key_set instead names index keys in index. If the partition_token field is empty, rows are yielded in table primary key order (if index is empty) or index key order (if index is non-empty). If the partition_token field isn't empty, rows are yielded in an unspecified order. It isn't an error for the key_set to name rows that don't exist in the database. Read yields nothing for nonexistent rows.

limit

string (int64 format)

If greater than zero, only the first limit rows are yielded. If limit is zero, the default is no limit. A limit can't be specified if partition_token is set.

lockHint

enum

Optional. Lock hint for the request; it can only be used with read-write transactions.

Enum type. Can be one of the following:
LOCK_HINT_UNSPECIFIED Default value. LOCK_HINT_UNSPECIFIED is equivalent to LOCK_HINT_SHARED.
LOCK_HINT_SHARED Acquire shared locks. By default when you perform a read as part of a read-write transaction, Spanner acquires shared read locks, which allows other reads to still access the data until your transaction is ready to commit. When your transaction is committing and writes are being applied, the transaction attempts to upgrade to an exclusive lock for any data you are writing. For more information about locks, see Lock modes.
LOCK_HINT_EXCLUSIVE Acquire exclusive locks. Requesting exclusive locks is beneficial if you observe high write contention, which means you notice that multiple transactions are concurrently trying to read and write to the same data, resulting in a large number of aborts. This problem occurs when two transactions initially acquire shared locks and then both try to upgrade to exclusive locks at the same time. In this situation both transactions are waiting for the other to give up their lock, resulting in a deadlocked situation. Spanner is able to detect this occurring and force one of the transactions to abort. However, this is a slow and expensive operation and results in lower performance. In this case it makes sense to acquire exclusive locks at the start of the transaction because then when multiple transactions try to act on the same data, they automatically get serialized. Each transaction waits its turn to acquire the lock and avoids getting into deadlock situations. Because the exclusive lock hint is just a hint, it shouldn't be considered equivalent to a mutex. In other words, you shouldn't use Spanner exclusive locks as a mutual exclusion mechanism for the execution of code outside of Spanner. Note: Request exclusive locks judiciously because they block others from reading that data for the entire transaction, rather than just when the writes are being performed. Unless you observe high write contention, you should use the default of shared read locks so you don't prematurely block other clients from reading the data that you're writing to.
orderBy

enum

Optional. Order for the returned rows. By default, Spanner returns result rows in primary key order except for PartitionRead requests. For applications that don't require rows to be returned in primary key (ORDER_BY_PRIMARY_KEY) order, setting ORDER_BY_NO_ORDER option allows Spanner to optimize row retrieval, resulting in lower latencies in certain cases (for example, bulk point lookups).

Enum type. Can be one of the following:
ORDER_BY_UNSPECIFIED Default value. ORDER_BY_UNSPECIFIED is equivalent to ORDER_BY_PRIMARY_KEY.
ORDER_BY_PRIMARY_KEY Read rows are returned in primary key order. In the event that this option is used in conjunction with the partition_token field, the API returns an INVALID_ARGUMENT error.
ORDER_BY_NO_ORDER Read rows are returned in any order.
partitionToken

string (bytes format)

If present, results are restricted to the specified partition previously created using PartitionRead. There must be an exact match for the values of fields common to this message and the PartitionReadRequest message used to create this partition_token.

requestOptions

object (RequestOptions)

Common options for this request.

resumeToken

string (bytes format)

If this request is resuming a previously interrupted read, resume_token should be copied from the last PartialResultSet yielded before the interruption. Doing this enables the new read to resume where the last read left off. The rest of the request parameters must exactly match the request that yielded this token.

table

string

Required. The name of the table in the database to be read.

transaction

object (TransactionSelector)

The transaction to use. If none is provided, the default is a temporary read-only transaction with strong concurrency.
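Putting these fields together, a minimal sketch of a ReadRequest body, assuming a hypothetical Users table with an INT64 UserId primary key (INT64 values are encoded as JSON strings):

    {
      "table": "Users",
      "columns": ["UserId", "UserName"],
      "keySet": { "keys": [["1"], ["2"]] },
      "limit": "100",
      "transaction": { "singleUse": { "readOnly": { "strong": true } } }
    }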

ReadWrite

Message type to initiate a read-write transaction.
Fields
multiplexedSessionPreviousTransactionId

string (bytes format)

Optional. Clients should pass the transaction ID of the previous transaction attempt that was aborted if this transaction is being executed on a multiplexed session.

readLockMode

enum

Read lock mode for the transaction.

Enum type. Can be one of the following:
READ_LOCK_MODE_UNSPECIFIED Default value. * If isolation level is REPEATABLE_READ, then it is an error to specify read_lock_mode. Locking semantics default to OPTIMISTIC. No validation checks are done for reads, except to validate that the data that was served at the snapshot time is unchanged at commit time in the following cases: 1. reads done as part of queries that use SELECT FOR UPDATE 2. reads done as part of statements with a LOCK_SCANNED_RANGES hint 3. reads done as part of DML statements * At all other isolation levels, if read_lock_mode is the default value, then pessimistic read locks are used.
PESSIMISTIC Pessimistic lock mode. Read locks are acquired immediately on read. The semantics described apply only to SERIALIZABLE isolation.
OPTIMISTIC Optimistic lock mode. Locks for reads within the transaction are not acquired on read. Instead, the locks are acquired on commit to validate that the read/queried data has not changed since the transaction started. The semantics described apply only to SERIALIZABLE isolation.
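
A sketch of TransactionOptions that opts a read-write transaction into optimistic read locks:

    {
      "readWrite": {
        "readLockMode": "OPTIMISTIC"
      }
    }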

ReplicaComputeCapacity

ReplicaComputeCapacity describes the amount of server resources that are allocated to each replica identified by the replica selection.
Fields
nodeCount

integer (int32 format)

The number of nodes allocated to each replica. This may be zero in API responses for instances that are not yet in state READY.

processingUnits

integer (int32 format)

The number of processing units allocated to each replica. This may be zero in API responses for instances that are not yet in state READY.

replicaSelection

object (InstanceReplicaSelection)

Required. Identifies replicas by specified properties. All replicas in the selection have the same amount of compute capacity.

ReplicaInfo

(No description provided)
Fields
defaultLeaderLocation

boolean

If true, this location is designated as the default leader location where leader replicas are placed. See the region types documentation for more details.

location

string

The location of the serving resources, e.g., "us-central1".

type

enum

The type of replica.

Enum type. Can be one of the following:
TYPE_UNSPECIFIED Not specified.
READ_WRITE Read-write replicas support both reads and writes. These replicas: * Maintain a full copy of your data. * Serve reads. * Can vote whether to commit a write. * Participate in leadership election. * Are eligible to become a leader.
READ_ONLY Read-only replicas only support reads (not writes). Read-only replicas: * Maintain a full copy of your data. * Serve reads. * Do not participate in voting to commit writes. * Are not eligible to become a leader.
WITNESS Witness replicas don't support reads but do participate in voting to commit writes. Witness replicas: * Do not maintain a full copy of data. * Do not serve reads. * Vote whether to commit writes. * Participate in leader election but are not eligible to become leader.

ReplicaSelection

The directed read replica selector. Callers must provide one or more of the following fields for replica selection: * location - The location must be one of the regions within the multi-region configuration of your database. * type - The type of the replica. Some examples of using replica_selectors are: * location:us-east1 --> The "us-east1" replica(s) of any available type is used to process the request. * type:READ_ONLY --> The "READ_ONLY" type replica(s) in the nearest available location are used to process the request. * location:us-east1 type:READ_ONLY --> The "READ_ONLY" type replica(s) in location "us-east1" is used to process the request.
Fields
location

string

The location or region of the serving requests, for example, "us-east1".

type

enum

The type of replica.

Enum type. Can be one of the following:
TYPE_UNSPECIFIED Not specified.
READ_WRITE Read-write replicas support both reads and writes.
READ_ONLY Read-only replicas only support reads (not writes).
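
As a sketch, a replica selector is typically embedded in DirectedReadOptions (documented elsewhere in this reference); the example below assumes its includeReplicas form and targets read-only replicas in us-east1:

    {
      "includeReplicas": {
        "replicaSelections": [
          { "location": "us-east1", "type": "READ_ONLY" }
        ]
      }
    }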

RequestOptions

Common request options for various APIs.
Fields
priority

enum

Priority for the request.

Enum type. Can be one of the following:
PRIORITY_UNSPECIFIED PRIORITY_UNSPECIFIED is equivalent to PRIORITY_HIGH.
PRIORITY_LOW This specifies that the request is low priority.
PRIORITY_MEDIUM This specifies that the request is medium priority.
PRIORITY_HIGH This specifies that the request is high priority.
requestTag

string

A per-request tag which can be applied to queries or reads, used for statistics collection. Both request_tag and transaction_tag can be specified for a read or query that belongs to a transaction. This field is ignored for requests where it's not applicable (for example, CommitRequest). Legal characters for request_tag values are all printable characters (ASCII 32 - 126) and the length of a request_tag is limited to 50 characters. Values that exceed this limit are truncated. Any leading underscore (_) characters are removed from the string.

transactionTag

string

A tag used for statistics collection about this transaction. Both request_tag and transaction_tag can be specified for a read or query that belongs to a transaction. The value of transaction_tag should be the same for all requests belonging to the same transaction. If this request doesn't belong to any transaction, transaction_tag is ignored. Legal characters for transaction_tag values are all printable characters (ASCII 32 - 126) and the length of a transaction_tag is limited to 50 characters. Values that exceed this limit are truncated. Any leading underscore (_) characters are removed from the string.
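A minimal sketch with hypothetical tag values; both tags respect the 50-character limit described above:

    {
      "priority": "PRIORITY_MEDIUM",
      "requestTag": "app=concert,env=dev,action=select",
      "transactionTag": "app=concert,env=dev"
    }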

RestoreDatabaseEncryptionConfig

Encryption configuration for the restored database.
Fields
encryptionType

enum

Required. The encryption type of the restored database.

Enum type. Can be one of the following:
ENCRYPTION_TYPE_UNSPECIFIED Unspecified. Do not use.
USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION This is the default option when encryption_config is not specified.
GOOGLE_DEFAULT_ENCRYPTION Use Google default encryption.
CUSTOMER_MANAGED_ENCRYPTION Use customer-managed encryption. If specified, kms_key_name must contain a valid Cloud KMS key.
kmsKeyName

string

Optional. The Cloud KMS key that will be used to encrypt/decrypt the restored database. This field should be set only when encryption_type is CUSTOMER_MANAGED_ENCRYPTION. Values are of the form projects//locations//keyRings//cryptoKeys/.

kmsKeyNames[]

string

Optional. Specifies the KMS configuration for one or more keys used to encrypt the database. Values have the form projects//locations//keyRings//cryptoKeys/. The keys referenced by kms_key_names must fully cover all regions of the database's instance configuration. Some examples: * For regional (single-region) instance configurations, specify a regional location KMS key. * For multi-region instance configurations of type GOOGLE_MANAGED, either specify a multi-region location KMS key or multiple regional location KMS keys that cover all regions in the instance configuration. * For an instance configuration of type USER_MANAGED, specify only regional location KMS keys to cover each region in the instance configuration. Multi-region location KMS keys aren't supported for USER_MANAGED type instance configurations.

RestoreDatabaseMetadata

Metadata type for the long-running operation returned by RestoreDatabase.
Fields
backupInfo

object (BackupInfo)

Information about the backup used to restore the database.

cancelTime

string (Timestamp format)

The time at which cancellation of this operation was received. Operations.CancelOperation starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED.

name

string

Name of the database being created and restored to.

optimizeDatabaseOperationName

string

If exists, the name of the long-running operation that will be used to track the post-restore optimization process to optimize the performance of the restored database, and remove the dependency on the restore source. The name is of the form projects//instances//databases//operations/ where the is the name of database being created and restored to. The metadata type of the long-running operation is OptimizeRestoredDatabaseMetadata. This long-running operation will be automatically created by the system after the RestoreDatabase long-running operation completes successfully. This operation will not be created if the restore was not successful.

progress

object (OperationProgress)

The progress of the RestoreDatabase operation.

sourceType

enum

The type of the restore source.

Enum type. Can be one of the following:
TYPE_UNSPECIFIED No restore associated.
BACKUP A backup was used as the source of the restore.

RestoreDatabaseRequest

The request for RestoreDatabase.
Fields
backup

string

Name of the backup from which to restore. Values are of the form projects//instances//backups/.

databaseId

string

Required. The id of the database to create and restore to. This database must not already exist. The database_id appended to parent forms the full database name of the form projects//instances//databases/.

encryptionConfig

object (RestoreDatabaseEncryptionConfig)

Optional. An encryption configuration describing the encryption type and key resources in Cloud KMS used to encrypt/decrypt the database to restore to. If this field is not specified, the restored database will use the same encryption configuration as the backup by default, namely encryption_type = USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION.
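A minimal sketch of a restore request, using hypothetical project, instance, backup, and database names and letting the encryption configuration default to that of the backup:

    {
      "databaseId": "restored-db",
      "backup": "projects/my-project/instances/my-instance/backups/my-backup"
    }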

RestoreInfo

Information about the database restore.
Fields
backupInfo

object (BackupInfo)

Information about the backup used to restore the database. The backup may no longer exist.

sourceType

enum

The type of the restore source.

Enum type. Can be one of the following:
TYPE_UNSPECIFIED No restore associated.
BACKUP A backup was used as the source of the restore.

ResultSet

Results from Read or ExecuteSql.
Fields
metadata

object (ResultSetMetadata)

Metadata about the result set, such as row type information.

precommitToken

object (MultiplexedSessionPrecommitToken)

Optional. A precommit token is included if the read-write transaction is on a multiplexed session. Pass the precommit token with the highest sequence number from this transaction attempt to the Commit request for this transaction.

rows[]

array

Each element in rows is a row whose format is defined by metadata.row_type. The ith element in each row matches the ith field in metadata.row_type. Elements are encoded based on type, as described in Type.

stats

object (ResultSetStats)

Query plan and execution statistics for the SQL statement that produced this result set. These can be requested by setting ExecuteSqlRequest.query_mode. DML statements always produce stats containing the number of rows modified, unless executed using the ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode. Other fields might or might not be populated, based on the ExecuteSqlRequest.query_mode.

ResultSetMetadata

Metadata about a ResultSet or PartialResultSet.
Fields
rowType

object (StructType)

Indicates the field names and types for the rows in the result set. For example, a SQL query like "SELECT UserId, UserName FROM Users" could return a row_type value like:

    "fields": [
      { "name": "UserId", "type": { "code": "INT64" } },
      { "name": "UserName", "type": { "code": "STRING" } }
    ]

transaction

object (Transaction)

If the read or SQL query began a transaction as a side-effect, the information about the new transaction is yielded here.

undeclaredParameters

object (StructType)

A SQL query can be parameterized. In PLAN mode, these parameters can be undeclared. This indicates the field names and types for those undeclared parameters in the SQL query. For example, a SQL query like "SELECT * FROM Users WHERE UserId = @userId AND UserName = @userName" could return an undeclared_parameters value like:

    "fields": [
      { "name": "UserId", "type": { "code": "INT64" } },
      { "name": "UserName", "type": { "code": "STRING" } }
    ]

ResultSetStats

Additional statistics about a ResultSet or PartialResultSet.
Fields
queryPlan

object (QueryPlan)

QueryPlan for the query associated with this result.

queryStats

map (key: string, value: any)

Aggregated statistics from the execution of the query. Only present when the query is profiled. For example, a query could return the statistics as follows:

    {
      "rows_returned": "3",
      "elapsed_time": "1.22 secs",
      "cpu_time": "1.19 secs"
    }

rowCountExact

string (int64 format)

Standard DML returns an exact count of rows that were modified.

rowCountLowerBound

string (int64 format)

Partitioned DML doesn't offer exactly-once semantics, so it returns a lower bound of the rows modified.

RollbackRequest

The request for Rollback.
Fields
transactionId

string (bytes format)

Required. The transaction to roll back.

Scan

Scan is a structure which describes Cloud Key Visualizer scan information.
Fields
details

map (key: string, value: any)

Additional information provided by the implementer.

endTime

string (Timestamp format)

The upper bound for when the scan is defined.

name

string

The unique name of the scan, specific to the Database service implementing this interface.

scanData

object (ScanData)

Output only. Cloud Key Visualizer scan data. Note, this field is not available to the ListScans method.

startTime

string (Timestamp format)

The lower bound (inclusive) of the time range for which the scan is defined.

ScanData

ScanData contains Cloud Key Visualizer scan data used by the caller to construct a visualization.
Fields
data

object (VisualizationData)

Cloud Key Visualizer scan data. The range of time this information covers is captured via the above time range fields. Note, this field is not available to the ListScans method.

endTime

string (Timestamp format)

The upper bound for when the contained data is defined.

startTime

string (Timestamp format)

The lower bound (inclusive) of the time range for which the contained data is defined.

Session

A session in the Cloud Spanner API.
Fields
approximateLastUseTime

string (Timestamp format)

Output only. The approximate timestamp when the session was last used. It's typically earlier than the actual last-use time.

createTime

string (Timestamp format)

Output only. The timestamp when the session is created.

creatorRole

string

The database role which created this session.

labels

map (key: string, value: string)

The labels for the session. * Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?. * Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?. * No more than 64 labels can be associated with a given session. See https://goo.gl/xmQnxf for more information on and examples of labels.

multiplexed

boolean

Optional. If true, specifies a multiplexed session. Use a multiplexed session for multiple, concurrent read-only operations. Don't use them for read-write transactions, partitioned reads, or partitioned queries. Use sessions.create to create multiplexed sessions. Don't use BatchCreateSessions to create a multiplexed session. You can't delete or list multiplexed sessions.

name

string

Output only. The name of the session. This is always system-assigned.

SetIamPolicyRequest

Request message for SetIamPolicy method.
Fields
policy

object (Policy)

REQUIRED: The complete policy to be applied to the resource. The size of the policy is limited to a few tens of KB. An empty policy is a valid policy, but certain Google Cloud services (such as Projects) might reject it.

ShortRepresentation

Condensed representation of a node and its subtree. Only present for SCALAR PlanNode(s).
Fields
description

string

A string representation of the expression subtree rooted at this node.

subqueries

map (key: string, value: integer (int32 format))

A mapping of (subquery variable name) -> (subquery node id) for cases where the description string of this node references a SCALAR subquery contained in the expression subtree rooted at this node. The referenced SCALAR subquery may not necessarily be a direct child of this node.

SingleRegionQuorum

Message type for a single-region quorum.
Fields
servingLocation

string

Required. The location of the serving region, e.g. "us-central1". The location must be one of the regions within the dual-region instance configuration of your database. The list of valid locations is available using the GetInstanceConfig API. This should only be used if you plan to change quorum to the single-region quorum type.

SplitPoints

The split points of a table/index.
Fields
expireTime

string (Timestamp format)

Optional. The expiration timestamp of the split points. A timestamp in the past means immediate expiration. The maximum value can be 30 days in the future. Defaults to 10 days in the future if not specified.

index

string

The index to split. If specified, the table field must refer to the index's base table.

keys[]

object (Key)

Required. The list of split keys, i.e., the split boundaries.

table

string

The table to split.
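A sketch of a single split point on a hypothetical Users table keyed by an INT64 column (encoded as a JSON string); the keyParts field is assumed here from the Key type documented elsewhere in this reference:

    {
      "table": "Users",
      "keys": [
        { "keyParts": ["42"] }
      ]
    }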

Statement

A single DML statement.
Fields
paramTypes

map (key: string, value: object (Type))

It isn't always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type BYTES and values of type STRING both appear in params as JSON strings. In these cases, param_types can be used to specify the exact SQL type for some or all of the SQL statement parameters. See the definition of Type for more information about SQL types.

params

map (key: string, value: any)

Parameter names and values that bind to placeholders in the DML string. A parameter placeholder consists of the @ character followed by the parameter name (for example, @firstName). Parameter names can contain letters, numbers, and underscores. Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: "WHERE id > @msg_id AND id < @msg_id + 100" It's an error to execute a SQL statement with unbound parameters.

sql

string

Required. The DML string.
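A sketch of a parameterized DML statement against a hypothetical Users table; the INT64 parameter value is passed as a JSON string, with its exact type pinned via param_types:

    {
      "sql": "UPDATE Users SET Status = @status WHERE UserId = @userId",
      "params": { "status": "ACTIVE", "userId": "42" },
      "paramTypes": { "userId": { "code": "INT64" } }
    }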

Status

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
Fields
code

integer (int32 format)

The status code, which should be an enum value of google.rpc.Code.

details[]

object

A list of messages that carry the error details. There is a common set of message types for APIs to use.

message

string

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

StructType

StructType defines the fields of a STRUCT type.
Fields
fields[]

object (Field)

The list of fields that make up this struct. Order is significant, because values of this struct type are represented as lists, where the order of field values matches the order of fields in the StructType. In turn, the order of fields matches the order of columns in a read request, or the order of fields in the SELECT clause of a query.

TestIamPermissionsRequest

Request message for TestIamPermissions method.
Fields
permissions[]

string

REQUIRED: The set of permissions to check for 'resource'. Permissions with wildcards (such as '*', 'spanner.*', 'spanner.instances.*') are not allowed.

TestIamPermissionsResponse

Response message for TestIamPermissions method.
Fields
permissions[]

string

A subset of TestIamPermissionsRequest.permissions that the caller is allowed.

Transaction

A transaction.
Fields
id

string (bytes format)

id may be used to identify the transaction in subsequent Read, ExecuteSql, Commit, or Rollback calls. Single-use read-only transactions do not have IDs, because single-use transactions do not support multiple requests.

precommitToken

object (MultiplexedSessionPrecommitToken)

A precommit token will be included in the response of a BeginTransaction request if the read-write transaction is on a multiplexed session and a mutation_key was specified in the BeginTransaction. The precommit token with the highest sequence number from this transaction attempt should be passed to the Commit request for this transaction.

readTimestamp

string (Timestamp format)

For snapshot read-only transactions, the read timestamp chosen for the transaction. Not returned by default: see TransactionOptions.ReadOnly.return_read_timestamp. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

TransactionOptions

Transactions:

Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction.

Transaction modes:

Cloud Spanner supports three transaction modes:

  1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry.

  2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner selects a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details.

  3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed.

For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed.

Transactions may only read/write data in a single database. They may, however, read/write data in different tables within that database.

Locking read-write transactions:

Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent.

Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by sessions.commit or sessions.rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.

Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by sessions.commit. At any time before sessions.commit, the client can send a sessions.rollback request to abort the transaction.

Semantics:

Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns ABORTED, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner.

Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves.

Retrying aborted transactions:

When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous.

Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an ABORTED error.

Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying.

Idle transactions:

A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error ABORTED.

If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, SELECT 1) prevents the transaction from becoming idle.

Snapshot read-only transactions:

Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes.

Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions.

Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice.

Snapshot read-only transactions do not need to call sessions.commit or sessions.rollback (and in fact are not permitted to do so).

To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp.

The types of timestamp bound are:

  • Strong (the default).
  • Bounded staleness.
  • Exact staleness.

If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica.

Each type of timestamp bound is discussed in detail below.

Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.

Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp.

Queries on change streams (see below for more details) must also specify the strong read timestamp bound.

See TransactionOptions.ReadOnly.strong.

Exact staleness:

These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished.

The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time.

These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results.

See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness.

Bounded staleness:

Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking.

All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results.

Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp.

As a result of the two phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica.

Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions.

See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp.

Old read timestamps and garbage collection:

Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error FAILED_PRECONDITION.

You can configure and extend the VERSION_RETENTION_PERIOD of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past.

Querying change streams:

A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database.

When a change stream is created, Spanner automatically defines a corresponding SQL table-valued function (TVF) that can be used to query the change records in the associated change stream using the sessions.executeStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_<change_stream_name>.

All queries on change stream TVFs must be executed using the sessions.executeStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries.

In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries.

Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs.
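
As a hedged sketch (the change stream name is hypothetical, and the TVF argument list shown here is assumed from the linked documentation: start timestamp, end timestamp, partition token, and heartbeat interval in milliseconds), a change stream query might be issued as:

    {
      "transaction": {
        "singleUse": { "readOnly": { "strong": true, "returnReadTimestamp": true } }
      },
      "sql": "SELECT ChangeRecord FROM READ_UserChangeStream(@startTimestamp, @endTimestamp, NULL, 300000)",
      "params": {
        "startTimestamp": "2024-01-01T00:00:00Z",
        "endTimestamp": "2024-01-01T00:10:00Z"
      },
      "paramTypes": {
        "startTimestamp": { "code": "TIMESTAMP" },
        "endTimestamp": { "code": "TIMESTAMP" }
      }
    }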

Partitioned DML transactions:

Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions.

Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another.

To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time.

That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions.

  • The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table.

  • The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows.

  • Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement should be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as UPDATE table SET column = column + 1 as it could be run multiple times against some rows.

  • The partitions are committed automatically - there is no support for sessions.commit or sessions.rollback. If the call returns an error, or if the client issuing the sessions.executeSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that statement was never executed against other rows.

  • Partitioned DML transactions may only contain the execution of a single DML statement via sessions.executeSql or sessions.executeStreamingSql.

  • If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all.

Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table.

Fields
excludeTxnFromChangeStreams

boolean

When exclude_txn_from_change_streams is set to true: * Modifications from this transaction will not be recorded in change streams with DDL option allow_txn_exclusion=true that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option allow_txn_exclusion=false or not set that are tracking columns modified by these transactions. When exclude_txn_from_change_streams is set to false or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. exclude_txn_from_change_streams may only be specified for read-write or partitioned DML transactions; otherwise, the API returns an INVALID_ARGUMENT error.

isolationLevel

enum

Isolation level for the transaction.

Enum type. Can be one of the following:
ISOLATION_LEVEL_UNSPECIFIED Default value. If the value is not specified, the SERIALIZABLE isolation level is used.
SERIALIZABLE All transactions appear as if they executed in a serial order, even if some of the reads, writes, and other operations of distinct transactions actually occurred in parallel. Spanner assigns commit timestamps that reflect the order of committed transactions to implement this property. Spanner offers a stronger guarantee than serializability called external consistency. For further details, please refer to https://cloud.google.com/spanner/docs/true-time-external-consistency#serializability.
REPEATABLE_READ All reads performed during the transaction observe a consistent snapshot of the database, and the transaction will only successfully commit in the absence of conflicts between its updates and any concurrent updates that have occurred since that snapshot. Consequently, in contrast to SERIALIZABLE transactions, only write-write conflicts are detected in snapshot transactions. This isolation level does not support Read-only and Partitioned DML transactions. When REPEATABLE_READ is specified on a read-write transaction, the locking semantics default to OPTIMISTIC.
partitionedDml

object (PartitionedDml)

Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires spanner.databases.beginPartitionedDmlTransaction permission on the session resource.

readOnly

object (ReadOnly)

Transaction will not write. Authorization to begin a read-only transaction requires spanner.databases.beginReadOnlyTransaction permission on the session resource.

readWrite

object (ReadWrite)

Transaction may write. Authorization to begin a read-write transaction requires spanner.databases.beginOrRollbackReadWriteTransaction permission on the session resource.

TransactionSelector

This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions.
Fields
begin

object (TransactionOptions)

Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction.

id

string (bytes format)

Execute the read or SQL query in a previously-started transaction.

singleUse

object (TransactionOptions)

Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query.
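For example, a minimal sketch of the single-use form, which is the most efficient way to run one strong read or query:

    {
      "singleUse": { "readOnly": { "strong": true } }
    }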

Type

Type indicates the type of a Cloud Spanner value, as might be stored in a table cell or returned from an SQL query.
Fields
arrayElementType

object (Type)

If code == ARRAY, then array_element_type is the type of the array elements.

code

enum

Required. The TypeCode for this type.

Enum type. Can be one of the following:
TYPE_CODE_UNSPECIFIED Not specified.
BOOL Encoded as JSON true or false.
INT64 Encoded as string, in decimal format.
FLOAT64 Encoded as number, or the strings "NaN", "Infinity", or "-Infinity".
FLOAT32 Encoded as number, or the strings "NaN", "Infinity", or "-Infinity".
TIMESTAMP Encoded as string in RFC 3339 timestamp format. The time zone must be present, and must be "Z". If the schema has the column option allow_commit_timestamp=true, the placeholder string "spanner.commit_timestamp()" can be used to instruct the system to insert the commit timestamp associated with the transaction commit.
DATE Encoded as string in RFC 3339 date format.
STRING Encoded as string.
BYTES Encoded as a base64-encoded string, as described in RFC 4648, section 4.
ARRAY Encoded as list, where the list elements are represented according to array_element_type.
STRUCT Encoded as list, where list element i is represented according to [struct_type.fields[i]][google.spanner.v1.StructType.fields].
NUMERIC Encoded as string, in decimal format or scientific notation format. Decimal format: [+-]Digits[.[Digits]] or +-.Digits Scientific notation: [+-]Digits[.[Digits]][ExponentIndicator[+-]Digits] or +-.Digits[ExponentIndicator[+-]Digits] (ExponentIndicator is "e" or "E")
JSON Encoded as a JSON-formatted string as described in RFC 7159. The following rules are applied when parsing JSON input: - Whitespace characters are not preserved. - If a JSON object has duplicate keys, only the first key is preserved. - Members of a JSON object are not guaranteed to have their order preserved. - JSON array elements will have their order preserved.
PROTO Encoded as a base64-encoded string, as described in RFC 4648, section 4.
ENUM Encoded as string, in decimal format.
INTERVAL Encoded as string, in ISO8601 duration format - P[n]Y[n]M[n]DT[n]H[n]M[n[.fraction]]S where n is an integer. For example, P1Y2M3DT4H5M6.5S represents time duration of 1 year, 2 months, 3 days, 4 hours, 5 minutes, and 6.5 seconds.
protoTypeFqn

string

If code == PROTO or code == ENUM, then proto_type_fqn is the fully qualified name of the proto type representing the proto/enum definition.

structType

object (StructType)

If code == STRUCT, then struct_type provides type information for the struct's fields.

typeAnnotation

enum

The TypeAnnotationCode that disambiguates SQL type that Spanner will use to represent values of this type during query processing. This is necessary for some type codes because a single TypeCode can be mapped to different SQL types depending on the SQL dialect. type_annotation typically is not needed to process the content of a value (it doesn't affect serialization) and clients can ignore it on the read path.

Enum type. Can be one of the following:
TYPE_ANNOTATION_CODE_UNSPECIFIED Not specified.
PG_NUMERIC PostgreSQL compatible NUMERIC type. This annotation needs to be applied to Type instances having NUMERIC type code to specify that values of this type should be treated as PostgreSQL NUMERIC values. Currently this annotation is always needed for NUMERIC when a client interacts with PostgreSQL-enabled Spanner databases.
PG_JSONB PostgreSQL compatible JSONB type. This annotation needs to be applied to Type instances having JSON type code to specify that values of this type should be treated as PostgreSQL JSONB values. Currently this annotation is always needed for JSON when a client interacts with PostgreSQL-enabled Spanner databases.
PG_OID PostgreSQL compatible OID type. This annotation can be used by a client interacting with PostgreSQL-enabled Spanner database to specify that a value should be treated using the semantics of the OID type.
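
Combining these fields, a sketch of a nested type: an ARRAY whose elements are STRUCTs with two fields:

    {
      "code": "ARRAY",
      "arrayElementType": {
        "code": "STRUCT",
        "structType": {
          "fields": [
            { "name": "UserId", "type": { "code": "INT64" } },
            { "name": "UserName", "type": { "code": "STRING" } }
          ]
        }
      }
    }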

UpdateDatabaseDdlMetadata

Metadata type for the operation returned by UpdateDatabaseDdl.
Fields
actions[]

object (DdlStatementActionInfo)

The brief action info for the DDL statements. actions[i] is the brief info for statements[i].

commitTimestamps[]

string (Timestamp format)

Reports the commit timestamps of all statements that have succeeded so far, where commit_timestamps[i] is the commit timestamp for the statement statements[i].

database

string

The database being modified.

progress[]

object (OperationProgress)

The progress of the UpdateDatabaseDdl operations. All DDL statements will have continuously updating progress, and progress[i] is the operation progress for statements[i]. Also, progress[i] will have its start and end time populated with the commit timestamp of the operation, as well as a progress of 100%, once the operation has completed.

statements[]

string

For an update this list contains all the statements. For an individual statement, this list contains only that statement.

throttled

boolean

Output only. When true, indicates that the operation is throttled, e.g., due to resource constraints. When resources become available, the operation resumes and this field becomes false again.
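
As a rough sketch of how a client might line these parallel arrays up: assume metadata holds the decoded JSON metadata of the long-running operation. The helper below is illustrative, not part of any client library.

def summarize_ddl_progress(metadata):
    # statements[i], actions[i], commitTimestamps[i], and progress[i] all
    # describe the same DDL statement; commitTimestamps only covers the
    # statements that have succeeded so far.
    statements = metadata.get("statements", [])
    commits = metadata.get("commitTimestamps", [])
    progress = metadata.get("progress", [])
    for i, stmt in enumerate(statements):
        pct = progress[i].get("progressPercent") if i < len(progress) else None
        committed = commits[i] if i < len(commits) else "not yet committed"
        print(f"statement {i}: {pct}% done, commit: {committed}\n  {stmt}")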

UpdateDatabaseDdlRequest

Enqueues the given DDL statements to be applied, in order but not necessarily all at once, to the database schema at some point (or points) in the future. The server checks that the statements are executable (syntactically valid, name tables that exist, etc.) before enqueueing them, but they may still fail upon later execution (e.g., if a statement from another batch of statements is applied first and it conflicts in some way, or if there is some data-related problem like a NULL value in a column to which NOT NULL would be added). If a statement fails, all subsequent statements in the batch are automatically cancelled. Each batch of statements is assigned a name which can be used with the Operations API to monitor progress. See the operation_id field for more details.
Fields
operationId

string

If empty, the new update request is assigned an automatically-generated operation ID. Otherwise, operation_id is used to construct the name of the resulting Operation. Specifying an explicit operation ID simplifies determining whether the statements were executed in the event that the UpdateDatabaseDdl call is replayed, or the return value is otherwise lost: the database and operation_id fields can be combined to form the name of the resulting longrunning.Operation: <database>/operations/<operation_id>. operation_id should be unique within the database, and must be a valid identifier: [a-z][a-z0-9_]*. Note that automatically-generated operation IDs always begin with an underscore. If the named operation already exists, UpdateDatabaseDdl returns ALREADY_EXISTS.

protoDescriptors

string (bytes format)

Optional. Proto descriptors used by CREATE/ALTER PROTO BUNDLE statements. Contains a protobuf-serialized google.protobuf.FileDescriptorSet. To generate it, install and run protoc with --include_imports and --descriptor_set_out. For example, to generate for moon/shot/app.proto, run:

protoc --proto_path=/app_path --proto_path=/lib_path \
  --include_imports \
  --descriptor_set_out=descriptors.data \
  moon/shot/app.proto

For more details, see protocol buffer self-description.

statements[]

string

Required. DDL statements to be applied to the database.
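
As an illustrative sketch, here is a request body as a plain Python dict mirroring the JSON representation; the DDL text and operation ID are hypothetical.

update_ddl_request = {
    "statements": [
        "CREATE TABLE Singers (SingerId INT64 NOT NULL, Name STRING(100)) PRIMARY KEY (SingerId)",
        "CREATE INDEX SingersByName ON Singers(Name)",
    ],
    # Optional but useful for retries: replaying the call with the same
    # operation ID returns ALREADY_EXISTS instead of enqueueing the DDL twice.
    # Must match [a-z][a-z0-9_]* and be unique within the database.
    "operationId": "add_singers_schema",
}

If the CREATE TABLE statement fails on execution, the CREATE INDEX statement in the same batch is automatically cancelled, per the semantics above.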

UpdateDatabaseMetadata

Metadata type for the operation returned by UpdateDatabase.
Fields
cancelTime

string (Timestamp format)

The time at which this operation was cancelled. If set, this operation is in the process of undoing itself (which is best-effort).

progress

object (OperationProgress)

The progress of the UpdateDatabase operation.

request

object (UpdateDatabaseRequest)

The request for UpdateDatabase.

UpdateDatabaseRequest

The request for UpdateDatabase.
Fields
database

object (Database)

Required. The database to update. The name field of the database is of the form projects/<project>/instances/<instance>/databases/<database>.

updateMask

string (FieldMask format)

Required. The list of fields to update. Currently, only the enable_drop_protection field can be updated.
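
A minimal sketch of the request body, assuming hypothetical project, instance, and database names; since enable_drop_protection is currently the only updatable field, the mask names exactly that path.

update_database_request = {
    "database": {
        "name": "projects/my-project/instances/my-instance/databases/my-db",
        "enableDropProtection": True,
    },
    # FieldMask in JSON form: a comma-separated string of field paths.
    "updateMask": "enableDropProtection",
}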

UpdateInstanceConfigMetadata

Metadata type for the operation returned by UpdateInstanceConfig.
Fields
cancelTime

string (Timestamp format)

The time at which this operation was cancelled.

instanceConfig

object (InstanceConfig)

The desired instance configuration after updating.

progress

object (InstanceOperationProgress)

The progress of the UpdateInstanceConfig operation.

UpdateInstanceConfigRequest

The request for UpdateInstanceConfig.
Fields
instanceConfig

object (InstanceConfig)

Required. The user instance configuration to update, which must always include the instance configuration name. Otherwise, only the fields mentioned in update_mask need to be included. To prevent conflicts between concurrent updates, the etag field can be used.

updateMask

string (FieldMask format)

Required. A mask specifying which fields in InstanceConfig should be updated. The field mask must always be specified; this prevents any future fields in InstanceConfig from being erased accidentally by clients that do not know about them. Only display_name and labels can be updated.

validateOnly

boolean

An option to validate, but not actually execute, the request, while still returning the same response.
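
For illustration, a sketch of a request that renames a custom instance configuration and dry-runs it first; the configuration name and etag are hypothetical.

update_instance_config_request = {
    "instanceConfig": {
        "name": "projects/my-project/instanceConfigs/custom-nam11-config",
        "displayName": "Custom NAM11 config (v2)",
        "etag": "abc123",  # hypothetical; guards against concurrent updates
    },
    "updateMask": "displayName",  # only display_name and labels are updatable
    "validateOnly": True,         # check the request without applying it
}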

UpdateInstanceMetadata

Metadata type for the operation returned by UpdateInstance.
Fields
cancelTime

string (Timestamp format)

The time at which this operation was cancelled. If set, this operation is in the process of undoing itself (which is guaranteed to succeed) and cannot be cancelled again.

endTime

string (Timestamp format)

The time at which this operation failed or was completed successfully.

expectedFulfillmentPeriod

enum

The expected fulfillment period of this update operation.

Enum type. Can be one of the following:
FULFILLMENT_PERIOD_UNSPECIFIED Not specified.
FULFILLMENT_PERIOD_NORMAL Normal fulfillment period. The operation is expected to complete within minutes.
FULFILLMENT_PERIOD_EXTENDED Extended fulfillment period. It can take up to an hour for the operation to complete.
instance

object (Instance)

The desired end state of the update.

startTime

string (Timestamp format)

The time at which the UpdateInstance request was received.

UpdateInstancePartitionMetadata

Metadata type for the operation returned by UpdateInstancePartition.
Fields
cancelTime

string (Timestamp format)

The time at which this operation was cancelled. If set, this operation is in the process of undoing itself (which is guaranteed to succeed) and cannot be cancelled again.

endTime

string (Timestamp format)

The time at which this operation failed or was completed successfully.

instancePartition

object (InstancePartition)

The desired end state of the update.

startTime

string (Timestamp format)

The time at which the UpdateInstancePartition request was received.

UpdateInstancePartitionRequest

The request for UpdateInstancePartition.
Fields
fieldMask

string (FieldMask format)

Required. A mask specifying which fields in InstancePartition should be updated. The field mask must always be specified; this prevents any future fields in InstancePartition from being erased accidentally by clients that do not know about them.

instancePartition

object (InstancePartition)

Required. The instance partition to update, which must always include the instance partition name. Otherwise, only fields mentioned in field_mask need be included.

UpdateInstanceRequest

The request for UpdateInstance.
Fields
fieldMask

string (FieldMask format)

Required. A mask specifying which fields in Instance should be updated. The field mask must always be specified; this prevents any future fields in Instance from being erased accidentally by clients that do not know about them.

instance

object (Instance)

Required. The instance to update, which must always include the instance name. Otherwise, only fields mentioned in field_mask need be included.
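
A sketch of a request body that resizes an instance; the instance name and capacity are hypothetical. Note that this message spells its mask fieldMask rather than updateMask.

update_instance_request = {
    "instance": {
        "name": "projects/my-project/instances/my-instance",
        "nodeCount": 3,
    },
    # Only the fields named here are changed; all others are left untouched.
    "fieldMask": "nodeCount",
}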

VisualizationData

(No description provided)
Fields
dataSourceEndToken

string

The token signifying the end of a data_source.

dataSourceSeparatorToken

string

The token delimiting a datasource name from the rest of a key in a data_source.

diagnosticMessages[]

object (DiagnosticMessage)

The list of messages (info, alerts, ...)

endKeyStrings[]

string

We discretize the entire keyspace into buckets. Assuming each bucket has an inclusive keyrange covering keys k(i) ... k(n), then k(n) is the end key for that range. end_key_strings is the collection of all such end keys.

hasPii

boolean

Whether this scan contains PII.

indexedKeys[]

string

Keys of key ranges that contribute significantly to a given metric. These can be thought of as heavy hitters.

keySeparator

string

The token delimiting the key prefixes.

keyUnit

enum

The unit for the key: e.g. 'key' or 'chunk'.

Enum type. Can be one of the following:
KEY_UNIT_UNSPECIFIED Required default value
KEY Each entry corresponds to one key
CHUNK Each entry corresponds to a chunk of keys
metrics[]

object (Metric)

The list of data objects for each metric.

prefixNodes[]

object (PrefixNode)

The list of extracted key prefix nodes used in the key prefix hierarchy.

Write

Arguments to insert, update, insert_or_update, and replace operations.
Fields
columns[]

string

The names of the columns in table to be written. The list of columns must contain enough columns to allow Cloud Spanner to derive values for all primary key columns in the row(s) to be modified.

table

string

Required. The table whose rows will be written.

values[]

array

The values to be written. values can contain more than one list of values. If it does, then multiple rows are written, one for each entry in values. Each list in values must have exactly as many entries as there are entries in columns above. Sending multiple lists is equivalent to sending multiple Mutations, each containing one values entry and repeating table and columns. Individual values in each list are encoded as described in TypeCode.
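
To tie this together, here is a sketch of a single Write that inserts two rows; the table, columns, and values are hypothetical. Note that INT64 and NUMERIC values are carried as strings, per the encoding rules earlier in this document.

write_mutation = {
    "table": "Singers",
    "columns": ["SingerId", "Name", "Royalties"],
    "values": [
        # Each inner list supplies one row and must line up 1:1 with columns.
        ["1", "Marc Richards", "1000.50"],
        ["2", "Catalina Smith", "2e3"],
    ],
}
# Equivalent to sending two separate mutations that each repeat
# `table` and `columns` and carry one row apiece.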