API documentation for spanner_v1.types module.
Classes
BatchCreateSessionsRequest
The request for [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions].
Parameters to be applied to each created session.
BatchCreateSessionsResponse
The response for [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions].
BeginTransactionRequest
The request for [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction].
Required. Options for the new transaction.
CommitRequest
The request for [Commit][google.spanner.v1.Spanner.Commit].
Required. The transaction in which to commit.
Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. That is, if the CommitRequest is sent to Cloud Spanner more than once (for instance, due to retries in the application, or in the transport library), it is possible that the mutations are executed more than once. If this is undesirable, use [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction] and [Commit][google.spanner.v1.Spanner.Commit] instead.
CommitResponse
The response for [Commit][google.spanner.v1.Spanner.Commit].
CreateSessionRequest
The request for [CreateSession][google.spanner.v1.Spanner.CreateSession].
The session to create.
CustomHttpPattern
API documentation for spanner_v1.types.CustomHttpPattern
class.
DeleteSessionRequest
The request for [DeleteSession][google.spanner.v1.Spanner.DeleteSession].
DescriptorProto
API documentation for spanner_v1.types.DescriptorProto
class.
Duration
API documentation for spanner_v1.types.Duration
class.
Empty
API documentation for spanner_v1.types.Empty
class.
EnumDescriptorProto
API documentation for spanner_v1.types.EnumDescriptorProto
class.
EnumOptions
API documentation for spanner_v1.types.EnumOptions
class.
EnumValueDescriptorProto
API documentation for spanner_v1.types.EnumValueDescriptorProto
class.
EnumValueOptions
API documentation for spanner_v1.types.EnumValueOptions
class.
ExecuteBatchDmlRequest
The request for [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml].
The transaction to use. A ReadWrite transaction is required. Single-use transactions are not supported (to avoid replay). The caller must either supply an existing transaction ID or begin a new transaction.
A per-transaction sequence number used to identify this request. This is used in the same space as the seqno in [ExecuteSqlRequest][Spanner.ExecuteSqlRequest]. See more details in [ExecuteSqlRequest][Spanner.ExecuteSqlRequest].
ExecuteBatchDmlResponse
The response for [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. Contains a list of [ResultSet][google.spanner.v1.ResultSet], one for each DML statement that has successfully executed. If a statement fails, the error is returned as part of the response payload. Clients can determine whether all DML statements have run successfully, or if a statement failed, using one of the following approaches:
- Check if the 'status' field is OkStatus.
- Check if result_sets_size() equals the number of statements in [ExecuteBatchDmlRequest][Spanner.ExecuteBatchDmlRequest].
Example 1: A request with 5 DML statements, all executed successfully. Result: A response with 5 ResultSets, one for each statement in the same order, and an OK status.
Example 2: A request with 5 DML statements. The 3rd statement has a syntax error. Result: A response with 2 ResultSets, for the first 2 statements that ran successfully, and a syntax error (INVALID_ARGUMENT) status. From result_set_size() the client can determine that the 3rd statement has failed.
If all DML statements are executed successfully, status will be OK. Otherwise, the error status of the first failed statement.
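For illustration only, the sketch below shows how a batch of DML statements might be sent through the Python client, whose Transaction.batch_update method wraps ExecuteBatchDml. The instance, database, table, and column names are hypothetical, and exact signatures may vary by client version.
::

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    def insert_and_update(transaction):
        # batch_update sends an ExecuteBatchDmlRequest; it returns the status
        # of the first failed statement (or OK) plus per-statement row counts.
        status, row_counts = transaction.batch_update(
            [
                (
                    "INSERT INTO Singers (SingerId, FirstName) VALUES (@id, @name)",
                    {"id": 1, "name": "Marc"},
                    {"id": spanner.param_types.INT64,
                     "name": spanner.param_types.STRING},
                ),
                "UPDATE Singers SET FirstName = 'Marcus' WHERE SingerId = 1",
            ]
        )
        if status.code != 0:  # 0 is OK; later statements were not executed.
            raise RuntimeError(status.message)
        print("Rows modified per statement:", row_counts)

    database.run_in_transaction(insert_and_update)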
ExecuteSqlRequest
The request for [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql].
The transaction to use. For queries, if none is provided, the default is a temporary read-only transaction with strong concurrency. Standard DML statements require a ReadWrite transaction. Single-use transactions are not supported (to avoid replay). The caller must either supply an existing transaction ID or begin a new transaction. Partitioned DML requires an existing PartitionedDml transaction ID.
The SQL string can contain parameter placeholders. A parameter placeholder consists of '@' followed by the parameter name. Parameter names consist of any combination of letters, numbers, and underscores. Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: "WHERE id > @msg_id AND id < @msg_id + 100". It is an error to execute an SQL statement with unbound parameters. Parameter values are specified using params, which is a JSON object whose keys are parameter names, and whose values are the corresponding parameter values.
If this request is resuming a previously interrupted SQL statement execution, resume_token should be copied from the last [PartialResultSet][google.spanner.v1.PartialResultSet] yielded before the interruption. Doing this enables the new SQL statement execution to resume where the last one left off. The rest of the request parameters must exactly match the request that yielded this token.
If present, results will be restricted to the specified partition previously created using PartitionQuery(). There must be an exact match for the values of fields common to this message and the PartitionQueryRequest message used to create this partition_token.
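As a sketch only (hypothetical table and column names; assumes the google-cloud-spanner Python client), a parameterized query can be issued through Snapshot.execute_sql, which sends an ExecuteSql/ExecuteStreamingSql request carrying params and param_types:
::

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    with database.snapshot() as snapshot:
        # @msg_id is bound through `params`; its type is given in `param_types`.
        results = snapshot.execute_sql(
            "SELECT UserId, UserName FROM Users "
            "WHERE UserId > @msg_id AND UserId < @msg_id + 100",
            params={"msg_id": 1000},
            param_types={"msg_id": spanner.param_types.INT64},
        )
        for row in results:
            print(row)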
ExtensionRangeOptions
API documentation for spanner_v1.types.ExtensionRangeOptions
class.
FieldDescriptorProto
API documentation for spanner_v1.types.FieldDescriptorProto
class.
FieldOptions
API documentation for spanner_v1.types.FieldOptions
class.
FileDescriptorProto
API documentation for spanner_v1.types.FileDescriptorProto
class.
FileDescriptorSet
API documentation for spanner_v1.types.FileDescriptorSet
class.
FileOptions
API documentation for spanner_v1.types.FileOptions
class.
GeneratedCodeInfo
API documentation for spanner_v1.types.GeneratedCodeInfo
class.
GetSessionRequest
The request for [GetSession][google.spanner.v1.Spanner.GetSession].
Http
API documentation for spanner_v1.types.Http
class.
HttpRule
API documentation for spanner_v1.types.HttpRule
class.
KeyRange
KeyRange represents a range of rows in a table or index.
A range has a start key and an end key. These keys can be open or closed, indicating if the range includes rows with that key.
Keys are represented by lists, where the ith value in the list corresponds to the ith component of the table or index primary key. Individual values are encoded as described [here][google.spanner.v1.TypeCode].
For example, consider the following table definition:
::
CREATE TABLE UserEvents (
UserName STRING(MAX),
EventDate STRING(10)
) PRIMARY KEY(UserName, EventDate);
The following keys name rows in this table:
::
["Bob", "2014-09-23"]
["Alfred", "2015-06-12"]
Since the UserEvents table's PRIMARY KEY clause names two columns, each UserEvents key has two elements; the first is the UserName, and the second is the EventDate.
Key ranges with multiple components are interpreted lexicographically by component using the table or index key's declared sort order. For example, the following range returns all events for user "Bob" that occurred in the year 2015:
::
"start_closed": ["Bob", "2015-01-01"]
"end_closed": ["Bob", "2015-12-31"]
Start and end keys can omit trailing key components. This affects the inclusion and exclusion of rows that exactly match the provided key components: if the key is closed, then rows that exactly match the provided components are included; if the key is open, then rows that exactly match are not included.
For example, the following range includes all events for "Bob" that occurred during and after the year 2000:
::
"start_closed": ["Bob", "2000-01-01"]
"end_closed": ["Bob"]
The next example retrieves all events for "Bob":
::
"start_closed": ["Bob"]
"end_closed": ["Bob"]
To retrieve events before the year 2000:
::
"start_closed": ["Bob"]
"end_open": ["Bob", "2000-01-01"]
The following range includes all rows in the table:
::
"start_closed": []
"end_closed": []
This range returns all users whose UserName begins with any character from A to C:
::
"start_closed": ["A"]
"end_open": ["D"]
This range returns all users whose UserName begins with B:
::
"start_closed": ["B"]
"end_open": ["C"]
Key ranges honor column sort order. For example, suppose a table is defined as follows:
::
CREATE TABLE DescendingSortedTable (
Key INT64,
...
) PRIMARY KEY(Key DESC);
The following range retrieves all rows with key values between 1 and 100 inclusive:
::
"start_closed": ["100"]
"end_closed": ["1"]
Note that 100 is passed as the start, and 1 is passed as the end, because Key is a descending column in the schema.
If the start is closed, then the range includes all rows whose first len(start_closed) key columns exactly match start_closed.
The end key must be provided. It can be either closed or open.
If the end is open, then the range excludes rows whose first len(end_open) key columns exactly match end_open.
KeySet
KeySet defines a collection of Cloud Spanner keys and/or key ranges. All the keys are expected to be in the same table or index. The keys need not be sorted in any particular way.
If the same key is specified multiple times in the set (for example if two ranges, two keys, or a key and a range overlap), Cloud Spanner behaves as if the key were only specified once.
A list of key ranges. See [KeyRange][google.spanner.v1.KeyRange] for more information about key range specifications.
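As a minimal sketch, the ranges discussed above can be built with the KeyRange and KeySet helpers exported by the Python client (the UserEvents schema and key values are the hypothetical ones shown earlier; helper names are assumed to match recent client versions):
::

    from google.cloud import spanner

    # All events for "Bob" during 2015 (both endpoints closed).
    bob_2015 = spanner.KeyRange(
        start_closed=["Bob", "2015-01-01"],
        end_closed=["Bob", "2015-12-31"],
    )

    # All events for "Bob" before the year 2000 (open end excludes the bound).
    bob_before_2000 = spanner.KeyRange(
        start_closed=["Bob"],
        end_open=["Bob", "2000-01-01"],
    )

    # A KeySet may mix individual keys and ranges; a key specified more than
    # once behaves as if it were specified only once.
    keyset = spanner.KeySet(
        keys=[["Alfred", "2015-06-12"]],
        ranges=[bob_2015, bob_before_2000],
    )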
ListSessionsRequest
The request for [ListSessions][google.spanner.v1.Spanner.ListSessions].
Number of sessions to be returned in the response. If 0 or less, defaults to the server's maximum allowed page size.
An expression for filtering the results of the request. Filter rules are case insensitive. The fields eligible for filtering are:
- labels.key where key is the name of a label
Some examples of using filters are:
- labels.env:* --> The session has the label "env".
- labels.env:dev --> The session has the label "env" and the value of the label contains the string "dev".
ListSessionsResponse
The response for [ListSessions][google.spanner.v1.Spanner.ListSessions].
next_page_token can be sent in a subsequent [ListSessions][google.spanner.v1.Spanner.ListSessions] call to fetch more of the matching sessions.
ListValue
API documentation for spanner_v1.types.ListValue
class.
MessageOptions
API documentation for spanner_v1.types.MessageOptions
class.
MethodDescriptorProto
API documentation for spanner_v1.types.MethodDescriptorProto
class.
MethodOptions
API documentation for spanner_v1.types.MethodOptions
class.
Mutation
A modification to one or more Cloud Spanner rows. Mutations can be applied to a Cloud Spanner database by sending them in a [Commit][google.spanner.v1.Spanner.Commit] call.
Insert new rows in a table. If any of the rows already exist, the write or transaction fails with error ALREADY_EXISTS.
Like [insert][google.spanner.v1.Mutation.insert], except that if the row already exists, then its column values are overwritten with the ones provided. Any column values not explicitly written are preserved.
Delete rows from a table. Succeeds whether or not the named rows were present.
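For illustration (hypothetical resource and table names), the Python client groups mutations in a batch context and applies them in a single Commit call; a sketch assuming the standard batch API:
::

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    with database.batch() as batch:
        # insert fails with ALREADY_EXISTS if the row is already present.
        batch.insert(
            table="UserEvents",
            columns=("UserName", "EventDate"),
            values=[("Bob", "2014-09-23")],
        )
        # insert_or_update overwrites the listed columns and preserves the rest.
        batch.insert_or_update(
            table="UserEvents",
            columns=("UserName", "EventDate"),
            values=[("Alfred", "2015-06-12")],
        )
        # delete succeeds whether or not the named rows exist.
        batch.delete("UserEvents", spanner.KeySet(keys=[["Carol", "2016-01-01"]]))
    # Leaving the `with` block sends one Commit containing all mutations.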
OneofDescriptorProto
API documentation for spanner_v1.types.OneofDescriptorProto
class.
OneofOptions
API documentation for spanner_v1.types.OneofOptions
class.
PartialResultSet
Partial results from a streaming read or SQL query. Streaming reads and SQL queries better tolerate large result sets, large rows, and large values, but are a little trickier to consume.
A streamed result set consists of a stream of values, which might be split into many PartialResultSet messages to accommodate large rows and/or large values. Every N complete values defines a row, where N is equal to the number of entries in [metadata.row_type.fields][google.spanner.v1.StructType.fields]. Most values are encoded based on type as described [here][google.spanner.v1.TypeCode].
It is possible that the last value in values is "chunked", meaning that the rest of the value is sent in subsequent PartialResultSet(s). This is denoted by the [chunked_value][google.spanner.v1.PartialResultSet.chunked_value] field. Two or more chunked values can be merged to form a complete value as follows:
- bool/number/null: cannot be chunked
- string: concatenate the strings
- list: concatenate the lists. If the last element in a list is a string, list, or object, merge it with the first element in the next list by applying these rules recursively.
- object: concatenate the (field name, field value) pairs. If a field name is duplicated, then apply these rules recursively to merge the field values.
Some examples of merging:
::

    # Strings are concatenated.
    "foo", "bar" => "foobar"

    # Lists of non-strings are concatenated.
    [2, 3], [4] => [2, 3, 4]

    # Lists are concatenated, but the last and first elements are merged
    # because they are strings.
    ["a", "b"], ["c", "d"] => ["a", "bc", "d"]

    # Lists are concatenated, but the last and first elements are merged
    # because they are lists. Recursively, the last and first elements
    # of the inner lists are merged because they are strings.
    ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]

    # Non-overlapping object fields are combined.
    {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}

    # Overlapping object fields are merged.
    {"a": "1"}, {"a": "2"} => {"a": "12"}

    # Examples of merging objects containing lists of strings.
    {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}

For a more complete example, suppose a streaming SQL query is yielding a result set whose rows contain a single string field. The following PartialResultSets might be yielded:
::

    {
      "metadata": { ... }
      "values": ["Hello", "W"]
      "chunked_value": true
      "resume_token": "Af65..."
    }
    {
      "values": ["orl"]
      "chunked_value": true
      "resume_token": "Bqp2..."
    }
    {
      "values": ["d"]
      "resume_token": "Zx1B..."
    }

This sequence of PartialResultSets encodes two rows, one containing the field value "Hello", and a second containing the field value "World" = "W" + "orl" + "d".
Streaming calls might be interrupted for a variety of reasons, such as TCP connection loss. If this occurs, the stream of results can be resumed by re-sending the original request and including resume_token. Note that executing any other transaction in the same session invalidates the token.
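In the Python client, applications normally do not merge chunked values or track resume_token themselves: the streamed result set returned by execute_sql/read reassembles chunked values and resumes interrupted streams. A minimal sketch, with hypothetical names:
::

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    with database.snapshot() as snapshot:
        # execute_sql returns a StreamedResultSet built from PartialResultSets.
        streamed = snapshot.execute_sql("SELECT UserName, EventDate FROM UserEvents")
        for row in streamed:      # rows are yielded with complete, merged values
            print(row)
        print(streamed.fields)    # row type from metadata.row_type.fields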
Partition
Information returned for each partition returned in a PartitionResponse.
PartitionOptions
Options for a PartitionQueryRequest and PartitionReadRequest.
The desired maximum number of partitions to return. For example, this may be set to the number of workers available. The default for this option is currently 10,000. The maximum value is currently 200,000. This is only a hint; the actual number of partitions returned may be smaller or larger than this maximum count request. Note: This hint is currently ignored by PartitionQuery and PartitionRead requests.
PartitionQueryRequest
The request for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery].
Read-only snapshot transactions are supported; read/write and single-use transactions are not.
The SQL query string can contain parameter placeholders. A parameter placeholder consists of '@' followed by the parameter name. Parameter names consist of any combination of letters, numbers, and underscores. Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: "WHERE id > @msg_id AND id < @msg_id + 100". It is an error to execute an SQL query with unbound parameters. Parameter values are specified using params, which is a JSON object whose keys are parameter names, and whose values are the corresponding parameter values.
Additional options that affect how many partitions are created.
PartitionReadRequest
The request for [PartitionRead][google.spanner.v1.Spanner.PartitionRead].
Read-only snapshot transactions are supported; read/write and single-use transactions are not.
If non-empty, the name of an index on [table][google.spanner.v1.PartitionReadRequest.table]. This index is used instead of the table primary key when interpreting [key_set][google.spanner.v1.PartitionReadRequest.key_set] and sorting result rows. See [key_set][google.spanner.v1.PartitionReadRequest.key_set] for further information.
Required. key_set identifies the rows to be yielded. key_set names the primary keys of the rows in [table][google.spanner.v1.PartitionReadRequest.table] to be yielded, unless [index][google.spanner.v1.PartitionReadRequest.index] is present. If [index][google.spanner.v1.PartitionReadRequest.index] is present, then [key_set][google.spanner.v1.PartitionReadRequest.key_set] instead names index keys in [index][google.spanner.v1.PartitionReadRequest.index]. It is not an error for the key_set to name rows that do not exist in the database. Read yields nothing for nonexistent rows.
PartitionResponse
The response for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] or [PartitionRead][google.spanner.v1.Spanner.PartitionRead].
Transaction created by this request.
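A sketch of partitioned execution through the Python client's BatchSnapshot, which issues PartitionQuery/PartitionRead and then processes each partition with the matching request parameters (hypothetical names; method names assumed to match recent client versions):
::

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    # batch_snapshot wraps a read-only snapshot transaction, as required for
    # PartitionQuery and PartitionRead.
    snapshot = database.batch_snapshot()

    for batch in snapshot.generate_query_batches(
        "SELECT UserName, EventDate FROM UserEvents"
    ):
        # Each batch carries a partition token plus the original request
        # parameters, which are re-sent unchanged when the batch is processed.
        for row in snapshot.process_query_batch(batch):
            print(row)

    snapshot.close()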
PlanNode
Node information for nodes appearing in a [QueryPlan.plan_nodes][google.spanner.v1.QueryPlan.plan_nodes].
Used to determine the type of node. May be needed for visualizing different kinds of nodes differently. For example, if the node is a [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] node, it will have a condensed representation which can be used to directly embed a description of the node in its parent.
List of child node indexes and their relationship to this parent.
Attributes relevant to the node contained in a group of key-value pairs. For example, a Parameter Reference node could have the following information in its metadata:
::

    {
      "parameter_reference": "param1",
      "parameter_type": "array"
    }
QueryPlan
Contains an ordered list of nodes appearing in the query plan.
ReadRequest
The request for [Read][google.spanner.v1.Spanner.Read] and [StreamingRead][google.spanner.v1.Spanner.StreamingRead].
The transaction to use. If none is provided, the default is a temporary read-only transaction with strong concurrency.
If non-empty, the name of an index on [table][google.spanner.v1.ReadRequest.table]. This index is used instead of the table primary key when interpreting [key_set][google.spanner.v1.ReadRequest.key_set] and sorting result rows. See [key_set][google.spanner.v1.ReadRequest.key_set] for further information.
Required. key_set identifies the rows to be yielded. key_set names the primary keys of the rows in [table][google.spanner.v1.ReadRequest.table] to be yielded, unless [index][google.spanner.v1.ReadRequest.index] is present. If [index][google.spanner.v1.ReadRequest.index] is present, then [key_set][google.spanner.v1.ReadRequest.key_set] instead names index keys in [index][google.spanner.v1.ReadRequest.index]. If the [partition_token][google.spanner.v1.ReadRequest.partition_token] field is empty, rows are yielded in table primary key order (if [index][google.spanner.v1.ReadRequest.index] is empty) or index key order (if [index][google.spanner.v1.ReadRequest.index] is non-empty). If the [partition_token][google.spanner.v1.ReadRequest.partition_token] field is not empty, rows will be yielded in an unspecified order. It is not an error for the key_set to name rows that do not exist in the database. Read yields nothing for nonexistent rows.
If this request is resuming a previously interrupted read, resume_token should be copied from the last [PartialResultSet][google.spanner.v1.PartialResultSet] yielded before the interruption. Doing this enables the new read to resume where the last read left off. The rest of the request parameters must exactly match the request that yielded this token.
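For illustration, a read through a secondary index with the Python client; the index, table, and column names are hypothetical, and key_set here names index keys rather than primary keys:
::

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    with database.snapshot() as snapshot:
        rows = snapshot.read(
            table="UserEvents",
            columns=("UserName", "EventDate"),
            keyset=spanner.KeySet(ranges=[
                spanner.KeyRange(start_closed=["2015-01-01"],
                                 end_open=["2016-01-01"]),
            ]),
            index="UserEventsByDate",  # hypothetical secondary index
        )
        for row in rows:
            print(row)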
ResultSet
Results from [Read][google.spanner.v1.Spanner.Read] or [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql].
Each element in rows is a row whose format is defined by [metadata.row_type][google.spanner.v1.ResultSetMetadata.row_type]. The ith element in each row matches the ith field in [metadata.row_type][google.spanner.v1.ResultSetMetadata.row_type]. Elements are encoded based on type as described [here][google.spanner.v1.TypeCode].
ResultSetMetadata
Metadata about a [ResultSet][google.spanner.v1.ResultSet] or [PartialResultSet][google.spanner.v1.PartialResultSet].
If the read or SQL query began a transaction as a side-effect, the information about the new transaction is yielded here.
ResultSetStats
Additional statistics about a [ResultSet][google.spanner.v1.ResultSet] or [PartialResultSet][google.spanner.v1.PartialResultSet].
Aggregated statistics from the execution of the query. Only present when the query is profiled. For example, a query could return the statistics as follows:
::

    {
      "rows_returned": "3",
      "elapsed_time": "1.22 secs",
      "cpu_time": "1.19 secs"
    }
Standard DML returns an exact count of rows that were modified.
RollbackRequest
The request for [Rollback][google.spanner.v1.Spanner.Rollback].
Required. The transaction to roll back.
ServiceDescriptorProto
API documentation for spanner_v1.types.ServiceDescriptorProto
class.
ServiceOptions
API documentation for spanner_v1.types.ServiceOptions
class.
Session
A session in the Cloud Spanner API.
The labels for the session.
- Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?.
- Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?.
- No more than 64 labels can be associated with a given session.
See https://goo.gl/xmQnxf for more information on and examples of labels.
Output only. The approximate timestamp when the session was last used. It is typically earlier than the actual last use time.
SourceCodeInfo
API documentation for spanner_v1.types.SourceCodeInfo
class.
Struct
API documentation for spanner_v1.types.Struct
class.
StructType
StructType defines the fields of a [STRUCT][google.spanner.v1.TypeCode.STRUCT] type.
Timestamp
API documentation for spanner_v1.types.Timestamp
class.
Transaction
A transaction.
For snapshot read-only transactions, the read timestamp chosen for the transaction. Not returned by default: see [TransactionOptions.ReadOnly.return_read_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.return_read_timestamp]. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".
TransactionOptions
Transactions
Each session can have at most one active transaction at a time. After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction.
Transaction Modes
Cloud Spanner supports three transaction modes:
Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry.
Snapshot read-only. This transaction type provides guaranteed consistency across several reads, but does not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past. Snapshot read-only transactions do not need to be committed.
Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed.
For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed.
Transactions may only read/write data in a single database. They may, however, read/write data in different tables within that database.
Locking Read-Write Transactions
Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent.
Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by [Commit][google.spanner.v1.Spanner.Commit] or [Rollback][google.spanner.v1.Spanner.Rollback]. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by [Commit][google.spanner.v1.Spanner.Commit]. At any time before [Commit][google.spanner.v1.Spanner.Commit], the client can send a [Rollback][google.spanner.v1.Spanner.Rollback] request to abort the transaction.
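A sketch of a locking read-write transaction with the Python client: run_in_transaction begins the transaction, issues the commit, and retries the whole function if the commit returns ABORTED (table and column names are hypothetical):
::

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    def debit_account(transaction):
        # Reads and DML here share one read-write transaction.
        row = list(transaction.execute_sql(
            "SELECT Balance FROM Accounts WHERE AccountId = @id",
            params={"id": 1},
            param_types={"id": spanner.param_types.INT64},
        ))[0]
        transaction.execute_update(
            "UPDATE Accounts SET Balance = @balance WHERE AccountId = @id",
            params={"id": 1, "balance": row[0] - 100},
            param_types={"id": spanner.param_types.INT64,
                         "balance": spanner.param_types.INT64},
        )

    database.run_in_transaction(debit_account)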
Semantics
Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns ABORTED, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner.
Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves.
Retrying Aborted Transactions
When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous.
Under some circumstances (e.g., many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of wall time spent retrying.
Idle Transactions
A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. In that case, the commit will fail with error ABORTED.
If this behavior is undesirable, periodically executing a simple SQL query in the transaction (e.g., SELECT 1) prevents the transaction from becoming idle.
Snapshot Read-Only Transactions
Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes.
Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions.
Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice.
Snapshot read-only transactions do not need to call [Commit][google.spanner.v1.Spanner.Commit] or [Rollback][google.spanner.v1.Spanner.Rollback] (and in fact are not permitted to do so).
To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp.
The types of timestamp bound are:
- Strong (the default).
- Bounded staleness.
- Exact staleness.
If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica.
Each type of timestamp bound is discussed in detail below.
Strong
Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp.
See [TransactionOptions.ReadOnly.strong][google.spanner.v1.TransactionOptions.ReadOnly.strong].
Exact Staleness
These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp <= the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished.
The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time.
These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results.
See [TransactionOptions.ReadOnly.read_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.read_timestamp] and [TransactionOptions.ReadOnly.exact_staleness][google.spanner.v1.TransactionOptions.ReadOnly.exact_staleness].
Bounded Staleness
Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking.
All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results.
Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp.
As a result of the two phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica.
Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions.
See [TransactionOptions.ReadOnly.max_staleness][google.spanner.v1.TransactionOptions.ReadOnly.max_staleness] and [TransactionOptions.ReadOnly.min_read_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.min_read_timestamp].
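These timestamp bounds map onto snapshot options in the Python client; a minimal sketch with hypothetical names, where strong is the default and exact_staleness and max_staleness select the other two bounds:
::

    import datetime
    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    # Strong read (the default bound).
    with database.snapshot() as snapshot:
        list(snapshot.execute_sql("SELECT UserName FROM Users"))

    # Exact staleness: read exactly 15 seconds in the past.
    with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snapshot:
        list(snapshot.execute_sql("SELECT UserName FROM Users"))

    # Bounded staleness: let Cloud Spanner pick a timestamp at most 10 seconds
    # old (single-use read-only transactions only).
    with database.snapshot(max_staleness=datetime.timedelta(seconds=10)) as snapshot:
        list(snapshot.execute_sql("SELECT UserName FROM Users"))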
Old Read Timestamps and Garbage Collection
Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error FAILED_PRECONDITION.
Partitioned DML Transactions
Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions.
Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another.
To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time.
That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions.
The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table.
The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows.
Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement will be applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as UPDATE table SET column = column + 1 as it could be run multiple times against some rows.
The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows.
Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql.
If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all.
Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table.
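A sketch of such an operation with the Python client's execute_partitioned_dml (hypothetical table and column names); the statement is idempotent, as recommended above:
::

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    # Runs as a Partitioned DML transaction; the statement may be applied more
    # than once to some partitions, so it must tolerate re-execution.
    row_count = database.execute_partitioned_dml(
        "DELETE FROM UserEvents WHERE EventDate < @cutoff",
        params={"cutoff": "2000-01-01"},
        param_types={"cutoff": spanner.param_types.STRING},
    )
    print("Rows deleted (lower bound):", row_count)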
Transaction may write. Authorization to begin a read-write transaction requires spanner.databases.beginOrRollbackReadWriteTransaction permission on the session resource.
Transaction will not write. Authorization to begin a read-only transaction requires spanner.databases.beginReadOnlyTransaction permission on the session resource.
TransactionSelector
This message is used to select the transaction in which a [Read][google.spanner.v1.Spanner.Read] or [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] call runs. See [TransactionOptions][google.spanner.v1.TransactionOptions] for more information about transactions.
Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query.
Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in [ResultSetMetadata.transaction][google.spanner.v1.ResultSetMetadata.transaction], which is a [Transaction][google.spanner.v1.Transaction].
Type
Type indicates the type of a Cloud Spanner value, as might be stored in a table cell or returned from an SQL query.
If [code][google.spanner.v1.Type.code] == [ARRAY][google.spanner.v1.TypeCode.ARRAY], then array_element_type is the type of the array elements.
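For example, an ARRAY<STRING> parameter is described by a Type whose code is ARRAY and whose array_element_type is STRING; a sketch using the Python client's param_types helpers (hypothetical table and column names):
::

    from google.cloud import spanner

    # Type(code=ARRAY, array_element_type=Type(code=STRING))
    string_array = spanner.param_types.Array(spanner.param_types.STRING)

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    with database.snapshot() as snapshot:
        results = snapshot.execute_sql(
            "SELECT UserName FROM Users WHERE UserName IN UNNEST(@names)",
            params={"names": ["Alfred", "Bob"]},
            param_types={"names": string_array},
        )
        for row in results:
            print(row)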
UninterpretedOption
API documentation for spanner_v1.types.UninterpretedOption
class.
Value
API documentation for spanner_v1.types.Value
class.