Reference documentation and code samples for the BigQuery Storage V1 API class Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest.
Request message for AppendRows.
Because AppendRows is a bidirectional streaming RPC, certain parts of the AppendRowsRequest need only be specified for the first request before switching table destinations. You can also switch table destinations within the same connection for the default stream.
The size of a single AppendRowsRequest must be less than 10 MB. Requests larger than this return an error, typically INVALID_ARGUMENT.
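As a minimal sketch of the size limit described above (plain Ruby, with a hypothetical helper name; the exact strictness of the server-side check is an assumption), a client can guard a serialized request before sending it:

```ruby
# Hypothetical client-side guard: flag serialized AppendRowsRequest
# payloads at or above 10 MB, which the server would reject
# (typically with INVALID_ARGUMENT).
MAX_APPEND_ROWS_BYTES = 10 * 1024 * 1024

def append_request_too_large?(serialized_bytes)
  serialized_bytes.bytesize >= MAX_APPEND_ROWS_BYTES
end

puts append_request_too_large?("x" * 1024)                # => false
puts append_request_too_large?("x" * (10 * 1024 * 1024))  # => true
```

In practice, the bytes checked would come from serializing the request message; splitting oversized batches across multiple requests keeps each one under the limit.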
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#arrow_rows
def arrow_rows() -> ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ArrowData
- (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ArrowData) — Rows in Arrow format. This is an experimental feature available only to allowlisted customers.
#arrow_rows=
def arrow_rows=(value) -> ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ArrowData
- value (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ArrowData) — Rows in Arrow format. This is an experimental feature available only to allowlisted customers.
- (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ArrowData) — Rows in Arrow format. This is an experimental feature available only to allowlisted customers.
#default_missing_value_interpretation
def default_missing_value_interpretation() -> ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation
- (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation) — Optional. Default missing value interpretation for all columns in the table. When a value is specified on an AppendRowsRequest, it is applied to all requests on the connection from that point forward, until a subsequent AppendRowsRequest sets it to a different value. missing_value_interpretations can override default_missing_value_interpretation. For example, if you want to write NULL instead of using default values for some columns, you can set default_missing_value_interpretation to DEFAULT_VALUE and at the same time set missing_value_interpretations to NULL_VALUE on those columns.
#default_missing_value_interpretation=
def default_missing_value_interpretation=(value) -> ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation
- value (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation) — Optional. Default missing value interpretation for all columns in the table. When a value is specified on an AppendRowsRequest, it is applied to all requests on the connection from that point forward, until a subsequent AppendRowsRequest sets it to a different value. missing_value_interpretations can override default_missing_value_interpretation. For example, if you want to write NULL instead of using default values for some columns, you can set default_missing_value_interpretation to DEFAULT_VALUE and at the same time set missing_value_interpretations to NULL_VALUE on those columns.
- (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation) — Optional. Default missing value interpretation for all columns in the table. When a value is specified on an AppendRowsRequest, it is applied to all requests on the connection from that point forward, until a subsequent AppendRowsRequest sets it to a different value. missing_value_interpretations can override default_missing_value_interpretation. For example, if you want to write NULL instead of using default values for some columns, you can set default_missing_value_interpretation to DEFAULT_VALUE and at the same time set missing_value_interpretations to NULL_VALUE on those columns.
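The override relationship between the per-column map and the default can be illustrated with a small plain-Ruby simulation (this is not the client library API; the method name is hypothetical):

```ruby
# Simulate interpretation resolution: a per-column entry in
# missing_value_interpretations takes precedence over
# default_missing_value_interpretation.
def interpretation_for(column, per_column:, default:)
  per_column.fetch(column, default)
end

per_column = { "foo" => :NULL_VALUE }
default    = :DEFAULT_VALUE

puts interpretation_for("foo", per_column: per_column, default: default) # => NULL_VALUE
puts interpretation_for("bar", per_column: per_column, default: default) # => DEFAULT_VALUE
```

Here "foo" picks up the per-column NULL_VALUE override, while "bar", absent from the map, falls back to the connection-wide default.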
#missing_value_interpretations
def missing_value_interpretations() -> ::Google::Protobuf::Map{::String => ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation}
- (::Google::Protobuf::Map{::String => ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation}) — A map indicating how to interpret missing values for some fields. Missing values are fields present in the user schema but missing in rows. The key is the field name; the value is the interpretation of missing values for that field.
For example, a map {'foo': NULL_VALUE, 'bar': DEFAULT_VALUE} means all missing values in field foo are interpreted as NULL, and all missing values in field bar are interpreted as the default value of field bar in the table schema.
If a field is not in this map and has missing values, the missing values in that field are interpreted as NULL.
This field only applies to the current request; it does not affect other requests on the connection.
Currently, the field name can only be a top-level column name; it cannot be a struct field path like 'foo.bar'.
#missing_value_interpretations=
def missing_value_interpretations=(value) -> ::Google::Protobuf::Map{::String => ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation}
- value (::Google::Protobuf::Map{::String => ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation}) — A map indicating how to interpret missing values for some fields. Missing values are fields present in the user schema but missing in rows. The key is the field name; the value is the interpretation of missing values for that field.
For example, a map {'foo': NULL_VALUE, 'bar': DEFAULT_VALUE} means all missing values in field foo are interpreted as NULL, and all missing values in field bar are interpreted as the default value of field bar in the table schema.
If a field is not in this map and has missing values, the missing values in that field are interpreted as NULL.
This field only applies to the current request; it does not affect other requests on the connection.
Currently, the field name can only be a top-level column name; it cannot be a struct field path like 'foo.bar'.
- (::Google::Protobuf::Map{::String => ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::MissingValueInterpretation}) — A map indicating how to interpret missing values for some fields. Missing values are fields present in the user schema but missing in rows. The key is the field name; the value is the interpretation of missing values for that field.
For example, a map {'foo': NULL_VALUE, 'bar': DEFAULT_VALUE} means all missing values in field foo are interpreted as NULL, and all missing values in field bar are interpreted as the default value of field bar in the table schema.
If a field is not in this map and has missing values, the missing values in that field are interpreted as NULL.
This field only applies to the current request; it does not affect other requests on the connection.
Currently, the field name can only be a top-level column name; it cannot be a struct field path like 'foo.bar'.
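The resolution rules above (per-field interpretation from the map, NULL for unmapped fields) can be sketched in plain Ruby; the helper below is hypothetical and simply models the documented semantics:

```ruby
# For each schema field missing from a row, look up its interpretation
# in the map; fields absent from the map fall back to NULL, matching
# the documented default.
def resolve_missing(schema_fields, row, interpretations)
  (schema_fields - row.keys).to_h do |field|
    [field, interpretations.fetch(field, :NULL_VALUE)]
  end
end

schema = %w[foo bar baz]
row    = { "baz" => 42 }
map    = { "foo" => :NULL_VALUE, "bar" => :DEFAULT_VALUE }

p resolve_missing(schema, row, map)
# => {"foo"=>:NULL_VALUE, "bar"=>:DEFAULT_VALUE}
```

Field baz is present in the row, so it needs no interpretation; foo and bar are missing and resolve from the map.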
#offset
def offset() -> ::Google::Protobuf::Int64Value
- (::Google::Protobuf::Int64Value) — If present, the write is only performed if the next append offset is the same as the provided value. If not present, the write is performed at the current end of stream. Specifying a value for this field is not allowed when calling AppendRows for the '_default' stream.
#offset=
def offset=(value) -> ::Google::Protobuf::Int64Value
- value (::Google::Protobuf::Int64Value) — If present, the write is only performed if the next append offset is the same as the provided value. If not present, the write is performed at the current end of stream. Specifying a value for this field is not allowed when calling AppendRows for the '_default' stream.
- (::Google::Protobuf::Int64Value) — If present, the write is only performed if the next append offset is the same as the provided value. If not present, the write is performed at the current end of stream. Specifying a value for this field is not allowed when calling AppendRows for the '_default' stream.
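The offset precondition can be modeled with a small plain-Ruby simulation (hypothetical method and status symbols, not the client library API): the append succeeds only when the provided offset matches the stream's next append offset, and an omitted offset always appends at the end.

```ruby
# Simulate the offset precondition. Returns a [status, next_offset] pair;
# the :offset_mismatch status is a stand-in for the server-side error.
def try_append(next_offset, rows, offset: nil)
  return [:offset_mismatch, next_offset] if offset && offset != next_offset
  [:ok, next_offset + rows.length]
end

status, next_offset = try_append(0, %w[r1 r2], offset: 0)
puts status      # => ok
puts next_offset # => 2

status, _ = try_append(2, %w[r3], offset: 5)
puts status      # => offset_mismatch
```

Supplying offsets this way gives exactly-once semantics on explicitly created streams: a retried request with a stale offset is rejected instead of duplicating rows.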
#proto_rows
def proto_rows() -> ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData
- (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData) — Rows in proto format.
#proto_rows=
def proto_rows=(value) -> ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData
- value (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData) — Rows in proto format.
- (::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest::ProtoData) — Rows in proto format.
#trace_id
def trace_id() -> ::String
- (::String) — ID set by the client to annotate its identity. Only the setting on the initial request is respected.
#trace_id=
def trace_id=(value) -> ::String
- value (::String) — ID set by the client to annotate its identity. Only the setting on the initial request is respected.
- (::String) — ID set by the client to annotate its identity. Only the setting on the initial request is respected.
#write_stream
def write_stream() -> ::String
- (::String) — Required. The write_stream identifies the append operation. It must be provided in the following scenarios:
- In the first request to an AppendRows connection.
- In all subsequent requests to an AppendRows connection, if you use the same connection to write to multiple tables or change the input schema for default streams.
For explicitly created write streams, the format is:
projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}
For the special default stream, the format is:
projects/{project}/datasets/{dataset}/tables/{table}/streams/_default
An example of a possible sequence of requests with write_stream fields within a single connection:
r1: {write_stream: stream_name_1}
r2: {write_stream: /omit/}
r3: {write_stream: /omit/}
r4: {write_stream: stream_name_2}
r5: {write_stream: stream_name_2}
The destination changed in request r4, so the write_stream field must be populated in all subsequent requests in this stream.
#write_stream=
def write_stream=(value) -> ::String
- value (::String) — Required. The write_stream identifies the append operation. It must be provided in the following scenarios:
- In the first request to an AppendRows connection.
- In all subsequent requests to an AppendRows connection, if you use the same connection to write to multiple tables or change the input schema for default streams.
For explicitly created write streams, the format is:
projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}
For the special default stream, the format is:
projects/{project}/datasets/{dataset}/tables/{table}/streams/_default
An example of a possible sequence of requests with write_stream fields within a single connection:
r1: {write_stream: stream_name_1}
r2: {write_stream: /omit/}
r3: {write_stream: /omit/}
r4: {write_stream: stream_name_2}
r5: {write_stream: stream_name_2}
The destination changed in request r4, so the write_stream field must be populated in all subsequent requests in this stream.
- (::String) — Required. The write_stream identifies the append operation. It must be provided in the following scenarios:
- In the first request to an AppendRows connection.
- In all subsequent requests to an AppendRows connection, if you use the same connection to write to multiple tables or change the input schema for default streams.
For explicitly created write streams, the format is:
projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}
For the special default stream, the format is:
projects/{project}/datasets/{dataset}/tables/{table}/streams/_default
An example of a possible sequence of requests with write_stream fields within a single connection:
r1: {write_stream: stream_name_1}
r2: {write_stream: /omit/}
r3: {write_stream: /omit/}
r4: {write_stream: stream_name_2}
r5: {write_stream: stream_name_2}
The destination changed in request r4, so the write_stream field must be populated in all subsequent requests in this stream.
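The sequencing rule above can be sketched as a small plain-Ruby validator (hypothetical, not part of the client library): write_stream may be omitted only once a destination has been established on the connection, and an explicit value switches the destination for subsequent requests.

```ruby
# Track the effective destination across a connection and raise when a
# request omits write_stream before any destination has been established.
def effective_streams(requests)
  current = nil
  requests.map do |req|
    current = req[:write_stream] if req[:write_stream]
    raise ArgumentError, "write_stream required on first request" unless current
    current
  end
end

requests = [
  { write_stream: "stream_name_1" },
  {},                                # omitted: reuses stream_name_1
  {},
  { write_stream: "stream_name_2" }, # destination changed
  { write_stream: "stream_name_2" }
]

p effective_streams(requests)
# => ["stream_name_1", "stream_name_1", "stream_name_1", "stream_name_2", "stream_name_2"]
```

This mirrors the r1–r5 example in the write_stream documentation: r2 and r3 inherit stream_name_1, while r4 switches the destination and r5 must repeat it.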