Reference documentation and code samples for the BigQuery Storage V1 API class Google::Cloud::Bigquery::Storage::V1::ReadRowsResponse.
Response from calling ReadRows may include row data, progress, and throttling information.
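Example
A minimal sketch of how these responses are typically consumed, assuming a read session has already been created; the stream name below is a placeholder.
require "google/cloud/bigquery/storage/v1"

client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new
# Placeholder stream name; real names come from BigQueryRead::Client#create_read_session.
stream_name = "projects/my-project/locations/us/sessions/my-session/streams/my-stream"

# #read_rows returns an enumerable of ReadRowsResponse messages.
client.read_rows(read_stream: stream_name, offset: 0).each do |response|
  puts "rows in this block: #{response.row_count}"
end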
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#arrow_record_batch
def arrow_record_batch() -> ::Google::Cloud::Bigquery::Storage::V1::ArrowRecordBatch
Returns
- (::Google::Cloud::Bigquery::Storage::V1::ArrowRecordBatch) — Serialized row data in Arrow RecordBatch format.
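Example
A sketch of reading the Arrow payload from a response, assuming the read session was created with DataFormat::ARROW; decoding the serialized bytes requires an Arrow reader (for example, the red-arrow gem), which is not shown here.
if response.arrow_record_batch
  bytes = response.arrow_record_batch.serialized_record_batch
  puts "Arrow RecordBatch payload: #{bytes.bytesize} bytes"
end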
#arrow_record_batch=
def arrow_record_batch=(value) -> ::Google::Cloud::Bigquery::Storage::V1::ArrowRecordBatch
Parameter
- value (::Google::Cloud::Bigquery::Storage::V1::ArrowRecordBatch) — Serialized row data in Arrow RecordBatch format.
Returns
- (::Google::Cloud::Bigquery::Storage::V1::ArrowRecordBatch) — Serialized row data in Arrow RecordBatch format.
#arrow_schema
def arrow_schema() -> ::Google::Cloud::Bigquery::Storage::V1::ArrowSchema
Returns
- (::Google::Cloud::Bigquery::Storage::V1::ArrowSchema) — Output only. Arrow schema.
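Example
A sketch of reading the serialized Arrow schema on an Arrow-format session; the bytes are an IPC-serialized schema and need an Arrow reader to decode, which is not shown here.
schema_bytes = response.arrow_schema.serialized_schema if response.arrow_schema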
#avro_rows
def avro_rows() -> ::Google::Cloud::Bigquery::Storage::V1::AvroRows
Returns
- (::Google::Cloud::Bigquery::Storage::V1::AvroRows) — Serialized row data in AVRO format.
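Example
A sketch of handling an Avro rows block, assuming the session was created with DataFormat::AVRO; decoding serialized_binary_rows requires an Avro library plus the schema from #avro_schema, which is not shown here.
if response.avro_rows
  binary = response.avro_rows.serialized_binary_rows
  puts "Avro block: #{binary.bytesize} bytes covering #{response.row_count} rows"
end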
#avro_rows=
def avro_rows=(value) -> ::Google::Cloud::Bigquery::Storage::V1::AvroRows
Parameter
- value (::Google::Cloud::Bigquery::Storage::V1::AvroRows) — Serialized row data in AVRO format.
Returns
- (::Google::Cloud::Bigquery::Storage::V1::AvroRows) — Serialized row data in AVRO format.
#avro_schema
def avro_schema() -> ::Google::Cloud::Bigquery::Storage::V1::AvroSchema
Returns
- (::Google::Cloud::Bigquery::Storage::V1::AvroSchema) — Output only. Avro schema.
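Example
The Avro schema is delivered as a JSON-encoded string; a sketch that inspects the column names, assuming a record-type schema.
require "json"

if response.avro_schema
  schema = JSON.parse(response.avro_schema.schema)
  puts "columns: #{schema["fields"].map { |f| f["name"] }.join(", ")}"
end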
#row_count
def row_count() -> ::Integer
Returns
- (::Integer) — Number of serialized rows in the rows block.
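Example
row_count applies only to the rows block in this response; a sketch of accumulating a stream-wide total, reusing the client and stream_name placeholders from the first sketch above.
total_rows = 0
client.read_rows(read_stream: stream_name, offset: 0).each do |response|
  total_rows += response.row_count
end
puts "total rows read: #{total_rows}"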
#row_count=
def row_count=(value) -> ::Integer
Parameter
- value (::Integer) — Number of serialized rows in the rows block.
Returns
- (::Integer) — Number of serialized rows in the rows block.
#stats
def stats() -> ::Google::Cloud::Bigquery::Storage::V1::StreamStats
Returns
- (::Google::Cloud::Bigquery::Storage::V1::StreamStats) — Statistics for the stream.
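Example
A sketch of reporting stream progress from stats, assuming the server populated StreamStats#progress (at_response_end is a fraction between 0.0 and 1.0).
if response.stats&.progress
  pct = (response.stats.progress.at_response_end * 100).round(1)
  puts "stream progress: #{pct}%"
end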
#stats=
def stats=(value) -> ::Google::Cloud::Bigquery::Storage::V1::StreamStats
Parameter
- value (::Google::Cloud::Bigquery::Storage::V1::StreamStats) — Statistics for the stream.
Returns
- (::Google::Cloud::Bigquery::Storage::V1::StreamStats) — Statistics for the stream.
#throttle_state
def throttle_state() -> ::Google::Cloud::Bigquery::Storage::V1::ThrottleState
Returns
- (::Google::Cloud::Bigquery::Storage::V1::ThrottleState) — Throttling state. If unset, the latest response still describes the current throttling status.
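Example
A sketch of surfacing throttling, assuming ThrottleState#throttle_percent reports how much the connection is being throttled (0 means no throttling, 100 means fully throttled).
if response.throttle_state && response.throttle_state.throttle_percent > 0
  warn "read stream throttled at #{response.throttle_state.throttle_percent}%"
end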
#throttle_state=
def throttle_state=(value) -> ::Google::Cloud::Bigquery::Storage::V1::ThrottleState
Parameter
- value (::Google::Cloud::Bigquery::Storage::V1::ThrottleState) — Throttling state. If unset, the latest response still describes the current throttling status.
Returns
- (::Google::Cloud::Bigquery::Storage::V1::ThrottleState) — Throttling state. If unset, the latest response still describes the current throttling status.
#uncompressed_byte_size
def uncompressed_byte_size() -> ::Integer
Returns
- (::Integer) — Optional. If the row data in this ReadRowsResponse is compressed, then uncompressed_byte_size is the original size of the uncompressed row data. If it is set to a value greater than 0, decompress into a buffer of size uncompressed_byte_size using the compression codec that was requested at session creation time and that is specified in TableReadOptions.response_compression_codec in ReadSession. This value is not set if no response_compression_codec was requested, and it is -1 if the requested compression would not have reduced the size of this ReadRowsResponse's row data. This matches Apache Arrow's behavior described at https://github.com/apache/arrow/issues/15102, where the uncompressed length may be set to -1 to indicate that the data that follows is not compressed, which can be useful for cases where compression does not yield appreciable savings. When uncompressed_byte_size is not greater than 0, the client should skip decompression.
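Example
A sketch of the decompression decision described above, for an Arrow-format session that requested compression at creation time; decompress_lz4_frame is a hypothetical helper standing in for a real LZ4 decoder and is not part of this gem.
payload = response.arrow_record_batch.serialized_record_batch
size = response.uncompressed_byte_size

if size > 0
  # Compressed: decompress into a buffer of `size` bytes using the codec
  # requested in TableReadOptions.response_compression_codec.
  rows = decompress_lz4_frame(payload, size) # hypothetical helper
else
  # Unset or -1: the payload is not compressed; use the bytes as-is.
  rows = payload
end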
#uncompressed_byte_size=
def uncompressed_byte_size=(value) -> ::Integer
Parameter
- value (::Integer) — Optional. If the row data in this ReadRowsResponse is compressed, then uncompressed_byte_size is the original size of the uncompressed row data. If it is set to a value greater than 0, decompress into a buffer of size uncompressed_byte_size using the compression codec that was requested at session creation time and that is specified in TableReadOptions.response_compression_codec in ReadSession. This value is not set if no response_compression_codec was requested, and it is -1 if the requested compression would not have reduced the size of this ReadRowsResponse's row data. This matches Apache Arrow's behavior described at https://github.com/apache/arrow/issues/15102, where the uncompressed length may be set to -1 to indicate that the data that follows is not compressed, which can be useful for cases where compression does not yield appreciable savings. When uncompressed_byte_size is not greater than 0, the client should skip decompression.
Returns
- (::Integer) — Optional. If the row data in this ReadRowsResponse is compressed, then uncompressed_byte_size is the original size of the uncompressed row data. If it is set to a value greater than 0, decompress into a buffer of size uncompressed_byte_size using the compression codec that was requested at session creation time and that is specified in TableReadOptions.response_compression_codec in ReadSession. This value is not set if no response_compression_codec was requested, and it is -1 if the requested compression would not have reduced the size of this ReadRowsResponse's row data. This matches Apache Arrow's behavior described at https://github.com/apache/arrow/issues/15102, where the uncompressed length may be set to -1 to indicate that the data that follows is not compressed, which can be useful for cases where compression does not yield appreciable savings. When uncompressed_byte_size is not greater than 0, the client should skip decompression.