public static final class BigQueryStorageGrpc.BigQueryStorageBlockingStub extends AbstractBlockingStub<BigQueryStorageGrpc.BigQueryStorageBlockingStub>
A stub to allow clients to do synchronous rpc calls to service BigQueryStorage.
BigQuery storage API.
The BigQuery storage API can be used to read data stored in BigQuery.
The v1beta1 API is not yet officially deprecated, and will go through a full
deprecation cycle (https://cloud.google.com/products#product-launch-stages)
before the service is turned down. However, new code should use the v1 API
going forward.
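A minimal sketch of constructing this blocking stub from a plain gRPC channel. The endpoint shown is the public BigQuery Storage endpoint; authentication (for example via withCallCredentials) is omitted and would be required for real calls:

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    // Plain TLS channel to the service endpoint; credentials are intentionally omitted here.
    ManagedChannel channel =
        ManagedChannelBuilder.forAddress("bigquerystorage.googleapis.com", 443)
            .useTransportSecurity()
            .build();
    BigQueryStorageGrpc.BigQueryStorageBlockingStub stub =
        BigQueryStorageGrpc.newBlockingStub(channel);

The method sketches further down this page reuse this stub variable.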
Inheritance
java.lang.Object >
io.grpc.stub.AbstractStub >
io.grpc.stub.AbstractBlockingStub >
BigQueryStorageGrpc.BigQueryStorageBlockingStub
Inherited Members
io.grpc.stub.AbstractBlockingStub.<T>newStub(io.grpc.stub.AbstractStub.StubFactory<T>,io.grpc.Channel)
io.grpc.stub.AbstractBlockingStub.<T>newStub(io.grpc.stub.AbstractStub.StubFactory<T>,io.grpc.Channel,io.grpc.CallOptions)
io.grpc.stub.AbstractStub.<T>withOption(io.grpc.CallOptions.Key<T>,T)
io.grpc.stub.AbstractStub.build(io.grpc.Channel,io.grpc.CallOptions)
io.grpc.stub.AbstractStub.getCallOptions()
io.grpc.stub.AbstractStub.getChannel()
io.grpc.stub.AbstractStub.withCallCredentials(io.grpc.CallCredentials)
io.grpc.stub.AbstractStub.withChannel(io.grpc.Channel)
io.grpc.stub.AbstractStub.withCompression(java.lang.String)
io.grpc.stub.AbstractStub.withDeadline(io.grpc.Deadline)
io.grpc.stub.AbstractStub.withDeadlineAfter(java.time.Duration)
io.grpc.stub.AbstractStub.withDeadlineAfter(long,java.util.concurrent.TimeUnit)
io.grpc.stub.AbstractStub.withExecutor(java.util.concurrent.Executor)
io.grpc.stub.AbstractStub.withInterceptors(io.grpc.ClientInterceptor...)
io.grpc.stub.AbstractStub.withMaxInboundMessageSize(int)
io.grpc.stub.AbstractStub.withMaxOutboundMessageSize(int)
io.grpc.stub.AbstractStub.withOnReadyThreshold(int)
io.grpc.stub.AbstractStub.withWaitForReady()
Methods
batchCreateReadSessionStreams(Storage.BatchCreateReadSessionStreamsRequest request)
public Storage.BatchCreateReadSessionStreamsResponse batchCreateReadSessionStreams(Storage.BatchCreateReadSessionStreamsRequest request)
Creates additional streams for a ReadSession. This API can be used to
dynamically adjust the parallelism of a batch processing task upwards by
adding additional workers.
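For illustration, a hedged sketch of requesting two additional streams for an existing session. The stub variable comes from the sketch above, session is a Storage.ReadSession previously returned by createReadSession (described below), and the field names mirror the v1beta1 request proto:

    // Ask the service for two more streams in the same read session.
    Storage.BatchCreateReadSessionStreamsRequest addStreams =
        Storage.BatchCreateReadSessionStreamsRequest.newBuilder()
            .setSession(session)          // ReadSession returned by createReadSession
            .setRequestedStreams(2)       // number of additional streams requested
            .build();
    Storage.BatchCreateReadSessionStreamsResponse added =
        stub.batchCreateReadSessionStreams(addStreams);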
build(Channel channel, CallOptions callOptions)
protected BigQueryStorageGrpc.BigQueryStorageBlockingStub build(Channel channel, CallOptions callOptions)
Parameters
Name | Description
channel | io.grpc.Channel
callOptions | io.grpc.CallOptions
Overrides
io.grpc.stub.AbstractStub.build(io.grpc.Channel,io.grpc.CallOptions)
createReadSession(Storage.CreateReadSessionRequest request)
public Storage.ReadSession createReadSession(Storage.CreateReadSessionRequest request)
Creates a new read session. A read session divides the contents of a
BigQuery table into one or more streams, which can then be used to read
data from the table. The read session also specifies properties of the
data to be read, such as a list of columns or a push-down filter describing
the rows to be returned.
A particular row can be read by at most one stream. When the caller has
reached the end of each stream in the session, then all the data in the
table has been read.
Read sessions automatically expire 6 hours after they are created and do
not require manual clean-up by the caller.
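A sketch of creating a session with the stub from above. The table reference type (TableReferenceProto.TableReference) and field names follow the v1beta1 protos; the project, dataset, and table values are placeholders:

    // Request a session with up to four parallel streams over the given table.
    Storage.CreateReadSessionRequest createRequest =
        Storage.CreateReadSessionRequest.newBuilder()
            .setParent("projects/my-project")
            .setTableReference(
                TableReferenceProto.TableReference.newBuilder()
                    .setProjectId("my-project")
                    .setDatasetId("my_dataset")
                    .setTableId("my_table")
                    .build())
            .setRequestedStreams(4)
            .build();
    Storage.ReadSession session = stub.createReadSession(createRequest);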
finalizeStream(Storage.FinalizeStreamRequest request)
public Empty finalizeStream(Storage.FinalizeStreamRequest request)
Causes a single stream in a ReadSession to gracefully stop. This
API can be used to dynamically adjust the parallelism of a batch processing
task downwards without losing data.
This API does not delete the stream -- it remains visible in the
ReadSession, and any data processed by the stream is not released to other
streams. However, no additional data will be assigned to the stream once
this call completes. Callers must continue reading data on the stream until
the end of the stream is reached so that data which has already been
assigned to the stream will be processed.
This method will return an error if there are no other live streams
in the Session, or if SplitReadStream() has been called on the given
Stream.
Returns
Type | Description
Empty |
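A sketch of winding down one stream of the session created above; the stream field name is taken from the v1beta1 request proto:

    // Stop assigning new data to the first stream; already-assigned data must still be read.
    Storage.FinalizeStreamRequest finalizeRequest =
        Storage.FinalizeStreamRequest.newBuilder()
            .setStream(session.getStreams(0))
            .build();
    stub.finalizeStream(finalizeRequest);   // returns google.protobuf.Empty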
readRows(Storage.ReadRowsRequest request)
public Iterator<Storage.ReadRowsResponse> readRows(Storage.ReadRowsRequest request)
Reads rows from the table in the format prescribed by the read session.
Each response contains one or more table rows, up to a maximum of 10 MiB
per response; read requests which attempt to read individual rows larger
than this will fail.
Each request also returns a set of stream statistics reflecting the
estimated total number of rows in the read stream. This number is computed
based on the total table size and the number of active streams in the read
session, and may change as other streams continue to read data.
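A minimal read loop, assuming the stub and session from the earlier sketches and java.util.Iterator. The read position field names follow the v1beta1 protos, and decoding of the returned row blocks (Avro or Arrow) is left out:

    // Read the first stream of the session from its beginning.
    Storage.ReadRowsRequest rowsRequest =
        Storage.ReadRowsRequest.newBuilder()
            .setReadPosition(
                Storage.StreamPosition.newBuilder()
                    .setStream(session.getStreams(0))
                    .build())
            .build();
    Iterator<Storage.ReadRowsResponse> responses = stub.readRows(rowsRequest);
    while (responses.hasNext()) {
      Storage.ReadRowsResponse response = responses.next();
      // Each response carries a block of serialized rows plus updated stream statistics.
    }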
splitReadStream(Storage.SplitReadStreamRequest request)
public Storage.SplitReadStreamResponse splitReadStream(Storage.SplitReadStreamRequest request)
Splits a given read stream into two Streams. These streams are referred to
as the primary and the residual of the split. The original stream can still
be read from in the same manner as before. Both of the returned streams can
also be read from, and the total rows returned by both child streams will be
the same as the rows read from the original stream.
Moreover, the two child streams will be allocated back to back in the
original Stream. Concretely, it is guaranteed that for streams Original,
Primary, and Residual, that Original[0-j] = Primary[0-j] and
Original[j-n] = Residual[0-m] once the streams have been read to
completion.
This method is guaranteed to be idempotent.
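A hedged sketch of splitting the first stream of the session, assuming the response exposes the primary and remainder streams as in the v1beta1 proto (getPrimaryStream / getRemainderStream):

    // Split one stream into a primary and a residual stream.
    Storage.SplitReadStreamRequest splitRequest =
        Storage.SplitReadStreamRequest.newBuilder()
            .setOriginalStream(session.getStreams(0))
            .build();
    Storage.SplitReadStreamResponse split = stub.splitReadStream(splitRequest);
    Storage.Stream primary = split.getPrimaryStream();
    Storage.Stream residual = split.getRemainderStream();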