BigQuery Write API (v1)
The Write API lets you write data to BigQuery. For more information about the Write API, see https://cloud.google.com/bigquery/docs/write-api.
Package: @google-cloud/bigquery-storage
Constructors
(constructor)(opts)
constructor(opts?: ClientOptions);
Construct an instance of BigQueryWriteClient.
| Name | Description |
| --- | --- |
| opts | ClientOptions |
Properties
apiEndpoint
static get apiEndpoint(): string;
The DNS address for this API service; the same as servicePath(), kept for compatibility reasons.
auth
auth: gax.GoogleAuth;
bigQueryWriteStub
bigQueryWriteStub?: Promise<{
  [name: string]: Function;
}>;
descriptors
descriptors: Descriptors;
innerApiCalls
innerApiCalls: {
  [name: string]: Function;
};
pathTemplates
pathTemplates: {
  [name: string]: gax.PathTemplate;
};
port
static get port(): number;
The port for this API service.
scopes
static get scopes(): string[];
The scopes needed to make gRPC calls for every method defined in this service.
servicePath
static get servicePath(): string;
The DNS address for this API service.
warn
warn: (code: string, message: string, warnType?: string) => void;
Methods
appendRows(options)
appendRows(options?: CallOptions): gax.CancellableStream;
Appends data to the given stream.
If `offset` is specified, it is checked against the end of the stream. The server returns `OUT_OF_RANGE` in `AppendRowsResponse` if an attempt is made to append at an offset beyond the current end of the stream, or `ALREADY_EXISTS` if the user provides an `offset` that has already been written to. The user can retry with an adjusted offset within the same RPC connection. If `offset` is not specified, the append happens at the end of the stream.
The response contains an optional offset at which the append happened. No offset information is returned for appends to a default stream.
Responses are received in the same order in which requests are sent. There will be one response for each successfully inserted request. Responses may optionally embed error information if the originating AppendRequest was not successfully processed.
The specifics of when successfully appended data is made visible to the table are governed by the type of stream:
* For COMMITTED streams (which includes the default stream), data is visible immediately upon successful append.
* For BUFFERED streams, data is made visible via a subsequent `FlushRows` RPC, which advances a cursor to a newer offset in the stream.
* For PENDING streams, data is not made visible until the stream itself is finalized (via the `FinalizeWriteStream` RPC), and the stream is explicitly committed via the `BatchCommitWriteStreams` RPC.
| Name | Description |
| --- | --- |
| options | CallOptions. Call options. See CallOptions for more details. |

| Type | Description |
| --- | --- |
| gax.CancellableStream | {Stream} An object stream which is both readable and writable. It accepts objects representing [AppendRowsRequest] for its write() method, and will emit objects representing [AppendRowsResponse] on the 'data' event asynchronously. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#bi-directional-streaming) for more details and examples. |
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. The write_stream identifies the target of the append operation, and only
* needs to be specified as part of the first request on the gRPC connection.
* If provided for subsequent requests, it must match the value of the first
* request.
* For explicitly created write streams, the format is:
* `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
* For the special default stream, the format is:
* `projects/{project}/datasets/{dataset}/tables/{table}/_default`.
*/
// const writeStream = 'abc123'
/**
* If present, the write is only performed if the next append offset is same
* as the provided value. If not present, the write is performed at the
* current end of stream. Specifying a value for this field is not allowed
* when calling AppendRows for the '_default' stream.
*/
// const offset = {}
/**
* Rows in proto format.
*/
// const protoRows = {}
/**
* Id set by client to annotate its identity. Only initial request setting is
* respected.
*/
// const traceId = 'abc123'
// Imports the Storage library
const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

// Instantiates a client
const storageClient = new BigQueryWriteClient();

async function callAppendRows() {
  // Construct request
  const request = {
    writeStream,
  };

  // Run request
  const stream = await storageClient.appendRows();
  stream.on('data', response => {
    console.log(response);
  });
  stream.on('error', err => {
    throw err;
  });
  stream.on('end', () => {
    /* API call completed */
  });
  stream.write(request);
  stream.end();
}

callAppendRows();
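The offset checks described above can be modeled with a small in-memory sketch. This is purely illustrative: `makeStream` and the local `appendRows` function below are stand-ins for the server-side behavior, not part of the client library.

```javascript
// Illustrative model of AppendRows offset checking (not part of the
// @google-cloud/bigquery-storage API): a stream tracks its current end,
// and an append with an explicit offset must land exactly there.
function makeStream() {
  return {end: 0};
}

// Returns the offset at which rows were appended, or an error code
// mirroring the codes described above.
function appendRows(stream, rows, offset) {
  if (offset !== undefined) {
    if (offset > stream.end) return {error: 'OUT_OF_RANGE'};   // beyond end of stream
    if (offset < stream.end) return {error: 'ALREADY_EXISTS'}; // offset already written
  }
  const appendedAt = stream.end; // append happens at the end of the stream
  stream.end += rows.length;
  return {offset: appendedAt};
}

const s = makeStream();
console.log(appendRows(s, ['a', 'b']));     // appended at offset 0; end is now 2
console.log(appendRows(s, ['c'], 5).error); // OUT_OF_RANGE
console.log(appendRows(s, ['c'], 0).error); // ALREADY_EXISTS
console.log(appendRows(s, ['c'], 2));       // appended at offset 2
```

As in the real API, a client that receives `ALREADY_EXISTS` or `OUT_OF_RANGE` can retry with an adjusted offset on the same connection.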
batchCommitWriteStreams(request, options)
batchCommitWriteStreams(request?: protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest, options?: CallOptions): Promise<[protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, (protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest | undefined), {} | undefined]>;
Atomically commits a group of PENDING streams that belong to the same parent table.
Streams must be finalized before commit and cannot be committed multiple times. Once a stream is committed, data in the stream becomes available for read operations.
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |

| Type | Description |
| --- | --- |
| Promise<[protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, (protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest \| undefined), {} \| undefined]> | {Promise} The promise which resolves to an array. The first element of the array is an object representing [BatchCommitWriteStreamsResponse]. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples. |
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. Parent table that all the streams should belong to, in the form of
* `projects/{project}/datasets/{dataset}/tables/{table}`.
*/
// const parent = 'abc123'
/**
* Required. The group of streams that will be committed atomically.
*/
// const writeStreams = 'abc123'
// Imports the Storage library
const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

// Instantiates a client
const storageClient = new BigQueryWriteClient();

async function callBatchCommitWriteStreams() {
  // Construct request
  const request = {
    parent,
    writeStreams,
  };

  // Run request
  const response = await storageClient.batchCommitWriteStreams(request);
  console.log(response);
}

callBatchCommitWriteStreams();
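The finalize-then-commit lifecycle for PENDING streams can be sketched as a tiny state machine. This is illustrative only; these helpers model the documented server behavior and are not part of the client library.

```javascript
// Illustrative state machine for PENDING write streams (not part of the
// client library): streams must be finalized before commit, a commit is
// all-or-nothing, and a stream cannot be committed twice.
function makePendingStream(name) {
  return {name, state: 'PENDING', rows: []};
}

function finalizeWriteStream(stream) {
  stream.state = 'FINALIZED'; // no new data may be appended from here on
  return {rowCount: stream.rows.length};
}

function batchCommitWriteStreams(streams) {
  // Atomic: refuse the whole batch if any stream is not ready.
  for (const s of streams) {
    if (s.state !== 'FINALIZED') {
      return {error: `stream ${s.name} not finalized or already committed`};
    }
  }
  for (const s of streams) s.state = 'COMMITTED'; // data becomes readable
  return {commitTime: new Date().toISOString()};
}

const a = makePendingStream('a');
const b = makePendingStream('b');
finalizeWriteStream(a);
console.log(batchCommitWriteStreams([a, b]).error); // b was never finalized
finalizeWriteStream(b);
console.log(batchCommitWriteStreams([a, b]));       // commits both atomically
```

Note that the failed first commit leaves both streams untouched, mirroring the atomicity described above.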
batchCommitWriteStreams(request, options, callback)
batchCommitWriteStreams(request: protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest, options: CallOptions, callback: Callback<protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest | null | undefined, {} | null | undefined>): void;
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest |
| options | CallOptions |
| callback | Callback<protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest \| null \| undefined, {} \| null \| undefined> |

| Type | Description |
| --- | --- |
| void | |
batchCommitWriteStreams(request, callback)
batchCommitWriteStreams(request: protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest, callback: Callback<protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest | null | undefined, {} | null | undefined>): void;
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest |
| callback | Callback<protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsResponse, protos.google.cloud.bigquery.storage.v1.IBatchCommitWriteStreamsRequest \| null \| undefined, {} \| null \| undefined> |

| Type | Description |
| --- | --- |
| void | |
close()
close(): Promise<void>;
Terminate the gRPC channel and close the client.
The client will no longer be usable and all future behavior is undefined.
| Type | Description |
| --- | --- |
| Promise<void> | {Promise} A promise that resolves when the client is closed. |
createWriteStream(request, options)
createWriteStream(request?: protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest, options?: CallOptions): Promise<[protos.google.cloud.bigquery.storage.v1.IWriteStream, (protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest | undefined), {} | undefined]>;
Creates a write stream to the given table. Additionally, every table has a special stream named '_default' to which data can be written. This stream doesn't need to be created using CreateWriteStream. It is a stream that can be used simultaneously by any number of clients. Data written to this stream is considered committed as soon as an acknowledgement is received.
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |

| Type | Description |
| --- | --- |
| Promise<[protos.google.cloud.bigquery.storage.v1.IWriteStream, (protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest \| undefined), {} \| undefined]> | {Promise} The promise which resolves to an array. The first element of the array is an object representing [WriteStream]. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples. |
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. Reference to the table to which the stream belongs, in the format
* of `projects/{project}/datasets/{dataset}/tables/{table}`.
*/
// const parent = 'abc123'
/**
* Required. Stream to be created.
*/
// const writeStream = {}
// Imports the Storage library
const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

// Instantiates a client
const storageClient = new BigQueryWriteClient();

async function callCreateWriteStream() {
  // Construct request
  const request = {
    parent,
    writeStream,
  };

  // Run request
  const response = await storageClient.createWriteStream(request);
  console.log(response);
}

callCreateWriteStream();
createWriteStream(request, options, callback)
createWriteStream(request: protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest, options: CallOptions, callback: Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest | null | undefined, {} | null | undefined>): void;
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest |
| options | CallOptions |
| callback | Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest \| null \| undefined, {} \| null \| undefined> |

| Type | Description |
| --- | --- |
| void | |
createWriteStream(request, callback)
createWriteStream(request: protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest, callback: Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest | null | undefined, {} | null | undefined>): void;
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest |
| callback | Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.ICreateWriteStreamRequest \| null \| undefined, {} \| null \| undefined> |

| Type | Description |
| --- | --- |
| void | |
finalizeWriteStream(request, options)
finalizeWriteStream(request?: protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest, options?: CallOptions): Promise<[protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, (protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest | undefined), {} | undefined]>;
Finalize a write stream so that no new data can be appended to the stream. Finalize is not supported on the '_default' stream.
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |

| Type | Description |
| --- | --- |
| Promise<[protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, (protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest \| undefined), {} \| undefined]> | {Promise} The promise which resolves to an array. The first element of the array is an object representing [FinalizeWriteStreamResponse]. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples. |
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. Name of the stream to finalize, in the form of
* `projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}`.
*/
// const name = 'abc123'
// Imports the Storage library
const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

// Instantiates a client
const storageClient = new BigQueryWriteClient();

async function callFinalizeWriteStream() {
  // Construct request
  const request = {
    name,
  };

  // Run request
  const response = await storageClient.finalizeWriteStream(request);
  console.log(response);
}

callFinalizeWriteStream();
finalizeWriteStream(request, options, callback)
finalizeWriteStream(request: protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest, options: CallOptions, callback: Callback<protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest | null | undefined, {} | null | undefined>): void;
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest |
| options | CallOptions |
| callback | Callback<protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest \| null \| undefined, {} \| null \| undefined> |

| Type | Description |
| --- | --- |
| void | |
finalizeWriteStream(request, callback)
finalizeWriteStream(request: protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest, callback: Callback<protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest | null | undefined, {} | null | undefined>): void;
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest |
| callback | Callback<protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamResponse, protos.google.cloud.bigquery.storage.v1.IFinalizeWriteStreamRequest \| null \| undefined, {} \| null \| undefined> |

| Type | Description |
| --- | --- |
| void | |
flushRows(request, options)
flushRows(request?: protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest, options?: CallOptions): Promise<[protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest | undefined, {} | undefined]>;
Flushes rows to a BUFFERED stream.
If you are appending rows to a BUFFERED stream, the flush operation is required in order for the rows to become available for reading. A flush operation advances the visible cursor in a BUFFERED stream from any previously flushed offset to the offset specified in the request.
Flush is not supported on the `_default` stream, since it is not BUFFERED.
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |

| Type | Description |
| --- | --- |
| Promise<[protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest \| undefined, {} \| undefined]> | {Promise} The promise which resolves to an array. The first element of the array is an object representing [FlushRowsResponse]. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples. |
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. The stream that is the target of the flush operation.
*/
// const writeStream = 'abc123'
/**
 * Ending offset of the flush operation. Rows before this offset (including
 * this offset) will be flushed.
 */
// const offset = {}
// Imports the Storage library
const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

// Instantiates a client
const storageClient = new BigQueryWriteClient();

async function callFlushRows() {
  // Construct request
  const request = {
    writeStream,
  };

  // Run request
  const response = await storageClient.flushRows(request);
  console.log(response);
}

callFlushRows();
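The flush semantics for BUFFERED streams can be modeled locally. This is an illustrative sketch of the documented behavior; these functions are not part of the client library.

```javascript
// Illustrative model of FlushRows on a BUFFERED stream (not part of the
// client library): appended rows stay buffered until a flush advances the
// visible cursor; rows up to and including the flushed offset become readable.
function makeBufferedStream() {
  return {rows: [], flushedOffset: -1};
}

function appendBuffered(stream, rows) {
  stream.rows.push(...rows);
}

function flushRows(stream, offset) {
  // In this sketch, flushing past the appended data or behind a previous
  // flush has no additional effect: the cursor never moves backwards.
  const end = stream.rows.length - 1;
  stream.flushedOffset = Math.max(stream.flushedOffset, Math.min(offset, end));
  return {offset: stream.flushedOffset};
}

function readableRows(stream) {
  return stream.rows.slice(0, stream.flushedOffset + 1);
}

const buffered = makeBufferedStream();
appendBuffered(buffered, ['r0', 'r1', 'r2']);
console.log(readableRows(buffered)); // nothing is visible before a flush
flushRows(buffered, 1);
console.log(readableRows(buffered)); // rows up to and including offset 1
```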
flushRows(request, options, callback)
flushRows(request: protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest, options: CallOptions, callback: Callback<protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest | null | undefined, {} | null | undefined>): void;
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest |
| options | CallOptions |
| callback | Callback<protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest \| null \| undefined, {} \| null \| undefined> |

| Type | Description |
| --- | --- |
| void | |
flushRows(request, callback)
flushRows(request: protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest, callback: Callback<protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest | null | undefined, {} | null | undefined>): void;
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest |
| callback | Callback<protos.google.cloud.bigquery.storage.v1.IFlushRowsResponse, protos.google.cloud.bigquery.storage.v1.IFlushRowsRequest \| null \| undefined, {} \| null \| undefined> |

| Type | Description |
| --- | --- |
| void | |
getProjectId()
getProjectId(): Promise<string>;
| Type | Description |
| --- | --- |
| Promise<string> | |
getProjectId(callback)
getProjectId(callback: Callback<string, undefined, undefined>): void;
| Name | Description |
| --- | --- |
| callback | Callback<string, undefined, undefined> |

| Type | Description |
| --- | --- |
| void | |
getWriteStream(request, options)
getWriteStream(request?: protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest, options?: CallOptions): Promise<[protos.google.cloud.bigquery.storage.v1.IWriteStream, (protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest | undefined), {} | undefined]>;
Gets information about a write stream.
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |

| Type | Description |
| --- | --- |
| Promise<[protos.google.cloud.bigquery.storage.v1.IWriteStream, (protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest \| undefined), {} \| undefined]> | {Promise} The promise which resolves to an array. The first element of the array is an object representing [WriteStream]. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples. |
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. Name of the stream to get, in the form of
* `projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}`.
*/
// const name = 'abc123'
// Imports the Storage library
const {BigQueryWriteClient} = require('@google-cloud/bigquery-storage').v1;

// Instantiates a client
const storageClient = new BigQueryWriteClient();

async function callGetWriteStream() {
  // Construct request
  const request = {
    name,
  };

  // Run request
  const response = await storageClient.getWriteStream(request);
  console.log(response);
}

callGetWriteStream();
getWriteStream(request, options, callback)
getWriteStream(request: protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest, options: CallOptions, callback: Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest | null | undefined, {} | null | undefined>): void;
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest |
| options | CallOptions |
| callback | Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest \| null \| undefined, {} \| null \| undefined> |

| Type | Description |
| --- | --- |
| void | |
getWriteStream(request, callback)
getWriteStream(request: protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest, callback: Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest | null | undefined, {} | null | undefined>): void;
| Name | Description |
| --- | --- |
| request | protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest |
| callback | Callback<protos.google.cloud.bigquery.storage.v1.IWriteStream, protos.google.cloud.bigquery.storage.v1.IGetWriteStreamRequest \| null \| undefined, {} \| null \| undefined> |

| Type | Description |
| --- | --- |
| void | |
initialize()
initialize(): Promise<{
  [name: string]: Function;
}>;
Initialize the client. Performs asynchronous operations (such as authentication) and prepares the client. This function will be called automatically when any class method is called for the first time, but if you need to initialize it before calling an actual method, feel free to call initialize() directly.
You can await on this method if you want to make sure the client is initialized.
| Type | Description |
| --- | --- |
| Promise<{ [name: string]: Function; }> | {Promise} A promise that resolves to an authenticated service stub. |
matchDatasetFromTableName(tableName)
matchDatasetFromTableName(tableName: string): string | number;
Parse the dataset from Table resource.
| Name | Description |
| --- | --- |
| tableName | string. A fully-qualified path representing Table resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the dataset. |
matchDatasetFromWriteStreamName(writeStreamName)
matchDatasetFromWriteStreamName(writeStreamName: string): string | number;
Parse the dataset from WriteStream resource.
| Name | Description |
| --- | --- |
| writeStreamName | string. A fully-qualified path representing WriteStream resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the dataset. |
matchLocationFromReadSessionName(readSessionName)
matchLocationFromReadSessionName(readSessionName: string): string | number;
Parse the location from ReadSession resource.
| Name | Description |
| --- | --- |
| readSessionName | string. A fully-qualified path representing ReadSession resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the location. |
matchLocationFromReadStreamName(readStreamName)
matchLocationFromReadStreamName(readStreamName: string): string | number;
Parse the location from ReadStream resource.
| Name | Description |
| --- | --- |
| readStreamName | string. A fully-qualified path representing ReadStream resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the location. |
matchProjectFromProjectName(projectName)
matchProjectFromProjectName(projectName: string): string | number;
Parse the project from Project resource.
| Name | Description |
| --- | --- |
| projectName | string. A fully-qualified path representing Project resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the project. |
matchProjectFromReadSessionName(readSessionName)
matchProjectFromReadSessionName(readSessionName: string): string | number;
Parse the project from ReadSession resource.
| Name | Description |
| --- | --- |
| readSessionName | string. A fully-qualified path representing ReadSession resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the project. |
matchProjectFromReadStreamName(readStreamName)
matchProjectFromReadStreamName(readStreamName: string): string | number;
Parse the project from ReadStream resource.
| Name | Description |
| --- | --- |
| readStreamName | string. A fully-qualified path representing ReadStream resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the project. |
matchProjectFromTableName(tableName)
matchProjectFromTableName(tableName: string): string | number;
Parse the project from Table resource.
| Name | Description |
| --- | --- |
| tableName | string. A fully-qualified path representing Table resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the project. |
matchProjectFromWriteStreamName(writeStreamName)
matchProjectFromWriteStreamName(writeStreamName: string): string | number;
Parse the project from WriteStream resource.
| Name | Description |
| --- | --- |
| writeStreamName | string. A fully-qualified path representing WriteStream resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the project. |
matchSessionFromReadSessionName(readSessionName)
matchSessionFromReadSessionName(readSessionName: string): string | number;
Parse the session from ReadSession resource.
| Name | Description |
| --- | --- |
| readSessionName | string. A fully-qualified path representing ReadSession resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the session. |
matchSessionFromReadStreamName(readStreamName)
matchSessionFromReadStreamName(readStreamName: string): string | number;
Parse the session from ReadStream resource.
| Name | Description |
| --- | --- |
| readStreamName | string. A fully-qualified path representing ReadStream resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the session. |
matchStreamFromReadStreamName(readStreamName)
matchStreamFromReadStreamName(readStreamName: string): string | number;
Parse the stream from ReadStream resource.
| Name | Description |
| --- | --- |
| readStreamName | string. A fully-qualified path representing ReadStream resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the stream. |
matchStreamFromWriteStreamName(writeStreamName)
matchStreamFromWriteStreamName(writeStreamName: string): string | number;
Parse the stream from WriteStream resource.
| Name | Description |
| --- | --- |
| writeStreamName | string. A fully-qualified path representing WriteStream resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the stream. |
matchTableFromTableName(tableName)
matchTableFromTableName(tableName: string): string | number;
Parse the table from Table resource.
| Name | Description |
| --- | --- |
| tableName | string. A fully-qualified path representing Table resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the table. |
matchTableFromWriteStreamName(writeStreamName)
matchTableFromWriteStreamName(writeStreamName: string): string | number;
Parse the table from WriteStream resource.
| Name | Description |
| --- | --- |
| writeStreamName | string. A fully-qualified path representing WriteStream resource. |

| Type | Description |
| --- | --- |
| string \| number | {string} A string representing the table. |
projectPath(project)
projectPath(project: string): string;
Return a fully-qualified project resource name string.
| Name | Description |
| --- | --- |
| project | string |

| Type | Description |
| --- | --- |
| string | {string} Resource name string. |
readSessionPath(project, location, session)
readSessionPath(project: string, location: string, session: string): string;
Return a fully-qualified readSession resource name string.
| Name | Description |
| --- | --- |
| project | string |
| location | string |
| session | string |

| Type | Description |
| --- | --- |
| string | {string} Resource name string. |
readStreamPath(project, location, session, stream)
readStreamPath(project: string, location: string, session: string, stream: string): string;
Return a fully-qualified readStream resource name string.
| Name | Description |
| --- | --- |
| project | string |
| location | string |
| session | string |
| stream | string |

| Type | Description |
| --- | --- |
| string | {string} Resource name string. |
tablePath(project, dataset, table)
tablePath(project: string, dataset: string, table: string): string;
Return a fully-qualified table resource name string.
| Name | Description |
| --- | --- |
| project | string |
| dataset | string |
| table | string |

| Type | Description |
| --- | --- |
| string | {string} Resource name string. |
writeStreamPath(project, dataset, table, stream)
writeStreamPath(project: string, dataset: string, table: string, stream: string): string;
Return a fully-qualified writeStream resource name string.
| Name | Description |
| --- | --- |
| project | string |
| dataset | string |
| table | string |
| stream | string |

| Type | Description |
| --- | --- |
| string | {string} Resource name string. |
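As a rough illustration of what the path helpers above do, the writeStream format-and-parse logic can be reproduced with a plain template string and a regular expression. The real client builds these from gax.PathTemplate; `parseWriteStreamName` is a hypothetical helper that combines the individual matchProjectFromWriteStreamName / matchDatasetFromWriteStreamName / matchTableFromWriteStreamName / matchStreamFromWriteStreamName methods.

```javascript
// Illustrative re-implementation of the writeStream path helpers (the real
// client uses gax.PathTemplate): format and parse
// projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}.
function writeStreamPath(project, dataset, table, stream) {
  return `projects/${project}/datasets/${dataset}/tables/${table}/streams/${stream}`;
}

// Hypothetical helper returning all four segments at once.
function parseWriteStreamName(writeStreamName) {
  const m = writeStreamName.match(
    /^projects\/([^/]+)\/datasets\/([^/]+)\/tables\/([^/]+)\/streams\/([^/]+)$/
  );
  if (!m) throw new Error(`not a WriteStream resource name: ${writeStreamName}`);
  const [, project, dataset, table, stream] = m;
  return {project, dataset, table, stream};
}

const streamName = writeStreamPath('my-project', 'my_dataset', 'my_table', '123');
console.log(streamName);
// projects/my-project/datasets/my_dataset/tables/my_table/streams/123
console.log(parseWriteStreamName(streamName).dataset); // my_dataset
```

The match* methods on the client behave like single-field accessors over this same parse.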