Service that implements the Google Cloud Speech API (v1).
Package
@google-cloud/speech
Constructors
(constructor)(opts)
constructor(opts?: ClientOptions);
Construct an instance of SpeechClient.
Name | Description |
opts | ClientOptions |
Properties
apiEndpoint
static get apiEndpoint(): string;
The DNS address for this API service - same as servicePath(), exists for compatibility reasons.
auth
auth: gax.GoogleAuth;
descriptors
descriptors: Descriptors;
innerApiCalls
innerApiCalls: {
[name: string]: Function;
};
operationsClient
operationsClient: gax.OperationsClient;
pathTemplates
pathTemplates: {
[name: string]: gax.PathTemplate;
};
port
static get port(): number;
The port for this API service.
scopes
static get scopes(): string[];
The scopes needed to make gRPC calls for every method defined in this service.
servicePath
static get servicePath(): string;
The DNS address for this API service.
speechStub
speechStub?: Promise<{
[name: string]: Function;
}>;
warn
warn: (code: string, message: string, warnType?: string) => void;
Methods
_streamingRecognize(options)
_streamingRecognize(options?: CallOptions): gax.CancellableStream;
Performs bidirectional streaming speech recognition: receive results while sending audio. This method is only available via the gRPC API (not REST).
Name | Description |
options | CallOptions. Call options. See CallOptions for more details. |
Type | Description |
gax.CancellableStream | {Stream} An object stream which is both readable and writable. It accepts objects representing [StreamingRecognizeRequest] for the write() method, and will emit objects representing [StreamingRecognizeResponse] on the 'data' event asynchronously. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#bi-directional-streaming) for more details and examples. |
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Provides information to the recognizer that specifies how to process the
* request. The first `StreamingRecognizeRequest` message must contain a
* `streaming_config` message.
*/
// const streamingConfig = {}
/**
* The audio data to be recognized. Sequential chunks of audio data are sent
* in sequential `StreamingRecognizeRequest` messages. The first
* `StreamingRecognizeRequest` message must not contain `audio_content` data
* and all subsequent `StreamingRecognizeRequest` messages must contain
* `audio_content` data. The audio bytes must be encoded as specified in
* `RecognitionConfig`. Note: as with all bytes fields, proto buffers use a
* pure binary representation (not base64). See
* content limits (https://cloud.google.com/speech-to-text/quotas#content).
*/
// const audioContent = Buffer.from('string')
// Imports the Speech library
const {SpeechClient} = require('@google-cloud/speech').v1;
// Instantiates a client
const speechClient = new SpeechClient();
async function callStreamingRecognize() {
// Construct request
const request = {};
// Run request
const stream = await speechClient.streamingRecognize();
stream.on('data', response => {
console.log(response);
});
stream.on('error', err => {
throw err;
});
stream.on('end', () => {
/* API call completed */
});
stream.write(request);
stream.end();
}
callStreamingRecognize();
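For orientation, here is a hedged sketch (not part of the generated sample) of a complete streaming call against the raw _streamingRecognize stream: the first written message carries only a streamingConfig, and every later message carries only audioContent. The file name, encoding, sample rate, and language code are illustrative assumptions.
// Sketch only: assumes a raw LINEAR16, 16 kHz, mono file named 'audio.raw'.
const fs = require('fs');
const {SpeechClient} = require('@google-cloud/speech').v1;

function streamLocalFile() {
  const speechClient = new SpeechClient();
  const stream = speechClient._streamingRecognize();

  stream.on('data', response => console.log(JSON.stringify(response)));
  stream.on('error', err => console.error(err));
  stream.on('end', () => console.log('Streaming call completed'));

  // First message: streaming configuration only, no audio.
  stream.write({
    streamingConfig: {
      config: {
        encoding: 'LINEAR16',      // assumed encoding
        sampleRateHertz: 16000,    // assumed sample rate
        languageCode: 'en-US',     // assumed language
      },
      interimResults: false,
    },
  });

  // Subsequent messages: audio bytes only.
  fs.createReadStream('audio.raw')
    .on('data', chunk => stream.write({audioContent: chunk}))
    .on('end', () => stream.end());
}

streamLocalFile();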
checkLongRunningRecognizeProgress(name)
checkLongRunningRecognizeProgress(name: string): Promise<LROperation<protos.google.cloud.speech.v1.LongRunningRecognizeResponse, protos.google.cloud.speech.v1.LongRunningRecognizeMetadata>>;
Check the status of the long running operation returned by longRunningRecognize().
Name | Description |
name | string. The operation name that will be passed. |
Type | Description |
Promise<LROperation<protos.google.cloud.speech.v1.LongRunningRecognizeResponse, protos.google.cloud.speech.v1.LongRunningRecognizeMetadata>> | {Promise} - The promise which resolves to an object. The decoded operation object has result and metadata fields to get information from. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#long-running-operations) for more details and examples. |
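A minimal sketch (the operation name is a placeholder, not from the original reference) of resuming a previously started long-running recognition and reading the decoded operation:
// Sketch: 'operations/1234567890' stands in for a name returned earlier
// by longRunningRecognize().
const {SpeechClient} = require('@google-cloud/speech').v1;
const speechClient = new SpeechClient();

async function checkProgress() {
  const decodedOperation = await speechClient.checkLongRunningRecognizeProgress(
    'operations/1234567890'
  );
  console.log(decodedOperation.done);      // whether the operation has finished
  console.log(decodedOperation.metadata);  // LongRunningRecognizeMetadata (progress)
  console.log(decodedOperation.result);    // LongRunningRecognizeResponse when done
}
checkProgress();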
close()
close(): Promise<void>;
Terminate the gRPC channel and close the client.
The client will no longer be usable and all future behavior is undefined.
Type | Description |
Promise<void> | {Promise} A promise that resolves when the client is closed. |
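As a brief illustration (not from the original reference), close the client only after the last call has settled, since the client cannot be reused afterwards:
// Sketch: issue one request, then release the underlying gRPC channel.
const {SpeechClient} = require('@google-cloud/speech').v1;

async function recognizeThenClose(request) {
  const speechClient = new SpeechClient();
  try {
    const [response] = await speechClient.recognize(request);
    console.log(response);
  } finally {
    // After close() resolves, further method calls are undefined behavior.
    await speechClient.close();
  }
}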
customClassPath(project, location, customClass)
customClassPath(project: string, location: string, customClass: string): string;
Return a fully-qualified customClass resource name string.
Name | Description |
project | string |
location | string |
customClass | string |
Type | Description |
string | {string} Resource name string. |
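A short sketch (the project, location, and custom class IDs are placeholders) that builds a CustomClass resource name and parses it back apart with the match* helpers documented below:
// Sketch: all identifiers below are placeholders.
const {SpeechClient} = require('@google-cloud/speech').v1;
const speechClient = new SpeechClient();

const customClassName = speechClient.customClassPath(
  'my-project',
  'us-central1',
  'my-custom-class'
);
// e.g. 'projects/my-project/locations/us-central1/customClasses/my-custom-class'

console.log(speechClient.matchProjectFromCustomClassName(customClassName));     // 'my-project'
console.log(speechClient.matchLocationFromCustomClassName(customClassName));    // 'us-central1'
console.log(speechClient.matchCustomClassFromCustomClassName(customClassName)); // 'my-custom-class'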
getProjectId()
getProjectId(): Promise<string>;
Type | Description |
Promise<string> |
getProjectId(callback)
getProjectId(callback: Callback<string, undefined, undefined>): void;
Name | Description |
callback | Callback<string, undefined, undefined> |
Type | Description |
void |
initialize()
initialize(): Promise<{
[name: string]: Function;
}>;
Initialize the client. Performs asynchronous operations (such as authentication) and prepares the client. This function will be called automatically when any class method is called for the first time, but if you need to initialize it before calling an actual method, feel free to call initialize() directly.
You can await on this method if you want to make sure the client is initialized.
Type | Description |
Promise<{ [name: string]: Function; }> | {Promise} A promise that resolves to an authenticated service stub. |
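For example (a minimal sketch, not from the original reference), the client can be initialized eagerly before the first request:
// Sketch: force authentication and stub creation up front rather than
// lazily on the first method call.
const {SpeechClient} = require('@google-cloud/speech').v1;

async function main() {
  const speechClient = new SpeechClient();
  await speechClient.initialize();
  // Later calls such as recognize() reuse the already-initialized stub.
}
main();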
longRunningRecognize(request, options)
longRunningRecognize(request?: protos.google.cloud.speech.v1.ILongRunningRecognizeRequest, options?: CallOptions): Promise<[LROperation<protos.google.cloud.speech.v1.ILongRunningRecognizeResponse, protos.google.cloud.speech.v1.ILongRunningRecognizeMetadata>, protos.google.longrunning.IOperation | undefined, {} | undefined]>;
Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an Operation.error or an Operation.response which contains a LongRunningRecognizeResponse message. For more information on asynchronous speech recognition, see the [how-to](https://cloud.google.com/speech-to-text/docs/async-recognize).
Name | Description |
request | protos.google.cloud.speech.v1.ILongRunningRecognizeRequest. The request object that will be sent. |
options | CallOptions. Call options. See CallOptions for more details. |
Type | Description |
Promise<[LROperation<protos.google.cloud.speech.v1.ILongRunningRecognizeResponse, protos.google.cloud.speech.v1.ILongRunningRecognizeMetadata>, protos.google.longrunning.IOperation | undefined, {} | undefined]> | {Promise} - The promise which resolves to an array. The first element of the array is an object representing a long running operation. Its promise() method returns a promise you can await for. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#long-running-operations) for more details and examples. |
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. Provides information to the recognizer that specifies how to
* process the request.
*/
// const config = {}
/**
* Required. The audio data to be recognized.
*/
// const audio = {}
/**
* Optional. Specifies an optional destination for the recognition results.
*/
// const outputConfig = {}
// Imports the Speech library
const {SpeechClient} = require('@google-cloud/speech').v1;
// Instantiates a client
const speechClient = new SpeechClient();
async function callLongRunningRecognize() {
// Construct request
const request = {
config,
audio,
};
// Run request
const [operation] = await speechClient.longRunningRecognize(request);
const [response] = await operation.promise();
console.log(response);
}
callLongRunningRecognize();
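A more concrete sketch (the Cloud Storage URI, encoding, sample rate, and language code are placeholders, not from the generated sample) that transcribes a file already stored in Cloud Storage and prints each transcript:
// Sketch: transcribe a file already stored in Cloud Storage.
const {SpeechClient} = require('@google-cloud/speech').v1;

async function transcribeGcsFile() {
  const speechClient = new SpeechClient();
  const [operation] = await speechClient.longRunningRecognize({
    config: {
      encoding: 'LINEAR16',        // assumed encoding
      sampleRateHertz: 16000,      // assumed sample rate
      languageCode: 'en-US',       // assumed language
    },
    audio: {
      uri: 'gs://my-bucket/audio.raw',  // placeholder Cloud Storage URI
    },
  });
  // Wait for the operation to complete and print each top transcript.
  const [response] = await operation.promise();
  for (const result of response.results) {
    console.log(result.alternatives[0].transcript);
  }
}
transcribeGcsFile();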
longRunningRecognize(request, options, callback)
longRunningRecognize(request: protos.google.cloud.speech.v1.ILongRunningRecognizeRequest, options: CallOptions, callback: Callback<LROperation<protos.google.cloud.speech.v1.ILongRunningRecognizeResponse, protos.google.cloud.speech.v1.ILongRunningRecognizeMetadata>, protos.google.longrunning.IOperation | null | undefined, {} | null | undefined>): void;
Name | Description |
request | protos.google.cloud.speech.v1.ILongRunningRecognizeRequest |
options | CallOptions |
callback | Callback<LROperation<protos.google.cloud.speech.v1.ILongRunningRecognizeResponse, protos.google.cloud.speech.v1.ILongRunningRecognizeMetadata>, protos.google.longrunning.IOperation | null | undefined, {} | null | undefined> |
Type | Description |
void |
longRunningRecognize(request, callback)
longRunningRecognize(request: protos.google.cloud.speech.v1.ILongRunningRecognizeRequest, callback: Callback<LROperation<protos.google.cloud.speech.v1.ILongRunningRecognizeResponse, protos.google.cloud.speech.v1.ILongRunningRecognizeMetadata>, protos.google.longrunning.IOperation | null | undefined, {} | null | undefined>): void;
Name | Description |
request | protos.google.cloud.speech.v1.ILongRunningRecognizeRequest |
callback | Callback<LROperation<protos.google.cloud.speech.v1.ILongRunningRecognizeResponse, protos.google.cloud.speech.v1.ILongRunningRecognizeMetadata>, protos.google.longrunning.IOperation | null | undefined, {} | null | undefined> |
Type | Description |
void |
matchCustomClassFromCustomClassName(customClassName)
matchCustomClassFromCustomClassName(customClassName: string): string | number;
Parse the custom_class from CustomClass resource.
Name | Description |
customClassName | string. A fully-qualified path representing a CustomClass resource. |
Type | Description |
string | number | {string} A string representing the custom_class. |
matchLocationFromCustomClassName(customClassName)
matchLocationFromCustomClassName(customClassName: string): string | number;
Parse the location from CustomClass resource.
Name | Description |
customClassName | string. A fully-qualified path representing a CustomClass resource. |
Type | Description |
string | number | {string} A string representing the location. |
matchLocationFromPhraseSetName(phraseSetName)
matchLocationFromPhraseSetName(phraseSetName: string): string | number;
Parse the location from PhraseSet resource.
Name | Description |
phraseSetName | string. A fully-qualified path representing a PhraseSet resource. |
Type | Description |
string | number | {string} A string representing the location. |
matchPhraseSetFromPhraseSetName(phraseSetName)
matchPhraseSetFromPhraseSetName(phraseSetName: string): string | number;
Parse the phrase_set from PhraseSet resource.
Name | Description |
phraseSetName | string. A fully-qualified path representing a PhraseSet resource. |
Type | Description |
string | number | {string} A string representing the phrase_set. |
matchProjectFromCustomClassName(customClassName)
matchProjectFromCustomClassName(customClassName: string): string | number;
Parse the project from CustomClass resource.
Name | Description |
customClassName | string. A fully-qualified path representing a CustomClass resource. |
Type | Description |
string | number | {string} A string representing the project. |
matchProjectFromPhraseSetName(phraseSetName)
matchProjectFromPhraseSetName(phraseSetName: string): string | number;
Parse the project from PhraseSet resource.
Name | Description |
phraseSetName | string. A fully-qualified path representing a PhraseSet resource. |
Type | Description |
string | number | {string} A string representing the project. |
phraseSetPath(project, location, phraseSet)
phraseSetPath(project: string, location: string, phraseSet: string): string;
Return a fully-qualified phraseSet resource name string.
Name | Description |
project | string |
location | string |
phraseSet | string |
Type | Description |
string | {string} Resource name string. |
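As with customClassPath above, a short sketch (IDs are placeholders) that builds a PhraseSet resource name and parses it back with the matchers documented above:
// Sketch: the project, location, and phrase set IDs are placeholders.
const {SpeechClient} = require('@google-cloud/speech').v1;
const speechClient = new SpeechClient();

const phraseSetName = speechClient.phraseSetPath('my-project', 'us-central1', 'my-phrase-set');
// e.g. 'projects/my-project/locations/us-central1/phraseSets/my-phrase-set'

console.log(speechClient.matchProjectFromPhraseSetName(phraseSetName));    // 'my-project'
console.log(speechClient.matchLocationFromPhraseSetName(phraseSetName));   // 'us-central1'
console.log(speechClient.matchPhraseSetFromPhraseSetName(phraseSetName));  // 'my-phrase-set'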
recognize(request, options)
recognize(request?: protos.google.cloud.speech.v1.IRecognizeRequest, options?: CallOptions): Promise<[protos.google.cloud.speech.v1.IRecognizeResponse, protos.google.cloud.speech.v1.IRecognizeRequest | undefined, {} | undefined]>;
Performs synchronous speech recognition: receive results after all audio has been sent and processed.
Name | Description |
request | protos.google.cloud.speech.v1.IRecognizeRequest. The request object that will be sent. |
options | CallOptions. Call options. See CallOptions for more details. |
Type | Description |
Promise<[protos.google.cloud.speech.v1.IRecognizeResponse, protos.google.cloud.speech.v1.IRecognizeRequest | undefined, {} | undefined]> | {Promise} - The promise which resolves to an array. The first element of the array is an object representing [RecognizeResponse]. Please see the [documentation](https://github.com/googleapis/gax-nodejs/blob/master/client-libraries.md#regular-methods) for more details and examples. |
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. Provides information to the recognizer that specifies how to
* process the request.
*/
// const config = {}
/**
* Required. The audio data to be recognized.
*/
// const audio = {}
// Imports the Speech library
const {SpeechClient} = require('@google-cloud/speech').v1;
// Instantiates a client
const speechClient = new SpeechClient();
async function callRecognize() {
// Construct request
const request = {
config,
audio,
};
// Run request
const [response] = await speechClient.recognize(request);
console.log(response);
}
callRecognize();
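For orientation, a hedged sketch (the encoding, sample rate, language code, and Cloud Storage URI are assumptions) that extracts the top transcript from each result of a synchronous recognize() call:
// Sketch: short audio (roughly under one minute) stored in Cloud Storage;
// all values below are placeholders.
const {SpeechClient} = require('@google-cloud/speech').v1;

async function printTranscripts() {
  const speechClient = new SpeechClient();
  const [response] = await speechClient.recognize({
    config: {
      encoding: 'LINEAR16',      // assumed encoding
      sampleRateHertz: 16000,    // assumed sample rate
      languageCode: 'en-US',     // assumed language
    },
    audio: {
      uri: 'gs://my-bucket/short-audio.raw',  // placeholder URI
    },
  });
  const transcription = response.results
    .map(result => result.alternatives[0].transcript)
    .join('\n');
  console.log(`Transcription: ${transcription}`);
}
printTranscripts();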
recognize(request, options, callback)
recognize(request: protos.google.cloud.speech.v1.IRecognizeRequest, options: CallOptions, callback: Callback<protos.google.cloud.speech.v1.IRecognizeResponse, protos.google.cloud.speech.v1.IRecognizeRequest | null | undefined, {} | null | undefined>): void;
Name | Description |
request | protos.google.cloud.speech.v1.IRecognizeRequest |
options | CallOptions |
callback | Callback<protos.google.cloud.speech.v1.IRecognizeResponse, protos.google.cloud.speech.v1.IRecognizeRequest | null | undefined, {} | null | undefined> |
Type | Description |
void |
recognize(request, callback)
recognize(request: protos.google.cloud.speech.v1.IRecognizeRequest, callback: Callback<protos.google.cloud.speech.v1.IRecognizeResponse, protos.google.cloud.speech.v1.IRecognizeRequest | null | undefined, {} | null | undefined>): void;
Name | Description |
request | protos.google.cloud.speech.v1.IRecognizeRequest |
callback | Callback<protos.google.cloud.speech.v1.IRecognizeResponse, protos.google.cloud.speech.v1.IRecognizeRequest | null | undefined, {} | null | undefined> |
Type | Description |
void |