Google Cloud Media Translation V1beta1 Client - Class StreamingTranslateSpeechConfig (0.3.3)

Reference documentation and code samples for the Google Cloud Media Translation V1beta1 Client class StreamingTranslateSpeechConfig.

Config used for streaming translation.

Generated from protobuf message google.cloud.mediatranslation.v1beta1.StreamingTranslateSpeechConfig

Namespace

Google \ Cloud \ MediaTranslation \ V1beta1

Methods

__construct

Constructor.

Parameters
Name | Description
data array

Optional. Data for populating the Message object.

↳ audio_config Google\Cloud\MediaTranslation\V1beta1\TranslateSpeechConfig

Required. The common config for all the audio content that follows.

↳ single_utterance bool

Optional. If false or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or the maximum time limit is reached, and it may return multiple StreamingTranslateSpeechResults with the is_final flag set to true. If true, the speech translator detects a single spoken utterance: when it detects that the user has paused or stopped speaking, it returns an END_OF_SINGLE_UTTERANCE event and ceases translation. After receiving the END_OF_SINGLE_UTTERANCE event, the client should stop sending requests but keep receiving the remaining responses until the stream is terminated. To construct the complete sentence in a streaming fashion, the client should overwrite the previous partial result when is_final of the previous response is false, or append to it when is_final of the previous response is true.
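
The sketch below shows one way to populate the constructor's data array. It assumes the TranslateSpeechConfig field names from the v1beta1 proto (audio_encoding, source_language_code, target_language_code, sample_rate_hertz); adjust the values to match your audio source.

```php
use Google\Cloud\MediaTranslation\V1beta1\StreamingTranslateSpeechConfig;
use Google\Cloud\MediaTranslation\V1beta1\TranslateSpeechConfig;

// Build the nested audio config first. The values here (LINEAR16 audio at
// 16 kHz, English to French) are illustrative only.
$audioConfig = new TranslateSpeechConfig([
    'audio_encoding' => 'linear16',
    'source_language_code' => 'en-US',
    'target_language_code' => 'fr-FR',
    'sample_rate_hertz' => 16000,
]);

// Populate the streaming config through the constructor's data array.
$streamingConfig = new StreamingTranslateSpeechConfig([
    'audio_config' => $audioConfig,
    'single_utterance' => false,
]);
```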

getAudioConfig

Required. The common config for all the audio content that follows.

Returns
Type | Description
Google\Cloud\MediaTranslation\V1beta1\TranslateSpeechConfig|null

hasAudioConfig

clearAudioConfig

setAudioConfig

Required. The common config for all the audio content that follows.

Parameter
Name | Description
var Google\Cloud\MediaTranslation\V1beta1\TranslateSpeechConfig
Returns
Type | Description
$this
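
A minimal sketch of the accessor methods above. It assumes a TranslateSpeechConfig built from illustrative values; note that getAudioConfig() returns null until the field has been set, which is why the hasAudioConfig() guard is used.

```php
use Google\Cloud\MediaTranslation\V1beta1\StreamingTranslateSpeechConfig;
use Google\Cloud\MediaTranslation\V1beta1\TranslateSpeechConfig;

$streamingConfig = new StreamingTranslateSpeechConfig();

// setAudioConfig() returns $this, so the call can be chained if desired.
$streamingConfig->setAudioConfig(new TranslateSpeechConfig([
    'audio_encoding' => 'linear16',
    'source_language_code' => 'es-ES',
    'target_language_code' => 'en-US',
]));

// Guard against a null return before reading nested fields.
if ($streamingConfig->hasAudioConfig()) {
    $sourceLanguage = $streamingConfig->getAudioConfig()->getSourceLanguageCode();
}
```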

getSingleUtterance

Optional. If false or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or the maximum time limit is reached. It may return multiple StreamingTranslateSpeechResults with the is_final flag set to true.

If true, the speech translator detects a single spoken utterance. When it detects that the user has paused or stopped speaking, it returns an END_OF_SINGLE_UTTERANCE event and ceases translation. After receiving the END_OF_SINGLE_UTTERANCE event, the client should stop sending requests but keep receiving the remaining responses until the stream is terminated. To construct the complete sentence in a streaming fashion, the client should overwrite the previous partial result when is_final of the previous response is false, or append to it when is_final of the previous response is true.

Returns
Type | Description
bool

setSingleUtterance

Optional. If false or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or the maximum time limit is reached. It may return multiple StreamingTranslateSpeechResults with the is_final flag set to true.

If true, the speech translator detects a single spoken utterance. When it detects that the user has paused or stopped speaking, it returns an END_OF_SINGLE_UTTERANCE event and ceases translation. After receiving the END_OF_SINGLE_UTTERANCE event, the client should stop sending requests but keep receiving the remaining responses until the stream is terminated. To construct the complete sentence in a streaming fashion, the client should overwrite the previous partial result when is_final of the previous response is false, or append to it when is_final of the previous response is true.

Parameter
Name | Description
var bool
Returns
Type | Description
$this
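
A sketch of enabling single-utterance mode and attaching the config to the first request on a stream. Wrapping the config in StreamingTranslateSpeechRequest is shown here for illustration; the language codes and encoding are placeholder values.

```php
use Google\Cloud\MediaTranslation\V1beta1\StreamingTranslateSpeechConfig;
use Google\Cloud\MediaTranslation\V1beta1\StreamingTranslateSpeechRequest;
use Google\Cloud\MediaTranslation\V1beta1\TranslateSpeechConfig;

// Enable single-utterance mode: translation stops after the first detected
// utterance and the service sends an END_OF_SINGLE_UTTERANCE event.
$streamingConfig = (new StreamingTranslateSpeechConfig())
    ->setAudioConfig(new TranslateSpeechConfig([
        'audio_encoding' => 'linear16',
        'source_language_code' => 'en-US',
        'target_language_code' => 'de-DE',
    ]))
    ->setSingleUtterance(true);

// The first request on the stream carries only the config; subsequent
// requests carry the audio content chunks.
$firstRequest = (new StreamingTranslateSpeechRequest())
    ->setStreamingConfig($streamingConfig);
```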