Reference documentation and code samples for the Media Translation V1beta1 API class Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig.
Config used for streaming translation.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#audio_config
def audio_config() -> ::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig
Returns
- (::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig) — Required. The common config for all the audio content that follows in the stream.
#audio_config=
def audio_config=(value) -> ::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig
Parameter
- value (::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig) — Required. The common config for all the audio content that follows in the stream.
Returns
- (::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig) — Required. The common config for all the audio content that follows in the stream.
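For illustration, a minimal sketch of building this message. It assumes the audio_encoding, sample_rate_hertz, source_language_code, and target_language_code fields of TranslateSpeechConfig; the values shown are placeholders.

  require "google/cloud/media_translation/v1beta1"

  # Placeholder values; adjust encoding, sample rate, and language codes
  # to match the audio you will stream.
  audio_config = Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig.new(
    audio_encoding:       "linear16",
    sample_rate_hertz:    16_000,
    source_language_code: "en-US",
    target_language_code: "es-ES"
  )

  # The streaming config wraps the common audio config.
  streaming_config = Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig.new(
    audio_config:     audio_config,
    single_utterance: false
  )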
#single_utterance
def single_utterance() -> ::Boolean
Returns
- (::Boolean) — Optional. If false or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. It may return multiple StreamingTranslateSpeechResults with the is_final flag set to true. If true, the speech translator detects a single spoken utterance. When it detects that the user has paused or stopped speaking, it returns an END_OF_SINGLE_UTTERANCE event and ceases translation. When the client receives the END_OF_SINGLE_UTTERANCE event, it should stop sending requests but keep receiving the remaining responses until the stream is terminated. To construct the complete sentence in a streaming way, override the previous partial result if its is_final flag was false, or append to it if is_final was true.
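The override/append rule above can be sketched as follows. This assumes responses is the enumerable returned by the streaming call and that each result exposes its text and is_final flag through text_translation_result, per the V1beta1 StreamingTranslateSpeechResult message.

  # Sketch of assembling the complete sentence from streaming results.
  finalized = ""   # text from segments whose is_final was true
  partial   = ""   # latest non-final text, overridden on each update

  responses.each do |response|
    result = response.result
    next if result.nil? || result.text_translation_result.nil?

    translation = result.text_translation_result.translation
    if result.text_translation_result.is_final
      # Segment is final: append it and start a fresh partial segment.
      finalized += translation
      partial = ""
    else
      # Segment is still in progress: override the previous partial text.
      partial = translation
    end
  end

  complete_text = finalized + partial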
#single_utterance=
def single_utterance=(value) -> ::Boolean
Parameter
- value (::Boolean) — Optional. If false or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. It may return multiple StreamingTranslateSpeechResults with the is_final flag set to true. If true, the speech translator detects a single spoken utterance. When it detects that the user has paused or stopped speaking, it returns an END_OF_SINGLE_UTTERANCE event and ceases translation. When the client receives the END_OF_SINGLE_UTTERANCE event, it should stop sending requests but keep receiving the remaining responses until the stream is terminated. To construct the complete sentence in a streaming way, override the previous partial result if its is_final flag was false, or append to it if is_final was true.
Returns
- (::Boolean) — Optional. If false or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. It may return multiple StreamingTranslateSpeechResults with the is_final flag set to true. If true, the speech translator detects a single spoken utterance. When it detects that the user has paused or stopped speaking, it returns an END_OF_SINGLE_UTTERANCE event and ceases translation. When the client receives the END_OF_SINGLE_UTTERANCE event, it should stop sending requests but keep receiving the remaining responses until the stream is terminated. To construct the complete sentence in a streaming way, override the previous partial result if its is_final flag was false, or append to it if is_final was true.
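In single-utterance mode the client stops sending audio once END_OF_SINGLE_UTTERANCE arrives while continuing to read responses. A sketch under the following assumptions: audio_config is a TranslateSpeechConfig as shown earlier, responses is the stream returned by the streaming call, END_OF_SINGLE_UTTERANCE is a value of the response's speech_event_type enum, and stop_sending_audio is a hypothetical application-level helper.

  # Enable single-utterance mode on the config.
  streaming_config = Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig.new(
    audio_config:     audio_config,
    single_utterance: true
  )

  responses.each do |response|
    if response.speech_event_type == :END_OF_SINGLE_UTTERANCE
      # Stop pushing further requests (hypothetical helper), but keep
      # reading until the stream terminates so no responses are lost.
      stop_sending_audio
    end

    result = response.result
    next if result.nil? || result.text_translation_result.nil?
    puts result.text_translation_result.translation
  end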