Media Translation V1beta1 API - Class Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig (v0.12.0)

Reference documentation and code samples for the Media Translation V1beta1 API class Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig.

Config used for streaming translation.

Inherits
Object

Extended By
Google::Protobuf::MessageExts::ClassMethods

Includes
Google::Protobuf::MessageExts

Methods

#audio_config

    def audio_config() -> ::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig

Returns
(::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig) — Required. The common config for all the following audio contents.

#audio_config=

    def audio_config=(value) -> ::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig

Parameter
value (::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig) — Required. The common config for all the following audio contents.
Returns
(::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig) — Required. The common config for all the following audio contents.

#single_utterance

    def single_utterance() -> ::Boolean

Returns
(::Boolean) — Optional. If false or omitted, the system performs
continuous translation (continuing to wait for and process audio even if
the user pauses speaking) until the client closes the input stream (gRPC
API) or until the maximum time limit has been reached. It may return
multiple StreamingTranslateSpeechResults with the is_final flag set to true.

If true, the speech translator will detect a single spoken utterance.
When it detects that the user has paused or stopped speaking, it will
return an END_OF_SINGLE_UTTERANCE event and cease translation. Once the
client receives the END_OF_SINGLE_UTTERANCE event, it should stop sending
requests but keep receiving the remaining responses until the stream is
terminated. To construct the complete sentence in a streaming way, the
client should overwrite the previous result if the is_final flag of the
previous response was false, or append to it if the flag was true.
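The overwrite-or-append rule above can be expressed as a small helper. The following is a minimal sketch, not part of the library; it assumes each StreamingTranslateSpeechResponse exposes a result whose text_translation_result message carries translation and is_final fields, and it simply keeps the confirmed text separate from the latest tentative hypothesis.

    # Minimal sketch: rebuild the running translation from streaming responses.
    # Assumes response.result.text_translation_result carries `translation` and
    # `is_final`, per the V1beta1 result messages (illustrative, not canonical).
    def assemble_translation responses
      finalized = +"" # text confirmed by results whose is_final was true
      pending   = ""  # latest non-final hypothesis, overwritten on each update

      responses.each do |response|
        result = response.result
        next unless result&.text_translation_result

        text = result.text_translation_result.translation
        if result.text_translation_result.is_final
          finalized << text # this segment is complete: append it
          pending = ""
        else
          pending = text    # still tentative: overwrite the previous hypothesis
        end
      end

      finalized + pending
    end

A real application would update its display incrementally as each response arrives rather than after the stream ends, but the append-versus-overwrite decision is the same.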
#single_utterance=

    def single_utterance=(value) -> ::Boolean

Parameter
value (::Boolean) — Optional. If false or omitted, the system performs
continuous translation (continuing to wait for and process audio even if
the user pauses speaking) until the client closes the input stream (gRPC
API) or until the maximum time limit has been reached. It may return
multiple StreamingTranslateSpeechResults with the is_final flag set to true.

If true, the speech translator will detect a single spoken utterance.
When it detects that the user has paused or stopped speaking, it will
return an END_OF_SINGLE_UTTERANCE event and cease translation. Once the
client receives the END_OF_SINGLE_UTTERANCE event, it should stop sending
requests but keep receiving the remaining responses until the stream is
terminated. To construct the complete sentence in a streaming way, the
client should overwrite the previous result if the is_final flag of the
previous response was false, or append to it if the flag was true.
Returns
(::Boolean) — Optional. If false or omitted, the system performs
continuous translation (continuing to wait for and process audio even if
the user pauses speaking) until the client closes the input stream (gRPC
API) or until the maximum time limit has been reached. It may return
multiple StreamingTranslateSpeechResults with the is_final flag set to true.

If true, the speech translator will detect a single spoken utterance.
When it detects that the user has paused or stopped speaking, it will
return an END_OF_SINGLE_UTTERANCE event and cease translation. Once the
client receives the END_OF_SINGLE_UTTERANCE event, it should stop sending
requests but keep receiving the remaining responses until the stream is
terminated. To construct the complete sentence in a streaming way, the
client should overwrite the previous result if the is_final flag of the
previous response was false, or append to it if the flag was true.
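As an end-to-end sketch of single-utterance mode, the example below builds the config, streams audio chunks, and watches for the END_OF_SINGLE_UTTERANCE event. It assumes the generated SpeechTranslationService::Client with its streaming_translate_speech method, the StreamingTranslateSpeechRequest/Response messages, and a hypothetical local file utterance.raw; the encoding, sample rate, and language codes are placeholders, not a canonical recipe.

    require "google/cloud/media_translation/v1beta1"

    mt = Google::Cloud::MediaTranslation::V1beta1

    # Placeholder audio parameters: adjust encoding, rate, and languages to the input.
    streaming_config = mt::StreamingTranslateSpeechConfig.new(
      audio_config: mt::TranslateSpeechConfig.new(
        audio_encoding:       "linear16",
        sample_rate_hertz:    16_000,
        source_language_code: "en-US",
        target_language_code: "es-ES"
      ),
      single_utterance: true
    )

    client = mt::SpeechTranslationService::Client.new

    # The first request carries only the streaming config; every later request
    # carries a chunk of audio_content.
    requests = Enumerator.new do |yielder|
      yielder << mt::StreamingTranslateSpeechRequest.new(streaming_config: streaming_config)
      File.open "utterance.raw", "rb" do |audio|
        while (chunk = audio.read 4096)
          yielder << mt::StreamingTranslateSpeechRequest.new(audio_content: chunk)
          # In single-utterance mode a real sender would also watch a shared flag
          # and stop yielding chunks once END_OF_SINGLE_UTTERANCE has arrived.
        end
      end
    end

    client.streaming_translate_speech(requests).each do |response|
      if response.speech_event_type == :END_OF_SINGLE_UTTERANCE
        warn "END_OF_SINGLE_UTTERANCE: stop sending audio, keep reading responses"
      end
      result = response.result
      puts result.text_translation_result.translation if result&.text_translation_result
    end

Production code would coordinate the request enumerator and the response loop (for example with a queue, or a stream-input helper if the generated client provides one) so that the sender actually stops pushing audio when the event arrives, while the receiver keeps draining responses until the stream terminates.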
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-09 UTC."],[],[],null,["# Media Translation V1beta1 API - Class Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig (v0.12.0)\n\nVersion latestkeyboard_arrow_down\n\n- [0.12.0 (latest)](/ruby/docs/reference/google-cloud-media_translation-v1beta1/latest/Google-Cloud-MediaTranslation-V1beta1-StreamingTranslateSpeechConfig)\n- [0.11.1](/ruby/docs/reference/google-cloud-media_translation-v1beta1/0.11.1/Google-Cloud-MediaTranslation-V1beta1-StreamingTranslateSpeechConfig)\n- [0.10.0](/ruby/docs/reference/google-cloud-media_translation-v1beta1/0.10.0/Google-Cloud-MediaTranslation-V1beta1-StreamingTranslateSpeechConfig)\n- [0.9.1](/ruby/docs/reference/google-cloud-media_translation-v1beta1/0.9.1/Google-Cloud-MediaTranslation-V1beta1-StreamingTranslateSpeechConfig)\n- [0.8.2](/ruby/docs/reference/google-cloud-media_translation-v1beta1/0.8.2/Google-Cloud-MediaTranslation-V1beta1-StreamingTranslateSpeechConfig)\n- [0.7.0](/ruby/docs/reference/google-cloud-media_translation-v1beta1/0.7.0/Google-Cloud-MediaTranslation-V1beta1-StreamingTranslateSpeechConfig)\n- [0.6.1](/ruby/docs/reference/google-cloud-media_translation-v1beta1/0.6.1/Google-Cloud-MediaTranslation-V1beta1-StreamingTranslateSpeechConfig)\n- [0.5.0](/ruby/docs/reference/google-cloud-media_translation-v1beta1/0.5.0/Google-Cloud-MediaTranslation-V1beta1-StreamingTranslateSpeechConfig)\n- [0.4.5](/ruby/docs/reference/google-cloud-media_translation-v1beta1/0.4.5/Google-Cloud-MediaTranslation-V1beta1-StreamingTranslateSpeechConfig) \nReference documentation and code samples for the Media Translation V1beta1 API class Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig.\n\nConfig used for streaming translation. \n\nInherits\n--------\n\n- Object \n\nExtended By\n-----------\n\n- Google::Protobuf::MessageExts::ClassMethods \n\nIncludes\n--------\n\n- Google::Protobuf::MessageExts\n\nMethods\n-------\n\n### #audio_config\n\n def audio_config() -\u003e ::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig\n\n**Returns**\n\n- ([::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig](./Google-Cloud-MediaTranslation-V1beta1-TranslateSpeechConfig)) --- Required. The common config for all the following audio contents.\n\n### #audio_config=\n\n def audio_config=(value) -\u003e ::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig\n\n**Parameter**\n\n- **value** ([::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig](./Google-Cloud-MediaTranslation-V1beta1-TranslateSpeechConfig)) --- Required. The common config for all the following audio contents. \n**Returns**\n\n- ([::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig](./Google-Cloud-MediaTranslation-V1beta1-TranslateSpeechConfig)) --- Required. The common config for all the following audio contents.\n\n### #single_utterance\n\n def single_utterance() -\u003e ::Boolean\n\n**Returns**\n\n- (::Boolean) --- Optional. 
If `false` or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingTranslateSpeechResult`s with the `is_final` flag set to `true`.\n\n\n If `true`, the speech translator will detect a single spoken utterance.\n When it detects that the user has paused or stopped speaking, it will\n return an `END_OF_SINGLE_UTTERANCE` event and cease translation.\n When the client receives 'END_OF_SINGLE_UTTERANCE' event, the client should\n stop sending the requests. However, clients should keep receiving remaining\n responses until the stream is terminated. To construct the complete\n sentence in a streaming way, one should override (if 'is_final' of previous\n response is false), or append (if 'is_final' of previous response is true).\n\n### #single_utterance=\n\n def single_utterance=(value) -\u003e ::Boolean\n\n**Parameter**\n\n- **value** (::Boolean) --- Optional. If `false` or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingTranslateSpeechResult`s with the `is_final` flag set to `true`.\n\n\n If `true`, the speech translator will detect a single spoken utterance.\n When it detects that the user has paused or stopped speaking, it will\n return an `END_OF_SINGLE_UTTERANCE` event and cease translation.\n When the client receives 'END_OF_SINGLE_UTTERANCE' event, the client should\n stop sending the requests. However, clients should keep receiving remaining\n responses until the stream is terminated. To construct the complete\n sentence in a streaming way, one should override (if 'is_final' of previous\nresponse is false), or append (if 'is_final' of previous response is true). \n**Returns**\n\n- (::Boolean) --- Optional. If `false` or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple `StreamingTranslateSpeechResult`s with the `is_final` flag set to `true`.\n\n\n If `true`, the speech translator will detect a single spoken utterance.\n When it detects that the user has paused or stopped speaking, it will\n return an `END_OF_SINGLE_UTTERANCE` event and cease translation.\n When the client receives 'END_OF_SINGLE_UTTERANCE' event, the client should\n stop sending the requests. However, clients should keep receiving remaining\n responses until the stream is terminated. To construct the complete\n sentence in a streaming way, one should override (if 'is_final' of previous\n response is false), or append (if 'is_final' of previous response is true)."]]