Reference documentation and code samples for the Cloud Speech-to-Text V2 API class Google::Cloud::Speech::V2::StreamingRecognizeResponse.
`StreamingRecognizeResponse` is the only message returned to the client by
`StreamingRecognize`. A series of zero or more `StreamingRecognizeResponse`
messages is streamed back to the client. If there is no recognizable
audio, no messages are streamed back to the client.
Here are some examples of `StreamingRecognizeResponse`s that might
be returned while processing audio:

1. results { alternatives { transcript: "tube" } stability: 0.01 }

2. results { alternatives { transcript: "to be a" } stability: 0.01 }

3. results { alternatives { transcript: "to be" } stability: 0.9 }
   results { alternatives { transcript: " or not to be" } stability: 0.01 }

4. results { alternatives { transcript: "to be or not to be"
             confidence: 0.92 }
   alternatives { transcript: "to bee or not to bee" }
   is_final: true }

5. results { alternatives { transcript: " that's" } stability: 0.01 }

6. results { alternatives { transcript: " that is" } stability: 0.9 }
   results { alternatives { transcript: " the question" } stability: 0.01 }

7. results { alternatives { transcript: " that is the question"
             confidence: 0.98 }
   alternatives { transcript: " that was the question" }
   is_final: true }
Notes:

- Only two of the above responses, #4 and #7, contain final results; they are
  indicated by `is_final: true`. Concatenating these together generates the
  full transcript: "to be or not to be that is the question".

- The others contain interim `results`. #3 and #6 contain two interim
  `results`: the first portion has a high stability and is less likely to
  change; the second portion has a low stability and is very likely to
  change. A UI designer might choose to show only high stability `results`.

- The specific `stability` and `confidence` values shown above are only for
  illustrative purposes. Actual values may vary.

- In each response, only one of these fields will be set:
  `error`, `speech_event_type`, or one or more (repeated) `results`.
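The transcript-assembly rule from the first note can be sketched in Ruby. The `Struct` stand-ins below are assumptions for illustration, not the generated protobuf classes, so the sketch runs without a live `StreamingRecognize` session:

```ruby
# Stand-in Structs for the generated message classes (assumed shapes only).
Alternative = Struct.new(:transcript, :confidence)
Result      = Struct.new(:alternatives, :is_final, :stability)
Response    = Struct.new(:results)

# Simplified versions of responses #3, #4, and #7 from the examples above.
responses = [
  Response.new([Result.new([Alternative.new("to be", nil)], false, 0.9),
                Result.new([Alternative.new(" or not to be", nil)], false, 0.01)]),
  Response.new([Result.new([Alternative.new("to be or not to be", 0.92)], true, nil)]),
  Response.new([Result.new([Alternative.new(" that is the question", 0.98)], true, nil)])
]

# Concatenating the top alternative of every is_final result reproduces the
# full transcript; interim (is_final: false) results are skipped.
transcript = responses.flat_map(&:results)
                      .select(&:is_final)
                      .map { |r| r.alternatives.first.transcript }
                      .join

puts transcript # => "to be or not to be that is the question"
```

The same selection logic applies unchanged when iterating a real response enumerable, since each response carries at most one settled (`is_final: true`) portion.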
Inherits
- Object

Extended By
- Google::Protobuf::MessageExts::ClassMethods

Includes
- Google::Protobuf::MessageExts

Methods

#metadata

    def metadata() -> ::Google::Cloud::Speech::V2::RecognitionResponseMetadata

Returns
- (::Google::Cloud::Speech::V2::RecognitionResponseMetadata) — Metadata about the recognition.

#metadata=

    def metadata=(value) -> ::Google::Cloud::Speech::V2::RecognitionResponseMetadata

Parameter
- value (::Google::Cloud::Speech::V2::RecognitionResponseMetadata) — Metadata about the recognition.

Returns
- (::Google::Cloud::Speech::V2::RecognitionResponseMetadata) — Metadata about the recognition.

#results

    def results() -> ::Array<::Google::Cloud::Speech::V2::StreamingRecognitionResult>

Returns
- (::Array<::Google::Cloud::Speech::V2::StreamingRecognitionResult>) — This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one `is_final=true` result (the newly settled portion), followed by zero or more `is_final=false` results (the interim results).

#results=

    def results=(value) -> ::Array<::Google::Cloud::Speech::V2::StreamingRecognitionResult>

Parameter
- value (::Array<::Google::Cloud::Speech::V2::StreamingRecognitionResult>) — This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one `is_final=true` result (the newly settled portion), followed by zero or more `is_final=false` results (the interim results).

Returns
- (::Array<::Google::Cloud::Speech::V2::StreamingRecognitionResult>) — This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one `is_final=true` result (the newly settled portion), followed by zero or more `is_final=false` results (the interim results).

#speech_event_offset

    def speech_event_offset() -> ::Google::Protobuf::Duration

Returns
- (::Google::Protobuf::Duration) — Time offset between the beginning of the audio and event emission.

#speech_event_offset=

    def speech_event_offset=(value) -> ::Google::Protobuf::Duration

Parameter
- value (::Google::Protobuf::Duration) — Time offset between the beginning of the audio and event emission.

Returns
- (::Google::Protobuf::Duration) — Time offset between the beginning of the audio and event emission.

#speech_event_type

    def speech_event_type() -> ::Google::Cloud::Speech::V2::StreamingRecognizeResponse::SpeechEventType

Returns
- (::Google::Cloud::Speech::V2::StreamingRecognizeResponse::SpeechEventType) — Indicates the type of speech event.

#speech_event_type=

    def speech_event_type=(value) -> ::Google::Cloud::Speech::V2::StreamingRecognizeResponse::SpeechEventType

Parameter
- value (::Google::Cloud::Speech::V2::StreamingRecognizeResponse::SpeechEventType) — Indicates the type of speech event.

Returns
- (::Google::Cloud::Speech::V2::StreamingRecognizeResponse::SpeechEventType) — Indicates the type of speech event.
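The ordering guarantee on `results` (one settled portion, then volatile interim portions) supports the stability-based filtering a UI might apply. Below is a sketch using assumed `Struct` stand-ins rather than the real `StreamingRecognitionResult` class; the `0.8` threshold is an arbitrary choice for illustration:

```ruby
# Stand-in Structs (assumed shapes, not the generated protobuf classes).
Alternative = Struct.new(:transcript)
Result      = Struct.new(:alternatives, :is_final, :stability)

# Interim response #3 from the examples: a stable portion and a volatile one.
results = [
  Result.new([Alternative.new("to be")], false, 0.9),
  Result.new([Alternative.new(" or not to be")], false, 0.01)
]

# Display only interim portions unlikely to change, i.e. those whose
# stability clears a chosen threshold.
stable_text = results.reject(&:is_final)
                     .select { |r| r.stability >= 0.8 }
                     .map { |r| r.alternatives.first.transcript }
                     .join

puts stable_text # => "to be"
```

A UI would typically render `stable_text` immediately and replace it once an `is_final: true` result for the same audio span arrives.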
Last updated 2025-09-04 UTC.