Contains a speech recognition result corresponding to a portion of
the audio that is currently being processed or an indication that
this is the end of the single requested utterance.
While end-user audio is being processed, Dialogflow sends a series
of results. Each result may contain a transcript value. A
transcript represents a portion of the utterance. While the
recognizer is processing audio, transcript values may be interim
values or finalized values. Once a transcript is finalized, the
is_final value is set to true and processing continues for the
next transcript.
If
StreamingDetectIntentRequest.query_input.audio.config.single_utterance
was true, and the recognizer has completed processing audio, the
message_type value is set to `END_OF_SINGLE_UTTERANCE` and the
following (last) result contains the last finalized transcript.
The complete end-user utterance is determined by concatenating the
finalized transcript values received for the series of results.
In the following example, single utterance is enabled. If single
utterance were not enabled, result 7 would not occur.
::

    Num | transcript              | message_type            | is_final
    ----+-------------------------+-------------------------+---------
      1 | "tube"                  | TRANSCRIPT              | false
      2 | "to be a"               | TRANSCRIPT              | false
      3 | "to be"                 | TRANSCRIPT              | false
      4 | "to be or not to be"    | TRANSCRIPT              | true
      5 | "that's"                | TRANSCRIPT              | false
      6 | "that is"               | TRANSCRIPT              | false
      7 | unset                   | END_OF_SINGLE_UTTERANCE | unset
      8 | " that is the question" | TRANSCRIPT              | true
Concatenating the finalized transcripts with is_final set to
true, the complete utterance becomes "to be or not to be that is the
question".
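The concatenation rule above can be sketched in a few lines of Python. The `Result` dataclass here is a hypothetical stand-in for `StreamingRecognitionResult`, used only to make the sketch self-contained:

```python
from dataclasses import dataclass


@dataclass
class Result:
    """Hypothetical stand-in for StreamingRecognitionResult."""
    transcript: str
    is_final: bool


# The series of results from the example table above.
results = [
    Result("tube", False),
    Result("to be a", False),
    Result("to be", False),
    Result("to be or not to be", True),     # finalized
    Result("that's", False),
    Result("that is", False),
    Result(" that is the question", True),  # finalized
]

# Keep only the finalized transcripts and concatenate them in order.
utterance = "".join(r.transcript for r in results if r.is_final)
print(utterance)  # to be or not to be that is the question
```

Interim (`is_final = false`) transcripts are dropped because a later result in the same series supersedes them.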
transcript
str
Transcript text representing the words that the user spoke.
Populated if and only if message_type = TRANSCRIPT.
is_final
bool
If false, the StreamingRecognitionResult represents
an interim result that may change. If true, the
recognizer will not return any further hypotheses about this
piece of the audio. May only be populated for
message_type = TRANSCRIPT.
confidence
float
The Speech confidence between 0.0 and 1.0 for the current
portion of audio. A higher number indicates an estimated
greater likelihood that the recognized words are correct.
The default of 0.0 is a sentinel value indicating that
confidence was not set.
This field is typically only provided if is_final is
true and you should not rely on it being accurate or even
set.
stability
float
An estimate of the likelihood that the speech recognizer
will not change its guess about this interim recognition
result:
- If the value is unspecified or 0.0, Dialogflow didn't
compute the stability. In particular, Dialogflow will
only provide stability for TRANSCRIPT results with
is_final = false.
- Otherwise, the value is in (0.0, 1.0] where 0.0 means
completely unstable and 1.0 means completely stable.
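A common use of stability is deciding whether an interim hypothesis is worth rendering in a live-caption UI. A minimal sketch, assuming a result with the fields described here; the 0.8 threshold is an arbitrary illustrative choice, not an API default:

```python
def should_display(is_final: bool, stability: float,
                   threshold: float = 0.8) -> bool:
    """Decide whether to render a hypothesis in a live-caption UI."""
    if is_final:
        return True  # finalized transcripts are always shown
    # 0.0 means stability was not computed; treat it as unstable.
    return stability >= threshold


print(should_display(False, 0.0))  # False: stability not computed
print(should_display(False, 0.9))  # True: stable interim result
print(should_display(True, 0.0))   # True: finalized
```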
speech_word_info
MutableSequence[google.cloud.dialogflowcx_v3.types.SpeechWordInfo]
Word-specific information for the words recognized by Speech
in transcript.
Populated if and only if message_type = TRANSCRIPT
and [InputAudioConfig.enable_word_info] is set.
speech_end_offset
google.protobuf.duration_pb2.Duration
Time offset of the end of this Speech recognition result
relative to the beginning of the audio. Only populated for
message_type = TRANSCRIPT.
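Since speech_end_offset is a protobuf Duration, converting it to fractional seconds means combining its seconds and nanos fields. A small helper, sketched over plain integers rather than the protobuf class itself:

```python
def duration_to_seconds(seconds: int, nanos: int) -> float:
    """Convert a protobuf Duration's (seconds, nanos) pair to fractional seconds."""
    return seconds + nanos / 1e9


# e.g. a result whose recognized speech ended 2.5 s into the audio
print(duration_to_seconds(2, 500_000_000))  # 2.5
```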