Contains a speech recognition result corresponding to a portion of the audio that is currently being processed, or an indication that this is the end of the single requested utterance.

Example:

1. transcript: "tube"
2. transcript: "to be a"
3. transcript: "to be"
4. transcript: "to be or not to be", is_final: true
5. transcript: " that's"
6. transcript: " that is"
7. message_type: END_OF_SINGLE_UTTERANCE
8. transcript: " that is the question", is_final: true

Only two of the responses contain final results (#4 and #8, indicated by is_final: true). Concatenating these generates the full transcript: "to be or not to be that is the question".

In each response we populate:

- for TRANSCRIPT: transcript and possibly is_final.
- for END_OF_SINGLE_UTTERANCE: only message_type.
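The accumulation rule described above can be sketched with simulated responses. The field names mirror StreamingRecognitionResult, but this uses a plain dataclass as a stand-in, not the client library:

```python
from dataclasses import dataclass

# Illustrative stand-in for StreamingRecognitionResult; field names mirror
# the API message, but this is a sketch, not the real client type.
@dataclass
class Result:
    message_type: str = "TRANSCRIPT"
    transcript: str = ""
    is_final: bool = False

# The eight responses from the example above.
responses = [
    Result(transcript="tube"),
    Result(transcript="to be a"),
    Result(transcript="to be"),
    Result(transcript="to be or not to be", is_final=True),
    Result(transcript=" that's"),
    Result(transcript=" that is"),
    Result(message_type="END_OF_SINGLE_UTTERANCE"),
    Result(transcript=" that is the question", is_final=True),
]

# Only results marked is_final (#4 and #8) contribute to the full transcript;
# interim results are refinements that get superseded.
full_transcript = "".join(r.transcript for r in responses if r.is_final)
print(full_transcript)  # to be or not to be that is the question
```

Note that final transcripts are concatenated as-is: the leading space in " that is the question" is part of the transcript text.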
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if is_final is true, and you should not rely on it being accurate or even set.
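Because 0.0 is a sentinel meaning "not set" rather than "zero confidence", code consuming this field should distinguish the two. A minimal sketch (the helper name is our own, not part of the API):

```python
from typing import Optional

def confidence_or_none(confidence: float) -> Optional[float]:
    """Return the confidence, or None when the sentinel 0.0 means it was unset."""
    # 0.0 is the proto default and signals "confidence not provided";
    # treat it as unknown rather than as a genuinely zero score.
    return None if confidence == 0.0 else confidence

print(confidence_or_none(0.0))   # None
print(confidence_or_none(0.87))  # 0.87
```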
Word-specific information for the words recognized by Speech in [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript]. Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
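When word info is enabled, each final result carries a list of per-word entries alongside the transcript. A hedged sketch of consuming it, again with a simulated stand-in (the real message carries additional fields such as word timing; the names here are assumptions for illustration):

```python
from dataclasses import dataclass

# Illustrative stand-in for a per-word info entry; not the client type.
@dataclass
class WordInfo:
    word: str
    confidence: float

# Word info is only present when message_type == "TRANSCRIPT" and
# word info was enabled in InputAudioConfig.
words = [WordInfo("to", 0.95), WordInfo("be", 0.91)]

# The words, in order, correspond to the recognized transcript.
transcript_from_words = " ".join(w.word for w in words)
print(transcript_from_words)  # to be
```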