StreamingRecognitionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.
While end-user audio is being processed, Dialogflow sends a series of results. Each result may contain a `transcript` value. A transcript represents a portion of the utterance. While the recognizer is processing audio, transcript values may be interim values or finalized values. Once a transcript is finalized, the `is_final` value is set to true and processing continues for the next transcript.
If `StreamingDetectIntentRequest.query_input.audio_config.single_utterance` was true, and the recognizer has completed processing audio, the `message_type` value is set to `END_OF_SINGLE_UTTERANCE` and the following (last) result contains the last finalized transcript.
The complete end-user utterance is determined by concatenating the finalized transcript values received for the series of results.
In the following example, single utterance is enabled. In the case where single utterance is not enabled, result 7 would not occur.
| Num | transcript | message_type | is_final |
|---|---|---|---|
| 1 | "tube" | TRANSCRIPT | false |
| 2 | "to be a" | TRANSCRIPT | false |
| 3 | "to be" | TRANSCRIPT | false |
| 4 | "to be or not to be" | TRANSCRIPT | true |
| 5 | "that's" | TRANSCRIPT | false |
| 6 | "that is" | TRANSCRIPT | false |
| 7 | unset | END_OF_SINGLE_UTTERANCE | unset |
| 8 | " that is the question" | TRANSCRIPT | true |
Concatenating the finalized transcripts with `is_final` set to true, the complete utterance becomes "to be or not to be that is the question".
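This concatenation rule can be sketched with plain Python objects standing in for `StreamingRecognitionResult` messages (the field names mirror the attributes below; the values come from the example table):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Result:
    """Minimal stand-in for StreamingRecognitionResult (illustration only)."""
    message_type: str                 # "TRANSCRIPT" or "END_OF_SINGLE_UTTERANCE"
    transcript: Optional[str] = None  # set only for TRANSCRIPT messages
    is_final: Optional[bool] = None   # set only for TRANSCRIPT messages

# The series of results from the example table above.
results = [
    Result("TRANSCRIPT", "tube", False),
    Result("TRANSCRIPT", "to be a", False),
    Result("TRANSCRIPT", "to be", False),
    Result("TRANSCRIPT", "to be or not to be", True),
    Result("TRANSCRIPT", "that's", False),
    Result("TRANSCRIPT", "that is", False),
    Result("END_OF_SINGLE_UTTERANCE"),
    Result("TRANSCRIPT", " that is the question", True),
]

# Keep only the finalized transcripts and concatenate them in order.
utterance = "".join(
    r.transcript
    for r in results
    if r.message_type == "TRANSCRIPT" and r.is_final
)
print(utterance)  # to be or not to be that is the question
```

Interim results (is_final false) and the END_OF_SINGLE_UTTERANCE marker are skipped; only results 4 and 8 contribute to the final utterance.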
Attributes

| Name | Description |
|---|---|
| message_type | `google.cloud.dialogflow_v2.types.StreamingRecognitionResult.MessageType` Type of the result message. |
| transcript | `str` Transcript text representing the words that the user spoke. Populated if and only if `message_type` = `TRANSCRIPT`. |
| is_final | `bool` If false, the `StreamingRecognitionResult` represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for `message_type` = `TRANSCRIPT`. |
| confidence | `float` The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if `is_final` is true, and you should not rely on it being accurate or even set. |
| speech_word_info | `MutableSequence[google.cloud.dialogflow_v2.types.SpeechWordInfo]` Word-specific information for the words recognized by Speech in `transcript`. Populated if and only if `message_type` = `TRANSCRIPT` and [InputAudioConfig.enable_word_info] is set. |
| speech_end_offset | `google.protobuf.duration_pb2.Duration` Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`. |
| language_code | `str` Detected language code for the transcript. |
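Because 0.0 is a sentinel rather than a real score, code reading `confidence` should distinguish "unset" from "low confidence". A minimal sketch of that check, using a hypothetical stand-in class for the result message:

```python
class R:
    """Minimal stand-in for StreamingRecognitionResult (illustration only)."""
    def __init__(self, confidence: float):
        self.confidence = confidence

def confidence_or_none(result):
    """Return the Speech confidence, or None when the 0.0 sentinel
    indicates confidence was never set."""
    return result.confidence if result.confidence > 0.0 else None

print(confidence_or_none(R(0.87)))  # 0.87
print(confidence_or_none(R(0.0)))   # None
```

Note the docs above also caution that even on finalized results the field may be absent or inaccurate, so treat it as advisory.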
Classes
MessageType
MessageType(value)
Type of the response message.
Values:
- MESSAGE_TYPE_UNSPECIFIED (0): Not specified. Should never be used.
- TRANSCRIPT (1): Message contains a (possibly partial) transcript.
- END_OF_SINGLE_UTTERANCE (2): This event indicates that the server has detected the end of the user's speech utterance and expects no additional inputs. Therefore, the server will not process additional audio (although it may subsequently return additional results). The client should stop sending additional audio data, half-close the gRPC connection, and wait for any additional results until the server closes the gRPC connection. This message is only sent if `single_utterance` was set to true, and is not used otherwise.
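The END_OF_SINGLE_UTTERANCE contract above (stop sending audio, but keep reading until the stream ends) can be sketched with a plain iterator in place of the gRPC response stream; the `stop_sending_audio` callback and the `Result` tuple are illustrative stand-ins, not part of the library API:

```python
from collections import namedtuple

Result = namedtuple("Result", "message_type transcript is_final")

def consume_stream(responses, stop_sending_audio):
    """Drain a stream of recognition results, honoring END_OF_SINGLE_UTTERANCE.

    `responses` yields objects with .message_type / .transcript / .is_final,
    mirroring StreamingRecognitionResult; `stop_sending_audio` is a
    hypothetical callback that half-closes the audio request stream.
    """
    finalized = []
    for result in responses:
        if result.message_type == "END_OF_SINGLE_UTTERANCE":
            # Server expects no more input; stop streaming audio but keep
            # reading, since further finalized results may still arrive.
            stop_sending_audio()
        elif result.message_type == "TRANSCRIPT" and result.is_final:
            finalized.append(result.transcript)
    return "".join(finalized)

# Simulated stream: END_OF_SINGLE_UTTERANCE arrives before the last result.
stream = [
    Result("TRANSCRIPT", "to be or not to be", True),
    Result("END_OF_SINGLE_UTTERANCE", None, None),
    Result("TRANSCRIPT", " that is the question", True),
]
stopped = []
text = consume_stream(iter(stream), lambda: stopped.append(True))
print(text)     # to be or not to be that is the question
print(stopped)  # [True]
```

The key design point is that the loop does not break on END_OF_SINGLE_UTTERANCE: the last finalized transcript can still follow it, as in result 8 of the example table.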