`InputAudioConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)`

Instructs the speech recognizer how to process the audio content.
Attributes

| Name | Type | Description |
|---|---|---|
| `audio_encoding` | `google.cloud.dialogflow_v2.types.AudioEncoding` | Required. Audio encoding of the audio content to process. |
| `sample_rate_hertz` | `int` | Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to the Cloud Speech API documentation for more details. |
| `language_code` | `str` | Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for the supported language codes. |
| `enable_word_info` | `bool` | If `true`, Dialogflow returns `SpeechWordInfo` in `StreamingRecognitionResult` with information about the recognized speech words, e.g. start and end time offsets. If `false` or unspecified, Speech doesn't return any word-level information. |
| `phrase_hints` | `MutableSequence[str]` | A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for details. |
| `speech_contexts` | `MutableSequence[google.cloud.dialogflow_v2.types.SpeechContext]` | Context information to assist speech recognition. See the Cloud Speech documentation for details. |
| `model` | `str` | Optional. Which Speech model to select for the given request. For more information, see Speech models. |
| `model_variant` | `google.cloud.dialogflow_v2.types.SpeechModelVariant` | Which variant of the [Speech model][google.cloud.dialogflow.v2.InputAudioConfig.model] to use. |
| `single_utterance` | `bool` | If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer detects a single spoken utterance in the input audio, and recognition ceases when it detects that the voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: this setting is relevant only for streaming methods. Note: when specified, `InputAudioConfig.single_utterance` takes precedence over `StreamingDetectIntentRequest.single_utterance`. |
| `disable_no_speech_recognized_event` | `bool` | Only used in `Participants.AnalyzeContent` and `Participants.StreamingAnalyzeContent`. If `false` and recognition doesn't return any result, trigger a `NO_SPEECH_RECOGNIZED` event to the Dialogflow agent. |
| `enable_automatic_punctuation` | `bool` | Enable automatic punctuation at the speech backend. |
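As a sketch of how the fields above fit together, the following builds the configuration as a plain dict; per the constructor signature, a proto-plus message class like this one can also be constructed from such a mapping via its `mapping` parameter. The specific values (16 kHz LINEAR16 English audio, the phrase hints) are illustrative assumptions, not requirements:

```python
# Illustrative InputAudioConfig payload as a plain dict. The same structure
# could be passed to InputAudioConfig(mapping=config); the concrete values
# below are assumptions for the example, not library defaults.
config = {
    "audio_encoding": "AUDIO_ENCODING_LINEAR_16",  # a member of AudioEncoding
    "sample_rate_hertz": 16000,                    # must match the audio source
    "language_code": "en-US",                      # Dialogflow does not translate
    "enable_word_info": True,                      # request per-word time offsets
    "single_utterance": True,                      # stop after one spoken utterance
    "phrase_hints": ["account balance", "transfer funds"],  # bias recognition
}

# The three fields marked Required in the table above must always be present.
required = {"audio_encoding", "sample_rate_hertz", "language_code"}
assert required <= config.keys()
```

Because `single_utterance` is set here, it would take precedence over any `StreamingDetectIntentRequest.single_utterance` value on a streaming request that carries this config.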