SpeechToTextConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Configures speech transcription for ConversationProfile.
Attributes

Name | Description
---|---
speech_model_variant | `google.cloud.dialogflow_v2beta1.types.SpeechModelVariant` The speech model used in speech-to-text. `SPEECH_MODEL_VARIANT_UNSPECIFIED` and `USE_BEST_AVAILABLE` are treated as `USE_ENHANCED`. It can be overridden in `AnalyzeContentRequest` and `StreamingAnalyzeContentRequest` requests. If an enhanced model variant is specified and an enhanced version of the specified model for the language does not exist, the request returns an error.
model | `str` Which Speech model to select. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, Dialogflow auto-selects a model based on other parameters in the SpeechToTextConfig and Agent settings. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details.
phrase_sets | `MutableSequence[str]` List of names of Cloud Speech phrase sets that are used for transcription.
audio_encoding | `google.cloud.dialogflow_v2beta1.types.AudioEncoding` Audio encoding of the audio content to process.
sample_rate_hertz | `int` Sample rate (in Hertz) of the audio content sent in the query. Refer to the Cloud Speech API documentation for more details.
language_code | `str` The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes.
enable_word_info | `bool` If `true`, Dialogflow returns `SpeechWordInfo` in `StreamingRecognitionResult` with information about the recognized speech words, e.g. start and end time offsets. If `false` or unspecified, Speech doesn't return any word-level information.
use_timeout_based_endpointing | `bool` Use timeout-based endpointing, interpreting endpointer sensitivity as seconds of timeout value.
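A minimal sketch of building a `SpeechToTextConfig` and attaching it to a `ConversationProfile`. The `stt_config` field name is assumed from the v2beta1 `ConversationProfile` reference, and the model, encoding, and language values are illustrative only, not recommendations:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

# Construct the speech transcription settings. All field values below are
# examples; choose the model and encoding that match your audio source.
stt_config = dialogflow.SpeechToTextConfig(
    speech_model_variant=dialogflow.SpeechModelVariant.USE_ENHANCED,
    model="phone_call",  # illustrative Cloud Speech model name
    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_word_info=True,  # return SpeechWordInfo with word time offsets
)

# Attach the config to a conversation profile (stt_config is assumed here;
# verify the field name against the ConversationProfile type you use).
profile = dialogflow.ConversationProfile(
    display_name="example-profile",
    stt_config=stt_config,
)
```

The resulting `profile` would then be passed to `ConversationProfilesClient.create_conversation_profile`; the speech model variant can still be overridden per request in `AnalyzeContentRequest` and `StreamingAnalyzeContentRequest`, as noted in the attribute description above.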