The natural language speech audio to be processed.
A single request can contain up to 2 minutes of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256 bytes.
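As an illustration only, the following sketch shows one way this property might be populated for a non-streaming `DetectIntent` call with the `Google.Cloud.Dialogflow.Cx.V3` client. The session name, file path, `Linear16` encoding, 16 kHz sample rate, and `en-US` language code are assumptions chosen for the example, not values prescribed by this reference.

```csharp
using System.IO;
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;

public class AudioInputSample
{
    // Sketch: detect intent from a short audio file (assumed raw LINEAR16 PCM at
    // 16 kHz and shorter than the 2-minute limit described above).
    public static DetectIntentResponse DetectIntentFromFile(string sessionName, string audioFilePath)
    {
        SessionsClient client = SessionsClient.Create();

        AudioInput audioInput = new AudioInput
        {
            // Config instructs the speech recognizer how to interpret the bytes.
            Config = new InputAudioConfig
            {
                AudioEncoding = AudioEncoding.Linear16,
                SampleRateHertz = 16000,
            },
            // Audio holds the speech data itself.
            Audio = ByteString.CopyFrom(File.ReadAllBytes(audioFilePath)),
        };

        DetectIntentRequest request = new DetectIntentRequest
        {
            Session = sessionName,
            QueryInput = new QueryInput
            {
                Audio = audioInput,
                LanguageCode = "en-US",
            },
        };

        return client.DetectIntent(request);
    }
}
```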
For non-streaming audio detect intent, both config and audio must be
provided.
For streaming audio detect intent, config must be provided in
the first request and audio must be provided in all following requests.
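The streaming rule can be sketched as follows, assuming the GAX bidirectional streaming surface exposed by `SessionsClient.StreamingDetectIntent()`; the session name, chunking scheme, encoding, sample rate, and language code are placeholder assumptions for the example. Only the first request carries `Config`; every later request carries a chunk of `Audio`, mirroring the requirement stated above.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;

public class StreamingAudioInputSample
{
    // Sketch: stream audio chunks for intent detection. The first request sends
    // only the audio Config; each following request sends Audio bytes.
    public static async Task StreamAudioAsync(string sessionName, IEnumerable<byte[]> audioChunks)
    {
        SessionsClient client = await SessionsClient.CreateAsync();
        var stream = client.StreamingDetectIntent();

        // First request: config only, no audio yet.
        await stream.WriteAsync(new StreamingDetectIntentRequest
        {
            Session = sessionName,
            QueryInput = new QueryInput
            {
                Audio = new AudioInput
                {
                    Config = new InputAudioConfig
                    {
                        AudioEncoding = AudioEncoding.Linear16,
                        SampleRateHertz = 16000,
                    },
                },
                LanguageCode = "en-US",
            },
        });

        // All following requests: audio data only.
        foreach (byte[] chunk in audioChunks)
        {
            await stream.WriteAsync(new StreamingDetectIntentRequest
            {
                QueryInput = new QueryInput
                {
                    Audio = new AudioInput { Audio = ByteString.CopyFrom(chunk) },
                    LanguageCode = "en-US",
                },
            });
        }
        await stream.WriteCompleteAsync();

        // Read interim recognition results and the final detect intent response.
        await foreach (StreamingDetectIntentResponse response in stream.GetResponseStream())
        {
            Console.WriteLine(response);
        }
    }
}
```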
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-03-21 UTC."],[[["The `AudioInput` class in the Dialogflow v3 API represents natural speech audio for processing."],["This class implements several interfaces, including `IMessage`, `IEquatable`, `IDeepCloneable`, and `IBufferMessage`, providing functionalities for message handling, equality comparison, deep cloning, and buffer message operations."],["The `AudioInput` class includes properties such as `Audio`, which represents the speech audio data, and `Config`, which defines the speech recognizer's processing instructions."],["The documentation covers multiple versions of the API, with the latest being version 2.23.0, while the current context is version 2.17.0."],["There is a parameter list available for the `AudioInput(AudioInput other)` constructor that provides the `other` parameters description."]]],[]]