The natural language speech audio to be processed.
A single request can contain up to 2 minutes of speech audio data.
The [transcribed
text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain
more than 256 bytes.
For non-streaming audio detect intent, both config and audio must be
provided.
For streaming audio detect intent, config must be provided in
the first request and audio must be provided in all following requests.
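The streaming contract above can be sketched in plain Python. This is an illustrative sketch only: it uses dicts as stand-ins for the `AudioInput` proto, with keys mirroring its `config` and `audio` fields, rather than the real client library.

```python
def build_streaming_requests(config, audio_chunks):
    """Yield streaming detect-intent request payloads per the contract:
    the first request carries only the recognizer config, and every
    following request carries only an audio chunk."""
    # First request: config only, no audio.
    yield {"audio_input": {"config": config}}
    # All following requests: audio only.
    for chunk in audio_chunks:
        yield {"audio_input": {"audio": chunk}}

requests = list(build_streaming_requests(
    # Hypothetical recognizer settings, for illustration only.
    {"audio_encoding": "AUDIO_ENCODING_LINEAR_16", "sample_rate_hertz": 16000},
    [b"\x00\x01", b"\x02\x03"],
))
```

For the non-streaming case, by contrast, a single payload would carry both `config` and `audio` together.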