The natural language speech audio to be processed.
A single request can contain up to 1 minute of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256 bytes.
For non-streaming audio detect intent, both `config` and `audio` must be provided.
For streaming audio detect intent, `config` must be provided in the first request and `audio` must be provided in all following requests.
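For the non-streaming case, a single request therefore carries both the `Config` and the `Audio` fields. The following is a minimal sketch of such a call with the .NET client; the resource path, the input file name, and the 16 kHz LINEAR16 encoding are illustrative assumptions, not values taken from this page.

```csharp
using System;
using System.IO;
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;

// Minimal non-streaming detect-intent sketch. The session path and the
// audio file below are placeholders; adjust them to your own agent.
SessionsClient client = SessionsClient.Create();

byte[] audioBytes = File.ReadAllBytes("greeting.wav"); // at most ~1 minute of speech

DetectIntentRequest request = new DetectIntentRequest
{
    Session = "projects/my-project/locations/global/agents/my-agent-id/sessions/my-session-id",
    QueryInput = new QueryInput
    {
        LanguageCode = "en",
        Audio = new AudioInput
        {
            // Config describes how the audio bytes should be interpreted.
            Config = new InputAudioConfig
            {
                AudioEncoding = AudioEncoding.Linear16,
                SampleRateHertz = 16000,
            },
            // Audio holds the speech data itself.
            Audio = ByteString.CopyFrom(audioBytes),
        },
    },
};

DetectIntentResponse response = client.DetectIntent(request);
Console.WriteLine(response.QueryResult.Transcript);
```

In the streaming variant, the same `AudioInput` message is split across requests: the first request sets only `Config`, and each following request carries a chunk of `Audio`.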
`AudioInput` belongs to the `Google.Cloud.Dialogflow.Cx.V3` namespace and exposes two properties: `Audio`, a `ByteString` carrying the speech audio data, and `Config`, an `InputAudioConfig` that specifies how to process the audio. The class implements the `IMessage`, `IEquatable`, `IDeepCloneable`, and `IBufferMessage` interfaces.