The natural language speech audio to be processed.
A single request can contain up to 1 minute of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript]
cannot contain more than 256 bytes.
For non-streaming audio detect intent, both config and audio must be
provided.
For streaming audio detect intent, config must be provided in
the first request and audio must be provided in all subsequent requests.
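For the non-streaming case, this means a single `AudioInput` carries both the processing configuration and the audio bytes. A minimal sketch, assuming the Google.Cloud.Dialogflow.Cx.V3 package and a LINEAR16 recording at 16 kHz (the encoding and sample rate here are illustrative, not requirements):

```csharp
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;

public static class AudioInputExample
{
    // Builds an AudioInput suitable for non-streaming detect intent:
    // both Config and Audio are set on the same message.
    public static AudioInput BuildAudioInput(byte[] audioBytes)
    {
        return new AudioInput
        {
            Config = new InputAudioConfig
            {
                // Assumed format of the captured audio; adjust to match
                // how the audio was actually recorded.
                AudioEncoding = AudioEncoding.Linear16,
                SampleRateHertz = 16000,
            },
            // Raw speech audio (up to 1 minute per request).
            Audio = ByteString.CopyFrom(audioBytes),
        };
    }
}
```

In the streaming case, by contrast, only `Config` would be populated on the first request, with subsequent requests carrying only `Audio`.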
The `AudioInput` class in the Google Cloud Dialogflow CX V3 API for .NET represents natural language speech audio to be processed by Dialogflow. Published versions range from 1.0.0 to the latest, 2.23.0. The class implements the `IMessage`, `IEquatable`, `IDeepCloneable`, and `IBufferMessage` interfaces and exposes two main properties: `Audio`, which holds the audio data as a `ByteString`, and `Config`, an `InputAudioConfig` that specifies how the audio should be processed. Two constructors are provided: a parameterless constructor and one that initializes a new object from an existing `AudioInput` instance.

Last updated 2025-03-21 UTC.
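As a brief illustration of the two constructors, the following sketch (assuming the standard behavior of protobuf-generated message types, where the copying constructor produces an independent clone) shows both in use:

```csharp
using Google.Cloud.Dialogflow.Cx.V3;

// Parameterless constructor: an empty message to populate.
var original = new AudioInput
{
    Config = new InputAudioConfig { SampleRateHertz = 16000 },
};

// Copying constructor: initializes a new AudioInput from an existing one.
// Mutating the copy afterward does not affect the original.
var copy = new AudioInput(original);
copy.Config.SampleRateHertz = 8000;
```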