The natural language speech audio to be processed.
A single request can contain up to 1 minute of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256
bytes.
For non-streaming audio detect intent, both `config` and `audio` must be
provided.
For streaming audio detect intent, `config` must be provided in
the first request and `audio` must be provided in all following requests.
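The streaming contract above (configuration in the first request, audio in every later one) can be sketched as a request generator. This is an illustrative sketch in Python using plain dicts as stand-ins for the real `StreamingDetectIntentRequest` protobuf messages; the helper name and dict keys mirror the field names but are not part of any client library.

```python
def streaming_requests(config, audio_chunks):
    """Yield detect-intent requests in the order the streaming API expects.

    The first request carries only the input audio config; each
    subsequent request carries one chunk of raw audio bytes. Plain
    dicts stand in for the real protobuf request messages.
    """
    # First request: config only, no audio payload.
    yield {"query_input": {"audio": {"config": config}}}
    # All following requests: audio only, no config.
    for chunk in audio_chunks:
        yield {"query_input": {"audio": {"audio": chunk}}}


# Example: three 100 ms chunks of 16 kHz, 16-bit linear PCM audio
# (3200 bytes each), preceded by the config-only request.
config = {"audio_encoding": "AUDIO_ENCODING_LINEAR_16",
          "sample_rate_hertz": 16000}
chunks = [b"\x00" * 3200 for _ in range(3)]
requests = list(streaming_requests(config, chunks))
```

With the real client, the same ordering applies: send one config-bearing request, then stream audio-bearing requests until the utterance ends.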
Last updated 2025-04-08 UTC.

The `AudioInput` class in the Google Cloud Dialogflow CX v3 API (.NET) represents natural speech audio for processing. It is available in every version of the Dialogflow v3 API from 1.0.0 through the latest, 2.23.0. The class exposes an `Audio` property (the speech data as a `ByteString`) and a `Config` property (an `InputAudioConfig` that instructs the speech recognizer how to process the audio); it implements `IMessage`, `IEquatable`, `IDeepCloneable`, and `IBufferMessage`, and inherits from `Object`.