The natural language speech audio to be processed.
A single request can contain up to 2 minutes of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256 bytes.
For non-streaming audio detect intent, both config and audio must be
provided.
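For the non-streaming case, that rule can be sketched as follows in C#. This is a hypothetical illustration, not code from this page: the `AudioInput`, `InputAudioConfig`, `QueryInput`, and `SessionsClient` types are from the Google.Cloud.Dialogflow.Cx.V3 package, while the project, agent, session IDs, and file name are placeholders.

```csharp
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;
using System.IO;

// Non-streaming detect intent: both Config and Audio are set on the
// same AudioInput, which is sent in a single request.
var audioInput = new AudioInput
{
    Config = new InputAudioConfig
    {
        AudioEncoding = AudioEncoding.Linear16,
        SampleRateHertz = 16000,
    },
    // Up to 2 minutes of speech audio data per request.
    Audio = ByteString.CopyFrom(File.ReadAllBytes("utterance.wav")),
};

var request = new DetectIntentRequest
{
    // Placeholder resource names.
    SessionAsSessionName = SessionName.FromProjectLocationAgentSession(
        "my-project", "global", "my-agent", "my-session"),
    QueryInput = new QueryInput
    {
        Audio = audioInput,
        LanguageCode = "en-US",
    },
};

SessionsClient client = SessionsClient.Create();
DetectIntentResponse response = client.DetectIntent(request);
```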
For streaming audio detect intent, config must be provided in
the first request and audio must be provided in all following requests.
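The streaming pattern can be sketched the same way. Again a hedged illustration under the same assumptions: the session path and `audioChunks` buffers are placeholders supplied by the caller; the first write carries only the config, and every later write carries only audio.

```csharp
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;

SessionsClient client = await SessionsClient.CreateAsync();
SessionsClient.StreamingDetectIntentStream stream = client.StreamingDetectIntent();

// First request: config only, no audio bytes yet.
await stream.WriteAsync(new StreamingDetectIntentRequest
{
    Session = "projects/my-project/locations/global/agents/my-agent/sessions/my-session",
    QueryInput = new QueryInput
    {
        Audio = new AudioInput
        {
            Config = new InputAudioConfig
            {
                AudioEncoding = AudioEncoding.Linear16,
                SampleRateHertz = 16000,
            },
        },
        LanguageCode = "en-US",
    },
});

// All following requests: audio only.
foreach (byte[] chunk in audioChunks)  // audioChunks: caller-supplied buffers
{
    await stream.WriteAsync(new StreamingDetectIntentRequest
    {
        QueryInput = new QueryInput
        {
            Audio = new AudioInput { Audio = ByteString.CopyFrom(chunk) },
        },
    });
}
await stream.WriteCompleteAsync();
```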
Last updated 2025-03-21 UTC.

- This page provides .NET reference documentation for the `AudioInput` class in the Google Cloud Dialogflow CX v3 API.
- `AudioInput` represents natural speech audio to be processed by Dialogflow, with a limit of 2 minutes of audio per request.
- The page lists the available versions of the Dialogflow.Cx.V3 API; 2.23.0 is the latest, while this page documents version 2.16.0.
- `AudioInput` exposes two properties: `Audio`, a `ByteString` holding the audio data, and `Config`, an `InputAudioConfig` describing how the audio should be processed.
- `AudioInput` inherits from `object` and implements `IMessage`, `IEquatable`, `IDeepCloneable`, and `IBufferMessage`.