The natural language speech audio to be processed.
A single request can contain up to 1 minute of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256
bytes.
For non-streaming audio detect intent, both `config` and `audio` must be
provided.
For streaming audio detect intent, `config` must be provided in
the first request and `audio` must be provided in all following requests.
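The streaming contract above (config first, then audio-only requests) can be sketched as a plain-Python generator. This is an illustrative sketch only: the dict shapes and field names mirror the proto fields (`config`, `audio`) but it does not use the real Dialogflow CX client library.

```python
def streaming_requests(config, audio_chunks):
    """Yield detect-intent style requests following the streaming contract:
    the first request carries only the audio config, and every later
    request carries a chunk of audio bytes (illustrative shapes only)."""
    # First request: config only, no audio payload.
    yield {"audio_input": {"config": config}}
    # All following requests: audio only.
    for chunk in audio_chunks:
        yield {"audio_input": {"audio": chunk}}

# Example: a 16 kHz linear-16 config and three fake 20 ms audio chunks.
config = {"audio_encoding": "AUDIO_ENCODING_LINEAR16", "sample_rate_hertz": 16000}
chunks = [b"\x00" * 640] * 3
requests = list(streaming_requests(config, chunks))
```

With a real client you would feed each yielded request into the streaming detect-intent call; the key point is only the ordering of config versus audio.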
Last updated 2025-03-21 UTC.