The natural language speech audio to be processed.
A single request can contain up to 2 minutes of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256 bytes.
For non-streaming audio detect intent, both config and audio must be
provided. For streaming audio detect intent, config must be provided in
the first request, and audio must be provided in all following requests.
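The non-streaming case above can be sketched as follows. This is a minimal, hypothetical example: the `AudioInput`, `InputAudioConfig`, `Audio`, and `Config` members come from this reference, but the encoding value, sample rate, and file name are illustrative assumptions, not requirements.

```csharp
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;
using System.IO;

// Non-streaming detect intent: set both Config and Audio on the same AudioInput.
var audioInput = new AudioInput
{
    // Config guides how the speech recognizer interprets the audio.
    Config = new InputAudioConfig
    {
        AudioEncoding = AudioEncoding.Linear16, // assumed encoding for this sketch
        SampleRateHertz = 16000,                // assumed sample rate
    },
    // Audio holds the raw speech bytes; a single request is limited
    // to 2 minutes of audio data.
    Audio = ByteString.CopyFrom(File.ReadAllBytes("utterance.wav")),
};
```

For the streaming case, by contrast, only `Config` would be populated in the first request, with `Audio` carrying the byte chunks in every subsequent request.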
Last updated 2025-03-21 UTC.