The natural language speech audio to be processed.
A single request can contain up to 2 minutes of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256 bytes.
For non-streaming audio detect intent, both config and audio must be
provided.
For streaming audio detect intent, config must be provided in
the first request and audio must be provided in all following requests.
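The config-first streaming contract above can be sketched generically. This is a minimal illustration using plain dicts as stand-ins for the actual request and `AudioInput` message types (in a real integration you would build `StreamingDetectIntentRequest` objects from the Dialogflow CX client library); the field names here mirror the documented shape but are assumptions for illustration.

```python
def streaming_requests(config, audio_chunks):
    """Yield streaming detect-intent requests per the documented contract:
    config in the first request only, audio bytes in every following request.
    Dicts are illustrative stand-ins for the real request messages."""
    # First request: recognizer config only, no audio.
    yield {"audio_input": {"config": config}}
    # All following requests: raw audio bytes only.
    for chunk in audio_chunks:
        yield {"audio_input": {"audio": chunk}}

requests = list(
    streaming_requests({"sample_rate_hertz": 16000},
                       [b"\x00\x01", b"\x02\x03"])
)
```

For non-streaming detect intent, by contrast, a single request would carry both `config` and `audio` together.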
Last updated 2025-03-21 UTC.