The natural language speech audio to be processed.
A single request can contain up to 1 minute of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256
bytes.
For non-streaming audio detect intent, both `config` and `audio` must be
provided.
For streaming audio detect intent, `config` must be provided in
the first request and `audio` must be provided in every subsequent request.
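The two request shapes above can be sketched as follows. This is a minimal illustration using plain Python dicts in place of the generated protobuf classes; the field names mirror the `AudioInput` message, but the helper functions themselves are hypothetical, not part of the client library.

```python
def non_streaming_request(config, audio_bytes):
    # Non-streaming detect intent: config and audio travel together
    # in a single request.
    return {"audio_input": {"config": config, "audio": audio_bytes}}


def streaming_requests(config, audio_chunks):
    # Streaming detect intent: the first request carries only the config;
    # every subsequent request carries a chunk of audio.
    yield {"audio_input": {"config": config}}
    for chunk in audio_chunks:
        yield {"audio_input": {"audio": chunk}}
```

For the streaming case, iterating the generator yields one config-only request followed by one request per audio chunk, matching the ordering the API requires.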
- The `AudioInput` class represents natural speech audio for processing within the Google Cloud Dialogflow CX V3 API.
- Version 2.23.0 is the latest available version of the Google.Cloud.Dialogflow.Cx.V3 library; previous versions back to 2.0.0 remain available.
- The `AudioInput` class, derived from Object and implementing several interfaces, handles audio data and is defined in the `Google.Cloud.Dialogflow.Cx.V3` namespace.
- The class can be instantiated with either a default constructor or a constructor that takes another `AudioInput` object as a parameter.
- The `Audio` property stores the speech audio data as a `ByteString`, and the `Config` property defines the speech recognition parameters through the `InputAudioConfig` class.

Last updated 2025-03-21 UTC.