The natural language speech audio to be processed.
A single request can contain up to 1 minute of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256
bytes.
For non-streaming audio detect intent, both config and audio must be
provided.
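
For example, a non-streaming DetectIntent call in the .NET client can populate both fields on a single AudioInput. This is only a minimal sketch: the project, location, agent, and session identifiers, the query.wav file, and the 16 kHz LINEAR16 encoding are placeholder assumptions, not values from this page.

```csharp
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;
using System;
using System.IO;

public class DetectIntentFromAudioSketch
{
    public static void Run()
    {
        SessionsClient sessionsClient = SessionsClient.Create();

        // Non-streaming: Config and Audio are both set on the same AudioInput.
        AudioInput audioInput = new AudioInput
        {
            Config = new InputAudioConfig
            {
                AudioEncoding = AudioEncoding.Linear16,
                SampleRateHertz = 16000,
            },
            Audio = ByteString.CopyFrom(File.ReadAllBytes("query.wav")),
        };

        DetectIntentRequest request = new DetectIntentRequest
        {
            // Placeholder resource identifiers.
            SessionAsSessionName = SessionName.FromProjectLocationAgentSession(
                "my-project", "global", "my-agent-id", "my-session-id"),
            QueryInput = new QueryInput
            {
                Audio = audioInput,
                LanguageCode = "en-US",
            },
        };

        DetectIntentResponse response = sessionsClient.DetectIntent(request);

        // The recognized speech is surfaced as QueryResult.Transcript.
        Console.WriteLine(response.QueryResult.Transcript);
    }
}
```
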
For streaming audio detect intent, config must be provided in
the first request and audio must be provided in all following requests.
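
A streaming call splits the same two fields across requests: the first StreamingDetectIntentRequest carries only AudioInput.Config, and every later request carries only AudioInput.Audio. The sketch below makes the same placeholder assumptions as above and reads the audio from a local file in fixed-size chunks.

```csharp
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;
using System;
using System.IO;
using System.Threading.Tasks;

public class StreamingDetectIntentFromAudioSketch
{
    public static async Task RunAsync()
    {
        SessionsClient sessionsClient = await SessionsClient.CreateAsync();
        SessionsClient.StreamingDetectIntentStream stream =
            sessionsClient.StreamingDetectIntent();

        // First request: session, language, and the audio *config* only.
        await stream.WriteAsync(new StreamingDetectIntentRequest
        {
            Session = SessionName.FromProjectLocationAgentSession(
                "my-project", "global", "my-agent-id", "my-session-id").ToString(),
            QueryInput = new QueryInput
            {
                Audio = new AudioInput
                {
                    Config = new InputAudioConfig
                    {
                        AudioEncoding = AudioEncoding.Linear16,
                        SampleRateHertz = 16000,
                    },
                },
                LanguageCode = "en-US",
            },
        });

        // All following requests: raw audio bytes only.
        using (FileStream audioFile = File.OpenRead("query.wav"))
        {
            var buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = await audioFile.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                await stream.WriteAsync(new StreamingDetectIntentRequest
                {
                    QueryInput = new QueryInput
                    {
                        Audio = new AudioInput
                        {
                            Audio = ByteString.CopyFrom(buffer, 0, bytesRead),
                        },
                    },
                });
            }
        }
        await stream.WriteCompleteAsync();

        // For a short clip it is enough to read responses after writing completes;
        // a production client would read them concurrently with the writes.
        await foreach (StreamingDetectIntentResponse response in stream.GetResponseStream())
        {
            if (response.DetectIntentResponse != null)
            {
                Console.WriteLine(response.DetectIntentResponse.QueryResult.Transcript);
            }
        }
    }
}
```
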
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-03-21 UTC."],[[["The `AudioInput` class represents natural speech audio intended for processing within the Google Cloud Dialogflow CX platform, with the latest version being 2.23.0."],["This class implements several interfaces including `IMessage`, `IEquatable`, `IDeepCloneable`, and `IBufferMessage`, enabling it to interact within the Google Protocol Buffers framework."],["`AudioInput` objects require `Config` and `Audio` properties, in which `Config` provides instructions for audio processing, and `Audio` holds the speech audio data itself."],["The `AudioInput` class allows for single audio requests up to 1 minute in length and transcribed text no longer than 256 bytes, for both streaming and non-streaming audio detection."],["The class contains two constructors, one being a default constructor and the other taking a parameter of another `AudioInput` object."]]],[]]