The natural language speech audio to be processed.
A single request can contain up to 2 minutes of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript]
cannot contain more than 256 bytes.
For non-streaming audio detect intent, both config and audio must be
provided.
For streaming audio detect intent, config must be provided in
the first request and audio must be provided in all following requests.
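The streaming contract above (config first, then audio-only requests) can be sketched as a request generator. This is an illustrative sketch using plain dictionaries, not the real Dialogflow CX message types; the request shape, key names, and `chunk_size` default are assumptions for demonstration only.

```python
def streaming_requests(config, audio_bytes, chunk_size=4096):
    """Yield streaming detect-intent requests in the required order.

    The first request carries only the audio config; every following
    request carries one chunk of raw audio. (Hypothetical dict shape,
    standing in for the real StreamingDetectIntentRequest message.)
    """
    # First request: config only, no audio.
    yield {"audio_input": {"config": config}}
    # Subsequent requests: audio only, split into chunks.
    for start in range(0, len(audio_bytes), chunk_size):
        yield {"audio_input": {"audio": audio_bytes[start:start + chunk_size]}}
```

A non-streaming detect intent call, by contrast, would place both `config` and `audio` in the single request.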
Last updated 2025-03-21 UTC.