The natural language speech audio to be processed.
A single request can contain up to 1 minute of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256
bytes.
For non-streaming audio detect intent, both config and audio must be
provided in the same request.
For streaming audio detect intent, config must be provided in
the first request and audio must be provided in all subsequent requests.
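The split between config and audio in the streaming case can be sketched as below. This is a minimal illustration using plain Python dicts as stand-ins for the AudioInput message (the field names mirror its config and audio fields); it is not the real client library, and the chunk size is an arbitrary choice for the example.

```python
# Sketch of the streaming detect-intent request pattern described above:
# the recognizer config is sent only in the first request, and every
# later request carries only a chunk of raw audio bytes.
# The dict shapes are hypothetical stand-ins for the AudioInput message.

CHUNK_SIZE = 4096  # bytes of audio per streamed request (illustrative)

def streaming_requests(config, audio_bytes):
    """Yield request payloads: one config-only request, then audio chunks."""
    # First request: config, no audio.
    yield {"audio_input": {"config": config}}
    # Following requests: audio chunks, no config.
    for start in range(0, len(audio_bytes), CHUNK_SIZE):
        yield {"audio_input": {"audio": audio_bytes[start:start + CHUNK_SIZE]}}

# Example: 10 000 bytes of fake audio -> 1 config request + 3 audio requests.
reqs = list(streaming_requests({"audio_encoding": "LINEAR16"}, b"\x00" * 10_000))
```

Reassembling the audio chunks on the receiving side yields the original byte stream, which is why the per-request payloads can stay small while the overall utterance may run up to the one-minute limit.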
Last updated 2025-03-21 UTC.

The AudioInput class represents natural language speech audio to be processed within Google Cloud Dialogflow CX V3. The latest documented version is 2.23.0, with earlier versions available from 1.0.0 onward. The class implements the IMessage, IEquatable, IDeepCloneable, and IBufferMessage interfaces. It exposes two properties: Audio (the speech audio data itself) and Config (instructions for the speech recognizer). It can be constructed with no parameters or from an existing AudioInput instance.