The natural language speech audio to be processed.
A single request can contain up to 1 minute of speech audio data.
The [transcribed text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain more than 256
bytes.
For non-streaming audio detect intent, both config and audio must be
provided.
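As a minimal illustration of the non-streaming case, the C# sketch below builds an `AudioInput` with both `Config` and `Audio` set and sends it in a single `DetectIntent` call. The project, agent, session, file path, and the LINEAR16/16 kHz encoding are placeholder assumptions for illustration, not values from this page:

```csharp
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;
using System;
using System.IO;

public class DetectIntentFromAudio
{
    public static void Main()
    {
        // Placeholder session name; replace with your project, location, agent, and session IDs.
        string session = "projects/my-project/locations/global/agents/my-agent/sessions/my-session";

        SessionsClient client = SessionsClient.Create();

        DetectIntentResponse response = client.DetectIntent(new DetectIntentRequest
        {
            Session = session,
            QueryInput = new QueryInput
            {
                LanguageCode = "en-us",
                Audio = new AudioInput
                {
                    // Config tells the speech recognizer how to decode the audio bytes.
                    Config = new InputAudioConfig
                    {
                        AudioEncoding = AudioEncoding.Linear16, // assumes 16-bit PCM input
                        SampleRateHertz = 16000                 // assumes a 16 kHz sample rate
                    },
                    // Non-streaming: the audio itself (at most 1 minute of speech) goes in the same request.
                    Audio = ByteString.CopyFrom(File.ReadAllBytes("speech.raw"))
                }
            }
        });

        Console.WriteLine($"Transcript: {response.QueryResult.Transcript}");
    }
}
```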
For streaming audio detect intent, config must be provided in
the first request and audio must be provided in all following requests.
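For the streaming case, a hedged sketch of the same split is shown below, using the generated bidirectional streaming surface (`StreamingDetectIntent`, `WriteAsync`, `WriteCompleteAsync`, `GetResponseStream`). The chunk size, file path, and audio encoding are assumptions for illustration; only the first request carries the config, and every following request carries audio bytes:

```csharp
using Google.Cloud.Dialogflow.Cx.V3;
using Google.Protobuf;
using System;
using System.IO;
using System.Threading.Tasks;

public class StreamingDetectIntentFromAudio
{
    public static async Task Main()
    {
        // Placeholder session name; replace with real identifiers.
        string session = "projects/my-project/locations/global/agents/my-agent/sessions/my-session";

        SessionsClient client = await SessionsClient.CreateAsync();
        SessionsClient.StreamingDetectIntentStream stream = client.StreamingDetectIntent();

        // First request: session and audio config only, no audio bytes yet.
        await stream.WriteAsync(new StreamingDetectIntentRequest
        {
            Session = session,
            QueryInput = new QueryInput
            {
                LanguageCode = "en-us",
                Audio = new AudioInput
                {
                    Config = new InputAudioConfig
                    {
                        AudioEncoding = AudioEncoding.Linear16, // assumed input format
                        SampleRateHertz = 16000
                    }
                }
            }
        });

        // Following requests: audio only, sent in small chunks.
        using (FileStream audioFile = File.OpenRead("speech.raw"))
        {
            var buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = audioFile.Read(buffer, 0, buffer.Length)) > 0)
            {
                await stream.WriteAsync(new StreamingDetectIntentRequest
                {
                    QueryInput = new QueryInput
                    {
                        Audio = new AudioInput { Audio = ByteString.CopyFrom(buffer, 0, bytesRead) }
                    }
                });
            }
        }
        await stream.WriteCompleteAsync();

        // Read interim recognition results and the final detect intent response as they arrive.
        await foreach (StreamingDetectIntentResponse response in stream.GetResponseStream())
        {
            if (response.RecognitionResult != null)
            {
                Console.WriteLine($"Interim transcript: {response.RecognitionResult.Transcript}");
            }
            if (response.DetectIntentResponse != null)
            {
                Console.WriteLine($"Final transcript: {response.DetectIntentResponse.QueryResult.Transcript}");
            }
        }
    }
}
```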
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-03-21 UTC."],[[["The latest version of the `AudioInput` class within the `Google.Cloud.Dialogflow.Cx.V3` namespace is version 2.23.0, with many previous versions also available, dating back to version 1.0.0."],["The `AudioInput` class represents natural speech audio intended for processing and it implements several interfaces, including `IMessage`, `IEquatable`, `IDeepCloneable`, and `IBufferMessage`."],["This class is designed to handle up to 1 minute of audio data in a single request and this data is represented by the `Audio` property, of type `ByteString`."],["The `Config` property of the `AudioInput` class, of type `InputAudioConfig`, is required for instructing the speech recognizer on how to process the audio."],["Instances of `AudioInput` can be created using a parameterless constructor or a constructor that takes another `AudioInput` object as an argument, and it inherits several members from the `Object` class."]]],[]]