Required. The natural language speech audio to be processed.
A single request can contain up to 2 minutes of speech audio data.
For virtual agent interactions, the transcribed text cannot exceed 256 bytes.
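As a minimal sketch, the audio described above might be supplied by constructing an `AudioInput` and setting its required fields. This assumes the Google.Cloud.Dialogflow.V2Beta1 .NET package and an `InputAudioConfig`; the file path and encoding settings are illustrative placeholders, not values from this page.

```csharp
using Google.Cloud.Dialogflow.V2Beta1;
using Google.Protobuf;

// Hypothetical path: load up to 2 minutes of raw audio from disk.
byte[] audioBytes = System.IO.File.ReadAllBytes("speech.raw");

var audioInput = new AudioInput
{
    // Required: tells the speech recognizer how to interpret the bytes.
    // Encoding, sample rate, and language here are example assumptions.
    Config = new InputAudioConfig
    {
        AudioEncoding = AudioEncoding.Linear16,
        SampleRateHertz = 16000,
        LanguageCode = "en-US",
    },
    // Required: the natural language speech audio to be processed.
    Audio = ByteString.CopyFrom(audioBytes),
};
```

The populated `AudioInput` would then be attached to the request (for example, an `AnalyzeContentRequest`) before sending it to the API.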
This page documents the `AudioInput` class in the Google Cloud Dialogflow v2beta1 API, version 1.0.0-beta20:

- `AudioInput` represents natural language speech audio to be processed, together with instructions on how to process it.
- `AudioInput` inherits from `object` and implements the `IMessage`, `IEquatable`, `IDeepCloneable`, and `IBufferMessage` interfaces.
- Two constructors are available: a default constructor and a copy constructor that takes another `AudioInput` instance.
- The class has two required properties: `Audio`, the speech audio to be processed, and `Config`, which instructs the speech recognizer on how to process the audio.

Last updated 2025-03-21 UTC.