This event indicates that the server has detected the end of the user's speech
utterance and expects no additional speech. Therefore, the server will
not process additional audio (although it may subsequently return
additional results). The client should stop sending additional audio
data, half-close the gRPC connection, and wait for any additional results
until the server closes the gRPC connection. This message is only sent if
[single_utterance][google.cloud.dialogflow.cx.v3.InputAudioConfig.single_utterance]
was set to true, and is not used otherwise.
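The client-side protocol described above — stop sending audio on `END_OF_SINGLE_UTTERANCE`, half-close the stream, but keep reading until the server closes it — can be sketched as follows. This is an illustrative Python sketch against a stubbed response stream, not the real Dialogflow CX client library; the `MessageType` enum here merely mirrors the documented values, and `stop_sending_audio` is a hypothetical callback standing in for half-closing the gRPC connection.

```python
from enum import Enum, auto


class MessageType(Enum):
    """Mirrors StreamingRecognitionResult.Types.MessageType (illustrative only)."""
    UNSPECIFIED = auto()
    TRANSCRIPT = auto()
    END_OF_SINGLE_UTTERANCE = auto()


def stream_until_server_closes(responses, stop_sending_audio):
    """Consume streaming recognition results, honoring END_OF_SINGLE_UTTERANCE.

    `responses` is any iterable of (message_type, payload) pairs, standing in
    for the server's response stream. On END_OF_SINGLE_UTTERANCE the client
    stops sending audio (the half-close step), but it keeps reading: the
    server may still return additional results before closing the stream.
    """
    transcripts = []
    for message_type, payload in responses:
        if message_type is MessageType.END_OF_SINGLE_UTTERANCE:
            stop_sending_audio()  # stop audio / half-close; do NOT stop reading
        elif message_type is MessageType.TRANSCRIPT:
            # Results can still arrive after end-of-utterance was signaled.
            transcripts.append(payload)
    return transcripts  # loop ends when the server closes the stream
```

Note that the loop deliberately continues past the end-of-utterance event: the documentation above states the server "may subsequently return additional results," so tearing down the read side early could drop the final transcript.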
Last updated 2025-03-21 UTC.