The top-level message sent by the client to the Sessions.StreamingDetectIntent method.
Multiple request messages should be sent in order:
- The first message must contain session, [query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input] plus optionally [query_params][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_params]. If the client wants to receive an audio response, it should also contain [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]. The message must not contain [input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio].
- If [query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input] was set to [query_input.audio_config][google.cloud.dialogflow.v2.InputAudioConfig], all subsequent messages must contain [input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio] to continue with Speech recognition. If you decide to rather detect an intent from text input after you already started Speech recognition, please send a message with [query_input.text][google.cloud.dialogflow.v2.QueryInput.text].
However, note that:
- Dialogflow will bill you for the audio duration so far.
- Dialogflow discards all Speech recognition results in favor of the input text.
- Dialogflow will use the language code from the first message.
After you have sent all input, you must half-close or abort the request stream.
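As an illustration of this ordering, here is a minimal sketch assuming the Python client library (google-cloud-dialogflow). The project ID, session ID, audio file, chunk size, and sample rate are placeholders, not values prescribed by this documentation.

```python
from google.cloud import dialogflow_v2 as dialogflow


def request_stream(session_path, audio_chunks, language_code="en-US"):
    """Yield StreamingDetectIntentRequest messages in the required order."""
    # 1. First message: session and query_input (audio config), no input_audio.
    query_input = dialogflow.QueryInput(
        audio_config=dialogflow.InputAudioConfig(
            audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
            sample_rate_hertz=16000,
            language_code=language_code,
        )
    )
    yield dialogflow.StreamingDetectIntentRequest(
        session=session_path, query_input=query_input
    )
    # 2. Subsequent messages: only input_audio, to continue Speech recognition.
    for chunk in audio_chunks:
        yield dialogflow.StreamingDetectIntentRequest(input_audio=chunk)
    # 3. Returning from the generator half-closes the request stream.


def read_chunks(path, chunk_size=4096):
    """Read a raw LINEAR16 audio file in small chunks (placeholder audio source)."""
    with open(path, "rb") as audio_file:
        while chunk := audio_file.read(chunk_size):
            yield chunk


client = dialogflow.SessionsClient()
session = client.session_path("my-project", "my-session-id")  # placeholder IDs
responses = client.streaming_detect_intent(
    requests=request_stream(session, read_chunks("utterance.raw"))
)
for response in responses:
    if response.query_result.intent.display_name:
        print("Matched intent:", response.query_result.intent.display_name)
```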
The parameters of this query.
Please use [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2.InputAudioConfig.single_utterance] instead. If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when `query_input` is a piece of text or an event.
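For example, assuming the Python client library, the replacement setting can be supplied on the audio config sent in the first request; the session path and audio parameters below are placeholders.

```python
from google.cloud import dialogflow_v2 as dialogflow

# single_utterance on InputAudioConfig replaces the deprecated request-level
# flag: recognition stops automatically after one spoken utterance is detected.
audio_config = dialogflow.InputAudioConfig(
    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=16000,
    language_code="en-US",
    single_utterance=True,
)
first_request = dialogflow.StreamingDetectIntentRequest(
    session="projects/my-project/agent/sessions/my-session-id",  # placeholder
    query_input=dialogflow.QueryInput(audio_config=audio_config),
)
```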
Mask for [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config] indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level. If unspecified or empty, [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config] replaces the agent-level config in its entirety.
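A sketch of how such a mask might be built with the Python client library; the mask path used here (`synthesize_speech_config.speaking_rate`) is just an illustrative OutputAudioConfig field, and the session path is a placeholder.

```python
from google.cloud import dialogflow_v2 as dialogflow
from google.protobuf import field_mask_pb2

# Request-level output audio settings.
output_audio_config = dialogflow.OutputAudioConfig(
    audio_encoding=dialogflow.OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_LINEAR_16,
    synthesize_speech_config=dialogflow.SynthesizeSpeechConfig(speaking_rate=1.2),
)

# With the mask, only the listed paths override the agent-level synthesizer
# settings; without it, output_audio_config replaces the agent-level config.
request = dialogflow.StreamingDetectIntentRequest(
    session="projects/my-project/agent/sessions/my-session-id",  # placeholder
    query_input=dialogflow.QueryInput(
        text=dialogflow.TextInput(text="hello", language_code="en-US")
    ),
    output_audio_config=output_audio_config,
    output_audio_config_mask=field_mask_pb2.FieldMask(
        paths=["synthesize_speech_config.speaking_rate"]
    ),
)
```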