Reference documentation and code samples for the Dialogflow V2 API class Google::Cloud::Dialogflow::V2::StreamingDetectIntentRequest.
The top-level message sent by the client to the Sessions.StreamingDetectIntent method.
Multiple request messages should be sent in order:
- The first message must contain session, query_input plus optionally query_params. If the client wants to receive an audio response, it should also contain output_audio_config. The message must not contain input_audio.
- If query_input was set to query_input.audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. If you decide to detect an intent from text input instead, after you have already started Speech recognition, please send a message with query_input.text.
However, note that:
* Dialogflow will bill you for the audio duration so far.
* Dialogflow discards all Speech recognition results in favor of the input text.
* Dialogflow will use the language code from the first message.
After you have sent all input, you must half-close or abort the request stream.
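As an illustration, here is a minimal sketch of that message sequence, assuming the generated Sessions client from this gem. The project ID, session ID, file name, chunk size, and audio parameters are placeholders, not values prescribed by this API:

require "google/cloud/dialogflow/v2"

session = "projects/my-project/agent/sessions/my-session-id"

requests = Enumerator.new do |y|
  # First message: session and query_input only, no input_audio.
  y << Google::Cloud::Dialogflow::V2::StreamingDetectIntentRequest.new(
    session:     session,
    query_input: Google::Cloud::Dialogflow::V2::QueryInput.new(
      audio_config: Google::Cloud::Dialogflow::V2::InputAudioConfig.new(
        audio_encoding:    :AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz: 16_000,
        language_code:     "en-US"
      )
    )
  )

  # Subsequent messages: raw audio chunks only.
  File.open "utterance.raw", "rb" do |audio|
    while (chunk = audio.read 4096)
      y << Google::Cloud::Dialogflow::V2::StreamingDetectIntentRequest.new(
        input_audio: chunk
      )
    end
  end
end
# The enumerator is exhausted here, which half-closes the request stream.

client = Google::Cloud::Dialogflow::V2::Sessions::Client.new
client.streaming_detect_intent(requests).each do |response|
  # The final response carries the detected intent in query_result.
  puts response.query_result.query_text if response.query_result
end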
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#enable_debugging_info
def enable_debugging_info() -> ::Boolean
Returns
- (::Boolean) — If true, StreamingDetectIntentResponse.debugging_info will get populated.
#enable_debugging_info=
def enable_debugging_info=(value) -> ::Boolean
Parameter
- value (::Boolean) — If true, StreamingDetectIntentResponse.debugging_info will get populated.
Returns
- (::Boolean) — If true, StreamingDetectIntentResponse.debugging_info will get populated.
#input_audio
def input_audio() -> ::String
Returns
- (::String) — The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.
#input_audio=
def input_audio=(value) -> ::String
Parameter
- value (::String) — The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.
Returns
- (::String) — The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.
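Because the 1-minute cap applies to the audio across all messages, a client may want to track how much it has streamed. A rough sketch, assuming LINEAR16 mono audio at 16 kHz (so one minute is 16,000 samples/s × 2 bytes × 60 s); audio_chunks and enqueue_request are hypothetical helpers standing in for the chunk source and the request stream:

MAX_AUDIO_BYTES = 16_000 * 2 * 60  # ~1 minute of LINEAR16 mono at 16 kHz

sent_bytes = 0
audio_chunks.each do |chunk|
  sent_bytes += chunk.bytesize
  raise "complete audio would exceed the 1-minute limit" if sent_bytes > MAX_AUDIO_BYTES
  enqueue_request Google::Cloud::Dialogflow::V2::StreamingDetectIntentRequest.new(
    input_audio: chunk
  )
end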
#output_audio_config
def output_audio_config() -> ::Google::Cloud::Dialogflow::V2::OutputAudioConfig
Returns
- (::Google::Cloud::Dialogflow::V2::OutputAudioConfig) — Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
#output_audio_config=
def output_audio_config=(value) -> ::Google::Cloud::Dialogflow::V2::OutputAudioConfig
Parameter
- value (::Google::Cloud::Dialogflow::V2::OutputAudioConfig) — Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
Returns
- (::Google::Cloud::Dialogflow::V2::OutputAudioConfig) — Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
#output_audio_config_mask
def output_audio_config_mask() -> ::Google::Protobuf::FieldMask
Returns
- (::Google::Protobuf::FieldMask) — Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level. If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
#output_audio_config_mask=
def output_audio_config_mask=(value) -> ::Google::Protobuf::FieldMask
Parameter
- value (::Google::Protobuf::FieldMask) — Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level. If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
Returns
- (::Google::Protobuf::FieldMask) — Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level. If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.
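For example, a request could override a single synthesizer setting while keeping the rest of the agent-level configuration. A sketch, reusing the session and query_input values from the streaming example above; the speaking_rate mask path is an assumption based on the OutputAudioConfig field layout:

output_audio_config = Google::Cloud::Dialogflow::V2::OutputAudioConfig.new(
  audio_encoding: :OUTPUT_AUDIO_ENCODING_LINEAR_16,
  synthesize_speech_config: Google::Cloud::Dialogflow::V2::SynthesizeSpeechConfig.new(
    speaking_rate: 1.2
  )
)

request = Google::Cloud::Dialogflow::V2::StreamingDetectIntentRequest.new(
  session:             session,
  query_input:         query_input,
  output_audio_config: output_audio_config,
  # With this mask, only speaking_rate overrides the agent-level settings;
  # with no mask, output_audio_config would replace them entirely.
  output_audio_config_mask: Google::Protobuf::FieldMask.new(
    paths: ["synthesize_speech_config.speaking_rate"]
  )
)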
#query_input
def query_input() -> ::Google::Cloud::Dialogflow::V2::QueryInput
Returns
- (::Google::Cloud::Dialogflow::V2::QueryInput) — Required. The input specification. It can be set to:
  1. an audio config which instructs the speech recognizer how to process the speech audio,
  2. a conversational query in the form of text, or
  3. an event that specifies which intent to trigger.
#query_input=
def query_input=(value) -> ::Google::Cloud::Dialogflow::V2::QueryInput
Parameter
- value (::Google::Cloud::Dialogflow::V2::QueryInput) — Required. The input specification. It can be set to:
  1. an audio config which instructs the speech recognizer how to process the speech audio,
  2. a conversational query in the form of text, or
  3. an event that specifies which intent to trigger.
Returns
- (::Google::Cloud::Dialogflow::V2::QueryInput) — Required. The input specification. It can be set to:
  1. an audio config which instructs the speech recognizer how to process the speech audio,
  2. a conversational query in the form of text, or
  3. an event that specifies which intent to trigger.
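A sketch of the three forms; the text and event values are illustrative, and exactly one of the oneof fields is set per message:

audio = Google::Cloud::Dialogflow::V2::QueryInput.new(
  audio_config: Google::Cloud::Dialogflow::V2::InputAudioConfig.new(
    audio_encoding:    :AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz: 16_000,
    language_code:     "en-US"
  )
)

text = Google::Cloud::Dialogflow::V2::QueryInput.new(
  text: Google::Cloud::Dialogflow::V2::TextInput.new(
    text:          "I want a pizza",
    language_code: "en-US"
  )
)

event = Google::Cloud::Dialogflow::V2::QueryInput.new(
  event: Google::Cloud::Dialogflow::V2::EventInput.new(
    name:          "WELCOME",
    language_code: "en-US"
  )
)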
#query_params
def query_params() -> ::Google::Cloud::Dialogflow::V2::QueryParameters
Returns
- (::Google::Cloud::Dialogflow::V2::QueryParameters) — The parameters of this query.
#query_params=
def query_params=(value) -> ::Google::Cloud::Dialogflow::V2::QueryParameters
Parameter
- value (::Google::Cloud::Dialogflow::V2::QueryParameters) — The parameters of this query.
Returns
- (::Google::Cloud::Dialogflow::V2::QueryParameters) — The parameters of this query.
#session
def session() -> ::String
Returns
- (::String) — Required. The name of the session the query is sent to. Format of the session name: projects/<Project ID>/agent/sessions/<Session ID>, or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume the default 'draft' environment. If User ID is not specified, we are using "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide. Note: Always use agent versions for production traffic. See Versions and environments.
#session=
def session=(value) -> ::String
Parameter
- value (::String) — Required. The name of the session the query is sent to. Format of the session name: projects/<Project ID>/agent/sessions/<Session ID>, or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume the default 'draft' environment. If User ID is not specified, we are using "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide. Note: Always use agent versions for production traffic. See Versions and environments.
Returns
- (::String) — Required. The name of the session the query is sent to. Format of the session name: projects/<Project ID>/agent/sessions/<Session ID>, or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume the default 'draft' environment. If User ID is not specified, we are using "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide. Note: Always use agent versions for production traffic. See Versions and environments.
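A sketch of building both name formats; the project and environment IDs are placeholders, and SecureRandom.uuid yields a 36-character ID, the maximum length allowed:

require "securerandom"

project_id     = "my-project"
environment_id = "production"
user_id        = "-"                 # "-" means an unspecified user
session_id     = SecureRandom.uuid   # 36 characters, the maximum length

draft_session = "projects/#{project_id}/agent/sessions/#{session_id}"
env_session   = "projects/#{project_id}/agent/environments/#{environment_id}" \
                "/users/#{user_id}/sessions/#{session_id}"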
#single_utterance
def single_utterance() -> ::Boolean
Returns
- (::Boolean) — Please use InputAudioConfig.single_utterance instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.
#single_utterance=
def single_utterance=(value) -> ::Boolean
Parameter
- value (::Boolean) — Please use InputAudioConfig.single_utterance instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.
Returns
- (::Boolean) — Please use InputAudioConfig.single_utterance instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.
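Since this field is superseded, a sketch of the preferred place to set the flag, on the InputAudioConfig inside query_input (the audio parameters are placeholders):

query_input = Google::Cloud::Dialogflow::V2::QueryInput.new(
  audio_config: Google::Cloud::Dialogflow::V2::InputAudioConfig.new(
    audio_encoding:    :AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz: 16_000,
    language_code:     "en-US",
    single_utterance:  true  # preferred over the request-level flag
  )
)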