Class StreamingAnalyzeContentRequest (2.28.3)

StreamingAnalyzeContentRequest(
    mapping=None, *, ignore_unknown_fields=False, **kwargs
)

The top-level message sent by the client to the Participants.StreamingAnalyzeContent method.

Multiple request messages should be sent in order:

  1. The first message must contain participant, config and optionally query_params. If you want to receive an audio response, it should also contain reply_audio_config. The message must not contain input.

  2. If config in the first message was set to audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. If you decide to switch to analyzing text input after you have already started Speech recognition, send a message with StreamingAnalyzeContentRequest.input_text.

    However, note that:

    • Dialogflow will bill you for the audio so far.
    • Dialogflow discards all Speech recognition results in favor of the text input.
  3. If StreamingAnalyzeContentRequest.config in the first message was set to StreamingAnalyzeContentRequest.text_config, then the second message must contain only input_text. Moreover, you must not send more than two messages.

After you have sent all input, you must half-close or abort the request stream, as in the sketch below.
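
For example, here is a minimal sketch of the audio flow described above, built as a Python generator and passed to ParticipantsClient.streaming_analyze_content. The participant resource name, the audio encoding, the sample rate, and the audio_chunks iterable are placeholder assumptions; adapt them to your conversation and audio source.

    from google.cloud import dialogflow_v2beta1 as dialogflow

    def request_generator(participant, audio_chunks):
        # 1. First message: participant, config (here audio_config) and, because we
        #    want an audio reply, reply_audio_config. It must not contain input.
        yield dialogflow.StreamingAnalyzeContentRequest(
            participant=participant,
            audio_config=dialogflow.InputAudioConfig(
                audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
                sample_rate_hertz=16000,  # assumed sample rate
                language_code="en-US",
            ),
            reply_audio_config=dialogflow.OutputAudioConfig(
                audio_encoding=dialogflow.OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_LINEAR_16,
            ),
        )
        # 2. Subsequent messages: input_audio only, since config was audio_config.
        for chunk in audio_chunks:
            yield dialogflow.StreamingAnalyzeContentRequest(input_audio=chunk)
        # The generator ending half-closes the request stream.

    client = dialogflow.ParticipantsClient()
    # Placeholders: a full participant resource name and an iterable of
    # LINEAR16 audio byte chunks (at most 1 minute of audio in total).
    participant = "projects/<PROJECT>/conversations/<CONVERSATION>/participants/<PARTICIPANT>"
    audio_chunks = []  # e.g. chunks read from a microphone or an audio file

    responses = client.streaming_analyze_content(
        requests=request_generator(participant, audio_chunks)
    )
    for response in responses:
        if response.recognition_result.transcript:
            print("Transcript:", response.recognition_result.transcript)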

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time; setting any member of the oneof automatically clears all other members. See https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields for details.
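
A minimal sketch of this behaviour (assuming only the package import shown):

    from google.cloud import dialogflow_v2beta1 as dialogflow

    request = dialogflow.StreamingAnalyzeContentRequest(input_audio=b"\x00\x01")
    # input_audio and input_text are both members of the input oneof,
    # so assigning one automatically clears the other.
    request.input_text = "hello"
    print(request.input_audio)  # b'' -- cleared when input_text was set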

Attributes

participant (str)
    Required. The name of the participant this text comes from. Format: projects/.

audio_config (google.cloud.dialogflow_v2beta1.types.InputAudioConfig)
    Instructs the speech recognizer how to process the speech audio. This field is a member of the oneof config.

text_config (google.cloud.dialogflow_v2beta1.types.InputTextConfig)
    The natural language text to be processed. This field is a member of the oneof config.

reply_audio_config (google.cloud.dialogflow_v2beta1.types.OutputAudioConfig)
    Speech synthesis configuration. The speech synthesis settings for a virtual agent that may be configured for the associated conversation profile are not used when calling StreamingAnalyzeContent. If this configuration is not supplied, speech synthesis is disabled.

input_audio (bytes)
    The input audio content to be recognized. Must be sent if audio_config is set in the first message. The complete audio over all streaming messages must not exceed 1 minute. This field is a member of the oneof input.

input_text (str)
    The UTF-8 encoded natural language text to be processed. Must be sent if text_config is set in the first message. Text length must not exceed 256 bytes for virtual agent interactions. The input_text field can only be sent once, and it cancels any ongoing speech recognition. This field is a member of the oneof input.

input_dtmf (google.cloud.dialogflow_v2beta1.types.TelephonyDtmfEvents)
    The DTMF digits used to invoke an intent and fill in parameter values. This input is ignored if the previous response indicated that DTMF input is not accepted. This field is a member of the oneof input.

query_params (google.cloud.dialogflow_v2beta1.types.QueryParameters)
    Parameters for a Dialogflow virtual-agent query.

assist_query_params (google.cloud.dialogflow_v2beta1.types.AssistQueryParameters)
    Parameters for a human assist query.

cx_parameters (google.protobuf.struct_pb2.Struct)
    Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null. Note: this field should only be used if you are connecting to a Dialogflow CX agent.

cx_current_page (str)
    The unique identifier of the CX page to override the current_page in the session. Format: projects/. If cx_current_page is specified, the previous state of the session will be ignored by Dialogflow CX, including the previous page (QueryResult.current_page) and the previous session parameters (QueryResult.parameters). In most cases, cx_current_page and cx_parameters should be configured together to direct a session to a specific state. Note: this field should only be used if you are connecting to a Dialogflow CX agent. See the sketch after this table.

enable_extended_streaming (bool)
    Optional. Enable full bidirectional streaming. You can keep streaming audio until the timeout, and there is no need to half-close the stream to get a response. Restrictions:

    • Timeout: 3 minutes.
    • Audio encoding: only AudioEncoding.AUDIO_ENCODING_LINEAR_16 and AudioEncoding.AUDIO_ENCODING_MULAW are supported.
    • Lifecycle: the conversation should be in the Assist Stage; see Conversation.CreateConversation for more information.

    An InvalidArgument error is returned if any of these restriction checks fails. You can find more details at https://cloud.google.com/agent-assist/docs/extended-streaming

enable_partial_automated_agent_reply (bool)
    Enable partial virtual agent responses. If this flag is not enabled, the response stream still contains only one final response, even if some Fulfillments in the Dialogflow virtual agent have been configured to return partial responses.

enable_debugging_info (bool)
    If true, StreamingAnalyzeContentResponse.debugging_info will be populated.
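
As referenced in the cx_current_page entry above, here is a minimal sketch of populating cx_parameters (and, optionally, cx_current_page). The participant name, parameter names, and page name are hypothetical placeholders, and these fields only apply when the conversation profile is connected to a Dialogflow CX agent.

    from google.protobuf import struct_pb2
    from google.cloud import dialogflow_v2beta1 as dialogflow

    cx_parameters = struct_pb2.Struct()
    cx_parameters.update({
        "customer-tier": "gold",  # hypothetical parameter to set in the CX session
        "obsolete-param": None,   # an explicit null removes the parameter from the session
    })

    request = dialogflow.StreamingAnalyzeContentRequest(
        participant="projects/<PROJECT>/conversations/<CONVERSATION>/participants/<PARTICIPANT>",
        text_config=dialogflow.InputTextConfig(language_code="en-US"),
        cx_parameters=cx_parameters,
        # cx_current_page="<full CX page resource name>",  # optional page override
    )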