The top-level message returned from the StreamingAnalyzeContent
method.

Multiple response messages can be returned in order:

1. If the input was set to streaming audio, the first one or more
   messages contain recognition_result. Each recognition_result
   represents a more complete transcript of what the user said. The
   last recognition_result has is_final set to true.

2. In the virtual agent stage: if enable_partial_automated_agent_reply
   is true, the following N (currently 1 <= N <= 4) messages contain
   automated_agent_reply and optionally reply_audio returned by the
   virtual agent. The first (N-1) automated_agent_reply messages have
   automated_agent_reply_type set to PARTIAL; the last one has it set
   to FINAL. If enable_partial_automated_agent_reply is not enabled,
   the response stream only contains the final reply.

   In the human assist stage: the following N (N >= 1) messages contain
   human_agent_suggestion_results, end_user_suggestion_results, or
   message.

A minimal sketch of consuming this stream is shown below.
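The following is an illustrative sketch, not part of the generated
reference. It assumes a ParticipantsClient and an iterator of
StreamingAnalyzeContentRequest messages (named requests here); the
helper name consume_responses is hypothetical.

```python
from google.cloud import dialogflow_v2beta1 as dialogflow


def consume_responses(client: dialogflow.ParticipantsClient, requests):
    """Handle StreamingAnalyzeContentResponse messages in arrival order."""
    for response in client.streaming_analyze_content(requests=requests):
        if "recognition_result" in response:
            # Streaming audio input: interim transcripts arrive first;
            # the last one has is_final set to true.
            result = response.recognition_result
            if result.is_final:
                print("Final transcript:", result.transcript)
        elif "automated_agent_reply" in response:
            # Virtual agent stage: PARTIAL replies may precede the FINAL one.
            reply = response.automated_agent_reply
            final = dialogflow.AutomatedAgentReply.AutomatedAgentReplyType.FINAL
            if reply.automated_agent_reply_type == final:
                print("Virtual agent reply:", response.reply_text)
        else:
            # Human assist stage: suggestion results for the human agent.
            for suggestion in response.human_agent_suggestion_results:
                print("Suggestion result:", suggestion)
```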
reply_text
str
Optional. The output text content.
This field is set if an automated agent
responded with text for the user.
reply_audio
google.cloud.dialogflow_v2beta1.types.OutputAudio
Optional. The audio data bytes encoded as specified in the
request. This field is set if:
- The reply_audio_config field is specified in the
request.
- The automated agent that this output comes from
  responded with audio. In this case, the
  reply_audio.config field contains the settings used to
  synthesize the speech.
In some scenarios, multiple output audio fields may be
present in the response structure. In these cases, only the
top-most-level audio output has content.
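For illustration only (not part of the reference): a sketch of saving
the synthesized audio once a response has been received from the
stream. The file name and extension are placeholders.

```python
# Assumes `response` is a StreamingAnalyzeContentResponse and the request
# specified reply_audio_config, so audio may have been synthesized.
if "reply_audio" in response and response.reply_audio.audio:
    # The extension should match the encoding requested in
    # reply_audio_config; "reply_audio.raw" is a neutral placeholder.
    with open("reply_audio.raw", "wb") as out:
        out.write(response.reply_audio.audio)
```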
automated_agent_reply
google.cloud.dialogflow_v2beta1.types.AutomatedAgentReply
Optional. Only set if a Dialogflow automated agent has
responded. Note that
AutomatedAgentReply.detect_intent_response.output_audio
and
AutomatedAgentReply.detect_intent_response.output_audio_config
are always empty; use reply_audio instead.
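As a sketch (an assumption, not taken from the reference), reading the
virtual agent's reply from a received response; note the audio comes
from reply_audio, not from the nested detect_intent_response:

```python
# Assumes `response` is a StreamingAnalyzeContentResponse from the stream.
if "automated_agent_reply" in response:
    query_result = response.automated_agent_reply.detect_intent_response.query_result
    print("Matched intent:", query_result.intent.display_name)
    print("Fulfillment text:", query_result.fulfillment_text)
    # detect_intent_response.output_audio is always empty here;
    # take synthesized speech from response.reply_audio instead.
    audio_bytes = response.reply_audio.audio
```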
human_agent_suggestion_results
MutableSequence[google.cloud.dialogflow_v2beta1.types.SuggestionResult]
The suggestions for the most recent human agent. The order is
the same as
HumanAgentAssistantConfig.SuggestionConfig.feature_configs
of
HumanAgentAssistantConfig.human_agent_suggestion_config.
end_user_suggestion_results
MutableSequence[google.cloud.dialogflow_v2beta1.types.SuggestionResult]
The suggestions for the end user. The order is the same as
HumanAgentAssistantConfig.SuggestionConfig.feature_configs
of
HumanAgentAssistantConfig.end_user_suggestion_config.
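A sketch of reading suggestion results, assuming Article Suggestion is
one of the configured features (other feature types would be handled
analogously):

```python
# Results arrive in the same order as feature_configs in the
# conversation profile, so index i corresponds to feature i.
for result in response.human_agent_suggestion_results:
    if "error" in result:
        print("Suggestion feature failed:", result.error.message)
    elif "suggest_articles_response" in result:
        for answer in result.suggest_articles_response.article_answers:
            print("Suggested article:", answer.title, answer.uri)
```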
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Class StreamingAnalyzeContentResponse (2.41.2)\n\nVersion latestkeyboard_arrow_down\n\n- [2.41.2 (latest)](/python/docs/reference/dialogflow/latest/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.41.1](/python/docs/reference/dialogflow/2.41.1/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.40.0](/python/docs/reference/dialogflow/2.40.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.39.1](/python/docs/reference/dialogflow/2.39.1/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.38.0](/python/docs/reference/dialogflow/2.38.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.37.0](/python/docs/reference/dialogflow/2.37.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.36.0](/python/docs/reference/dialogflow/2.36.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.35.0](/python/docs/reference/dialogflow/2.35.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.34.0](/python/docs/reference/dialogflow/2.34.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.33.0](/python/docs/reference/dialogflow/2.33.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.32.0](/python/docs/reference/dialogflow/2.32.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.30.2](/python/docs/reference/dialogflow/2.30.2/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.29.0](/python/docs/reference/dialogflow/2.29.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.28.3](/python/docs/reference/dialogflow/2.28.3/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.27.0](/python/docs/reference/dialogflow/2.27.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.26.0](/python/docs/reference/dialogflow/2.26.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.25.0](/python/docs/reference/dialogflow/2.25.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.24.1](/python/docs/reference/dialogflow/2.24.1/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.23.3](/python/docs/reference/dialogflow/2.23.3/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.22.0](/python/docs/reference/dialogflow/2.22.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.21.0](/python/docs/reference/dialogflow/2.21.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.20.0](/python/docs/reference/dialogflow/2.20.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.19.1](/python/docs/reference/dialogflow/2.19.1/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.18.0](/python/docs/reference/dialogflow/2.18.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- 
[2.17.0](/python/docs/reference/dialogflow/2.17.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.16.1](/python/docs/reference/dialogflow/2.16.1/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.15.2](/python/docs/reference/dialogflow/2.15.2/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.14.1](/python/docs/reference/dialogflow/2.14.1/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.13.0](/python/docs/reference/dialogflow/2.13.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.12.0](/python/docs/reference/dialogflow/2.12.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.11.0](/python/docs/reference/dialogflow/2.11.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.10.0](/python/docs/reference/dialogflow/2.10.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.9.1](/python/docs/reference/dialogflow/2.9.1/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.8.1](/python/docs/reference/dialogflow/2.8.1/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.7.1](/python/docs/reference/dialogflow/2.7.1/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.6.0](/python/docs/reference/dialogflow/2.6.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.5.0](/python/docs/reference/dialogflow/2.5.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.4.0](/python/docs/reference/dialogflow/2.4.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.3.0](/python/docs/reference/dialogflow/2.3.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.2.0](/python/docs/reference/dialogflow/2.2.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.1.2](/python/docs/reference/dialogflow/2.1.2/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [2.0.0](/python/docs/reference/dialogflow/2.0.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [1.1.3](/python/docs/reference/dialogflow/1.1.3/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [1.0.0](/python/docs/reference/dialogflow/1.0.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [0.8.0](/python/docs/reference/dialogflow/0.8.0/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse)\n- [0.7.2](/python/docs/reference/dialogflow/0.7.2/google.cloud.dialogflow_v2beta1.types.StreamingAnalyzeContentResponse) \n\n StreamingAnalyzeContentResponse(\n mapping=None, *, ignore_unknown_fields=False, **kwargs\n )\n\nThe top-level message returned from the `StreamingAnalyzeContent`\nmethod.\n\nMultiple response messages can be returned in order:\n\n1. If the input was set to streaming audio, the first one or more\n messages contain `recognition_result`. Each\n `recognition_result` represents a more complete transcript of\n what the user said. The last `recognition_result` has\n `is_final` set to `true`.\n\n2. In virtual agent stage: if\n `enable_partial_automated_agent_reply` is true, the following N\n (currently 1 \\\u003c= N \\\u003c= 4) messages contain\n `automated_agent_reply` and optionally `reply_audio` returned\n by the virtual agent. The first (N-1)\n `automated_agent_reply`\\\\ s will have\n `automated_agent_reply_type` set to `PARTIAL`. 
The last\n `automated_agent_reply` has `automated_agent_reply_type` set\n to `FINAL`. If `enable_partial_automated_agent_reply` is not\n enabled, response stream only contains the final reply.\n\n In human assist stage: the following N (N \\\u003e= 1) messages contain\n `human_agent_suggestion_results`,\n `end_user_suggestion_results` or `message`."]]