Processes a natural language query and returns structured, actionable data as a result. This method is not idempotent, because it may cause contexts and session entity types to be updated, which in turn might affect results of future queries.
If you might use Agent Assist or other CCAI products now or in the future, consider using AnalyzeContent instead of sessions.detectIntent. AnalyzeContent has additional functionality for Agent Assist and other CCAI products.
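For concreteness, here is a minimal sketch of calling detectIntent with a text query through the google-cloud-dialogflow Python client; the project ID, session ID, and query text are placeholder values, not values taken from this reference.

```python
from google.cloud import dialogflow


def detect_intent_text(project_id: str, session_id: str, text: str,
                       language_code: str = "en-US"):
    """Send a single text query to detectIntent and return the query result."""
    session_client = dialogflow.SessionsClient()
    # Builds "projects/<Project ID>/agent/sessions/<Session ID>".
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result


result = detect_intent_text("my-project", "my-session-1234", "book a flight to Paris")
print(result.intent.display_name, "->", result.fulfillment_text)
```

Because the method is not idempotent, repeating the same call can yield different results as contexts accumulate across the session.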
Required. The name of the session this query is sent to. Supported formats:
- `projects/<Project ID>/agent/sessions/<Session ID>`
- `projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>`
- `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`
- `projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`
If Location ID is not specified, we assume the default 'us' location. If Environment ID is not specified, we assume the default 'draft' environment (Environment ID may be referred to as the environment name in some places). If User ID is not specified, we use "-". It is up to the API caller to choose an appropriate Session ID and User ID; they can be random numbers or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide.
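If your client library does not expose a path helper for the environment- and user-scoped formats, the resource name can be assembled by hand. The sketch below is a hypothetical helper (not part of the client library) that applies the defaulting rules described above; all IDs shown are placeholders.

```python
def build_session_name(
    project_id: str,
    session_id: str,
    location_id: str | None = None,
    environment_id: str | None = None,
    user_id: str = "-",
) -> str:
    """Assemble a session resource name in any of the supported formats.

    An omitted Location ID yields the global form, an omitted Environment ID
    yields the 'draft'-environment form, and User ID defaults to "-",
    mirroring the defaults described above.
    """
    parts = [f"projects/{project_id}"]
    if location_id:
        parts.append(f"locations/{location_id}")
    parts.append("agent")
    if environment_id:
        parts.append(f"environments/{environment_id}")
        parts.append(f"users/{user_id}")
    parts.append(f"sessions/{session_id}")
    return "/".join(parts)


# e.g. "projects/my-project/agent/environments/prod/users/-/sessions/abc123"
print(build_session_name("my-project", "abc123", environment_id="prod"))
```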
Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
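For example, to receive synthesized speech alongside the structured result, a request-level OutputAudioConfig can be attached to the call. This is a sketch only; the LINEAR16 encoding, 16 kHz sample rate, and file name are assumptions, not requirements.

```python
from google.cloud import dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("my-project", "my-session-1234")
query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="read me the weather", language_code="en-US")
)

# Request-level speech synthesizer settings.
output_audio_config = dialogflow.OutputAudioConfig(
    audio_encoding=dialogflow.OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=16000,
)

response = session_client.detect_intent(
    request={
        "session": session,
        "query_input": query_input,
        "output_audio_config": output_audio_config,
    }
)

# The synthesized reply arrives in response.output_audio.
with open("reply.wav", "wb") as out:
    out.write(response.output_audio)
```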
The natural language speech audio to be processed. This field should be populated if and only if queryInput is set to an input audio config. A single request can contain up to 1 minute of speech audio data.
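Putting these pieces together, here is a sketch of an audio request; the file name and LINEAR16/16 kHz encoding are placeholders. Note that queryInput carries only the audio configuration while the raw bytes travel in inputAudio: the gRPC-based client accepts raw bytes, whereas the REST/JSON surface expects the inputAudio field base64-encoded.

```python
from google.cloud import dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("my-project", "my-session-1234")

# queryInput carries only the audio *configuration*...
audio_config = dialogflow.InputAudioConfig(
    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    language_code="en-US",
    sample_rate_hertz=16000,
)
query_input = dialogflow.QueryInput(audio_config=audio_config)

# ...while the raw bytes (at most 1 minute of speech) go in inputAudio.
with open("query.wav", "rb") as audio_file:
    input_audio = audio_file.read()

response = session_client.detect_intent(
    request={
        "session": session,
        "query_input": query_input,
        "input_audio": input_audio,
    }
)
print("Transcript:", response.query_result.query_text)
print("Matched intent:", response.query_result.intent.display_name)
```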
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-06-27 UTC."],[[["\u003cp\u003eThis document describes how to use the \u003ccode\u003edetectIntent\u003c/code\u003e method to process natural language queries and receive structured data, noting that it's non-idempotent due to potential updates to contexts and session entity types.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003edetectIntent\u003c/code\u003e method uses a \u003ccode\u003ePOST\u003c/code\u003e request to a specified endpoint, with a required session parameter that includes project, location, and session details, as well as authorization that requires the \u003ccode\u003edialogflow.sessions.detectIntent\u003c/code\u003e IAM permission on the session resource.\u003c/p\u003e\n"],["\u003cp\u003eThe request body for \u003ccode\u003edetectIntent\u003c/code\u003e requires structured data, including query parameters, input specifications, audio configuration, and input audio, with specific formatting guidelines for each field, such as a base64-encoded string for \u003ccode\u003einputAudio\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eThe method offers optional request-level overrides to the agent's speech synthesizer settings via the \u003ccode\u003eoutputAudioConfigMask\u003c/code\u003e field, which is a comma-separated list of field names to be overridden.\u003c/p\u003e\n"],["\u003cp\u003eSuccessful calls to the \u003ccode\u003edetectIntent\u003c/code\u003e method return a \u003ccode\u003eDetectIntentResponse\u003c/code\u003e and authorization requires either the \u003ccode\u003ehttps://www.googleapis.com/auth/cloud-platform\u003c/code\u003e or the \u003ccode\u003ehttps://www.googleapis.com/auth/dialogflow\u003c/code\u003e OAuth scope.\u003c/p\u003e\n"]]],[],null,[]]