# Data store tool settings

Last updated (UTC): 2025-08-18.

The following
[data store tool](/dialogflow/cx/docs/concept/data-store/handler)
configuration settings are available.

Tool settings (Conversational Agents console only)
--------------------------------------------------

Selecting an option from the **Tool settings** drop-down menu automatically
affects which other tool configurations are available.

The `Default` and `Optimized for voice` options are default settings that
automatically configure all other parameters in the tool configuration menu.

The following options are available:

Grounding
---------

Conversational Agents (Dialogflow CX) calculates a confidence level for each
response generated from the content of your connected data stores. This level
gauges the confidence that all information in the response is supported by
information in the data stores. You can select the lowest confidence level you
are comfortable with, and the agent won't return responses below that level.

There are five confidence levels to choose from: `VERY_LOW`, `LOW`, `MEDIUM`,
`HIGH`, and `VERY_HIGH`.

You can also apply a grounding heuristics filter. If enabled, responses
containing content that is likely inaccurate based on common hallucinations are
suppressed.

Select summarization model
--------------------------

You can select the generative model used by a data store agent for the
summarization generative request. The available options are
`gemini-1.0-pro-001`, `gemini-1.5-flash-001`, `gemini-1.5-flash-002`,
`gemini-2.0-flash-001 (preview)`, and `gemini-2.0-flash-lite-001 (preview)`,
plus a default option that is subject to change.

All listed models are available in all
[supported languages](/dialogflow/cx/docs/concept/data-store#languages) and
[supported regions](/dialogflow/cx/docs/concept/data-store#regions).

### Summarization custom prompt

| **Note:** Providing a custom prompt might influence the quality of answers either positively or negatively.
You are responsible for the quality of the answers.

You can either use a default summarization prompt with your selected
summarization model or provide your own. The prompt is a text template that may
contain predefined placeholders. The placeholders will be replaced with the
appropriate values at runtime and the final text will be sent to the LLM.

The placeholders are as follows:

- `$original-query`: The user's query text.
- `$rewritten-query`: Dialogflow uses a rewriter module to rewrite the original
  user query into a more accurate format.
- `$sources`: Dialogflow uses Enterprise Search to search for sources based on
  the user's query. The found sources are rendered in a specific format:

      [1] title of first source
      content of first source
      [2] title of second source
      content of second source

- `$end-user-metadata`: Information about the user sending the query is
  rendered in the following format:

      The following additional information is available about the human: {
        "key1": "value1",
        "key2": "value2",
        ...
      }

- `$conversation`: The conversation history is rendered in the following
  format:

      Human: user's first query
      AGENT: answer to user's first query
      Human: user's second query
      AGENT: answer to user's second query

- `${conversation USER:"<user prefix>" AGENT:"<agent prefix>" TURNS:<turn count>}`:
  A parameterized version of the `$conversation` placeholder. You can customize
  the end-user prefix (`USER`), the agent prefix (`AGENT`), and the number of
  previous turns to include (`TURNS`). All placeholder parameter values must be
  specified.

  For example, `${conversation USER:"Human says:" AGENT:"Agent says:" TURNS:1}`.
  The conversation history is rendered as:

      Human says: user's first query
      Agent says: answer to user's first query

| **Important:** The placeholders `$original-query` (or `$rewritten-query`) and `$sources` are required in custom prompts.
You will be prevented from saving prompts that are missing these placeholders,
because the agent won't function correctly without them. The `$conversation`
placeholder is **recommended**: without it, the agent can't use contextual
information from the conversation to answer questions.

A custom prompt should instruct the LLM to return "NOT_ENOUGH_INFORMATION" when
it cannot provide an answer. In this case, the agent invokes a [no-match
event](/dialogflow/cx/docs/concept/handler#event-built-in).

For example:

    Given the conversation between a Human and an AI assistant and a list of sources,
    write a final answer for the AI assistant.
    Follow these guidelines:
    + Answer the Human's query and make sure you mention all relevant details from
      the sources, using exactly the same words as the sources if possible.
    + The answer must be based only on the sources and not introduce any additional
      information.
    + All numbers, like price, date, time or phone numbers, must appear exactly as
      they are in the sources.
    + Give as comprehensive an answer as possible given the sources. Include all
      important details, and any caveats and conditions that apply.
    + The answer MUST be in English.
    + Don't try to make up an answer: if the answer cannot be found in the sources,
      admit that you don't know and answer NOT_ENOUGH_INFORMATION.
    You will be given a few examples before you begin.

    Example 1:
    Sources:
    [1] <product or service> Info Page
    Yes, <company> offers <product or service> in various options or variations.

    Human: Do you sell <product or service>?
    AI: Yes, <company> sells <product or service>. Is there anything else I can
    help you with?

    Example 2:
    Sources:
    [1] Andrea - Wikipedia
    Andrea is a given name which is common worldwide for both males and females.

    Human: How is the weather?
    AI: NOT_ENOUGH_INFORMATION

    Begin! Let's work this out step by step to be sure we have the right answer.

    Sources:
    $sources

    $end-user-metadata
    $conversation
    Human: $original-query
    AI:
Select rewriter model
---------------------

| **Note:** Providing a custom prompt might influence the quality of answers either positively or negatively. You are responsible for the quality of the answers.

When a user query is processed, the agent sends the user query and a prompt to
the LLM so that it can refactor the user query. This step is known as the
**rewriter**.

You can select the generative model used by a data store agent for the rewriter
generative request. All available models can be used in all
[supported languages](/dialogflow/cx/docs/concept/data-store#languages) and
[supported regions](/dialogflow/cx/docs/concept/data-store#regions).

### Rewriter custom prompt

You can use a default prompt or optionally provide your own. The prompt is a
text template that may contain predefined placeholders. The placeholders will
be replaced with the appropriate values at runtime and the final text will be
sent to the LLM.

The placeholders and required text are as follows:

- `$original-query`: The user's query text.
- `$conversation`: The conversation history is rendered in the following
  format:

      Human: user's first query
      AGENT: answer to user's first query
      Human: user's second query
      AGENT: answer to user's second query

- `${conversation USER:"<user prefix>" AGENT:"<agent prefix>" TURNS:<turn count>}`:
  A parameterized version of the `$conversation` placeholder. You can customize
  the end-user prefix (`USER`), the agent prefix (`AGENT`), and the number of
  previous turns to include (`TURNS`).
  All placeholder parameter values must be specified.

  For example, `${conversation USER:"Human says:" AGENT:"Agent says:" TURNS:1}`.
  The conversation history is rendered as:

      Human says: user's first query
      Agent says: answer to user's first query

- `$end-user-metadata`: Information about the user sending the query is
  rendered in the following format:

      The following additional information is available about the human: {
        "key1": "value1",
        "key2": "value2",
        ...
      }

For example:

    Your goal is to perform a search query to help the AI assistant respond to the human's last statement.
    * Always output the best search query you can, even if you suspect it's not needed.
    * Never generate a query that is the same as the user's last statement.
    * Include as much context as necessary from the conversation history.
    * Output a concise search query, and nothing else.
    * Don't use quotes or search operators.
    * The query must be in ${language!}.

    Conversation History: $conversation
    Human: $original-query
    Search Query:

Payload settings
----------------

Payload settings provide a way to add the data store snippets as rich content in
the response payload, which is rendered in the
[messenger](/dialogflow/cx/docs/concept/integration/dialogflow-messenger).
You can turn this feature on or off.
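When this setting is on, the snippets travel to the messenger as a rich-content custom payload. The sketch below shows roughly what such a payload could look like, using the Dialogflow Messenger `richContent` format with `info` cards; the `snippet_payload` helper and all snippet values are hypothetical, and the exact payload the agent produces may differ.

```python
import json


def snippet_payload(snippets: list[dict]) -> dict:
    """Build a Dialogflow Messenger rich-content payload from data store snippets.

    Each snippet is rendered as an "info" card carrying the snippet's
    title, excerpt, and source link.
    """
    return {
        "richContent": [[
            {
                "type": "info",
                "title": snippet["title"],
                "subtitle": snippet["snippet"],
                "actionLink": snippet["uri"],
            }
            for snippet in snippets
        ]]
    }


payload = snippet_payload([{
    "title": "Data store tool settings",
    "snippet": "The following data store tool configuration settings are available.",
    "uri": "https://example.com/docs/data-store-settings",
}])
print(json.dumps(payload, indent=2))
```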