A conversation dataset contains conversation transcript data, and is used to
train either a Smart Reply or Summarization custom model.
Smart Reply uses the conversation transcripts
to recommend text responses to human agents conversing with an end-user.
Summarization custom models
are trained on conversation datasets that contain both transcripts and
annotation data. They use the annotations to generate conversation
summaries for human agents after a conversation has completed.
There are two ways to create a dataset: use the Console tutorial workflows,
or create a dataset manually in the Console from the Data > Datasets tab.
We recommend the Console tutorials as a first
option. To use them, navigate to the
Agent Assist Console
and click the Get started button under the feature you'd like to test.
This page demonstrates how to create a dataset manually.
Before you begin
Follow the Dialogflow setup
instructions to enable Dialogflow on a Google Cloud Platform project.
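If you prefer the command line, the same setup step can be done with the gcloud CLI. This is a sketch, not part of the official instructions above: it assumes the gcloud CLI is installed and authenticated, and `my-project` is a placeholder for your own project ID.

```shell
# Enable the Dialogflow API on your Google Cloud Platform project.
# Assumes gcloud is installed and authenticated; replace my-project
# with your actual project ID.
gcloud services enable dialogflow.googleapis.com --project=my-project
```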
We recommend that you read the Agent Assist
basics page before starting this tutorial.
If you are implementing Smart Reply using your own transcript data, make
sure your transcripts are JSON files in the specified
format
and stored in a
Google Cloud Storage bucket. A
conversation dataset must contain at least 30,000 conversations; otherwise,
model training will fail. As a general rule, the more conversations you have,
the better your model quality will be. We suggest that you remove any
conversations with fewer than 20 messages or 3 conversation turns (changes
in which participant is making an utterance). We also suggest that you
remove any bot messages or messages automatically generated by systems (for
example, "Agent enters the chat room"). We recommend that you upload
at least 3 months of conversations to ensure coverage of as many use cases
as possible. The maximum number of conversations in a conversation dataset
is 1,000,000.
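The filtering guidance above (drop system messages, then keep only conversations with at least 20 messages and 3 turns) can be sketched as a small preprocessing pass. The entry schema here (`text` and `role` keys per message) is an assumption for illustration; match it to the actual conversation data format linked above.

```python
MIN_MESSAGES = 20
MIN_TURNS = 3
# Hypothetical list of system-generated texts to strip; extend as needed.
SYSTEM_TEXTS = {"Agent enters the chat room"}

def keep_conversation(entries):
    """Apply the suggested Smart Reply data-quality filters.

    `entries` is assumed to be a list of dicts with "text" and "role"
    keys (an illustrative schema, not the authoritative format).
    """
    # Drop bot/system-generated messages first.
    messages = [e for e in entries if e.get("text") not in SYSTEM_TEXTS]
    if len(messages) < MIN_MESSAGES:
        return False
    # A turn is a change in which participant is making an utterance.
    turns = sum(1 for a, b in zip(messages, messages[1:])
                if a.get("role") != b.get("role"))
    return turns >= MIN_TURNS
```

Run this over each conversation file before upload and keep only those for which it returns `True`.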
If you are implementing Summarization using your own transcript and
annotation data, make sure your transcripts are in the specified
format
and stored in a
Google Cloud Storage bucket. The
recommended minimum number of training annotations is 1,000. The enforced
minimum is 100.
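A quick pre-upload sanity check on those two thresholds can save a failed training run. This is a minimal sketch of the documented limits, nothing more:

```python
# Documented thresholds for Summarization training annotations.
ENFORCED_MIN = 100       # training is rejected below this
RECOMMENDED_MIN = 1_000  # aim for at least this many

def annotation_status(count):
    """Classify an annotation count against the documented minimums."""
    if count < ENFORCED_MIN:
        return "too few: training will be rejected"
    if count < RECOMMENDED_MIN:
        return "usable, but below the recommended minimum"
    return "ok"
```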
Navigate to the Agent Assist Console.
Select your Google Cloud Platform project, then click the Data menu
option in the left margin of the page. The Data menu displays all of
your data. There are two tabs, one each for conversation datasets and
knowledge bases.
Click the conversation datasets tab, then the +Create new
button at the top right of the conversation datasets page.
Create a conversation dataset
Enter a Name and optional Description for your new dataset. In the
Conversation data field, enter the URI of the storage bucket that
contains your conversation transcripts. Agent Assist supports use of
the * symbol for wildcard matching. The URI should have the following
format:
gs://<bucket name>/<object name>
For example:
gs://mydata/conversationjsons/conv0*.json
gs://mydatabucket/test/conv.json
Click Create. Your new dataset now appears in the dataset list on the
Data menu page under the Conversation datasets tab.
What's next
Train a Smart Reply or Summarization model on one or more conversation
datasets using the Agent Assist console.