This page is a guide to implementing conversational product filtering in Vertex AI Search for commerce, with data-backed best practices to ensure a successful implementation for mid-to-large retail businesses.
Vertex AI Search for commerce conversational product filtering is an AI-powered tool that transforms search into a guided experience to accompany shoppers when they interact with large product catalogs. When a site user performs a broad search (like coffee table or red dress) that returns thousands of results, conversational product filtering intelligently prompts them with follow-up questions to quickly narrow down their options.
Business use case
The conversational product filtering capability in guided search is specifically designed to address broad, ambiguous or very nuanced search queries. Applying filters to narrow the results significantly increases both revenue and user engagement.
The primary goal is to help shoppers find the right items quickly and intuitively.
Businesses use conversational filtering to:
- Accelerate product discovery: Help shoppers quickly narrow down vast product selections (such as going from 5,000 area rugs to a few hundred targeted results) by asking relevant questions.
- Refine personalization: The questions and multiple-choice options are custom for every query, based on historical filter usage data for that specific query (coffee table is historically filtered by color more often than size, so color can be asked first).
- Simplify implementation: Questions are predesignated for product attributes such as color and width, with one question per attribute.
One-way conversation user journey
Conversational product filtering operates as a one-way conversation that accompanies the shopper throughout their search journey on an ecommerce site. The AI model asks the shopper a question, and the shopper answers.
The shopper initiates a search query. Example: area rugs
The retail site returns 80+ pages of product results.
Vertex AI Search for commerce asks the shopper on the site a question to help narrow their search. Example: Which color are you looking for?
The shopper selects an answer from a list of multiple-choice or free-text options. Example: blue
The product results on the page are immediately filtered based on the shopper's selection.
Search then presents the next most relevant follow-up question. Example: What shape are you looking for?
Figure 1. Conversational filtering user journey.
Follow-up questions in search
If conversational product filtering is enabled, follow-up questions on the site drive a conversation that continues until one of the three following scenarios occurs:
- A preconfigured minimum product count is reached (a conversation is not useful when only two products show up).
- The user clicks on a product and adds it to their cart (the objective).
- Conversational product filtering runs out of AI-generated questions.
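The three stop conditions can be sketched as a simple check. This is an illustrative helper, not part of the API; the function and parameter names are assumptions for this sketch.

```python
def should_stop_conversation(result_count, min_product_count,
                             added_to_cart, remaining_questions):
    """Return True when the question loop should end."""
    if result_count <= min_product_count:
        return True  # results are already narrow enough
    if added_to_cart:
        return True  # the shopper reached the objective
    if remaining_questions == 0:
        return True  # no AI-generated questions left
    return False
```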
Use as an alternative to dynamic facets
Dynamic facets are associated with broad queries and high search return counts, which lead to low revenue per query. End users can become overwhelmed when they see tens of thousands of results and abandon the search. Conversational search refines queries and can be used together with dynamic facets. Conversational product filtering also offers some advantages over dynamic facets: it is more human, more interactive, and uses less on-page real estate.
For more information, refer to the Facets page.
Edit generative questions
Conversational product filtering encourages a human-in-the-loop interaction with the generative AI questions: retailers can edit, overwrite, or deselect the AI-generated questions, which are based on the uploaded catalog, according to their preferences. Questions can be edited or disabled individually or in bulk in the Search for commerce console or through the API, letting retailers tailor which questions appear in search.
User interaction with filters
This section describes how to configure conversational product filtering. We recommend replacing static, hard-coded filter elements with dynamic conversational filtering to free up screen space for more targeted products. All applied filters, regardless of their origin, should globally update the product grid.
Subsequent conversational questions should intelligently adapt to the complete set of applied filters, offering both multiple-choice and free-text input options.
Unified global filters
Shoppers can interact with both conversational filters and any remaining classic filter elements. Your frontend implementation must be able to handle this scenario.
- Global application: When a user makes a selection from any filter element on the page, whether it is a conversational product filter or classic filter element, the product grid must update to show results with all global filters applied.
- Intelligent follow-up: The next conversational question the user sees should be relevant to the complete set of applied filters, regardless of which element the user selects. For example, if a shopper selects a color filter from the conversational element and a size filter from the classic filter element, the subsequent conversational question should not ask the shopper what size they want.
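The unified-filter behavior can be illustrated with a short sketch. The helper names are hypothetical, and the `attribute: ANY("value")` filter-expression syntax is an assumption for this sketch; verify the filter grammar against the Search API documentation.

```python
def merge_filters(applied):
    """Build one global filter expression from {attribute: value} pairs,
    regardless of which UI element applied each filter."""
    return " AND ".join(f'{attr}: ANY("{val}")'
                        for attr, val in sorted(applied.items()))

def next_question(candidate_questions, applied):
    """Pick the first follow-up question whose attribute isn't filtered yet."""
    for question in candidate_questions:
        if question["attribute"] not in applied:
            return question
    return None  # no relevant questions remain
```

For example, with a color filter from the conversational element and a size filter from the classic element applied, a size question is skipped and the next unanswered attribute is asked instead.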
Filter types
Conversational product filtering enables the option to use both multiple choice selections and free text user input.
Multiple choice
Vertex AI Search for commerce can present up to 20 multiple-choice options, based on the value names in the product catalog. Options appear in a sorted list of the most relevant choices. For long options (such as long brand names), ensure users can side-scroll rather than wrap to new lines, to maintain vertical compactness.
Free text input
The product filtering feature supports free text input, allowing shoppers to enter their own answer if the option they are looking for isn't listed. While this option exists, more shoppers opt for multiple-choice selections than free text.
If you implement free text, consider a button that says Type your own which, when clicked, reveals a visual display for a text input window directly on the page, or opens a separate dialog window for text input.
Replace hard-coded elements
Many commerce search site developers have prebuilt, manual category filter components in their web interface intended for top revenue-generating queries. These are typically time-consuming to produce, expensive, and not very dynamic.
Figure 2. Example of hard-coded element display.
- Recommendation: The core idea behind conversational filtering is to let you quickly deploy dynamic experiences like these across all your products, not just the few top queries that the visual elements were designed for. Therefore, identify and remove elements that conversational filtering is designed to replace. Avoid having two competing sets of filter elements that perform similar functions. This frees up screen space to show more targeted products.
Data ingestion and quality
The Vertex AI model's intelligence is built on user interaction data. The onboarding process uses a two-phased approach to data ingestion.
Phase 1: Initial start with historical events
To begin, the model can be trained on historical event data. This data is initially ingested into the Google environment, allowing the model to be effective even on new projects with no live interaction data.
Phase 2: Transition to live query data
After the capability is live and collecting data, Vertex AI uses the live query data stream to refine the serving model. The live query data is generally of higher quality than historically captured event data as historical events can sometimes be missing key information. This makes live query data more effective for ongoing optimization.
User experience design principles for conversational filtering
Successful implementation of conversational product filtering relies on well-thought-out user experience design.
Visual display elements
The placement and appearance of the conversational filter significantly impact its effectiveness.
Handle long attributes
If multiple-choice options are long (such as brand names), do not wrap them to new lines as this adds height to the elements. Instead, allow them to extend horizontally off the page and enable side-scrolling.
Here is an example of a horizontal scroll implementation:
Figure 3. Example of horizontal element display.
Optimize filter placement
Begin by placing the conversational filter in a prominent position, typically where content first appears; this placement has yielded positive results. A key takeaway for this placement is that the conversational filtering bar should be as vertically compact as possible. When the conversational product filtering feature is positioned prominently, it can shift product displays further down the page, out of immediate view. This can be a drawback because shoppers see fewer products, so the products that remain visible must be as relevant as possible.
Additionally, consider placing the conversational filter further down the page, perhaps after three to five rows of products. This approach prevents the conversational element from significantly displacing the initial product displays.
If conversational product filtering becomes your main method for narrowing product selections, think about fully minimizing, or even replacing, your current manual filter bar. This could allow you to add another column of product items.
Desktop and mobile
While desktop implementations have proven successful, results on mobile have been less consistent and have shown lower overall performance gains. The limited screen size on mobile requires a more creative and deliberate approach to placement.
Recommendation: Prioritize desktop implementations over mobile, at first. The larger screen size on desktop allows for greater flexibility in creative designs. The smaller screen on mobile forces developers to prioritize certain elements.
Avoid: Chat window interfaces. Do not implement the conversational filter as a chat window. This takes users away from the main web interface and can disrupt the intended web checkout flow design that developers typically spend considerable time optimizing.
Additional mobile considerations
Mobile web and apps should also be treated independently when it comes to testing. Mobile app testing is inherently difficult to conduct, but offers greater flexibility in customization. Mobile web is often quicker to test, but comes with different tradeoffs for various mobile web browsers.
Ideas for experimentation
- Placement between product rows: Insert the component partway down the page, after three to five rows of products. This approach prevents the conversational element from significantly displacing the initial product displays.
- Fly-out / pop-up: Use a button that triggers a dialog or fly-out menu containing the filter questions. This could be integrated with existing filter pop-ups or be a separate element.
- Sticky bar: A persistent bar on the screen presents the questions and options. This sits in front of the products rather than pushing them down.
- Testing considerations: When testing mobile and desktop, ensure that these experiments are conducted independently. The shopping behaviors for each device vary greatly, and the visual components that work on one device might not translate to the other.
Developer's guide
The following is a developer's guide on how to integrate conversational product filtering in your API.
Admin experience
Manage the generative questions and conversational product filtering directly through the API or in the Search for commerce console, and set the feature up in the Data quality and Evaluate sections of the console.
Cloud console
The console allows retailers to manage generative questions in a conversational product filtering experience. Learn more about using generative questions in conversational product filtering.
Steps to use the generative question service
Satisfy data requirements.
Configure manual question overrides.
Turn the feature on.
Data requirements
To find out if your search data is ready for conversational product filtering, go to the Coverage checks tab in the console, under Conversational product filtering and browse, or under Data quality > Conversation.
To enable conversational product filtering, you need to meet certain data requirements.
These are:
- 1,000 queries per day: After you reach this first threshold, a conversation plan is generated that evaluates your inputs and outputs:
- Inputs: filter count in events
- Outputs: conversational coverage
- 25% conversational coverage: Calculated by Vertex AI Search for commerce models, conversational coverage is the percentage of queries that have at least one question. A frequency-weighted 25% (by volume) of queries must have at least a first question that matches.
If you don't have 25% conversational coverage yet, but meet the first prerequisite of 1,000 queries per day, blocking checks are applied to your outputs and advisory checks to your inputs. At this point, Vertex AI Search for commerce calculates by what percentage your user-event-applied filters must increase to reach the 25% conversational coverage threshold. The more filters that are uploaded, the higher the coverage reached.
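The frequency-weighted coverage metric can be sketched as follows. The real calculation is internal to Vertex AI Search for commerce; this is only a hedged illustration of what "weighted by volume" means.

```python
def conversational_coverage(queries):
    """Frequency-weighted coverage: queries is a list of
    (daily_volume, has_first_question) tuples."""
    total = sum(volume for volume, _ in queries)
    covered = sum(volume for volume, has_q in queries if has_q)
    return covered / total if total else 0.0
```

A high-volume query with a matching first question contributes more to coverage than many rare queries without one.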
To view your conversational readiness:
- Go to the Conversation tab in the Data quality page in the Search for commerce console. This provides you with a critical check of whether a minimum of 25% of search queries have at least one follow-up question, as well as advisory checks as to what percentage of user events with valid filters is needed to reach that conversational coverage goal.
Figure 4. Conversational readiness check.
If you pass the critical check, with sufficient user events with valid filters, proceed to the next step.
To control how generative questions are served, go to the Conversational product filtering and browse page in the Vertex AI Search for commerce console.
Generative question controls
The generative AI writes a question for every indexable attribute in the catalog, using both names and values of system and custom attributes. These questions are generated by an LLM and aim to enhance the search experience. For example, for a furniture type attribute with values such as indoor and outdoor, the AI synthesizes a question asking what type of furniture you are looking for.
Each facet has one generated question. Based on historic user events and facet engagement from past search event data, the questions are sorted by how frequently each question is expected to appear. The AI considers the top-ranked questions first, then selects what is relevant by attribute. The list of questions is generated once. If a new attribute is added, it is reflected in the list within two hours.
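The ranking described above can be mirrored client-side with a trivial sort on the frequency field. The dictionary keys here are assumptions for this sketch; check the GenerativeQuestionConfig reference for the actual field names.

```python
def sort_questions_by_frequency(configs):
    """Order question configs by their frequency field, highest first,
    mirroring how the console lists AI-generated questions."""
    return sorted(configs, key=lambda config: config["frequency"], reverse=True)
```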
Go to the Conversational search and browse page in the Search for commerce console.
Under the Manage AI generated questions tab, view all the questions sorted by how often they are used, in query-weighted frequency, meaning how often they are served with common queries. The ranking uses the frequency field in the GenerativeQuestionConfig configuration, which is responsible for sorting the AI-generated questions by how often they are used. You can use the filter option to filter the questions.
Check the box to enable question visibility for each attribute.
Click edit at the end of each row to open an edit panel for each question.
To make bulk edits, follow these steps:
Select or clear the boxes next to the questions that you want to include or exclude in conversation.
Click either the Allow in conversation or the Disallow in conversation button that appears at the top of the list. Alternatively, to edit an individual question, click edit and clear or re-check the box next to Allowed in conversation in the pane that opens:
Figure 5. Edit each AI-generated question.
Use generative questions in conversational product filtering
The generative question service API provides controls to mitigate potential inconsistencies in the LLM output. These can be managed from the console. Here, retailers can also configure conversational product filtering by toggling its enabled state and setting the minimum number of products required to trigger it.
You can define the questions, specifying the question itself, potential answers, and whether the question is allowed in the conversation. Individual questions can be generated by an LLM or overridden by the retailer. The console supports reviewing AI-generated questions, allowing retailers to override them or toggle their conversational status. Questions can also be bulk edited.
Edit individual questions
You can also use controls to curate the individual questions. It is recommended to do this before you turn conversational product filtering on.
For each question, there are two options. Click edit in the last column to open the questions visible to the users panel:
- Turn off a question for all queries: The question is enabled by default. Clear (or check again) the box next to Allowed in conversation. This option skips the question altogether. A retailer may opt to disable a question entirely if it does not relate to the queried attributes or could be misconstrued as inappropriate (for example, "What dress size are you looking for?" may be perceived as prying about a shopper's weight).
- Rewrite a question: In the pane, you can see the AI-generated question, what attribute it is attached to and what values the attribute has. Click the pencil to rewrite it.
Turn on conversational filtering
After you have edited your generative AI questions in the console, you are ready to turn on conversational product filtering.
To enable conversational product filtering, follow these steps:
Go to the Conversational search and browse page in the Search for commerce console.
Consider the minimum number of products in your catalog that you want returned in the search before questions are generated. This number can be higher but never lower than 2. One row of products per page is often the right amount for triggering a conversation.
Configure the number and switch the toggle to On. If fewer products than this number match, questions are no longer served.
Figure 6. Switch toggle on to Enable conversational search.
This page provides information as to the status of your blocking and advisory checks. If you have enough search queries with at least one follow-up question, your site is now conversational search enabled.
Evaluate and test
Evaluate lets you preview the serving experience with conversational product filtering by running a test search and testing your questions against displayed facets.
To evaluate and test, follow these steps:
Go to the Evaluate page in the Search for commerce console.
Click Search or Browse.
In the Search Evaluation field, enter a test query that makes sense based on the catalog you have uploaded to search, such as shoes if your catalog consists of clothing items.
Click Search preview to see search results.
Figure 7. Preview results.
If you have conversational product filtering enabled, the generative questions appear in the preview.
Generative Question API
This section describes how to use the generative question API to integrate the conversational search API into your UI, manage the generative questions, and serve the feature on your site.
API integration
Objects:
- GenerativeQuestionsFeatureConfig
- GenerativeQuestionConfig
- GenerativeQuestionService
- UpdateGenerativeQuestionsFeatureConfiguration
- UpdateGenerativeQuestionConfig
- ListGenerativeQuestionConfigs
- GetGenerativeQuestionFeatureConfig
- BatchUpdateGenerativeQuestionConfigs
The core of integrating this feature is defining the question resource. This includes the question itself and whether the question is allowed in the conversation. The question is generated by an LLM by default, but can be overridden by the administrator.
Enable conversational product filtering
Object:
- GenerativeQuestionsFeatureConfig
This object is the control configuration for enabling the generative questions feature and managing the overall serving experience of conversational product filtering. GenerativeQuestionsFeatureConfig uses a GET method to obtain attribute information, and whether the attributes are indexable, from the catalog associated with the project.
The feature_enabled switch determines whether questions are used at serving time. It manages the top-level toggles in the console.
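A minimal sketch of the feature-level configuration body follows. The page confirms the feature_enabled switch and the minimum product count; the minimum_products key and the exact casing of the JSON fields are assumptions here, so verify them against the GenerativeQuestionsFeatureConfig reference.

```python
def build_feature_config(catalog, enabled=True, minimum_products=2):
    """Assemble a feature-level config body for the update handler."""
    return {
        "catalog": catalog,                    # parent catalog resource name
        "feature_enabled": enabled,            # top-level serving toggle
        "minimum_products": minimum_products,  # stop asking below this count
    }
```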
Serving experience
Conversational product filtering is based on engaging the user in an ongoing conversation of multiple turns, so at least a second response is required for conversational product filtering to work. The user is presented with a follow-up question and suggested answers in the response. The user can respond to this follow-up question either by entering their answer or by clicking a suggested answer (multiple choice option).
Multiple choice
The multiple choice option functions behind the scenes like a facet (an event type filter), which narrows the query using filtering. In the background, when the user clicks a multiple choice response, a filter is applied to the query. Applying a filter using conversational multiple choice is identical to applying the same filter using dynamic facets or tiles.
Free text
If the user responds in free text, a new and narrower query is generated. Learn more about how conversational product filtering enriches filter and user event capturing at the API level.
Service enabled by the feature
The generative questions service (GenerativeQuestionService) is used for managing LLM-generated questions. Its parent object is the catalog, from which it retrieves the information needed to return questions for a given catalog. The service is used to manage the overall generative question feature state, make individual or batch changes, and toggle questions on or off. The data requirements must be met before you can interface with the service API, and the questions must first be initialized before they can be managed.
The service interacts with the feature level and question level configs with two sets of handlers:
GenerativeQuestionsFeatureConfig handlers (feature-level):
- Update lets you change the minimum products and enabled fields.
- Get returns the feature-level configuration object.
GenerativeQuestionConfig handlers (question-level):
- List returns all questions for a given catalog.
- Update performs individual question management.
- BatchUpdate performs grouped question management.
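The question-level handlers can be exercised with bodies like the following sketch, which rewrites one question and disallows another in a single batch. The facet, final_question, and allowed_in_conversation keys are assumptions based on this page's description of question overrides; confirm the exact field names and casing in the GenerativeQuestionConfig reference.

```python
def override_question(catalog, facet, final_question=None, allowed=True):
    """Build one question-level config: optionally rewrite the LLM wording
    and toggle whether the question may appear in conversation."""
    config = {
        "catalog": catalog,
        "facet": facet,
        "allowed_in_conversation": allowed,
    }
    if final_question is not None:
        config["final_question"] = final_question  # overrides the LLM question
    return config

# A hypothetical BatchUpdate body combining two overrides.
batch = {
    "requests": [
        {"generative_question_config": override_question(
            "projects/p/locations/global/catalogs/c", "colorFamilies",
            final_question="Which color would you like?")},
        {"generative_question_config": override_question(
            "projects/p/locations/global/catalogs/c", "sizes",
            allowed=False)},
    ]
}
```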
The service returns a semantically appropriate question based on the initial query.
A follow-up question is generated by the LLM and can be overridden. The questions are displayed based on how likely they are to be used by customers, determined from the search event history. If there is no search event history, the fallback is the commerce search logs.
Different questions are generated based on the previous query. There are no fixed weights: the AI that drives the LLM-generated questions learns from the queries and changes the weighting for every query, so that "shirt", for example, weighs the category very heavily, while "XL red shirt" weighs category, size, and color.
Configure the serving experience
Configure the serving experience by integrating the conversational filtering configuration API with the Search API.
User journey in the API
The conversational flow works as follows: The user initiates a search with an initial query, and the mode flag is set to CONVERSATIONAL_FILTER_ONLY. The user then selects an answer or provides free-text input, which is sent back to the API in the user_answer field.
The Conversational API provides the additional_filter field in the response. You must apply these filters to the Search API follow-up request. The search results are based on the user's input and come with a new follow-up question, prompting a follow-up query and continuing the conversation in multiple turns until the user finds what they're looking for on the retailer website.
User scenarios
Assuming conversational product filtering is enabled on the website, the user journey and subsequent interaction with Vertex AI Search for commerce follows this path:
- Step 1. The first query comes from the user to both the Search and Conversational APIs. The Search API only returns search results. The Conversational API returns the suggested answers and follow-up questions. Call the Search API for the same query or page_category and fetch the search results.
- Step 2. A follow-up conversation request is sent to conversational search. Call the Conversational API with the right conversational filtering mode.
- Step 3. The initial search response contains search results only. The Conversational API refines the query by returning the suggested answers and follow-up questions.
- Scenario A: User selects multiple choice.
- Scenario B: User selects free text.
- Step 1B. The text answer is sent to the Conversational API. Use the Conversational API to send the user answer.
- Step 2B. The user gets a conversational follow-up question with some suggested answers in the Conversational response. Search is run again with a modified query. The Conversational API sends another question and additional_filter. This filter must be applied to the search results fetched from the Search API in the first step.
Step 1. First query comes from user
The conversationalFilteringMode option in the Conversational API defaults to DISABLED and can be set to ENABLED to control conversational filtering.
First, create a search request by setting the product or item as the query, in this example, "dress".
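A minimal sketch of the initial pair of requests for the query "dress" follows. The query, mode, and CONVERSATIONAL_FILTER_ONLY fields come from this page; the visitor_id key and the overall request shape are assumptions, so check the API reference for the exact schemas.

```python
def build_initial_requests(query, visitor_id):
    """Build the initial Search request and the parallel conversational
    request that asks for follow-up questions only."""
    search_request = {
        "query": query,
        "visitor_id": visitor_id,
    }
    conversational_request = {
        "query": query,
        "visitor_id": visitor_id,
        "mode": "CONVERSATIONAL_FILTER_ONLY",  # DISABLED yields no follow-up
    }
    return search_request, conversational_request
```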
Additional actions on the client side to enable conversationally filtered searches:
- Create a conversational search request by setting "dress" as the query.
- Set mode to CONVERSATIONAL_FILTER_ONLY in order to get a conversational response. If it's set to DISABLED, no follow-up question is supplied.
Step 2. Retailer → search: Initial query with conversation enabled
Step 3. Search → retailer: conversation ID, refined query, follow-up question, suggested answers
Conversational product filtering serves these options for continued conversational engagement, leading to faster search refinement:
Scenario A: User selects a multiple choice option
If a user selected a multiple choice answer yellow:
- Restore the conversation_id from session storage.
- Set mode to CONVERSATIONAL_FILTER_ONLY.
- Set user_answer to what the user selects.
Step 1A. Retailer → search: selected answer filter
Step 2A. Search → retailer: filters applied
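Scenario A can be sketched as the following request body. The conversation_id, mode, and user_answer fields come from this page, but the exact shape of user_answer (plain string versus a structured object) is an assumption here; verify it in the API reference.

```python
def build_multiple_choice_request(query, conversation_id, selected_answer):
    """Build the follow-up request after the shopper picks a
    multiple-choice answer such as yellow."""
    return {
        "query": query,
        "conversation_id": conversation_id,  # restored from session storage
        "mode": "CONVERSATIONAL_FILTER_ONLY",
        "user_answer": selected_answer,      # shape may differ; see reference
    }
```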
Scenario B: User selects a free text input
If a user types in lavender:
- Restore the conversation_id from session storage.
- Set followup_conversation_requested to true.
- Set user_answer to what the user inputs (with the "text_answer:" prefix).
Step 1B. Retailer → search: text answer
Step 2B. Search → retailer: run with modified query
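Scenario B can be sketched the same way. The followup_conversation_requested flag and the "text_answer:" prefix come from this page; treating user_answer as a prefixed string is an assumption for this sketch, so confirm the field's shape in the API reference.

```python
def build_free_text_request(query, conversation_id, text):
    """Build the follow-up request after the shopper types a free-text
    answer such as lavender."""
    return {
        "query": query,
        "conversation_id": conversation_id,        # restored from session storage
        "followup_conversation_requested": True,
        "user_answer": f"text_answer: {text}",     # free text uses the prefix
    }
```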
Iterative improvement with testing
Conversational product filtering is an ongoing process of optimization, requiring continuous refinement and data-driven decisions. The goal is to maximize feature effectiveness by understanding shopper behavior and adapting the design accordingly.
Shopper behaviors are dynamic and evolve over time, influenced by factors such as market trends, competitor offerings, and changes in personal preferences. It is crucial to experiment continuously: iterate on your designs and test new approaches as you gather more data and observe how shoppers interact with the AI features. This ongoing cycle of experimentation, data analysis, and refinement keeps the AI features relevant, effective, and optimized for an evolving user base.
Regularly review performance metrics, conduct user surveys, and analyze feedback to identify areas for improvement and new opportunities for innovation. This commitment to continuous iteration is key to long-term success in AI feature deployment.
Lessons learned
- Experiment continuously: The optimal result is often not the first design you try.
- Iterate and adapt: User behaviors evolve. Continue to iterate on your designs and test new approaches as you gather more data and observe how shoppers interact with the feature.
- Beyond A/B: Don't limit yourself to just A/B testing (comparing two versions). Instead, conduct many A/B/C/D/E/F tests to explore a wider range of UI designs and placement options.
Key metrics for optimization
To effectively optimize Vertex AI Search for commerce, it's crucial to define and track relevant metrics that provide insights into user engagement, satisfaction, and the overall impact of the features. Key metrics to consider include:
- Conversion rate: The percentage of users who complete the targeted action, such as making a purchase.
- User satisfaction scores (such as NPS, CSAT): Direct feedback from users on their experience with the AI feature, providing qualitative insights into usability and perceived value.
- Adoption rate: The percentage of shoppers that actively use conversational product filtering, indicating its visibility and perceived utility.