Vertex AI embeddings models can generate optimized embeddings for various task types, such as document retrieval, question answering, and fact verification. Task types are labels that optimize the embeddings that the model generates based on your intended use case. This document describes how to choose the optimal task type for your embeddings.
Supported models
Task types are supported by the following models:
- textembedding-gecko@003
- text-embedding-004
- text-embedding-005
- text-multilingual-embedding-002
Benefits of task types
Task types can improve the quality of embeddings generated by an embeddings model.
For example, when building Retrieval Augmented Generation (RAG) systems, a common design is to use text embeddings and Vector Search to perform a similarity search. In some cases this can lead to degraded search quality, because questions and their answers are not semantically similar. For example, a question like "Why is the sky blue?" and its answer "The scattering of sunlight causes the blue color" have distinctly different meanings as statements, which means that a RAG system won't automatically recognize their relation, as demonstrated in figure 1. Without task types, a RAG developer would need to train their model to learn the relationship between queries and answers, which requires advanced data science skills and experience, or use LLM-based query expansion or HyDE, which can introduce high latency and costs.
Task types enable you to generate optimized embeddings for specific tasks, which saves you the time and cost it would take to develop your own task-specific embeddings. The generated embedding for a query "Why is the sky blue?" and its answer "The scattering of sunlight causes the blue color" would be in the shared embedding space that represents the relationship between them, as demonstrated in figure 2. In this RAG example, the optimized embeddings would lead to improved similarity searches.
In addition to the query and answer use case, task types also provide an optimized embeddings space for tasks such as classification, clustering, and fact verification.
Supported task types
Embeddings models that use task types support the following task types:
| Task type | Description |
|---|---|
| SEMANTIC_SIMILARITY | Used to generate embeddings that are optimized to assess text similarity. |
| CLASSIFICATION | Used to generate embeddings that are optimized to classify texts according to preset labels. |
| CLUSTERING | Used to generate embeddings that are optimized to cluster texts based on their similarities. |
| RETRIEVAL_DOCUMENT, RETRIEVAL_QUERY, QUESTION_ANSWERING, and FACT_VERIFICATION | Used to generate embeddings that are optimized for document search or information retrieval. |
| CODE_RETRIEVAL_QUERY | Used to retrieve a code block based on a natural language query, such as "sort an array" or "reverse a linked list". Embeddings of the code blocks are computed using RETRIEVAL_DOCUMENT. |
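For example, a code search workflow combines two task types: the code blocks in your corpus are embedded with RETRIEVAL_DOCUMENT, while the natural language query is embedded with CODE_RETRIEVAL_QUERY. The following is a minimal sketch using the Vertex AI SDK for Python; the code snippets, model name, and in-memory similarity ranking are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: code search with mixed task types. Code blocks are
# embedded with RETRIEVAL_DOCUMENT; the natural language query is
# embedded with CODE_RETRIEVAL_QUERY. Snippets, model name, and the
# in-memory ranking are illustrative assumptions; this also assumes
# vertexai.init(project=..., location=...) has already been called.
import numpy as np
from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("text-embedding-005")

code_blocks = [
    "def reverse_linked_list(head): ...",
    "def quicksort(arr): ...",
]
code_inputs = [TextEmbeddingInput(c, task_type="RETRIEVAL_DOCUMENT") for c in code_blocks]
code_vectors = np.array([e.values for e in model.get_embeddings(code_inputs)])

query = [TextEmbeddingInput("reverse a linked list", task_type="CODE_RETRIEVAL_QUERY")]
query_vector = np.array(model.get_embeddings(query)[0].values)

# Rank code blocks by cosine similarity to the query embedding.
scores = code_vectors @ query_vector / (
    np.linalg.norm(code_vectors, axis=1) * np.linalg.norm(query_vector)
)
print(code_blocks[int(np.argmax(scores))])  # expected: the linked-list snippet
```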
The best task type for your embeddings job depends on your use case for the embeddings. Before you select a task type, determine your embeddings use case.
Determine your embeddings use case
Embeddings use cases typically fall within one of four categories: assessing text similarity, classifying texts, clustering texts, or retrieving information from texts. If your use case doesn't fall into one of the preceding categories, use the RETRIEVAL_QUERY task type by default.
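Once you know your use case, you pass the corresponding task type with each input when you request embeddings. The following is a minimal sketch using the Vertex AI SDK for Python; the project ID, location, and model name are placeholder assumptions that you would replace with your own values.

```python
# Minimal sketch: request an embedding with an explicit task type using
# the Vertex AI SDK for Python. The project ID, location, and model name
# are placeholder assumptions; substitute your own values.
import vertexai
from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

vertexai.init(project="your-project-id", location="us-central1")
model = TextEmbeddingModel.from_pretrained("text-embedding-005")

# RETRIEVAL_QUERY is the default choice when no other category fits.
inputs = [TextEmbeddingInput("best restaurants in Vancouver", task_type="RETRIEVAL_QUERY")]
embedding = model.get_embeddings(inputs)[0]
print(len(embedding.values))  # dimensionality of the embedding vector
```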
Assess text similarity
If you want to use embeddings to assess text similarity, use the SEMANTIC_SIMILARITY task type. This task type generates embeddings that are optimized for generating similarity scores.
For example, suppose you want to generate embeddings to use to compare the similarity of the following texts:
- The cat is sleeping
- The feline is napping
When the embeddings are used to compute a similarity score, the score is high because both texts have nearly the same meaning.
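As a sketch of this workflow, the following compares the two texts using cosine similarity; it assumes vertexai.init has been called as in the earlier sketch, and the model name is a placeholder assumption.

```python
# Minimal sketch: compare two texts with SEMANTIC_SIMILARITY embeddings.
# Assumes vertexai.init(...) has been called as in the earlier sketch;
# the model name is a placeholder assumption.
import numpy as np
from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("text-embedding-005")
inputs = [
    TextEmbeddingInput("The cat is sleeping", task_type="SEMANTIC_SIMILARITY"),
    TextEmbeddingInput("The feline is napping", task_type="SEMANTIC_SIMILARITY"),
]
vec_a, vec_b = (np.array(e.values) for e in model.get_embeddings(inputs))

# Cosine similarity approaches 1.0 for texts with nearly the same meaning.
similarity = np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
print(f"similarity: {similarity:.3f}")
```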
Consider the following real-world scenarios where assessing input similarity would be useful:
- For a recommendation system, you want to identify items (e.g., products, articles, movies) that are semantically similar to a user's preferred items, providing personalized recommendations and enhancing user satisfaction.
Classify texts
If you want to use embeddings to classify texts according to preset labels, use the CLASSIFICATION task type. This task type generates embeddings in an embeddings space that is optimized for classification.
For example, suppose you want to generate embeddings for social media posts that you can then use to classify their sentiment as positive, negative, or neutral. When embeddings for a social media post that reads "I don't like traveling on airplanes" are classified, the sentiment would be classified as negative.
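A common pattern is to use CLASSIFICATION embeddings as features for a lightweight downstream classifier. The following sketch assumes scikit-learn and a small, hypothetical labeled dataset; the training texts, labels, and classifier choice are illustrative assumptions, not part of the Vertex AI API.

```python
# Minimal sketch: CLASSIFICATION embeddings as features for a sentiment
# classifier. The labeled examples and the scikit-learn model choice are
# illustrative assumptions; assumes vertexai.init(...) has been called.
from sklearn.linear_model import LogisticRegression
from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("text-embedding-005")

def embed(texts):
    inputs = [TextEmbeddingInput(t, task_type="CLASSIFICATION") for t in texts]
    return [e.values for e in model.get_embeddings(inputs)]

train_texts = ["I love this airline", "The flight was delayed again", "The seat was fine"]
train_labels = ["positive", "negative", "neutral"]

classifier = LogisticRegression(max_iter=1000).fit(embed(train_texts), train_labels)
print(classifier.predict(embed(["I don't like traveling on airplanes"])))  # likely "negative"
```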
Cluster texts
If you want to use embeddings to cluster texts based on their similarities, use the CLUSTERING task type. This task type generates embeddings that are optimized for being grouped based on their similarities.
For example, suppose you want to generate embeddings for news articles so that you can show users articles that are topically related to the ones they have previously read. After the embeddings are generated and clustered, you can suggest additional sports-related articles to users who read a lot about sports.
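As a sketch, the following clusters a handful of article titles with CLUSTERING embeddings and k-means; the titles and the use of scikit-learn are illustrative assumptions.

```python
# Minimal sketch: cluster article titles with CLUSTERING embeddings and
# k-means. Titles and the scikit-learn KMeans choice are illustrative
# assumptions; assumes vertexai.init(...) has been called.
from sklearn.cluster import KMeans
from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("text-embedding-005")

titles = [
    "Local team wins the championship final",
    "Star striker signs a record transfer deal",
    "Central bank raises interest rates again",
    "Markets rally after inflation report",
]
inputs = [TextEmbeddingInput(t, task_type="CLUSTERING") for t in titles]
vectors = [e.values for e in model.get_embeddings(inputs)]

# Two clusters: the sports titles and the finance titles should separate.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
print(list(zip(titles, labels)))
```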
Additional use cases for clustering include the following:
- Customer segmentation: group customers with similar embeddings generated from their profiles or activities for targeted marketing and personalized experiences.
- Product segmentation: clustering product embeddings based on their product title and description, product images, or customer reviews can help businesses do segment analysis on their products.
- Market research: clustering consumer survey responses or social media data embeddings can reveal hidden patterns and trends in consumer opinions, preferences, and behaviors, aiding market research efforts and informing product development strategies.
- Healthcare: clustering patient embeddings derived from medical data can help identify groups with similar conditions or treatment responses, leading to more personalized healthcare plans and targeted therapies.
- Customer feedback trends: clustering customer feedback from various channels (surveys, social media, support tickets) into groups can help identify common pain points, feature requests, and areas for product improvement.
Retrieve information from texts
If you want to use embeddings for document search or information retrieval, or for Q&A use cases such as search, chatbots, or RAG as discussed in the introduction, you need to run two embeddings jobs with different task types, as shown in the sketch after this list:

- Use the RETRIEVAL_DOCUMENT task type to create optimized embeddings for your documents (also called a corpus).
- Use one of the following task types to create optimized embeddings for your queries, depending on the nature of the queries:
  - RETRIEVAL_QUERY: Use as the default task type for queries, such as "best restaurants in Vancouver", "green vegetables", or "What is the best cookie recipe?".
  - QUESTION_ANSWERING: Use in cases where all queries are formatted as proper questions, such as "Why is the sky blue?" or "How do I tie my shoelaces?".
  - FACT_VERIFICATION: Use in cases where you want to retrieve a document from your corpus that proves or disproves a statement. For example, the query "apples grow underground" might retrieve an article about apples that would ultimately disprove the statement.
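The following sketch shows the two-job pattern end to end: documents are embedded with RETRIEVAL_DOCUMENT, the query with RETRIEVAL_QUERY, and results are ranked by cosine similarity. The documents and query are illustrative, and a production system would typically store the document embeddings in Vector Search rather than ranking them in memory.

```python
# Minimal sketch: embed a corpus with RETRIEVAL_DOCUMENT and a query with
# RETRIEVAL_QUERY, then rank documents by cosine similarity. Documents,
# query, and model name are illustrative; assumes vertexai.init(...) has
# been called as in the earlier sketch.
import numpy as np
from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("text-embedding-005")

documents = [
    "The scattering of sunlight causes the blue color of the sky.",
    "Apples grow on trees in temperate climates.",
]
doc_inputs = [TextEmbeddingInput(d, task_type="RETRIEVAL_DOCUMENT") for d in documents]
doc_vectors = np.array([e.values for e in model.get_embeddings(doc_inputs)])

# Use QUESTION_ANSWERING instead if every query is phrased as a question.
query_input = [TextEmbeddingInput("Why is the sky blue?", task_type="RETRIEVAL_QUERY")]
query_vector = np.array(model.get_embeddings(query_input)[0].values)

scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
print(documents[int(np.argmax(scores))])  # expected: the sky-color document
```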
Consider the following real-world scenarios where retrieval queries would be useful:
- For an ecommerce platform, you want to use embeddings to enable users to search for products using both text queries and images, providing a more intuitive and engaging shopping experience.
- For an educational platform, you want to build a question-answering system that can answer students' questions based on textbook content or educational resources, providing personalized learning experiences and helping students understand complex concepts.
What's next
- Learn how to get text embeddings.