Vertex AI APIs for building search and RAG experiences
Vertex AI offers a suite of APIs to help you build Retrieval-Augmented
Generation (RAG) applications or a search engine. This page introduces those
APIs.
Retrieval and generation
RAG is a methodology that enables Large Language Models (LLMs) to generate
responses that are grounded to your data source of choice. There are two stages
in RAG:
Retrieval: Retrieving the most relevant facts quickly is a classic
search problem. With RAG, you can quickly retrieve the facts that matter
for generating an answer.
Generation: The retrieved facts are used by the LLM to generate a
grounded response.
Vertex AI offers options for both stages to match a variety of
developer needs.
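The two stages can be sketched as a minimal loop. The following is a toy illustration, not Vertex AI code: a keyword-overlap scorer stands in for a real retrieval engine, and a string template stands in for the LLM call. All function and variable names are illustrative.

```python
# Minimal RAG sketch. In a real system, retrieve() would call a search
# engine such as Vertex AI Search, and generate() would call an LLM
# such as Gemini with the retrieved facts in its prompt.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: score documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(query: str, facts: list[str]) -> str:
    """Stand-in for an LLM call that grounds its answer in the facts."""
    context = " ".join(facts)
    return f"Based on: {context} | Answer to: {query}"

docs = [
    "Vertex AI Search is a retrieval engine for enterprise data.",
    "Gemini can ground responses in Google Search results.",
    "Embeddings are vectors that capture the meaning of their input.",
]
facts = retrieve("what is a retrieval engine", docs)
answer = generate("what is a retrieval engine", facts)
```

The point of the sketch is the separation of concerns: the retriever narrows the world down to a few relevant facts, and the generator only ever sees those facts.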
Retrieval
Choose the best retrieval method for your needs:
Vertex AI Search: Vertex AI Search is a
Google Search-quality information retrieval engine that can be a
component of any generative AI application that uses your enterprise data.
Vertex AI Search works as an out-of-the-box semantic & keyword
search engine for RAG with the ability to process a variety of document
types and with connectors to a variety of source systems including
BigQuery and many third-party systems.
Build your own retrieval: If you want to build your own semantic search, you
can rely on Vertex AI APIs for the components of your custom RAG
system. This suite of APIs provides high-quality implementations for document
parsing, embedding generation, vector search, and semantic ranking. Building
on these lower-level APIs gives you full flexibility over the design of your
retriever while still offering accelerated time to market and high quality.
Bring an existing retrieval: You can use your existing search as a
retriever for grounded generation.
You can also use the Vertex APIs for RAG
to upgrade your existing search to higher quality. For more information, see
Grounding overview.
Vertex AI RAG Engine: Vertex AI RAG Engine
provides a fully managed runtime for RAG orchestration, which lets
developers build RAG applications that are ready for production and
enterprise use.
Google Search: When you use Grounding with
Google Search for your Gemini model, Gemini
uses Google Search and generates output that is grounded to the
relevant search results. This retrieval method requires no management
and makes the world's knowledge available to Gemini.
Generation
Choose the best generation method for your needs:
Ground with your data: Generate well-grounded answers to a user's query.
The grounded generation API uses specialized, fine-tuned Gemini
models and is an effective way to reduce hallucinations and provide responses
grounded to your own or third-party sources, including references to the
supporting content.
You can also ground responses to your Vertex AI Search data using
Generative AI on Vertex AI. For more information, see
Ground with your data.
Ground with Google Search: Gemini is Google's most capable
model and offers out-of-the-box grounding with Google Search. You
can use it to build a fully customized grounded generation solution.
Model Garden: If you want full control and the model of your choice,
you can use any of the models in
Vertex AI Model Garden for generation.
Build your own Retrieval Augmented Generation
Developing a custom RAG system for grounding offers flexibility and control at
every step of the process. Vertex AI offers a suite of APIs to help you
create your own search solutions. Using these APIs gives you full flexibility
over the design of your RAG application while still offering accelerated time
to market and high quality.
The Document AI Layout Parser: The layout parser transforms documents in
various formats into structured representations, making content such as
paragraphs, tables, and lists, and structural elements such as headings,
page headers, and footers, accessible. It also creates context-aware chunks
that facilitate information retrieval in a range of generative AI and
discovery apps.
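The value of context-aware chunks can be shown with a toy example, not actual layout parser output: each chunk carries its section heading, so a retriever can match it even when the paragraph alone lacks the key terms. The document structure below is a simplified assumption; the real parser derives structure from PDFs, HTML, and other formats.

```python
# Toy context-aware chunking: prefix each paragraph chunk with its
# section heading so the chunk is self-describing for retrieval.

def chunk_with_context(sections: list[tuple[str, list[str]]]) -> list[str]:
    chunks = []
    for heading, paragraphs in sections:
        for para in paragraphs:
            # A query like "pricing" now matches the first chunk even
            # though the paragraph text never mentions the word.
            chunks.append(f"{heading}: {para}")
    return chunks

sections = [
    ("Pricing", ["The standard tier costs $10 per month."]),
    ("Support", ["Email responses arrive within 24 hours."]),
]
chunks = chunk_with_context(sections)
```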
Embeddings API: The Vertex AI embeddings APIs let you create
embeddings for text or multimodal inputs. Embeddings are vectors of
floating point numbers that are designed to capture the meaning of their
input. You can use the embeddings to power semantic search using Vector
Search.
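With embeddings in hand, semantic search reduces to nearest-neighbor search by vector similarity. The sketch below uses tiny 3-dimensional toy vectors; a real system would obtain high-dimensional vectors from an embeddings API, and the document names are illustrative.

```python
import math

# Cosine similarity compares the direction of two embedding vectors,
# which is the standard similarity measure for semantic search.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

corpus = {
    "doc_cats": [0.9, 0.1, 0.0],  # toy embedding of a document about cats
    "doc_cars": [0.0, 0.2, 0.9],  # toy embedding of a document about cars
}
query_vec = [0.8, 0.2, 0.1]       # toy embedding of a cat-related query

# Retrieval = find the corpus vector nearest to the query vector.
best = max(corpus, key=lambda k: cosine_similarity(query_vec, corpus[k]))
```

At production scale this brute-force comparison is replaced by an approximate nearest-neighbor index, which is what Vector Search provides.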
Vector Search: The retrieval engine is a key part of your RAG
or search application. Vertex AI Vector Search is a
retrieval engine that can search billions of items for semantically similar
or related results at scale, with high queries per second (QPS), high
recall, low latency, and cost efficiency. It searches over dense
embeddings, and supports sparse-embedding keyword search and hybrid search
in Public preview.
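Hybrid search merges a dense (semantic) ranking with a sparse (keyword) ranking. One common fusion method, sketched below with illustrative document IDs, is reciprocal rank fusion (RRF); this is a generic technique, not a description of Vector Search internals.

```python
# Reciprocal rank fusion: each result list contributes 1 / (k + rank)
# to a document's score, so documents ranked highly by both the dense
# and the sparse retriever float to the top of the merged list.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense_hits = ["doc_b", "doc_a", "doc_c"]   # from embedding similarity
sparse_hits = ["doc_a", "doc_b", "doc_d"]  # from keyword matching

fused = rrf([dense_hits, sparse_hits])
```

The constant k dampens the influence of top ranks so that a single retriever cannot dominate the fused ordering.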
The ranking API.
The ranking API takes in a list of documents and reranks those documents
based on how relevant the documents are to a given query. Compared to
embeddings that look purely at the semantic similarity of a document and a
query, the ranking API can give you a more precise score for how well a
document answers a given query.
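The two-stage pattern, a fast first stage followed by a more precise reordering, can be sketched as follows. The scoring function here is a toy stand-in for a learned reranking model, and all names are illustrative.

```python
# Reranking sketch: take the candidates from a first-stage retriever
# and reorder them with a more precise (here: toy) relevance score.

def rerank(query: str, candidates: list[str]) -> list[str]:
    q = query.lower()
    def score(doc: str) -> float:
        d = doc.lower()
        phrase_bonus = 2.0 if q in d else 0.0           # exact phrase match
        overlap = len(set(q.split()) & set(d.split()))  # word overlap
        return phrase_bonus + overlap
    return sorted(candidates, key=score, reverse=True)

candidates = [
    "Shipping rates vary by region.",
    "Our return policy allows refunds within 30 days.",
]
ranked = rerank("return policy", candidates)
```

Because reranking only scores a short candidate list, it can afford a much more expensive model per document than the first-stage retriever can.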
The grounded generation API. Use the grounded
generation API to generate
well-grounded answers to a user's prompt. The grounding sources can be your
Vertex AI Search data stores, custom data that you provide, or
Google Search.
The generate content API. Use the generate content API to generate
well-grounded answers to a user's prompt. The grounding sources can be your
Vertex AI Search data stores or Google Search.
The check grounding API.
The check grounding API determines how grounded a given piece of text is in a
given set of reference texts. The API can generate supporting citations from
the reference text to indicate where the given text is supported by the
reference texts. Among other things, the API can be used to assess the
groundedness of responses from a RAG system. Additionally, as an
experimental feature, the API also generates contradicting citations that
show where the given text and reference texts disagree.
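The idea behind grounding checks can be illustrated with a deliberately crude stand-in: for each claim sentence, look for a reference that covers most of its content words and record it as a citation. The real API uses far more robust models than this word-overlap heuristic; the 0.5 threshold and all names below are assumptions for the sketch.

```python
# Toy groundedness check: a claim counts as supported if at least half
# of its words appear in some reference; the citation is that
# reference's index (or None if no reference supports the claim).

def check_grounding(claims: list[str], references: list[str]):
    citations = []
    for claim in claims:
        words = set(claim.lower().rstrip(".").split())
        supported = None
        for i, ref in enumerate(references):
            ref_words = set(ref.lower().rstrip(".").split())
            if len(words & ref_words) / len(words) >= 0.5:
                supported = i
                break
        citations.append(supported)
    score = sum(c is not None for c in citations) / len(claims)
    return score, citations

refs = ["The store opens at 9 AM on weekdays"]
score, cites = check_grounding(
    ["The store opens at 9 AM.", "Parking is free."], refs)
```

The second claim has no support in the references, so it lowers the overall score and receives no citation, which is exactly the kind of signal you would use to flag possible hallucinations.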
Workflow: Generate grounded responses from unstructured data
Here's a workflow that outlines how to integrate the Vertex AI RAG APIs
to generate grounded responses from unstructured data.
Import your unstructured documents, such as PDF files, HTML files, or images
with text, into a Cloud Storage location.
Process the imported documents using the layout parser.
The layout parser breaks down the unstructured documents into chunks and
transforms the unstructured content into its structured representation. The
layout parser also extracts annotations from the chunks.
Create text embeddings for the chunks using the Vertex AI text
embeddings API.
Index and retrieve the chunk embeddings using Vector Search.
Rank the chunks using the ranking API and determine the top-ranked chunks.
Generate grounded answers based on the top-ranked chunks using the grounded
generation API or the generate content API.
If you generated the answers using an answer generation model other than the
Google models, you can check the grounding of these answers
using the check grounding method.
Last updated 2025-08-25 UTC.