Vertex AI RAG Engine billing

This page describes how Vertex AI RAG Engine pricing and billing depend on the components that you use, such as models, reranking, and vector storage.

For more information, see the Vertex AI RAG Engine overview page.

Pricing and billing

Vertex AI RAG Engine itself is free to use. However, the components that you configure, such as parsers, embedding models, vector databases, and rerankers, can incur charges in your project.

The following sections describe how billing works with Vertex AI RAG Engine for each component.

Data ingestion
Vertex AI RAG Engine supports ingesting data from different data sources, such as local file uploads, Cloud Storage, and Google Drive. Accessing files in these data sources from Vertex AI RAG Engine is free, but the data sources themselves might charge for data transfer, such as data egress costs.
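As an illustration, the following minimal sketch imports files from a Cloud Storage path into an existing RAG corpus with the Vertex AI SDK for Python. The project, corpus, and bucket values are placeholders, and the rag module names shown here might differ between SDK versions.

```python
import vertexai
from vertexai import rag

# Placeholder project, location, corpus, and bucket values.
vertexai.init(project="PROJECT_ID", location="us-central1")

# Import files from Cloud Storage; a Google Drive folder URL can be listed in
# `paths` instead. Any data-transfer (egress) charges come from the data source,
# not from Vertex AI RAG Engine.
rag.import_files(
    corpus_name="projects/PROJECT_ID/locations/us-central1/ragCorpora/RAG_CORPUS_ID",
    paths=["gs://BUCKET_NAME/FOLDER_NAME"],
)
```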

Data transformation (file parsing)
  • Default parser: Free.
  • LLM parser: Vertex AI RAG Engine uses the LLM that you specify to parse your files, and the LLM costs are billed directly to your project.
  • Document AI layout parser: Vertex AI RAG Engine uses the Document AI layout parser that you specify to process your files, and the layout parser usage is billed directly to your project.
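As an illustration, this sketch attaches an LLM parser to a file import. The LlmParserConfig class, the llm_parser argument, and the model resource name are assumptions based on the Vertex AI SDK for Python and might differ in your SDK version; the Document AI layout parser is configured similarly with a config that references your Document AI processor.

```python
import vertexai
from vertexai import rag

vertexai.init(project="PROJECT_ID", location="us-central1")

# Assumed SDK surface: parse imported files with a Gemini model instead of the
# free default parser. The parsing calls to this model are billed to your project.
llm_parser_config = rag.LlmParserConfig(
    model_name="projects/PROJECT_ID/locations/us-central1/publishers/google/models/gemini-2.0-flash-001",
    max_parsing_requests_per_min=100,
)

rag.import_files(
    corpus_name="projects/PROJECT_ID/locations/us-central1/ragCorpora/RAG_CORPUS_ID",
    paths=["gs://BUCKET_NAME/FOLDER_NAME"],
    llm_parser=llm_parser_config,
)
```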

Data transformation (file chunking)
Supports fixed-size chunking, which is free.
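Chunk size and overlap are set at import time. In the following sketch, the TransformationConfig and ChunkingConfig names are taken from the Vertex AI SDK for Python and might differ between SDK versions; the values are illustrative.

```python
import vertexai
from vertexai import rag

vertexai.init(project="PROJECT_ID", location="us-central1")

# Assumed SDK surface: fixed-size chunking itself adds no charge.
transformation_config = rag.TransformationConfig(
    chunking_config=rag.ChunkingConfig(
        chunk_size=512,     # tokens per chunk
        chunk_overlap=100,  # tokens shared between consecutive chunks
    )
)

rag.import_files(
    corpus_name="projects/PROJECT_ID/locations/us-central1/ragCorpora/RAG_CORPUS_ID",
    paths=["gs://BUCKET_NAME/FOLDER_NAME"],
    transformation_config=transformation_config,
)
```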

Embedding generation
Vertex AI RAG Engine orchestrates embedding generation with the embedding model that you specify, and your project is billed for the costs associated with that model.

For more pricing information, see Cost of building and deploying AI models in Vertex AI.
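For example, the embedding model is specified when you create the RAG corpus. In the following sketch, the RagEmbeddingModelConfig, VertexPredictionEndpoint, and RagVectorDbConfig names follow the Vertex AI SDK for Python and might differ between SDK versions (older versions use a different embedding model config argument); the publisher model is illustrative.

```python
import vertexai
from vertexai import rag

vertexai.init(project="PROJECT_ID", location="us-central1")

# Assumed SDK surface: the corpus is created with a specific publisher embedding
# model; embedding calls made during ingestion are billed to your project.
embedding_model_config = rag.RagEmbeddingModelConfig(
    vertex_prediction_endpoint=rag.VertexPredictionEndpoint(
        publisher_model="publishers/google/models/text-embedding-005"
    )
)

rag_corpus = rag.create_corpus(
    display_name="my-rag-corpus",
    backend_config=rag.RagVectorDbConfig(
        rag_embedding_model_config=embedding_model_config
    ),
)
```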

Data indexing and retrieval
Vertex AI RAG Engine supports two categories of vector databases for vector search:
  • RAG-managed database
  • Bring-Your-Own vector database

A RAG-managed database serves two purposes:
  • It stores RAG resources, such as RAG corpora and RAG files; file contents are excluded.
  • Optionally, it provides embedding indexing and retrieval for vector search.

A RAG-managed database uses a Spanner instance as the backend.

For each of your projects, Vertex AI RAG Engine provisions a dedicated, customer-specific Google Cloud project to store and manage your RAG-managed resources, so that your data is physically isolated.

If you choose the RagManagedDB Basic tier or Scaled tier, Vertex AI RAG Engine provisions a Spanner Enterprise edition instance in the corresponding project:

  • Basic tier: 100 processing units with backup
  • Scaled tier: Starting at 1 node (1,000 processing units) and autoscaling up to 10 nodes with backup

If any RAG corpus in your project uses a RAG-managed database for vector search, you are charged for the RAG-managed Spanner instance.

Vertex AI RAG Engine surfaces the Spanner costs from the corresponding RAG-managed project to your Google Cloud project, so that you can see and pay for the Spanner instance there.

For more pricing details on Spanner, see Spanner pricing.
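As a sketch of how this choice appears in code, the following example creates a corpus that uses the RAG-managed database. The RagVectorDbConfig, RagManagedDb, and VertexVectorSearch names are assumptions based on the Vertex AI SDK for Python and might differ in your SDK version; the resource names are placeholders.

```python
import vertexai
from vertexai import rag

vertexai.init(project="PROJECT_ID", location="us-central1")

# Assumed SDK surface: use the RAG-managed database for vector search. The Basic
# or Scaled tier you select determines the size of the underlying Spanner
# instance whose costs are surfaced to your project.
managed_backend = rag.RagVectorDbConfig(vector_db=rag.RagManagedDb())

# Alternative (bring-your-own): point the corpus at a Vertex AI Vector Search
# index instead, in which case you pay for your own vector database rather than
# the RAG-managed Spanner instance.
byo_backend = rag.RagVectorDbConfig(
    vector_db=rag.VertexVectorSearch(
        index="projects/PROJECT_ID/locations/us-central1/indexes/INDEX_ID",
        index_endpoint="projects/PROJECT_ID/locations/us-central1/indexEndpoints/INDEX_ENDPOINT_ID",
    )
)

rag_corpus = rag.create_corpus(
    display_name="my-rag-corpus",
    backend_config=managed_backend,
)
```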

Reranking for Vertex AI RAG Engine
The following ranking tools are supported after retrieval:
  • LLM reranker: Vertex AI RAG Engine uses the LLM that you specify to rerank the retrieval results, and the LLM costs are billed directly to your project.
  • Vertex AI Search ranking API: Vertex AI RAG Engine uses the Vertex AI Search ranking API to rerank the retrieval results, and the ranking API usage is billed directly to your project.
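For example, the reranker is attached to the retrieval configuration of a query. In the following sketch, the RagRetrievalConfig, Ranking, RankService, and LlmRanker names follow the Vertex AI SDK for Python and might differ between SDK versions; the ranker model names are illustrative.

```python
import vertexai
from vertexai import rag

vertexai.init(project="PROJECT_ID", location="us-central1")

# Assumed SDK surface: rerank retrieval results with the Vertex AI Search ranking
# API; the ranking API usage is billed to your project. To use an LLM reranker
# instead, swap the `rank_service` line for something like:
#   ranking=rag.Ranking(llm_ranker=rag.LlmRanker(model_name="gemini-2.0-flash-001"))
retrieval_config = rag.RagRetrievalConfig(
    top_k=10,
    ranking=rag.Ranking(
        rank_service=rag.RankService(model_name="semantic-ranker-512@latest")
    ),
)

response = rag.retrieval_query(
    rag_resources=[
        rag.RagResource(
            rag_corpus="projects/PROJECT_ID/locations/us-central1/ragCorpora/RAG_CORPUS_ID"
        )
    ],
    text="How does billing work?",
    rag_retrieval_config=retrieval_config,
)
```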

What's next