Use Document AI layout parser with Vertex AI RAG Engine

This guide shows how to use the Document AI layout parser with Vertex AI RAG Engine.

The following diagram summarizes the overall workflow for using the API or SDK:

Overview of the Document AI layout parser

Document AI is a document-processing and document-understanding platform that takes unstructured data from documents and transforms it into structured data. You can then analyze and use this structured data. With Document AI, you can create scalable, end-to-end, cloud-based document processing applications without specialized machine-learning expertise. It is built on generative AI products within Vertex AI.

The layout parser extracts content elements from the document, such as text, tables, and lists. It then creates context-aware chunks that facilitate information retrieval in generative AI and discovery applications. When you use the layout parser for retrieval and LLM generation, the chunking process considers the document's layout. This improves semantic coherence and reduces noise in the content. All text in a chunk comes from the same layout entity, such as a heading, subheading, or list.

For file types used by layout detection, see Layout detection per file type.

Use the layout parser in the console

To use the layout parser in Vertex AI RAG Engine, create a corpus by following these steps:

  1. In the Google Cloud console, go to the RAG Engine page.

    Go to RAG Engine

  2. Select Create corpus.

  3. In the Region field, select your region.

  4. In the Corpus name field, enter a name for your corpus.

  5. In the Description field, enter a description.

  6. In the Data section, select where to upload your data.

  7. Expand the Advanced options section.

    1. In the Chunking strategy section, the following default sizes are recommended:

      • Chunking size: 1024
      • Chunk overlap: 256
    2. In the Layout parser section, select the LLM parser option, which has the highest accuracy for documents with images or charts.

      1. From the Model field, select your model.
      2. Optional: In the Maximum parsing requests per min field, enter your maximum parsing requests.
      3. Optional: In the Custom parsing prompt field, enter your parsing prompt.
      4. Click Continue.
  8. On the Configure vector store page, configure the following:

    1. In the Embedding model field, select your embedding model.
    2. In the Vector database section, select your database.
  9. Click Create corpus.
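With the recommended defaults above, consecutive chunks share 256 tokens of context. The following sketch illustrates how a fixed-length chunker with overlap walks a token sequence; the helper is hypothetical and only demonstrates how chunk size and chunk overlap interact, not the RAG Engine implementation, which also takes document layout into account:

```python
def chunk_tokens(tokens, chunk_size=1024, chunk_overlap=256):
    """Split a token list into fixed-length windows that overlap.

    Each new chunk starts (chunk_size - chunk_overlap) tokens after the
    previous one, so adjacent chunks share chunk_overlap tokens of context.
    """
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    stride = chunk_size - chunk_overlap
    return [tokens[i : i + chunk_size] for i in range(0, len(tokens), stride)]

# With the console defaults, a 2,048-token document yields three chunks:
# tokens 0-1023, 768-1791, and 1536-2047.
chunks = chunk_tokens(list(range(2048)))
print(len(chunks))  # -> 3
```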

Use the layout parser with the API or SDK

This section shows how to programmatically import files using the layout parser.

Before you begin

Before you import your files, complete the following steps.

Enable the Document AI API

Enable the Document AI API for your project. For more information on enabling APIs, see the Service Usage documentation.

Enable the API

Create and enable a layout parser processor

  1. Create a layout parser by following the instructions in Creating and managing processors. The processor type name is LAYOUT_PARSER_PROCESSOR.

  2. Enable the layout parser by following the instructions in Enable a processor.
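The processor you create is addressed by a resource name of the form `projects/{project}/locations/{location}/processors/{processor_id}`, which you pass to the import call later. As a minimal sketch, a hypothetical helper that assembles the path:

```python
def layout_parser_processor_name(project: str, location: str, processor_id: str) -> str:
    """Build the full resource path for a Document AI processor."""
    return f"projects/{project}/locations/{location}/processors/{processor_id}"

# Example with illustrative values:
name = layout_parser_processor_name("my-project", "us", "abc123")
print(name)  # -> projects/my-project/locations/us/processors/abc123
```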

Prepare your RAG corpus

If you don't have a RAG corpus, create one. For an example, see Create a RAG corpus example.

If you already have a RAG corpus, files that were previously imported without a layout parser aren't re-imported when you import files using the layout parser. To use the layout parser with those files, delete them first and then re-import them. For an example, see Delete a RAG file example.

Limitations

The ImportRagFiles API supports the layout parser; however, the following limitations apply:

  • The maximum input file size is 20 MB for all file types.
  • There is a maximum of 500 pages per PDF file.

The Document AI quotas and pricing also apply.
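Because imports fail for oversized inputs, it can help to screen local files before uploading them. A minimal sketch that flags files over the 20 MB limit (a hypothetical helper; checking the 500-page PDF limit would require a PDF library and is omitted):

```python
import os

# 20 MB limit for all file types, assuming binary megabytes.
MAX_FILE_BYTES = 20 * 1024 * 1024

def oversized_files(paths):
    """Return the local paths whose size exceeds the import limit."""
    return [p for p in paths if os.path.getsize(p) > MAX_FILE_BYTES]
```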

Import files

You can import files and folders from various sources using the layout parser.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

Before running the code sample, replace the following variables:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • RAG_CORPUS_ID: The ID of the RAG corpus resource.
  • GCS_URIS: A list of Cloud Storage locations. For example: "gs://my-bucket1", "gs://my-bucket2".
  • LAYOUT_PARSER_PROCESSOR_NAME: The resource path to the layout parser processor that was created. For example: "projects/{project}/locations/{location}/processors/{processor_id}".
  • CHUNK_SIZE: Optional: The number of tokens each chunk should have.
from vertexai import rag
import vertexai

PROJECT_ID = "YOUR_PROJECT_ID"
rag_corpus_id = "YOUR_RAG_CORPUS_ID"
corpus_name = f"projects/{PROJECT_ID}/locations/us-central1/ragCorpora/{rag_corpus_id}"
paths = ["https://drive.google.com/file/123", "gs://my_bucket/my_files_dir"]  # Supports Cloud Storage and Google Drive.

# Initialize the Vertex AI API once per session.
vertexai.init(project=PROJECT_ID, location="us-central1")

response = rag.import_files(
    corpus_name=corpus_name,
    paths=paths,
    transformation_config=rag.TransformationConfig(
        rag.ChunkingConfig(chunk_size=1024, chunk_overlap=256)
    ),
    import_result_sink="gs://sample-existing-folder/sample_import_result_unique.ndjson",  # Optional: The bucket folder must exist, and the filename must be unique (non-existent).
    llm_parser=rag.LlmParserConfig(
        model_name="gemini-2.5-pro-preview-05-06",
        max_parsing_requests_per_min=100,
    ),  # Optional
    max_embedding_requests_per_min=900,  # Optional
)
print(f"Imported {response.imported_rag_files_count} files.")

REST

The code sample shows how to import Cloud Storage files using the layout parser. For more configuration options, including importing files from another source, see the ImportRagFilesConfig reference.

Before you use the request data, replace the following variables:

  • PROJECT_ID: Your project ID.
  • LOCATION: The region to process the request.
  • RAG_CORPUS_ID: The ID of the RAG corpus resource.
  • GCS_URIS: A list of Cloud Storage locations. For example: "gs://my-bucket1", "gs://my-bucket2".
  • LAYOUT_PARSER_PROCESSOR_NAME: The resource path to the layout parser processor that was created. For example: "projects/{project}/locations/{location}/processors/{processor_id}".
  • CHUNK_SIZE: Optional: The number of tokens each chunk should have.
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:import

Request JSON body:

{
  "import_rag_files_config": {
    "gcs_source": {
      "uris": "GCS_URIS"
    },
    "rag_file_parsing_config": {
      "layout_parser": {
        "processor_name": "LAYOUT_PARSER_PROCESSOR_NAME"
      }
    },
    "rag_file_transformation_config": {
      "rag_file_chunking_config": {
        "fixed_length_chunking": {
          "chunk_size": CHUNK_SIZE
        }
      }
    }
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and run the following command:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:import"

PowerShell

Save the request body in a file named request.json, and run the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/ragCorpora/RAG_CORPUS_ID/ragFiles:import" | Select-Object -Expand Content

Query your data

After importing your files, you can query them to retrieve relevant information and generate responses.

Perform a retrieval query

When you provide a query, the retrieval component in RAG searches its knowledge base to find relevant information.

For an example of retrieving RAG files from a corpus based on a query text, see Retrieval query.

Generate a response

The generation component uses the retrieved contexts to generate a grounded response. For an example, see Generation.

What's next