Choose a document processing function
This document compares the document processing functions available in BigQuery ML: `ML.GENERATE_TEXT` and `ML.PROCESS_DOCUMENT`. You can use this information to help you decide which function to use in cases where their capabilities overlap.
At a high level, the difference between these functions is as follows:
`ML.GENERATE_TEXT` is a good choice for performing natural language processing (NLP) tasks where some of the content resides in documents. This function offers the following benefits:

- Lower costs
- More language support
- Faster throughput
- Model tuning capability
- Availability of multimodal models
For examples of document processing tasks that work best with this approach, see Explore document processing capabilities with the Gemini API.
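As a sketch of how such a call might look, the following query summarizes PDF documents with a Gemini-based remote model. The dataset, model, and object table names (`mydataset.gemini_model`, `mydataset.report_pdfs`) are placeholder assumptions, not names from this document:

```sql
-- Hypothetical names: replace `mydataset.gemini_model` with your remote model
-- over a Gemini endpoint, and `mydataset.report_pdfs` with an object table
-- that points to your document files.
SELECT ml_generate_text_llm_result
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.gemini_model`,
  TABLE `mydataset.report_pdfs`,
  STRUCT(
    'Summarize the key financial figures in this document.' AS prompt,
    0.2 AS temperature,
    TRUE AS flatten_json_output
  )
);
```

Because the prompt is free-form, the same pattern works for extraction, classification, or question answering over the same documents.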
`ML.PROCESS_DOCUMENT` is a good choice for performing document processing tasks that require document parsing and a predefined, structured response.
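For example, a call might look like the following sketch. It assumes a remote model created over a Document AI processor; the names `mydataset.docai_invoice_parser` and `mydataset.invoice_pdfs` are placeholders, not names from this document:

```sql
-- Hypothetical names: `mydataset.docai_invoice_parser` is a remote model
-- backed by a Document AI processor, and `mydataset.invoice_pdfs` is an
-- object table over the invoice files to parse.
SELECT *
FROM ML.PROCESS_DOCUMENT(
  MODEL `mydataset.docai_invoice_parser`,
  TABLE `mydataset.invoice_pdfs`
);
```

The output schema is determined by the processor behind the remote model, which is what gives you a predefined, structured response rather than free-form text.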
Function comparison
Use the following table to compare the `ML.GENERATE_TEXT` and `ML.PROCESS_DOCUMENT` functions:
| | `ML.GENERATE_TEXT` | `ML.PROCESS_DOCUMENT` |
|---|---|---|
| Purpose | Perform any document-related NLP task by passing a prompt to a Gemini or partner model or to an open model. For example, given a financial document for a company, you can retrieve document information by providing a prompt. | Use the Document AI API to perform specialized document processing for different document types, such as invoices, tax forms, and financial statements. You can also perform document chunking. |
| Billing | Incurs BigQuery ML charges for data processed. For more information, see BigQuery ML pricing. | Incurs BigQuery ML charges for data processed. For more information, see BigQuery ML pricing. Also incurs charges for calls to the Document AI API. For more information, see Document AI API pricing. |
| Requests per minute (RPM) | Not applicable for Gemini models. Between 25 and 60 RPM for partner models. For more information, see Requests per minute limits. | 120 RPM per processor type, with an overall limit of 600 RPM per project. For more information, see Quotas list. |
| Tokens per minute | Ranges from 8,192 to over 1 million, depending on the model used. | No token limit. However, this function has different page limits depending on the processor you use. For more information, see Limits. |
| Supervised tuning | Supported for some models. | Not supported. |
| Supported languages | Support varies based on the LLM you choose. | Language support depends on the document processor type; most processors support only English. For more information, see Processor list. |
| Supported regions | Supported in all Generative AI for Vertex AI regions. | Supported in the EU and US multi-regions for all processors. Some processors are also available in certain single regions. For more information, see Regional and multi-regional support. |