Template-based extraction
You can train a high-performing model with as few as three training and three test
documents for fixed-layout use cases. This accelerates development and reduces time to
production for templated document types such as W9, 1040, ACORD, surveys, and questionnaires.
Dataset configuration
A document dataset is required to train, uptrain, or evaluate a processor version.
Document AI processors learn from examples, just like humans do. The quality of the
dataset drives how stable the processor's performance is.
Train dataset
To improve the model and its accuracy, train it on a dataset of your documents. The training
dataset is made up of documents with ground truth: data that has been correctly labeled, as determined
by humans. You need a minimum of three documents to train a new model.
Test dataset
The test dataset is what the model uses to generate an F1 score (accuracy). It is
made up of documents with ground truth. To measure how often the model is right, the
ground truth is compared against the model's predictions (the fields the model
extracts). The test dataset should contain at least three documents.
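One way to keep the two sets separate before import is to stage documents in Cloud Storage under distinct prefixes, for example a train/ folder and a test/ folder, and then point each import at the matching split. The sketch below uses the Cloud Storage Python client; the bucket name and file names are placeholders you'd replace with your own.

```python
from google.cloud import storage

# Placeholder bucket and files: replace with your own.
bucket_name = "your-docai-dataset-bucket"
training_files = ["w9_sample_1.pdf", "w9_sample_2.pdf", "w9_sample_3.pdf"]
test_files = ["w9_sample_4.pdf", "w9_sample_5.pdf", "w9_sample_6.pdf"]

client = storage.Client()
bucket = client.bucket(bucket_name)

# Stage training and test documents under separate prefixes so each group
# can be imported into the matching dataset split.
for path in training_files:
    bucket.blob(f"train/{path}").upload_from_filename(path)

for path in test_files:
    bucket.blob(f"test/{path}").upload_from_filename(path)
```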
Template-mode labeling best practices
Proper labeling is one of the most important steps to achieving high accuracy.
Template mode uses a labeling methodology that differs from other training modes:
Draw bounding boxes around the entire area where you expect data to appear (per label)
within a document, even if the field is empty in the training document you're labeling.
You may label empty fields for template-based training. Don't label empty fields
for model-based training.
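To illustrate what "the entire expected area" means, here is a small sketch of a label annotation expressed with normalized vertices (Document AI stores bounding regions as x/y coordinates in the 0-1 range). The field name and coordinates are made up; the point is that the box spans the full region where the value could appear, not just the text present in one document.

```python
# Hypothetical label covering the whole area where "employer_name" could appear
# on the form, expressed with normalized coordinates (0.0-1.0).
employer_name_label = {
    "type": "employer_name",
    "bounding_poly": {
        "normalized_vertices": [
            {"x": 0.05, "y": 0.18},  # top-left of the expected region
            {"x": 0.95, "y": 0.18},  # top-right
            {"x": 0.95, "y": 0.24},  # bottom-right
            {"x": 0.05, "y": 0.24},  # bottom-left
        ]
    },
}
```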
Build and evaluate a custom extractor with template mode
1. Create a custom extractor. Create a processor and define the fields you want to extract,
following field-naming best practices; naming affects extraction quality.
2. Set the dataset location. Select the default folder option (Google-managed). This
might be done automatically shortly after you create the processor.
3. Navigate to the Build tab and select Import documents with auto-labeling
enabled. For template-based training, adding more documents than the minimum of three typically
doesn't improve quality. Instead of adding more, focus on labeling a small set very accurately.
4. Extend bounding boxes. For template mode, bounding boxes should cover the entire
expected data area. Extend the bounding boxes following the labeling best practices for the optimal result.
5. Train the model.
Select Train new version.
Name the processor version.
Go to Show advanced options and select the template-based model approach.
6. Evaluate the model.
Go to Evaluate & test.
Select the version you just trained, then select View Full Evaluation.
You now see metrics such as F1, precision, and recall for the entire document and for each field.
Decide whether performance meets your production goals. If it doesn't, revisit your training and test sets.
7. Set the new version as the default.
Navigate to Manage versions.
Open the settings menu for the version and select Set as default.
Your model is now deployed, and documents sent to this processor use your custom
version. Evaluate the model's performance to check whether it requires further training.
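Once a default version is deployed, you can send documents to the processor with the Document AI client library. The following is a minimal sketch using the Python client; the project, location, processor ID, and file path are placeholders you'd replace with your own values.

```python
from google.cloud import documentai

# Placeholder values: replace with your own project, location, and processor ID.
project_id = "your-project-id"
location = "us"          # Processor region, for example "us" or "eu".
processor_id = "your-processor-id"
file_path = "form.pdf"   # A document with the fixed layout the model was trained on.

client = documentai.DocumentProcessorServiceClient(
    client_options={"api_endpoint": f"{location}-documentai.googleapis.com"}
)
name = client.processor_path(project_id, location, processor_id)

with open(file_path, "rb") as f:
    raw_document = documentai.RawDocument(content=f.read(), mime_type="application/pdf")

result = client.process_document(
    request=documentai.ProcessRequest(name=name, raw_document=raw_document)
)

# Each extracted field is returned as an entity with a type (your label name),
# the extracted text, and a confidence score.
for entity in result.document.entities:
    print(entity.type_, entity.mention_text, round(entity.confidence, 2))
```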
Evaluation reference
The evaluation engine can do either exact matching or fuzzy matching.
With exact matching, the extracted value must exactly match the ground truth, or it is counted as a miss.
With fuzzy matching, extractions with slight differences, such as capitalization
differences, still count as a match. You can change this setting on the Evaluation screen.
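To illustrate the difference, here is a small, self-contained sketch that scores predictions against ground truth with an exact and a simple fuzzy comparison (lowercasing and trimming whitespace). The fuzzy rule and the field values are illustrative; the evaluation engine's actual fuzzy-matching rules may differ.

```python
def exact_match(predicted: str, truth: str) -> bool:
    return predicted == truth

def fuzzy_match(predicted: str, truth: str) -> bool:
    # A simplified fuzzy rule: ignore case and surrounding whitespace.
    return predicted.strip().lower() == truth.strip().lower()

def f1_score(predictions, ground_truth, match):
    # Count a true positive when a predicted value matches the ground truth
    # for the same field; unmatched predictions lower precision, and
    # unmatched ground-truth values lower recall.
    true_positives = sum(
        1 for field, value in predictions.items()
        if field in ground_truth and match(value, ground_truth[field])
    )
    precision = true_positives / len(predictions) if predictions else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

predictions = {"first_name": "jane", "last_name": "Doe"}
ground_truth = {"first_name": "Jane", "last_name": "Doe"}

print(f1_score(predictions, ground_truth, exact_match))  # 0.5: "jane" misses on case
print(f1_score(predictions, ground_truth, fuzzy_match))  # 1.0: case difference still matches
```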
Auto-labeling with the foundation model
The foundation model can accurately extract fields for a variety of document types,
but you can also provide additional training data to improve the accuracy of the
model for specific document structures.
Document AI uses the label names you define and previous annotations to make
it quicker and easier to label documents at scale with auto-labeling.
1. After creating a custom processor, go to the Get started tab.
2. Select Create New Field. With the foundation model, the label name can greatly affect
accuracy and performance, so give each field a descriptive name.
3. Navigate to the Build tab and then select Import documents.
4. Select the path of the documents and the set the documents should be imported
into. Check the auto-labeling checkbox and select the foundation model.
5. In the Build tab, select Manage dataset. You should see your imported
documents. Select one of your documents.
6. The model's predictions are highlighted in purple. Review each label the model
predicted and make sure it's correct. If any fields are missing, add them as well.
Keep every field as accurate as possible, because labeling errors affect model performance.
7. Once you have reviewed the document, select Mark as labeled.
8. The document is now ready to be used by the model. Make sure the document is
assigned to either the training or the test set.
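When reviewing auto-labeled predictions at scale, it can help to surface the lowest-confidence fields first. The sketch below is illustrative and not part of the product UI; it assumes you have a parsed Document AI Document object (for example, from a processing response) and simply sorts its entities by confidence. The threshold value is an arbitrary example.

```python
from google.cloud import documentai

def fields_needing_review(document: documentai.Document, threshold: float = 0.8):
    """Return extracted fields below a confidence threshold, lowest first.

    The threshold is an example value; choose one that fits your accuracy
    requirements.
    """
    low_confidence = [
        (entity.type_, entity.mention_text, entity.confidence)
        for entity in document.entities
        if entity.confidence < threshold
    ]
    return sorted(low_confidence, key=lambda item: item[2])

# Example usage with a previously obtained processing result:
# for label, text, confidence in fields_needing_review(result.document):
#     print(f"Review '{label}': '{text}' (confidence {confidence:.2f})")
```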
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-26 UTC."],[[["\u003cp\u003eTemplate-based extraction allows for training a high-performing model with a minimum of three training and three test documents, ideal for fixed-layout documents like W9s and questionnaires.\u003c/p\u003e\n"],["\u003cp\u003eA document dataset, comprising documents with ground-truth data, is essential for training, up-training, and evaluating a processor version, as the processor learns from these examples.\u003c/p\u003e\n"],["\u003cp\u003eFor template mode labeling, it is recommended to draw bounding boxes around the entire expected data area within a document, even if the field is empty in the training document, unlike model-based training.\u003c/p\u003e\n"],["\u003cp\u003eWhen building a custom extractor, auto-labeling can be enabled during document import, and it is advised to focus on accurately labeling a small set of documents rather than adding more documents during template-based training.\u003c/p\u003e\n"],["\u003cp\u003eThe foundation model allows for auto-labeling, which can be improved in accuracy and performance with the addition of training data with descriptive label names, while ensuring that all fields are accurate.\u003c/p\u003e\n"]]],[],null,["# Template-based extraction\n=========================\n\nYou can train a high-performing model with as little as three training and three test\ndocuments for fixed-layout use cases. Accelerate development and reduce time to\nproduction for templated document types like W9, 1040, ACORD, surveys, and questionnaires.\n\n\nDataset configuration\n---------------------\n\nA document dataset is required to train, up-train, or evaluate a processor version. Document AI processors learn from examples, just like humans. Dataset fuels processor stability in terms of performance. \n\n### Train dataset\n\nTo improve the model and its accuracy, train a dataset on your documents. The model is made up of documents with ground-truth. You need a minimum of three documents to train a new model. Ground-truth is the correctly labeled data, as determined by humans.\n\n### Test dataset\n\nThe test dataset is what the model uses to generate an F1 score (accuracy). It is made up of documents with ground-truth. To see how often the model is right, the ground truth is used to compare the model's predictions (extracted fields from the model) with the correct answers. The test dataset should have at least three documents.\n\n\u003cbr /\u003e\n\nBefore you begin\n----------------\n\nIf not already done, enable:\n\n- [Billing](/document-ai/docs/setup#billing)\n- [Document AI API](/document-ai/docs/setup)\n\nTemplate-mode labeling best practices\n-------------------------------------\n\nProper labeling is one of the most important steps to achieving high accuracy.\nTemplate mode has some unique labeling methodology that differs from other training modes:\n\n- Draw bounding boxes around the entire area you expect data to be in (per label) within a document, even if the label is empty in the training document you're labeling.\n- You may label empty fields for template-based training. 
Don't label empty fields for model-based training.\n\n| **Recommended.** Labeling example for template-based training to extract the top section of a 1040.\n| **Not recommended.** Labeling example for template-based training to extract the top section of a 1040. This is the labeling technique you should use for model-based training for documents with layout variation across documents.\n\nBuild and evaluate a custom extractor with template mode\n--------------------------------------------------------\n\n1. Create a custom extractor. [Create a processor](/document-ai/docs/workbench/build-custom-processor#create_a_processor)\n and [define fields](/document-ai/docs/workbench/build-custom-processor#define_processor_fields)\n you want to extract following [best practices](/document-ai/docs/workbench/label-documents#name-fields),\n which is important because it impacts extraction quality.\n\n2. Set dataset location. Select the default option folder (Google-managed). This\n might be done automatically shortly after creating the processor.\n\n3. Navigate to the **Build** tab and select **Import documents** with auto-labeling\n enabled. Adding more documents than the minimum of three needed typically doesn't improve quality for\n template-based training. Instead of adding more, focus on labeling a small set very accurately.\n\n | **Note:** You can experiment by increasing the training set size if you observe template variations in your dataset. Try to include at least three training documents per variation. At least three training documents, three test documents, and three schema labels are required per set.\n4. Extend bounding boxes. These boxes for template mode should look like the preceding\n examples. Extend the bounding boxes, following the best practices for the optimal result.\n\n5. Train model.\n\n 1. Select **Train new version**.\n 2. Name the processor version.\n 3. Go to **Show advanced options** and select the template-based model approach.\n\n | **Note:** It takes some time for the training to complete.\n6. Evaluation.\n\n 1. Go to **Evaluate \\& test**.\n 2. Select the version you just trained, then select **View Full Evaluation**.\n\n You now see metrics such as F1, precision, and recall for the entire document and each field.\n 1. Decide if performance meets your production goals, and if not, reevaluate training and testing sets.\n7. Set a new version as the default.\n\n 1. Navigate to **Manage versions**.\n 2. Select to see the settings menu, then mark **Set as default**.\n\n Your model is now deployed and documents sent to this processor use your custom\n version. You want to evaluate the model's performance ([more details](/document-ai/docs/workbench/evaluate)\n on how to do that) to check if it requires further training.\n\nEvaluation reference\n--------------------\n\nThe evaluation engine can do both exact match or [fuzzy matching](/document-ai/docs/workbench/evaluate#fuzzy_matching).\nFor an exact match, the extracted value must exactly match the ground truth or is counted as a miss.\n\nFuzzy matching extractions that had slight differences such as capitalization\ndifferences still count as a match. 
This can be changed at the **Evaluation** screen.\n\nAuto-labeling with the foundation model\n---------------------------------------\n\nThe foundation model can accurately extract fields for a variety of document types,\nbut you can also provide additional training data to improve the accuracy of the\nmodel for specific document structures.\n\nDocument AI uses the label names you define and previous annotations to make\nit quicker and easier to label documents at scale with auto-labeling.\n\n1. After creating a custom processor, go to the **Get started** tab.\n2. Select **Create New Field**.\n\n | **Note:** The label name with the foundation model can greatly affect model accuracy and performance. Be sure to give a descriptive name.\n\n3. Navigate to the **Build** tab and then select **Import documents**.\n\n4. Select the path of the documents and which set the documents should be imported\n into. Check the auto-labeling checkbox and select the foundation model.\n\n5. In the **Build** tab, select **Manage dataset**. You should see your imported\n documents. Select one of your documents.\n\n6. You see the predictions from the model highlighted in purple, you need to review\n each label predicted by the model and ensure it's correct. If there are missing\n fields, you need to add those as well.\n\n | **Note:** It's important that all fields are as accurate as possible or model performance is going to be affected. For more [details on labeling](/document-ai/docs/workbench/label-documents).\n\n7. Once the document has been reviewed, select **Mark as labeled**.\n\n8. The document is now ready to be used by the model. Make sure the document is\n in either the testing or training set."]]