# Collect model and risk governance artifacts
Model and risk governance is the process by which models are determined to be
sufficient by all stakeholder groups. Your process might include new model
validation, model monitoring, security and compliance standards, support
processes, risk coverage, operations manuals, and user guides, among other
topics.
As the owner of a risk framework, you can use the following artifacts as
resources for integrating AML AI into your overall risk
management landscape. AML AI contributes documentation relevant
to model and risk governance, as well as various outputs from tuning, training,
and evaluating your AML AI model.
## Model and risk governance documentation
The following set of concept documentation, available on request for
AML AI customers, serves as governance artifacts in your overall
risk management and AI/ML model and risk governance framework:
- **Model architecture**: Describes the particular model architecture used for
  AML AI to calculate risk scores.
- **Labeling methodology**: Describes the approaches used to define labeled
  training examples for tuning, training, and backtesting of
  AML AI models.
- **Model training methodology**: Describes the training and validation
  approach for AML AI models.
- **Model tuning methodology**: Describes the process by which
  AML AI optimizes model hyperparameters based on your data.
- **Model evaluation methodology**: Describes the metrics that are used for
  model evaluation and backtesting.
- **Feature families overview**: Describes the supported feature families and
  how they are used for explainability (and elsewhere) in AML AI.
- **Risk typology schema**: Describes how AML AI supports risk
  typologies and the methodology it uses to demonstrate coverage.
- **Engine version stability and support policy**: Describes what does and
  does not change between AML AI engine versions, and how long
  each engine version is supported for different operations.
## Model outputs as governance artifacts
The following artifacts are generated as outputs by regular
AML AI operations:
- **Model quality**
  - **Engine configuration output** includes expected recall (before and
    after tuning), captured in the engine config metadata.
  - **Backtest results** let you measure trained model performance on a
    set of examples not included in training.
- **Data quality**
  - **Missingness output** indicates the share of missing values per feature
    family in your datasets used for tuning, training, backtesting, and
    prediction. Significant changes can indicate an inconsistency in your
    underlying data, which can impact model performance.
  - **Data validation errors** prevent completion of AML AI
    operations. To successfully produce a model and predictions, you must
    resolve these errors.
- **Prediction results**
  - **Risk scores** range from 0 to 1; within this range, a higher score
    indicates higher risk for the party for the predicted month. Risk scores
    shouldn't be interpreted directly as a probability of money laundering
    activity, or of the success of a possible investigation.
  - **Explainable AI output** augments high risk scores with attribution
    scores indicating the contribution of each feature family to the risk
    score.
- **Long-running operations (LROs)** let you track all
  AML AI processes used in model preparation and predictions.
  For more information, see
  [Manage long-running operations](/financial-services/anti-money-laundering/docs/manage-long-running-operations).
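AML AI LROs follow the standard `google.longrunning.Operation` shape: the resource carries a `done` flag and, once finished, either a `response` or an `error`. The polling loop that most clients implement can be sketched as follows; this is a minimal illustration, and `get_operation` is a hypothetical callable standing in for an authenticated GET on the operation's resource name (check the AML AI API reference for the exact path and client libraries):

```python
import time


def wait_for_operation(get_operation, poll_interval_s=30.0, timeout_s=3600.0):
    """Poll a long-running operation until it completes or times out.

    get_operation: callable returning the Operation resource as a dict,
    e.g. the JSON body of a GET on the operation's name (hypothetical
    wiring; use the official client library where available).
    """
    deadline = time.monotonic() + timeout_s
    while True:
        op = get_operation()
        if op.get("done"):
            # A finished operation carries either an error or a response.
            if "error" in op:
                raise RuntimeError(f"operation failed: {op['error']}")
            return op.get("response", {})
        if time.monotonic() + poll_interval_s > deadline:
            raise TimeoutError("operation did not complete before timeout")
        time.sleep(poll_interval_s)
```

In practice you would pick a poll interval appropriate to the operation type, since model training and tuning can run for hours.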
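The missingness share reported under data quality can also be approximated on your own input tables as a sanity check before registering datasets. A hedged pandas sketch, assuming your data sits in a DataFrame (the column names below are illustrative placeholders, not the AML AI input schema):

```python
import pandas as pd


def missingness_by_column(df: pd.DataFrame) -> pd.Series:
    """Return the share of missing values per column, worst first."""
    return df.isna().mean().sort_values(ascending=False)


# Illustrative data only; real AML AI inputs follow the documented schemas.
df = pd.DataFrame({
    "party_id": ["p1", "p2", "p3", "p4"],
    "occupation": ["engineer", None, None, "teacher"],
    "birth_date": ["1980-01-01", "1975-06-30", None, "1990-12-12"],
})
shares = missingness_by_column(df)
# occupation is missing in 2 of 4 rows (0.5), birth_date in 1 of 4 (0.25)
```

Tracking this per feature family across tuning, training, and prediction datasets makes sudden shifts in missingness easy to spot.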
Last updated 2025-08-29 UTC.