This page outlines best practices for using risk scores and explainability.
Using risk scores
Risk scores can be used in your investigation process to prioritize
investigations of high-risk parties.
Common approaches include alerting based on investigator load or based on a
chosen risk level:
Capacity led: Alert or investigate the top n parties in the output table,
ranked by risk score in descending order, where n depends on available
investigator capacity.
Risk led: Alert or investigate all parties with a risk score above a
threshold that is held fixed month-to-month. The threshold is chosen based on
backtest results that show an acceptable recall of previous cases and
discovery of new risk. For more information, see Collect model and risk
governance artifacts.
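The two approaches above can be sketched as follows. This is a minimal illustration, not AML AI output handling code; the party IDs, scores, and selection parameters are assumptions for the example:

```python
from operator import itemgetter

# Hypothetical scored output: (party_id, risk_score) pairs, as might be
# read from a risk-scoring output table. Values are illustrative only.
scores = [
    ("party-001", 0.92),
    ("party-002", 0.31),
    ("party-003", 0.77),
    ("party-004", 0.55),
]

def capacity_led(scores, n):
    """Capacity led: top-n parties by risk score, descending."""
    ranked = sorted(scores, key=itemgetter(1), reverse=True)
    return [party for party, _ in ranked[:n]]

def risk_led(scores, threshold):
    """Risk led: every party whose risk score exceeds a fixed threshold."""
    return [party for party, score in scores if score > threshold]

print(capacity_led(scores, 2))   # two highest-risk parties
print(risk_led(scores, 0.5))     # all parties above the threshold
```

Note that the capacity-led list length is fixed by investigator availability, while the risk-led list length varies with how many parties exceed the threshold in a given month.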
Using explainability
The feature families with the highest positive attribution scores can be
provided to investigators to direct their investigations, decreasing the time
needed per investigation or increasing the success rate. Experience suggests
that negative scores (which indicate that a feature family has reduced the
risk of a case) can be difficult for investigators to use, and some AML AI
customers don't show them to their investigators. For best results, consider
what training or guidance your investigators need to handle investigations
related to different feature families.
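Preparing an investigator view along these lines can be sketched as follows. The feature family names, attribution values, and `top_k` parameter are illustrative assumptions, not actual AML AI field names:

```python
# Hypothetical attribution output for one case: feature family -> attribution
# score. Positive scores increase risk; negative scores reduce it.
attributions = {
    "unusual_wire_activity": 0.42,
    "rapid_movement_of_funds": 0.18,
    "long_standing_customer": -0.25,
    "cash_intensive_business": 0.07,
}

def investigator_view(attributions, top_k=3):
    """Keep only positive attributions (risk-increasing feature families),
    sorted so the strongest contributors lead the investigation."""
    positive = {fam: s for fam, s in attributions.items() if s > 0}
    return sorted(positive.items(), key=lambda item: item[1], reverse=True)[:top_k]

print(investigator_view(attributions))
```

Here the negative attribution (`long_standing_customer`) is dropped before display, following the practice described above of not showing risk-reducing families to investigators.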
You might also use explainability for other purposes:
determining whether a customer's behavior has changed enough to merit a fresh
investigation for a second or repeated alert for that customer
deriving aggregate insights from feature family contributions over time
Filtering out repeated alerts
AML AI risk scores identify high-risk parties, but don't
separate out repeat alerts. For example, a customer presenting a high risk in
March 2023 may have a similarly high score in April 2023, generating two
consecutive cases despite their behavior remaining the same. You might want to
apply rules to filter out repeated alerts to avoid re-alerting a party with a
current or recently completed investigation without significant change in risk
score or explainability.
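One hypothetical rule of this kind is sketched below. The field names, the score-delta tolerance, and the comparison logic are all assumptions for illustration; a production rule would reflect your own investigation workflow:

```python
def is_repeat_alert(prev, curr, score_delta=0.05):
    """Hypothetical rule: suppress a new alert when the party has a current or
    recently completed investigation and neither the risk score nor the top
    feature families have changed significantly.

    prev and curr are dicts with illustrative keys:
      "score": the party's risk score
      "top_families": highest-attribution feature families for the case
      "open_investigation" (prev only): whether an investigation is active
      or recently completed
    """
    if not prev.get("open_investigation"):
        return False  # no prior investigation, so the alert is not a repeat
    score_unchanged = abs(curr["score"] - prev["score"]) < score_delta
    families_unchanged = set(curr["top_families"]) == set(prev["top_families"])
    return score_unchanged and families_unchanged

# Example mirroring the March/April scenario above: same behavior, so the
# April alert would be filtered out as a repeat.
march = {"score": 0.91, "top_families": ["unusual_wire_activity"],
         "open_investigation": True}
april = {"score": 0.93, "top_families": ["unusual_wire_activity"]}
print(is_repeat_alert(march, april))
```

Comparing both the score and the top feature families guards against suppressing an alert where the overall score is stable but the underlying risk drivers have shifted.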
Last updated 2025-04-09 UTC.