AI Protection helps you manage the security posture of your AI workloads by detecting threats and helping you mitigate risks to your AI asset inventory. This document provides a general overview of AI Protection, including its benefits and key concepts.
AI Protection overview
AI Protection provides several capabilities to help you manage threats and risks to your AI systems, including the following:
- Assess your AI inventory: Assess and understand your AI systems and AI assets including your models and datasets.
- Manage risks and compliance: Proactively manage risks to your AI assets and verify that your AI deployments adhere to relevant security standards.
- Mitigate legal and financial risks: Reduce the financial, reputational, and legal risks associated with security breaches and regulatory noncompliance.
- Detect and manage threats: Detect and respond to potential threats to your AI systems and assets in a timely manner.
- View one dashboard: Manage all of your AI-related risks and threats from one centralized dashboard.
Use cases
AI Protection helps organizations enhance their security by identifying and mitigating threats and risks related to AI systems and sensitive data. The following use cases are examples of how AI Protection can be used in different organizations:
Financial services institution: customer financial data
A large financial services institution uses AI models that process sensitive financial data.
- Challenge: Processing highly sensitive financial data with AI models entails several risks, including the risk of data breaches, data exfiltration during training or inference, and vulnerabilities in the underlying AI infrastructure.
- Use case: AI Protection continuously monitors AI workflows for suspicious activity, detects unauthorized data access and anomalous model behavior, performs sensitive data classification, and helps you comply with regulations such as PCI DSS and the GDPR.
Healthcare provider: patient privacy and compliance
A major healthcare provider manages electronic health records and uses AI for diagnostics and treatment planning, dealing with Protected Health Information (PHI).
- Challenge: PHI analyzed by AI models is subject to strict regulations like HIPAA. Risks include accidental PHI exposure through misconfigurations or malicious attacks that target AI systems for patient data.
- Use case: AI Protection identifies and alerts on potential HIPAA violations, detects unauthorized PHI access by models or users, flags vulnerable and potentially misconfigured AI services, and monitors for data leakage.
Manufacturing and robotics company: proprietary intellectual property
A manufacturing company specializing in advanced robotics and automation relies heavily on AI for optimizing production lines and robotic control, with vital intellectual property (IP) embedded within its AI algorithms and manufacturing data.
- Challenge: Proprietary AI algorithms and sensitive operational data are vulnerable to theft from insider threats or external adversaries, potentially leading to competitive disadvantage or operational disruption.
- Use case: AI Protection monitors for unauthorized access to AI models and code repositories, detects attempts to exfiltrate trained models and unusual data access patterns, and flags vulnerabilities in AI development environments to prevent IP theft.
AI Protection framework
AI Protection includes a framework of specific cloud controls that are deployed automatically in detective mode. In detective mode, a cloud control is applied to the defined resources for monitoring purposes: violations are detected and alerts are generated. You use frameworks and cloud controls to define your AI Protection requirements and apply those requirements to your Google Cloud environment. AI Protection includes a default framework that defines the recommended baseline controls. When you enable AI Protection, the default framework is automatically applied to your Google Cloud organization in detective mode.
If required, you can copy the default framework to create custom AI Protection frameworks. You can add cloud controls to your custom frameworks and apply them to the organization, folders, or projects. For example, you can create custom frameworks that apply jurisdiction-specific controls to particular folders to help ensure that data within those folders stays within a particular geographical region.
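To make the framework concepts concrete, the following sketch models a framework as data: copy the default framework, add a control, and retarget the copy at a folder. All names here (the dataclass, the control strings, and the resource names) are illustrative assumptions for this document, not the Security Command Center API.

```python
from dataclasses import dataclass, replace

# Hypothetical data model -- illustrates the framework concepts only.
@dataclass(frozen=True)
class Framework:
    name: str
    controls: tuple  # names of cloud controls
    scope: str       # organization, folder, or project resource name

DEFAULT_FRAMEWORK = Framework(
    name="default",
    controls=(
        "Block Public IP Address for Vertex AI Workbench Instances",
        "Enable Secure Boot for Vertex AI Workbench Instances",
    ),
    scope="organizations/123",  # placeholder organization ID
)

def make_custom_framework(base: Framework, extra_controls, scope: str) -> Framework:
    """Copy a framework, add controls, and retarget its scope."""
    return replace(
        base,
        name=base.name + "-custom",
        controls=base.controls + tuple(extra_controls),
        scope=scope,
    )

# Apply a jurisdiction-specific copy to a folder (illustrative control name).
eu_framework = make_custom_framework(
    DEFAULT_FRAMEWORK,
    ["Restrict Resource Locations to EU Regions"],
    scope="folders/456",
)
```

Because the dataclass is frozen and copied with `replace`, customizing a framework never mutates the default one, which mirrors the copy-then-modify workflow described above.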
Cloud controls in the default AI Protection framework
The following cloud controls are part of the default AI Protection framework.
Cloud control name | Description |
---|---|
Block Default VPC Network for Vertex AI Workbench Instances | Don't create Workbench instances in the default VPC network to help prevent the use of its over-permissive default firewall rules. |
Block Public IP Address for Vertex AI Workbench Instances | Don't permit external IP addresses for Workbench instances to reduce exposure to the internet and minimize the risk of unauthorized access. |
Enable CMEK for Vertex AI Custom Jobs | Require customer-managed encryption keys (CMEK) on Vertex AI custom training jobs to gain more control over the encryption of job inputs and outputs. |
Enable CMEK for Vertex AI Featurestore | Require customer-managed encryption keys (CMEK) for Vertex AI featurestore to gain more control over data encryption and access. |
Enable CMEK for Vertex AI Hyperparameter Tuning Jobs | Require customer-managed encryption keys (CMEK) on hyperparameter tuning jobs to gain more control over the encryption of model training data and job configuration. |
Enable CMEK for Vertex AI Models | Require customer-managed encryption keys (CMEK) for Vertex AI models to gain more control over data encryption and key management. |
Enable CMEK for Vertex AI Notebook Runtime Templates | Require customer-managed encryption keys (CMEK) for Colab Enterprise runtime templates to help secure runtime environments and associated data. |
Enable CMEK for Vertex AI TensorBoard | Require customer-managed encryption keys (CMEK) for Vertex AI TensorBoard to gain more control over the encryption of experiment data and model visualizations. |
Enable CMEK for Vertex AI Training Pipelines | Require customer-managed encryption keys (CMEK) on Vertex AI training pipelines to gain more control over the encryption of training data and resulting artifacts. |
Enable CMEK for Vertex AI Workbench Instances | Require customer-managed encryption keys (CMEK) for Vertex AI Workbench instances to gain more control over data encryption. |
Enable Idle Shutdown for Vertex AI Runtime Templates | Enable automatic idle shutdown in Colab Enterprise runtime templates to optimize cloud costs, improve resource management, and enhance security. |
Enable Integrity Monitoring for Vertex AI Workbench Instances | Enable integrity monitoring on Workbench instances to continuously attest the boot integrity of your VMs against a trusted baseline. |
Enable Secure Boot for Vertex AI Workbench Instances | Enable secure boot for Workbench instances to help prevent unauthorized or malicious software from running during the boot process. |
Enable vTPM on Vertex AI Workbench Instances | Enable the virtual trusted platform module (vTPM) on Workbench instances to safeguard the boot process and gain more control over encryption. |
Restrict Use of Default Service Account for Vertex AI Workbench Instances | Restrict the use of the highly permissive default service account for Workbench instances to reduce the risk of unauthorized access to Google Cloud services. |
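A detective-mode control evaluates a resource's configuration and reports a violation without blocking anything. The following minimal sketch shows that pattern for a few of the Workbench controls above. The dictionary field names (`network`, `external_ip`, `secure_boot`, `cmek_key`) are simplified assumptions for illustration, not the actual Vertex AI resource schema or the product's detection logic.

```python
# Detective-mode sketch: evaluate a simplified Workbench instance
# description against a few default controls and report violations.
def check_workbench_instance(instance: dict) -> list:
    """Return the names of violated controls (empty list means compliant)."""
    violations = []
    # Instances on the default VPC inherit over-permissive firewall rules.
    if instance.get("network", "").endswith("/default"):
        violations.append(
            "Block Default VPC Network for Vertex AI Workbench Instances")
    # An external IP address exposes the instance to the internet.
    if instance.get("external_ip", False):
        violations.append(
            "Block Public IP Address for Vertex AI Workbench Instances")
    # Secure boot blocks unauthorized software during the boot process.
    if not instance.get("secure_boot", False):
        violations.append(
            "Enable Secure Boot for Vertex AI Workbench Instances")
    # CMEK gives you control over the encryption of instance data.
    if not instance.get("cmek_key"):
        violations.append(
            "Enable CMEK for Vertex AI Workbench Instances")
    return violations

findings = check_workbench_instance({
    "network": "projects/p/global/networks/default",  # default VPC: violation
    "external_ip": True,                              # public IP: violation
    "secure_boot": True,
    "cmek_key": "projects/p/locations/l/keyRings/r/cryptoKeys/k",
})
```

In detective mode these findings would surface as alerts on the dashboard rather than preventing the instance from being created.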
Supported functional areas
This section defines functional areas that AI Protection can help secure.
- AI workloads: AI application workloads range from internal tools aimed at improving employee productivity to consumer-facing solutions designed to enhance the user experience and drive business. Examples include AI agents, virtual assistants, conversational AI chatbots, and personalized recommendations.
- AI models: AI models are classified into foundation AI models, fine-tuned AI models, standard first-party AI models, and custom AI models. Examples include Gemini, Llama, translation models, and custom models for specific tasks.
- AI assets: AI assets contribute to machine learning operations pipelines and are used by AI workloads. Types of AI assets include the following:
  - Declarative AI assets: AI lifecycle management tools, such as Vertex AI, track these assets.
  - Inferred AI assets: General-purpose assets, such as compute and storage assets, that are used to process AI data or workloads.
  - Model-as-a-Service (API only): Assets that make programmatic calls to first-party or third-party AI models.
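The three asset types above can be thought of as a simple decision order: lifecycle-managed assets are declarative, API callers are Model-as-a-Service, and general-purpose infrastructure touching AI data is inferred. The sketch below encodes that ordering; the field names and rules are illustrative assumptions, not the product's classification logic.

```python
# Illustrative classifier for the three AI asset types described above.
def classify_ai_asset(asset: dict) -> str:
    """Map a simplified asset record to one of the AI asset types."""
    if asset.get("managed_by") == "vertex-ai":
        return "declarative"         # tracked by an AI lifecycle tool
    if asset.get("calls_model_api"):
        return "model-as-a-service"  # programmatic calls to 1P/3P models
    if asset.get("processes_ai_data"):
        return "inferred"            # general-purpose compute or storage
    return "not-ai"

# A Vertex AI dataset is declarative even if it also holds AI data,
# because the lifecycle-tool rule is checked first.
kind = classify_ai_asset({"managed_by": "vertex-ai", "processes_ai_data": True})
```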
Use the AI Security dashboard
The AI Security dashboard provides a comprehensive view of your organization's AI asset inventory and proposes potential mitigations for enhanced risk and threat management.
Access the dashboard
To access the AI Security dashboard, in the Google Cloud console, go to Risk Overview > AI Security.
For more information, see AI Security dashboard.
Understand risk information
This section provides information about potential risks that are associated with AI systems. You can view the top risks in your AI inventory.
You can click any issue to open a details pane that provides a visualization of the issue.
View AI threats
This section provides insights into threats associated with AI systems. You can view the five most recent threats associated with your AI resources.
On this page, you can do the following:
- Click View all to see threats that are associated with your AI resources.
- Click any threat to see further details about the threat.
Visualize inventory
The dashboard provides a visualization of your AI inventory, including a summary of the projects that involve generative AI, the first-party and third-party models in active use, and the datasets that are used to train the third-party models.
On this page, you can do the following:
- To view the inventory details page, click any of the nodes in the visualization.
- To view a detailed listing of individual assets (such as foundational models and custom-built models), click the tooltip.
- To open a detailed view of a model, click the model. This view displays details such as the endpoints where the model is hosted and the dataset used to train the model. If Sensitive Data Protection is enabled, the datasets view also displays whether the dataset contains any sensitive data.
Review findings summary
This section helps you assess and manage the findings generated by AI security policies and data security policies. This section includes the following:
- Findings: This section displays a summary of findings generated by AI security policies and data security policies. Click View all findings, or click the count next to a finding category, to open the findings detail page. Click a finding to display additional information about that finding.
- Sensitive data in Vertex AI datasets: This section displays a summary of the findings based on sensitive data in datasets as reported by Sensitive Data Protection.
Examine Model Armor findings
A graph shows the total number of prompts or responses scanned by Model Armor and the number of issues that Model Armor detected. In addition, it displays summary statistics for various types of issues detected, such as prompt injection, jailbreak detection, and sensitive data detection.
This information is populated based on the metrics that Model Armor publishes to Cloud Monitoring.
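The graph described above is an aggregation over per-scan results: a total scan count plus a count per detected issue type. The following local sketch reproduces that aggregation over a small in-memory sample; the issue labels are illustrative, and real data would come from the Model Armor metrics published to Cloud Monitoring rather than from hard-coded records.

```python
from collections import Counter

# Aggregation sketch: summarize per-scan issue labels the way the
# dashboard graph does (total scans + per-issue-type counts).
def summarize_scans(scans):
    """Count total scans and occurrences of each detected issue type."""
    issues = Counter()
    for scan in scans:
        issues.update(scan.get("issues", []))
    return {"total_scans": len(scans), "issues": dict(issues)}

summary = summarize_scans([
    {"issues": ["prompt_injection"]},
    {"issues": []},  # a clean prompt or response still counts as a scan
    {"issues": ["jailbreak", "sensitive_data"]},
])
```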
What's next
- Learn how to configure AI Protection.
- To assess risk, access dashboard data.