This principle in the security pillar of the Google Cloud Architecture Framework provides recommendations for using AI to improve the security of your cloud workloads.
Because cyberattacks continue to grow in number and sophistication, it's important to take advantage of AI's potential to improve security. AI can help to reduce the number of threats, reduce the manual effort required of security professionals, and compensate for the scarcity of experts in the cybersecurity domain.
Principle overview
Use AI capabilities to improve your existing security systems and processes. You can use Gemini in Security as well as the intrinsic AI capabilities that are built into Google Cloud services.
These AI capabilities can transform security by providing assistance across every stage of the security lifecycle. For example, you can use AI to do the following:
- Analyze and explain potentially malicious code without the need for reverse engineering.
- Reduce repetitive work for cybersecurity practitioners.
- Use natural language to generate queries and interact with security event data (see the sketch after this list).
- Surface contextual information.
- Offer recommendations for quick responses.
- Aid in the remediation of events.
- Summarize high-priority alerts for misconfigurations and vulnerabilities, highlight potential impacts, and recommend mitigations.
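For example, the natural-language query item in the preceding list can be as simple as prompting a model to translate an analyst's question into a query over your event schema. The following sketch uses the google-genai Python SDK; the model name, the prompt wording, and the choice of UDM as the target query language are assumptions for illustration, and this is a simplified stand-in for the managed natural-language search in Gemini in Google SecOps, not that feature itself.

```python
# A minimal sketch, assuming the google-genai SDK (pip install google-genai)
# and an API key in the GEMINI_API_KEY environment variable.
from google import genai

client = genai.Client()

question = "Show me failed logins from new IP addresses in the last 24 hours"

# Ask the model to translate the analyst's question into a search query.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=(
        "Translate this analyst question into a UDM search query over "
        f"security event data. Return only the query.\n\n{question}"
    ),
)
print(response.text)
```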
Levels of security autonomy
AI and automation can help you achieve better security outcomes when you're dealing with ever-evolving cybersecurity threats. By using AI for security, you can achieve greater levels of autonomy to detect and prevent threats and improve your overall security posture. Google defines four levels of autonomy for the use of AI in security. These levels outline the increasing role of AI in assisting, and eventually leading, security tasks:
- Manual: Humans run all of the security tasks (prevent, detect, prioritize, and respond) across the entire security lifecycle.
- Assisted: AI tools, like Gemini, boost human productivity by summarizing information, generating insights, and making recommendations.
- Semi-autonomous: AI takes primary responsibility for many security tasks and delegates to humans only when required.
- Autonomous: AI acts as a trusted assistant that drives the security lifecycle based on your organization's goals and preferences, with minimal human intervention.
Recommendations
The following sections describe the recommendations for using AI for security. The sections also indicate how the recommendations align with Google's Secure AI Framework (SAIF) core elements and how they're relevant to the levels of security autonomy.
- Enhance threat detection and response with AI
- Simplify security for experts and non-experts
- Automate time-consuming security tasks with AI
- Incorporate AI into risk management and governance processes
- Implement secure development practices for AI systems
Enhance threat detection and response with AI
This recommendation is relevant to the following focus areas:
- Security operations (SecOps)
- Logging, auditing, and monitoring
AI can analyze large volumes of security data, offer insights into threat actor behavior, and automate the analysis of potentially malicious code. This recommendation is aligned with the following SAIF elements:
- Extend detection and response to bring AI into your organization's threat universe.
- Automate defenses to keep pace with existing and new threats.
Depending on your implementation, this recommendation can be relevant to the following levels of autonomy:
- Assisted: AI helps with threat analysis and detection.
- Semi-autonomous: AI takes on greater responsibility for threat detection and response tasks.
Google Threat Intelligence, which uses AI to analyze threat actor behavior and malicious code, can help you implement this recommendation.
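As an illustration, threat intelligence enrichment can also be scripted against the VirusTotal API, which is part of Google Threat Intelligence. The following sketch uses the vt-py client; the API key is a placeholder, and the hash shown is the well-known EICAR test file.

```python
import vt  # pip install vt-py

# "YOUR_API_KEY" is a placeholder; the MD5 below is the EICAR test file.
client = vt.Client("YOUR_API_KEY")
try:
    # Retrieve the analysis report for a file hash.
    file_report = client.get_object("/files/44d88612fea8a8f36de82e1278abb02f")
    stats = file_report.last_analysis_stats
    print(f"malicious: {stats['malicious']}, undetected: {stats['undetected']}")
finally:
    client.close()
```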
Simplify security for experts and non-experts
This recommendation is relevant to the following focus areas:
- Security operations (SecOps)
- Cloud governance, risk, and compliance
AI-powered tools can summarize alerts and recommend mitigations, and these capabilities can make security more accessible to a wider range of personnel. This recommendation is aligned with the following SAIF elements:
- Automate defenses to keep pace with existing and new threats.
- Harmonize platform-level controls to ensure consistent security across the organization.
Depending on your implementation, this recommendation can be relevant to the following levels of autonomy:
- Assisted: AI helps you to improve the accessibility of security information.
- Semi-autonomous: AI helps to make security practices more effective for all users.
Gemini in Security Command Center can provide summaries of alerts for misconfigurations and vulnerabilities.
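To work with those alerts programmatically, you can list Security Command Center findings with the client library and route them into whatever summarization or triage workflow you use. A minimal sketch, assuming the google-cloud-securitycenter package and a placeholder organization ID:

```python
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()

# "ORG_ID" is a placeholder; the "-" wildcard lists findings from all sources.
parent = "organizations/ORG_ID/sources/-"

results = client.list_findings(
    request={"parent": parent, "filter": 'state="ACTIVE" AND severity="HIGH"'}
)
for result in results:
    finding = result.finding
    print(finding.category, finding.resource_name)
```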
Automate time-consuming security tasks with AI
This recommendation is relevant to the following focus areas:
- Infrastructure security
- Security operations (SecOps)
- Application security
AI can automate tasks such as analyzing malware, generating security rules, and identifying misconfigurations. These capabilities can help to reduce the workload on security teams and accelerate response times. This recommendation is aligned with the SAIF element about automating defenses to keep pace with existing and new threats.
Depending on your implementation, this recommendation can be relevant to the following levels of autonomy:
- Assisted: AI helps you to automate tasks.
- Semi-autonomous: AI takes primary responsibility for security tasks, and requests human assistance only when needed.
Gemini in Google SecOps can help to automate high-toil tasks by assisting analysts, retrieving relevant context, and making recommendations for next steps.
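One concrete high-toil task is drafting detection rules. The sketch below uses the Gemini API to draft a YARA-L rule from a plain-English description. It illustrates the pattern rather than the managed capability in Gemini in Google SecOps, and any generated rule needs analyst review and testing before deployment.

```python
from google import genai

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

description = (
    "Alert when one user account fails to authenticate more than 10 times "
    "within 5 minutes and then succeeds."
)

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=f"Draft a YARA-L 2.0 detection rule for this behavior: {description}",
)
# The output is a draft only; an analyst must review it before use.
print(response.text)
```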
Incorporate AI into risk management and governance processes
This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance.
You can use AI to build a model inventory and risk profiles. You can also use AI to implement policies for data privacy, cyber risk, and third-party risk. This recommendation is aligned with the SAIF element about contextualizing AI system risks in surrounding business processes.
Depending on your implementation, this recommendation can be relevant to the semi-autonomous level of autonomy. At this level, AI can orchestrate security agents that run processes to achieve your custom security goals.
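As an illustration of what a model inventory entry might capture, here is a hedged Python sketch. The class name, fields, and risk categories are assumptions made for this example, not a Google Cloud schema.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record; all fields and categories are assumptions.
@dataclass
class ModelInventoryEntry:
    model_name: str
    owner: str
    training_data_sources: list[str]
    handles_personal_data: bool
    third_party_dependencies: list[str] = field(default_factory=list)
    risk_profile: str = "unassessed"  # e.g., "low", "medium", "high"

entry = ModelInventoryEntry(
    model_name="fraud-scoring-v3",
    owner="risk-engineering",
    training_data_sources=["bq://transactions_2024"],
    handles_personal_data=True,
)
print(entry)
```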
Implement secure development practices for AI systems
This recommendation is relevant to the following focus areas:
- Application security
- AI and ML security
You can use AI for secure coding, cleaning training data, and validating tools and artifacts. This recommendation is aligned with the SAIF element about expanding strong security foundations to the AI ecosystem.
This recommendation can be relevant to all levels of security autonomy, because a secure AI system needs to be in place before AI can be used effectively for security. The recommendation is most relevant to the assisted level, where security practices are augmented by AI.
To implement this recommendation, follow the Supply-chain Levels for Software Artifacts (SLSA) guidelines for AI artifacts and use validated container images.
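For the validated-images part of this recommendation, a pre-deployment check can be as simple as requiring digest-pinned image references from an allowlist, as in the illustrative sketch below. The image names and digests are placeholders; in production, enforce this with an admission control mechanism such as Binary Authorization rather than application code.

```python
# Placeholder digests for images that your team has validated.
VALIDATED_IMAGES = {
    "us-docker.pkg.dev/my-project/models/serving@sha256:3f5a0c...",
}

def is_validated(image_ref: str) -> bool:
    """Accept only digest-pinned images that are on the allowlist."""
    if "@sha256:" not in image_ref:
        return False  # reject mutable tags such as ":latest"
    return image_ref in VALIDATED_IMAGES

# A tag-based reference is rejected even if the underlying image is known.
print(is_validated("us-docker.pkg.dev/my-project/models/serving:latest"))  # False
```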