# Use AI securely and responsibly

This principle in the security pillar of the [Google Cloud Well-Architected Framework](/architecture/framework) provides recommendations to help you secure your AI systems. These recommendations are aligned with Google's [Secure AI Framework (SAIF)](https://safety.google/cybersecurity-advancements/saif/), which provides a practical approach to address the security and risk concerns of AI systems. SAIF is a conceptual framework that aims to provide industry-wide standards for building and deploying AI responsibly.

Principle overview
------------------

To help ensure that your AI systems meet your security, privacy, and compliance requirements, you must adopt a holistic strategy that starts with the initial design and extends to deployment and operations. You can implement this holistic strategy by applying the [six core elements of SAIF](https://developers.google.com/machine-learning/resources/saif).

Google uses AI to enhance security measures, such as identifying threats, automating security tasks, and improving detection capabilities, while keeping humans in the loop for critical decisions.

Google emphasizes a collaborative approach to advancing AI security. This approach involves partnering with customers, industries, and governments to enhance the SAIF guidelines and offer practical, actionable resources.

The recommendations to implement this principle are grouped within the following sections:

- [Recommendations to use AI securely](#recommendations_to_use_ai_securely)
- [Recommendations for AI governance](#recommendations_for_ai_governance)

Recommendations to use AI securely
----------------------------------

To use AI securely, you need both foundational security controls and AI-specific security controls. This section provides an overview of recommendations to ensure that your AI and ML deployments meet the security, privacy, and compliance requirements of your organization.

For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the [AI and ML perspective](/architecture/framework/perspectives/ai-ml) in the Well-Architected Framework.

### Define clear goals and requirements for AI usage

This recommendation is relevant to the following [focus areas](/architecture/framework/security#focus_areas_of_cloud_security):

- Cloud governance, risk, and compliance
- AI and ML security

This recommendation aligns with the SAIF element about contextualizing AI system risks in the surrounding business processes. When you design and evolve AI systems, it's important to understand your specific business goals, risks, and compliance requirements.

### Keep data secure and prevent loss or mishandling

This recommendation is relevant to the following [focus areas](/architecture/framework/security#focus_areas_of_cloud_security):

- Infrastructure security
- Identity and access management
- Data security
- Application security
- AI and ML security

This recommendation aligns with the following SAIF elements:

- Expand strong security foundations to the AI ecosystem. This element includes data collection, storage, access control, and protection against data poisoning.
- Contextualize AI system risks. Emphasize data security to support business objectives and compliance.

### Keep AI pipelines secure and robust against tampering

This recommendation is relevant to the following [focus areas](/architecture/framework/security#focus_areas_of_cloud_security):

- Infrastructure security
- Identity and access management
- Data security
- Application security
- AI and ML security

This recommendation aligns with the following SAIF elements:

- Expand strong security foundations to the AI ecosystem. As a key element of establishing a secure AI system, secure your code and model artifacts.
- Adapt controls for faster feedback loops. Because it's important for mitigation and incident response, track your assets and pipeline runs.

### Deploy apps on secure systems using secure tools and artifacts

This recommendation is relevant to the following [focus areas](/architecture/framework/security#focus_areas_of_cloud_security):

- Infrastructure security
- Identity and access management
- Data security
- Application security
- AI and ML security

Using secure systems and validated tools and artifacts in AI-based applications aligns with the SAIF element about expanding strong security foundations to the AI ecosystem and supply chain. This recommendation can be addressed through the following steps:

- Implement a secure environment for ML training and deployment
- Use validated container images
- Apply [Supply-chain Levels for Software Artifacts (SLSA)](https://slsa.dev) guidelines
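As an illustrative (not prescriptive) way to combine the first two steps, the following sketch uses the Vertex AI SDK for Python to run a custom training job from a container image that is pinned to a specific digest, so that only the artifact you validated can run. The project ID, region, staging bucket, image path, digest, and job name are placeholder values, and the exact SDK arguments can vary by SDK version.

```python
# Illustrative sketch only: run Vertex AI custom training from a container
# image pinned to a validated digest. All identifiers below are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="example-project",                   # placeholder project ID
    location="us-central1",
    staging_bucket="gs://example-staging-bucket",
)

# Pin the image by digest rather than a mutable tag, so the image that was
# scanned and validated is the image that actually runs.
VALIDATED_TRAINER_IMAGE = (
    "us-central1-docker.pkg.dev/example-project/ml/trainer"
    "@sha256:0000000000000000000000000000000000000000000000000000000000000000"
)

job = aiplatform.CustomContainerTrainingJob(
    display_name="validated-trainer",
    container_uri=VALIDATED_TRAINER_IMAGE,
)

# Runs the training container on managed infrastructure.
job.run(replica_count=1, machine_type="n1-standard-4")
```

Pairing an immutable image reference like this with SLSA provenance checks in your build pipeline helps ensure that only artifacts produced by trusted builders reach training and serving.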
### Protect and monitor inputs

This recommendation is relevant to the following [focus areas](/architecture/framework/security#focus_areas_of_cloud_security):

- Logging, auditing, and monitoring
- Security operations
- AI and ML security

This recommendation aligns with the SAIF element about extending detection and response to bring AI into an organization's threat universe. To prevent issues, it's critical to manage prompts for generative AI systems, monitor inputs, and control user access.
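For example, you might record every prompt and the associated access decision before the prompt reaches a generative model, so that inputs can be monitored and audited. The following is a minimal sketch that uses the Cloud Logging client library for Python; the log name, field names, and screening checks are hypothetical and would need to reflect your own policies.

```python
# Minimal sketch: audit-log each prompt and access decision before it is
# forwarded to a generative AI model. Log and field names are hypothetical.
from google.cloud import logging as cloud_logging

logging_client = cloud_logging.Client()
prompt_logger = logging_client.logger("genai-prompt-audit")  # hypothetical log name


def screen_and_log_prompt(user_id: str, prompt: str) -> bool:
    """Apply basic input checks and write an audit record for the prompt."""
    # Example checks only; replace with your organization's input policies.
    allowed = len(prompt) < 4000 and "BEGIN PRIVATE KEY" not in prompt

    prompt_logger.log_struct(
        {
            "user_id": user_id,
            "prompt": prompt,
            "allowed": allowed,
        },
        severity="INFO" if allowed else "WARNING",
    )
    return allowed


if screen_and_log_prompt("user-123", "Summarize our Q3 incident report"):
    pass  # forward the prompt to the model endpoint here
```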
Recommendations for AI governance
---------------------------------

All of the recommendations in this section are relevant to the following [focus area](/architecture/framework/security#focus_areas_of_cloud_security): Cloud governance, risk, and compliance.

Google Cloud offers a robust set of tools and services that you can use to build responsible and ethical AI systems. We also offer a framework of policies, procedures, and ethical considerations that can guide the development, deployment, and use of AI systems.

As reflected in our recommendations, Google's approach for AI governance is guided by the following principles:

- Fairness
- Transparency
- Accountability
- Privacy
- Security

### Use fairness indicators

[Vertex AI](/vertex-ai/docs/start/introduction-unified-platform) can detect bias during the data collection or post-training evaluation process. Vertex AI provides [model evaluation metrics](/vertex-ai/docs/evaluation/intro-evaluation-fairness) like *data bias* and *model bias* to help you evaluate your model for bias.

These metrics are related to fairness across different categories like race, gender, and class. However, interpreting statistical deviations isn't a straightforward exercise, because differences across categories might not be a result of bias or a signal of harm.

### Use Vertex Explainable AI

To understand how the AI models make decisions, use Vertex Explainable AI. This feature helps you to identify potential biases that might be hidden in the model's logic.

This explainability feature is integrated with [BigQuery ML](/bigquery/docs/xai-overview) and [Vertex AI](/vertex-ai/docs/explainable-ai/overview), which provide feature-based explanations. You can either perform explainability in BigQuery ML or [register your model](/bigquery/docs/managing-models-vertex#register_models) in Vertex AI and perform explainability in Vertex AI.

### Track data lineage

Track the origin and transformation of data that's used in your AI systems. This tracking helps you understand the data's journey and identify potential sources of bias or error.

[Data lineage](/data-catalog/docs/concepts/about-data-lineage) is a Dataplex Universal Catalog feature that lets you track how data moves through your systems: where it comes from, where it's passed to, and what transformations are applied to it.

### Establish accountability

Establish clear responsibility for the development, deployment, and outcomes of your AI systems.

Use [Cloud Logging](/logging/docs/overview) to log key events and decisions made by your AI systems. The logs provide an audit trail to help you understand how the system is performing and identify areas for improvement.

Use [Error Reporting](/error-reporting/docs/grouping-errors) to systematically analyze errors made by the AI systems. This analysis can reveal patterns that point to underlying biases or areas where the model needs further refinement.
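As a small illustration of the Error Reporting recommendation, the sketch below reports exceptions raised on a model-serving path so that recurring failure patterns are grouped and can be analyzed later. The service name and the prediction function are placeholders.

```python
# Minimal sketch: send exceptions from an AI serving path to Error Reporting
# so that recurring failures are grouped and analyzed. Names are placeholders.
from google.cloud import error_reporting

error_client = error_reporting.Client(service="example-model-service")


def run_model_prediction(instance: dict) -> dict:
    # Placeholder for the real model call (for example, a Vertex AI endpoint).
    raise NotImplementedError("replace with your model serving call")


def predict_with_reporting(instance: dict) -> dict:
    try:
        return run_model_prediction(instance)
    except Exception:
        # Captures the current exception and stack trace in Error Reporting.
        error_client.report_exception()
        raise
```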
### Implement differential privacy

During model training, [add noise](/bigquery/docs/differential-privacy#add_noise) to the data in order to make it difficult to identify individual data points but still enable the model to learn effectively. With [SQL in BigQuery](/bigquery/docs/introduction-sql), you can transform the results of a query with differentially private [aggregations](/bigquery/docs/differential-privacy).
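As a hedged example of a differentially private aggregation, the query below (run here through the BigQuery client library for Python) adds calibrated noise to a per-group average. The dataset, table, and column names are placeholders, and the epsilon and delta values are purely illustrative.

```python
# Illustrative sketch: run a differentially private aggregation in BigQuery.
# Dataset, table, column names, and privacy parameters are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT WITH DIFFERENTIAL_PRIVACY
  OPTIONS(epsilon = 1.0, delta = 1e-5, privacy_unit_column = user_id)
  item,
  AVG(quantity) AS avg_quantity
FROM `example_dataset.orders`
GROUP BY item
"""

for row in client.query(query).result():
    print(row.item, row.avg_quantity)
```

Lower epsilon values add more noise, which strengthens privacy protection at the cost of accuracy in the aggregated results.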