Cloud Data Loss Prevention (Cloud DLP) is now part of Sensitive Data Protection. The API name remains the same: Cloud Data Loss Prevention API (DLP API). To learn about the services that make up Sensitive Data Protection, see the Sensitive Data Protection overview.

# Using Sensitive Data Protection with BigQuery

This page contains references to pages that provide information on how to use Sensitive Data Protection with [BigQuery](/bigquery).

Quickstart guides
-----------------

[Quickstart: Scheduling a Sensitive Data Protection inspection scan](/sensitive-data-protection/docs/schedule-inspection-scan)
: Schedule periodic inspection of a Cloud Storage bucket, a BigQuery table, or a Datastore kind. For detailed instructions, see [Creating and scheduling Sensitive Data Protection inspection jobs](/sensitive-data-protection/docs/creating-job-triggers).

How-to guides
-------------

This section provides a categorized list of task-based guides that demonstrate how to use Sensitive Data Protection with BigQuery.

### Inspection

[Inspecting storage and databases for sensitive data](/sensitive-data-protection/docs/inspecting-storage)
: Create a one-time job that searches for sensitive data in a Cloud Storage bucket, a BigQuery table, or a Datastore kind.
[Creating and scheduling Sensitive Data Protection inspection jobs](/sensitive-data-protection/docs/creating-job-triggers)
: Create and schedule a job trigger that searches for sensitive data in a Cloud Storage bucket, a BigQuery table, or a Datastore kind. A job trigger automates the creation of Sensitive Data Protection jobs on a periodic basis, as sketched below.
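As a rough illustration of what such a trigger looks like, here is a minimal sketch using the `google-cloud-dlp` Python client. The display name, infoTypes, and daily schedule are placeholder assumptions, not values prescribed by the guide.

```python
# Hypothetical sketch: schedule a daily Sensitive Data Protection scan of a
# BigQuery table by creating a job trigger. Project, dataset, and table IDs
# are placeholders.
from google.cloud import dlp_v2

def create_bigquery_scan_trigger(project_id: str, dataset_id: str, table_id: str):
    client = dlp_v2.DlpServiceClient()

    job_trigger = {
        "display_name": "daily-bq-scan",  # assumed name, not from the guide
        "inspect_job": {
            "storage_config": {
                "big_query_options": {
                    "table_reference": {
                        "project_id": project_id,
                        "dataset_id": dataset_id,
                        "table_id": table_id,
                    }
                }
            },
            "inspect_config": {
                # Illustrative built-in infoTypes to search for.
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
                "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
            },
        },
        # Re-run the inspect job every 24 hours (86400 seconds).
        "triggers": [{"schedule": {"recurrence_period_duration": {"seconds": 86400}}}],
        "status": dlp_v2.JobTrigger.Status.HEALTHY,
    }

    response = client.create_job_trigger(
        request={"parent": f"projects/{project_id}", "job_trigger": job_trigger}
    )
    print(f"Created job trigger: {response.name}")
```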
### Working with scan results

[Sending Sensitive Data Protection scan results to Data Catalog](/sensitive-data-protection/docs/sending-results-to-dc)
: Scan a BigQuery table, and then send the findings to Data Catalog to automatically create tags based on Sensitive Data Protection findings.

[Sending Sensitive Data Protection scan results to Security Command Center](/sensitive-data-protection/docs/sending-results-to-scc)
: Scan a Cloud Storage bucket, a BigQuery table, or a Datastore kind, and then send the findings to Security Command Center.

[Analyzing and reporting on Sensitive Data Protection findings](/sensitive-data-protection/docs/analyzing-and-reporting)
: Use BigQuery to run analytics on Sensitive Data Protection findings.

[Querying Sensitive Data Protection findings in BigQuery](/sensitive-data-protection/docs/querying-findings)
: Look through sample queries that you can use in BigQuery to analyze findings that Sensitive Data Protection identified. A sketch of one such query follows.
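For example, if an inspection job saved its findings to a table, a query like the following counts findings per infoType. This is a minimal sketch: the table name `your-project.dlp_results.findings` is a placeholder, and the column names assume the standard findings output schema.

```python
# Hypothetical sketch: summarize saved inspection findings by infoType.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT info_type.name AS info_type, COUNT(*) AS finding_count
    FROM `your-project.dlp_results.findings`  -- placeholder table
    GROUP BY info_type
    ORDER BY finding_count DESC
"""

for row in client.query(query).result():
    print(f"{row.info_type}: {row.finding_count}")
```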
### Re-identification risk analysis

[Measuring re-identification and disclosure risk](/sensitive-data-protection/docs/compute-risk-analysis)
: Analyze structured data stored in a BigQuery table and compute the following re-identification risk metrics:

  - [*k*-anonymity](/sensitive-data-protection/docs/compute-k-anonymity)
  - [*l*-diversity](/sensitive-data-protection/docs/compute-l-diversity)
  - [*k*-map](/sensitive-data-protection/docs/compute-k-map)
  - [*δ*-presence](/sensitive-data-protection/docs/compute-d-presence)

[Computing numerical and categorical statistics](/sensitive-data-protection/docs/compute-stats)
: Determine minimum, maximum, and quantile values for an individual BigQuery column.

[Visualizing re-identification risk using Looker Studio](/sensitive-data-protection/docs/visualizing_re-id_risk)
: Measure the *k*-anonymity of a dataset, and then visualize it in Looker Studio.

Tutorials
---------

[De-identify BigQuery data at query time](/sensitive-data-protection/docs/deidentify-bq-tutorial)
: Follow a step-by-step tutorial that uses BigQuery remote functions to de-identify and re-identify data in real-time query results.

[De-identification and re-identification of PII in large-scale datasets using Sensitive Data Protection](/architecture/de-identification-re-identification-pii-using-cloud-dlp)
: Review a reference architecture for creating an automated data transformation pipeline that de-identifies sensitive data like personally identifiable information (PII). A sketch of the core de-identification call appears below.
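As a hedged sketch of the core operation such a pipeline performs, the following uses the Python client's `deidentify_content` method to replace detected PII with its infoType name. The chosen infoTypes are illustrative assumptions, not the reference architecture's configuration.

```python
# Hypothetical sketch: de-identify free text by replacing detected PII with
# the infoType name, e.g. "alice@example.com" becomes "[EMAIL_ADDRESS]".
from google.cloud import dlp_v2

def deidentify_text(project_id: str, text: str) -> str:
    client = dlp_v2.DlpServiceClient()

    response = client.deidentify_content(
        request={
            "parent": f"projects/{project_id}",
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PERSON_NAME"}]
            },
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        {
                            # Replace each finding with its infoType name.
                            "primitive_transformation": {
                                "replace_with_info_type_config": {}
                            }
                        }
                    ]
                }
            },
            "item": {"value": text},
        }
    )
    return response.item.value
```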
Best practices
--------------

[Secure a BigQuery data warehouse that stores confidential data](/architecture/confidential-data-warehouse-blueprint)
: Architectural overview and best practices for data governance when creating, deploying, and operating a data warehouse in Google Cloud, including data de-identification, differential handling of confidential data, and column-level access controls.
Community contributions
-----------------------
The following are owned and managed by community members, and not by the Sensitive Data Protection team. For questions on these items, contact their respective owners.
[Create Data Catalog tags by inspecting BigQuery data with Sensitive Data Protection](/community/tutorials/dlp-to-datacatalog-tags)
: Inspect BigQuery data using the Cloud Data Loss Prevention API, and then use the Data Catalog API to create column-level tags according to the sensitive elements that Sensitive Data Protection found.
[Event-driven serverless scheduling architecture with Sensitive Data Protection](/community/tutorials/event-driven-serverless-scheduling-framework-dlp)
: Set up an event-driven, serverless scheduling application that uses the Cloud Data Loss Prevention API to inspect BigQuery data.
[Real-time anomaly detection using Google Cloud stream analytics and AI services](https://github.com/GoogleCloudPlatform/df-ml-anomaly-detection)
: Walk through a real-time artificial intelligence (AI) pattern for detecting anomalies in log files. This proof-of-concept uses Pub/Sub, Dataflow, BigQuery ML, and Sensitive Data Protection.
[Relational database import to BigQuery with Dataflow and Sensitive Data Protection](https://github.com/GoogleCloudPlatform/dlp-rdb-bq-import)
: Use Dataflow and Sensitive Data Protection to securely tokenize and import data from a relational database to BigQuery. This example describes how to tokenize PII data before it's made persistent; one possible tokenization call is sketched below.
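As a rough sketch of one way to tokenize values before persisting them, the following uses the Python client's deterministic encryption transformation, so equal inputs map to the same token. The key handling shown (a freshly generated unwrapped key) is a simplification for illustration only; a real pipeline would use a persisted, KMS-wrapped key.

```python
# Hypothetical sketch: deterministic tokenization of email addresses before
# the record is made persistent. Equal inputs produce equal tokens as long as
# the same key is reused.
import os
from google.cloud import dlp_v2

def tokenize_email(project_id: str, text: str) -> str:
    client = dlp_v2.DlpServiceClient()

    response = client.deidentify_content(
        request={
            "parent": f"projects/{project_id}",
            "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        {
                            "primitive_transformation": {
                                "crypto_deterministic_config": {
                                    "crypto_key": {
                                        # 32-byte key generated ad hoc for this
                                        # sketch; persist a real key (ideally
                                        # KMS-wrapped) to keep tokens stable.
                                        "unwrapped": {"key": os.urandom(32)}
                                    },
                                    "surrogate_info_type": {"name": "EMAIL_TOKEN"},
                                }
                            }
                        }
                    ]
                }
            },
            "item": {"value": text},
        }
    )
    return response.item.value
```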
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],["Última atualização 2025-09-04 UTC."],[],[],null,["# Using Sensitive Data Protection with BigQuery\n\n\u003cbr /\u003e\n\nThis page contains references to pages that provide information on how\nto use Sensitive Data Protection with [BigQuery](/bigquery).\n\nQuickstart guides\n-----------------\n\n[Quickstart: Scheduling a Sensitive Data Protection inspection scan](/sensitive-data-protection/docs/schedule-inspection-scan)\n: Schedule periodic inspection of a\n Cloud Storage bucket, a BigQuery table, or a\n Datastore kind. For detailed instructions, see\n [Creating and scheduling Sensitive Data Protection inspection jobs](/sensitive-data-protection/docs/creating-job-triggers).\n\nHow-to guides\n-------------\n\nThis section provides a categorized list of task-based guides that demonstrate\nhow to use Sensitive Data Protection with BigQuery.\n\n### Inspection\n\n[Inspecting storage and databases for sensitive data](/sensitive-data-protection/docs/inspecting-storage)\n: Create a one-time job that searches for sensitive data in a Cloud Storage\n bucket, a BigQuery table, or a Datastore kind.\n\n[Creating and scheduling Sensitive Data Protection inspection jobs](/sensitive-data-protection/docs/creating-job-triggers)\n: Create and schedule a job trigger that searches for sensitive data in a\n Cloud Storage bucket, a BigQuery table, or a\n Datastore kind. 
A job trigger automates the creation of\n Sensitive Data Protection jobs on a periodic basis.\n\n### Working with scan results\n\n[Sending Sensitive Data Protection scan results to Data Catalog](/sensitive-data-protection/docs/sending-results-to-dc)\n: Scan a BigQuery table, and then send the findings to\n Data Catalog to automatically create tags based on\n Sensitive Data Protection findings.\n\n[Sending Sensitive Data Protection scan results to Security Command Center](/sensitive-data-protection/docs/sending-results-to-scc)\n: Scan a Cloud Storage bucket, a BigQuery table, or a\n Datastore kind, and then send the findings to Security Command Center.\n\n[Analyzing and reporting on Sensitive Data Protection findings](/sensitive-data-protection/docs/analyzing-and-reporting)\n: Use BigQuery to run analytics on Sensitive Data Protection\n findings.\n\n[Querying Sensitive Data Protection findings in BigQuery](/sensitive-data-protection/docs/querying-findings)\n: Look through sample queries that you can use in BigQuery to\n analyze findings that Sensitive Data Protection identified.\n\n### Re-identification risk analysis\n\n[Measuring re-identification and disclosure risk](/sensitive-data-protection/docs/compute-risk-analysis)\n\n: Analyze structured data stored in a BigQuery table and compute\n the following re-identification risk metrics:\n\n - [*k*-anonymity](/sensitive-data-protection/docs/compute-k-anonymity)\n - [*l*-diversity](/sensitive-data-protection/docs/compute-l-diversity)\n - [*k*-map](/sensitive-data-protection/docs/compute-k-map)\n - [*δ*-presence](/sensitive-data-protection/docs/compute-d-presence)\n\n[Computing numerical and categorical statistics](/sensitive-data-protection/docs/compute-stats)\n\n: Determine minimum, maximum, and quantile values for an individual\n BigQuery column.\n\n[Visualizing re-identification risk using Looker Studio](/sensitive-data-protection/docs/visualizing_re-id_risk)\n\n: Measure the *k*-anonymity of a dataset, and then visualize it in\n Looker Studio.\n\nTutorials\n---------\n\n[De-identify BigQuery data at query time](/sensitive-data-protection/docs/deidentify-bq-tutorial)\n: Follow a step-by-step tutorial that uses BigQuery remote\n functions to de-identify and re-identify data in real-time query results.\n\n[De-identification and re-identification of PII in large-scale datasets using Sensitive Data Protection](/architecture/de-identification-re-identification-pii-using-cloud-dlp)\n: Review a reference architecture for creating an automated data transformation\n pipeline that de-identifies sensitive data like personally identifiable\n information (PII).\n\nBest practices\n--------------\n\n[Secure a BigQuery data warehouse that stores confidential data](/architecture/confidential-data-warehouse-blueprint)\n: Architectural overview and best practices for data governance when creating,\n deploying, and operating a data warehouse in Google Cloud, including data\n de-identification, differential handling of confidential data, and\n column-level access controls.\n\nCommunity contributions\n-----------------------\n\nThe following are owned and managed by community members, and not by the\nSensitive Data Protection team. 
For questions on these items, contact their\nrespective owners.\n\n[Create Data Catalog tags by inspecting BigQuery data with Sensitive Data Protection](/community/tutorials/dlp-to-datacatalog-tags)\n: Inspect BigQuery data using the Cloud Data Loss Prevention API, and then use the\n Data Catalog API to create column-level tags according to the sensitive\n elements that Sensitive Data Protection found.\n\n[Event-driven serverless scheduling architecture with Sensitive Data Protection](/community/tutorials/event-driven-serverless-scheduling-framework-dlp)\n: Set up an event-driven, serverless scheduling application that uses the\n Cloud Data Loss Prevention API to inspect BigQuery data.\n\n[Real-time anomaly detection using Google Cloud stream analytics and AI services](https://github.com/GoogleCloudPlatform/df-ml-anomaly-detection)\n: Walk through a real-time artificial intelligence (AI) pattern for detecting\n anomalies in log files. This proof-of-concept uses Pub/Sub,\n Dataflow, BigQuery ML, and Sensitive Data Protection.\n\n[Relational database import to BigQuery with Dataflow and Sensitive Data Protection](https://github.com/GoogleCloudPlatform/dlp-rdb-bq-import)\n: Use Dataflow and Sensitive Data Protection to securely tokenize and import\n data from a relational database to BigQuery. This example\n describes how to tokenize PII data before it's made persistent.\n\nPricing\n-------\n\nWhen you inspect a BigQuery table, you incur\nSensitive Data Protection costs, according to the [storage inspection job pricing](/sensitive-data-protection/pricing#storage-pricing).\n\nIn addition, when you save inspection findings to a BigQuery\ntable, [BigQuery charges](/bigquery/pricing#data_ingestion_pricing)\napply."]]