Import data from Google Cloud into a secured BigQuery data warehouse

Last reviewed 2021-12-16 UTC

Many organizations deploy data warehouses that store confidential information so that they can analyze the data for a variety of business purposes. This document is intended for data engineers and security administrators who deploy and secure data warehouses using BigQuery. It's part of a security blueprint that's made up of the following:

  • A GitHub repository that contains a set of Terraform configurations and scripts. The Terraform configuration sets up an environment in Google Cloud that supports a data warehouse that stores confidential data.

  • A guide to the architecture, design, and security controls that you can implement with this blueprint (this document).

  • A walkthrough that deploys a sample environment.

This document discusses the following:

  • The architecture and Google Cloud services that you can use to help secure a data warehouse in a production environment.

  • Best practices for data governance when creating, deploying, and operating a data warehouse in Google Cloud, including data de-identification, differential handling of confidential data, and column-level access controls.

This document assumes that you have already configured a foundational set of security controls as described in the Google Cloud enterprise foundations blueprint. It helps you to layer additional controls onto your existing security controls to help protect confidential data in a data warehouse.

Data warehouse use cases

The blueprint supports the following use case:

Data warehouses such as BigQuery let businesses analyze their business data for insights. Analysts access the business data that is stored in data warehouses to create those insights. If your data warehouse includes confidential data, you must take measures to preserve the security, confidentiality, integrity, and availability of the business data at rest, in transit, and during analysis. In this blueprint, you do the following:

  • Configure controls that help secure access to confidential data.
  • Configure controls that help secure the data pipeline.
  • Configure an appropriate separation of duties for different personas.
  • Set up templates to find and de-identify confidential data.
  • Set up appropriate security controls and logging to help protect confidential data.
  • Use data classification and policy tags to restrict access to specific columns in the data warehouse.
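The last task in the list, restricting column access with policy tags, can be illustrated with a Terraform sketch. This is not part of the blueprint's actual modules; the taxonomy name, project ID, and group address below are hypothetical placeholders:

```hcl
# Hypothetical taxonomy and policy tag for column-level access control.
resource "google_data_catalog_taxonomy" "confidential" {
  project                = "example-governance-project"
  region                 = "us-central1"
  display_name           = "confidential-taxonomy"
  activated_policy_types = ["FINE_GRAINED_ACCESS_CONTROL"]
}

resource "google_data_catalog_policy_tag" "pii" {
  taxonomy     = google_data_catalog_taxonomy.confidential.id
  display_name = "PII"
}

# Grant a group permission to read columns protected by the tag.
resource "google_data_catalog_policy_tag_iam_member" "fine_grained_reader" {
  policy_tag = google_data_catalog_policy_tag.pii.name
  role       = "roles/datacatalog.categoryFineGrainedReader"
  member     = "group:confidential-data-analysts@example.com"
}
```

Columns in a BigQuery table schema then reference the policy tag's resource name, and only members with the fine-grained reader role can query those columns.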


To create a confidential data warehouse, you need to categorize data as confidential and non-confidential, and then store the data in separate perimeters. The following image shows how ingested data is categorized, de-identified, and stored. It also shows how you can re-identify confidential data on demand for analysis.
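The de-identification step described above can be expressed as a Sensitive Data Protection de-identification template. The following Terraform sketch is illustrative only; the project, key names, and wrapped key are placeholders, and the blueprint's own templates may differ:

```hcl
# Hypothetical de-identification template that pseudonymizes email addresses
# using deterministic encryption with a KMS-wrapped key.
resource "google_data_loss_prevention_deidentify_template" "deidentify" {
  parent       = "projects/example-data-ingestion-project"
  display_name = "deidentify-confidential-fields"

  deidentify_config {
    info_type_transformations {
      transformations {
        info_types {
          name = "EMAIL_ADDRESS"
        }
        primitive_transformation {
          crypto_deterministic_config {
            crypto_key {
              kms_wrapped {
                # A data encryption key wrapped by the CMEK (placeholder values).
                wrapped_key     = "CiQA...base64-wrapped-key...=="
                crypto_key_name = "projects/example-governance-project/locations/us-central1/keyRings/dwh-keyring/cryptoKeys/deidentify-key"
              }
            }
            surrogate_info_type {
              name = "EMAIL_TOKEN"
            }
          }
        }
      }
    }
  }
}
```

Because the transformation is deterministic, an authorized re-identification pipeline that holds the same key can reverse the pseudonymization on demand.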

The confidential data warehouse architecture.

The architecture uses a combination of the following Google Cloud services and features:

  • Identity and Access Management (IAM) and Resource Manager restrict access and segment resources. The access controls and resource hierarchy follow the principle of least privilege.

  • VPC Service Controls creates security perimeters that isolate services and resources by setting up authorization, access controls, and secure data exchange. The perimeters are as follows:

    • A data ingestion perimeter that accepts incoming data (in batch or stream) and de-identifies it. A separate landing zone helps to protect the rest of your workloads from incoming data.

    • A confidential data perimeter that can re-identify the confidential data and store it in a restricted area.

    • A governance perimeter that stores the encryption keys and defines what is considered confidential data.

    These perimeters are designed to protect incoming content, isolate confidential data by setting up additional access controls and monitoring, and separate your governance from the actual data in the warehouse. Your governance includes key management, data catalog management, and logging.

  • Cloud Storage and Pub/Sub receive data as follows:

    • Cloud Storage: receives and stores batch data before de-identification. Cloud Storage uses TLS to encrypt data in transit and encrypts data in storage by default. The encryption key is a customer-managed encryption key (CMEK). You can help to secure access to Cloud Storage buckets using security controls such as Identity and Access Management, access control lists (ACLs), and policy documents. For more information about supported access controls, see Overview of access control.

    • Pub/Sub: receives and stores streaming data before de-identification. Pub/Sub uses authentication, access controls, and message-level encryption with a CMEK to protect your data.

  • Two Dataflow pipelines de-identify and re-identify confidential data as follows:

    • The first pipeline de-identifies confidential data using pseudonymization.
    • The second pipeline re-identifies confidential data when authorized users require access.

    To protect data, Dataflow uses a unique service account and encryption key for each pipeline, and access controls. To help secure pipeline execution, Dataflow uses Streaming Engine, which moves execution to the Dataflow backend service. For more information, see Dataflow security and permissions.

  • Sensitive Data Protection de-identifies confidential data during ingestion.

    Sensitive Data Protection de-identifies structured and unstructured data based on the infoTypes or records that are detected.

  • Cloud HSM hosts the key encryption key (KEK). Cloud HSM is a cloud-based Hardware Security Module (HSM) service.

  • Data Catalog automatically categorizes confidential data with metadata, also known as policy tags, during ingestion. Data Catalog also uses metadata to manage access to confidential data. For more information, see Data Catalog overview. To control access to data within the data warehouse, you apply policy tags to columns that include confidential data.

  • BigQuery stores the confidential data in the confidential data perimeter.

    BigQuery uses various security controls to help protect content, including access controls, column-level security for confidential data, and data encryption.

  • Security Command Center monitors and reviews security findings from across your Google Cloud environment in a central location.
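Several of the services above depend on a CMEK hosted in Cloud HSM. As a minimal illustration (not the blueprint's actual key module; project, names, and rotation period are placeholders), an HSM-protected key can be declared in Terraform as follows:

```hcl
# Key ring in the governance project (placeholder names).
resource "google_kms_key_ring" "dwh_keyring" {
  project  = "example-governance-project"
  name     = "dwh-keyring"
  location = "us-central1"
}

# Key encryption key with HSM protection, rotated every 30 days.
resource "google_kms_crypto_key" "dwh_kek" {
  name            = "dwh-key-encryption-key"
  key_ring        = google_kms_key_ring.dwh_keyring.id
  rotation_period = "2592000s" # 30 days

  version_template {
    algorithm        = "GOOGLE_SYMMETRIC_ENCRYPTION"
    protection_level = "HSM"
  }
}
```

The `protection_level = "HSM"` setting is what distinguishes a Cloud HSM-backed key from a software-backed Cloud KMS key.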

Organization structure

You group your organization's resources so that you can manage them and separate your testing environments from your production environment. Resource Manager lets you logically group resources by project, folder, and organization.

The following diagram shows you a resource hierarchy with folders that represent different environments such as bootstrap, common, production, non-production (or staging), and development. You deploy most of the projects in the blueprint into the production folder, and the data governance project into the common folder, which is used for governance.

The resource hierarchy for a confidential data warehouse.


You use folders to isolate your production environment and governance services from your non-production and testing environments. The following table describes the folders from the enterprise foundations blueprint that are used by this blueprint.

Folder Description
Prod Contains projects that have cloud resources that have been tested and are ready to use.
Common Contains centralized services for the organization, such as the governance project.

You can change the names of these folders to align with your organization's folder structure, but we recommend that you maintain a similar structure. For more information, see the Google Cloud enterprise foundations blueprint.


You isolate parts of your environment using projects. The following table describes the projects that are needed within the organization. You create these projects when you run the Terraform code. You can change the names of these projects, but we recommend that you maintain a similar project structure.

Project Description
Data ingestion Contains services that are required in order to receive data and de-identify confidential data.
Governance Contains services that provide key management, logging, and data cataloging capabilities.
Non-confidential data Contains services that are required in order to store data that has been de-identified.
Confidential data Contains services that are required in order to store and re-identify confidential data.

In addition to these projects, your environment must also include a project that hosts a Dataflow Flex Template job. The Flex Template job is required for the streaming data pipeline.

Mapping roles and groups to projects

You must give different user groups in your organization access to the projects that make up the confidential data warehouse. The following sections describe the blueprint recommendations for user groups and role assignments in the projects that you create. You can customize the groups to match your organization's existing structure, but we recommend that you maintain a similar segregation of duties and role assignment.

Data analyst group

Data analysts analyze the data in the warehouse. This group requires roles in different projects, as described in the following table.

Project mapping Roles
Data ingestion

Additional roles for data analysts who require access to confidential data:

Confidential data
  • roles/bigquery.dataViewer
  • roles/bigquery.jobUser
  • roles/bigquery.user
  • roles/dataflow.viewer
  • roles/dataflow.developer
  • roles/logging.viewer
Non-confidential data
  • roles/bigquery.dataViewer
  • roles/bigquery.jobUser
  • roles/bigquery.user
  • roles/logging.viewer
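The role assignments in the table above can be granted to a group with a small Terraform loop. The project ID and group address below are placeholders, not names from the blueprint:

```hcl
# Roles from the table for analysts with access to confidential data.
locals {
  confidential_analyst_roles = [
    "roles/bigquery.dataViewer",
    "roles/bigquery.jobUser",
    "roles/bigquery.user",
    "roles/dataflow.viewer",
    "roles/dataflow.developer",
    "roles/logging.viewer",
  ]
}

# One binding per role, granted to the data analyst group.
resource "google_project_iam_member" "confidential_analyst" {
  for_each = toset(local.confidential_analyst_roles)
  project  = "example-confidential-data-project"
  role     = each.value
  member   = "group:data-analysts@example.com"
}
```

Granting roles to groups rather than individual users keeps membership changes out of the Terraform state and preserves the separation of duties that the blueprint recommends.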

Data engineer group

Data engineers set up and maintain the data pipeline and warehouse. This group requires roles in different projects, as described in the following table.

Project mapping Roles
Data ingestion
Confidential data
  • roles/bigquery.dataEditor
  • roles/bigquery.jobUser
  • roles/cloudbuild.builds.editor
  • roles/cloudkms.viewer
  • roles/compute.networkUser
  • roles/dataflow.admin
  • roles/logging.viewer
Non-confidential data
  • roles/bigquery.dataEditor
  • roles/bigquery.jobUser
  • roles/cloudkms.viewer
  • roles/logging.viewer

Network administrator group

Network administrators configure the network. Typically, they are members of the networking team.

Network administrators require the following roles at the organization level:

Security administrator group

Security administrators administer security controls such as access, keys, firewall rules, VPC Service Controls, and Security Command Center.

Security administrators require the following roles at the organization level:

Security analyst group

Security analysts monitor and respond to security incidents and Sensitive Data Protection findings.

Security analysts require the following roles at the organization level:

Understanding the security controls you need

This section discusses the security controls within Google Cloud that you use to help to secure your data warehouse. The key security principles to consider are as follows:

  • Secure access by adopting least privilege principles.

  • Secure network connections through segmentation design and policies.

  • Secure the configuration for each of the services.

  • Classify and protect data based on its risk level.

  • Understand the security requirements for the environment that hosts the data warehouse.

  • Configure sufficient monitoring and logging for detection, investigation, and response.

Security controls for data ingestion

To create your data warehouse, you must transfer data from another Google Cloud source (for example, a data lake). You can use one of the following options to transfer your data into the data warehouse on BigQuery:

  • A batch job that uses Cloud Storage.

  • A streaming job that uses Pub/Sub.

To help protect data during ingestion, you can use firewall rules, access policies, and encryption.
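Both ingestion paths can be protected with a CMEK at the resource level. The following Terraform sketch is illustrative; bucket, topic, project, and key names are placeholders:

```hcl
# Landing bucket for batch ingestion, encrypted with a CMEK by default.
resource "google_storage_bucket" "batch_landing" {
  project                     = "example-data-ingestion-project"
  name                        = "example-batch-landing-bucket"
  location                    = "US-CENTRAL1"
  uniform_bucket_level_access = true

  encryption {
    default_kms_key_name = "projects/example-governance-project/locations/us-central1/keyRings/dwh-keyring/cryptoKeys/ingestion-key"
  }
}

# Topic for streaming ingestion, with message-level CMEK encryption.
resource "google_pubsub_topic" "stream_landing" {
  project      = "example-data-ingestion-project"
  name         = "example-stream-landing-topic"
  kms_key_name = "projects/example-governance-project/locations/us-central1/keyRings/dwh-keyring/cryptoKeys/ingestion-key"
}
```

In both cases the Cloud KMS service agent for the ingestion project also needs the `roles/cloudkms.cryptoKeyEncrypterDecrypter` role on the key.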

Network and firewall rules

Virtual Private Cloud (VPC) firewall rules control the flow of data into the perimeters. You create firewall rules that deny all egress, except for connections on TCP port 443 to the restricted Google APIs domain names. Restricting egress in this way has the following benefits:

  • It helps reduce your network attack surface by using Private Google Access when workloads communicate with Google APIs and services.
  • It ensures that you only use services that support VPC Service Controls.

For more information, see Configuring Private Google Access.
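The egress posture described above can be sketched as a pair of firewall rules. This is an illustration, not the blueprint's dwh-networking module; project and network names are placeholders, and the `199.36.153.4/30` range is the documented VIP range for restricted Google API access:

```hcl
# Low-priority rule that denies all egress by default.
resource "google_compute_firewall" "deny_all_egress" {
  project            = "example-data-ingestion-project"
  name               = "deny-all-egress"
  network            = "example-vpc"
  direction          = "EGRESS"
  priority           = 65534
  destination_ranges = ["0.0.0.0/0"]

  deny {
    protocol = "all"
  }
}

# Higher-priority rule that allows TCP 443 to the restricted Google APIs VIP.
resource "google_compute_firewall" "allow_restricted_apis_egress" {
  project            = "example-data-ingestion-project"
  name               = "allow-restricted-google-apis"
  network            = "example-vpc"
  direction          = "EGRESS"
  priority           = 1000
  destination_ranges = ["199.36.153.4/30"]

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }
}
```

Because the allow rule has a lower priority number (higher precedence) than the deny rule, only traffic to the restricted API range on port 443 leaves the network.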

You must configure separate subnets for each Dataflow job. Separate subnets ensure that data that is being de-identified is properly separated from data that is being re-identified.
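A sketch of the separate subnets, assuming hypothetical network names and CIDR ranges, might look like this:

```hcl
# Subnet for the de-identification Dataflow job (placeholder values).
resource "google_compute_subnetwork" "deidentify" {
  project                  = "example-data-ingestion-project"
  name                     = "dataflow-deidentify-subnet"
  region                   = "us-central1"
  network                  = "example-vpc"
  ip_cidr_range            = "10.0.1.0/24"
  private_ip_google_access = true
}

# Subnet for the re-identification Dataflow job, in a separate network.
resource "google_compute_subnetwork" "reidentify" {
  project                  = "example-confidential-data-project"
  name                     = "dataflow-reidentify-subnet"
  region                   = "us-central1"
  network                  = "example-confidential-vpc"
  ip_cidr_range            = "10.0.2.0/24"
  private_ip_google_access = true
}
```

Enabling `private_ip_google_access` lets the Dataflow workers, which have no external IP addresses, reach Google APIs over Private Google Access.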

The data pipeline requires you to open TCP ports in the firewall, as defined in the file in the dwh-networking module repository. For more information, see Configuring internet access and firewall rules.

To deny resources the ability to use external IP addresses, the compute.vmExternalIpAccess organization policy is set to deny all.
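This organization policy can be set in Terraform as follows (the organization ID is a placeholder):

```hcl
# Deny external IP addresses for all VM instances in the organization.
resource "google_organization_policy" "no_external_ips" {
  org_id     = "123456789012" # placeholder organization ID
  constraint = "constraints/compute.vmExternalIpAccess"

  list_policy {
    deny {
      all = true
    }
  }
}
```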

Perimeter controls

As shown in the architecture diagram, you place the resources for the confidential data warehouse into separate perimeters. To enable services in different perimeters to share data, you create perimeter bridges. Perimeter bridges let protected services make requests for resources outside of their perimeter. These bridges make the following connections:

  • They connect the data ingestion project to the governance project so that de-identification can take place during ingestion.

  • They connect the non-confidential data project and the confidential data project so that confidential data can be re-identified when a data analyst requests it.

  • They connect the confidential project to the data governance project so that re-identification can take place when a data analyst requests it.

In addition to perimeter bridges, you use egress rules to let resources protected by service perimeters access resources that are outside the perimeter. In this solution, you configure egress rules to obtain the external Dataflow Flex Template jobs that are located in Cloud Storage in an external project. For more information, see Access a Google Cloud resource outside the perimeter.
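A perimeter with such an egress rule can be sketched in Terraform. The access policy number, project numbers, and service list below are hypothetical, and a real perimeter would restrict more services:

```hcl
# Hypothetical perimeter around the confidential data project.
resource "google_access_context_manager_service_perimeter" "confidential" {
  parent = "accessPolicies/0123456789" # placeholder access policy
  name   = "accessPolicies/0123456789/servicePerimeters/confidential_perimeter"
  title  = "confidential_perimeter"

  status {
    restricted_services = [
      "bigquery.googleapis.com",
      "storage.googleapis.com",
    ]

    # Egress rule that lets workloads in the perimeter fetch the Dataflow
    # Flex Template from a Cloud Storage bucket in an external project.
    egress_policies {
      egress_from {
        identity_type = "ANY_IDENTITY"
      }
      egress_to {
        resources = ["projects/987654321098"] # external template project number
        operations {
          service_name = "storage.googleapis.com"
          method_selectors {
            method = "google.storage.objects.get"
          }
        }
      }
    }
  }
}
```

Scoping the egress rule to a single method on a single service keeps the perimeter's attack surface as small as the Flex Template workflow allows.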

Access policy

To help ensure that only specific identities (users or service accounts) can access resources and data, you assign IAM roles to groups.

To help ensure that only specific sources can access your projects, you enable an access policy for your Google organization. We recommend that you create an access policy that specifies the allowed IP address range for requests and only allows requests from specific users or service accounts. For more information, see Access level attributes.
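Such an access level can be sketched in Terraform. The access policy number, IP range, and user below are placeholders for illustration:

```hcl
# Hypothetical access level that allows requests only from a corporate
# IP range and only from named identities.
resource "google_access_context_manager_access_level" "trusted_access" {
  parent = "accessPolicies/0123456789" # placeholder access policy
  name   = "accessPolicies/0123456789/accessLevels/trusted_access"
  title  = "trusted_access"

  basic {
    conditions {
      ip_subnetworks = ["203.0.113.0/24"] # example corporate range
      members        = ["user:analyst@example.com"]
    }
  }
}
```

The access level is then attached to the service perimeters so that requests from outside the allowed range or identity set are rejected before they reach the protected services.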

Key management and encryption for ingestion

Both ingestion options use Cloud HSM to manage the CMEK. You use the CMEK keys to help protect your data during ingestion. Sensitive Data Protect