Confidential Space provides an isolated environment to operate on sensitive data
from multiple parties, so the owners of that data can keep it confidential.
Sensitive data might include personally identifiable information (PII),
protected health information (PHI), intellectual property, cryptographic
secrets, and machine learning (ML) models, among other types.
You might use Confidential Space to operate on sensitive data that's only visible
to its original owners and a mutually agreed-upon workload. Alternatively, you
could use it to offer end customers stronger data privacy, as the operator or
owner of a Confidential Space environment can't access the data that is being
processed.
Confidential Space uses a trusted execution environment (TEE) that releases
your secrets only to authorized workloads. An attestation process and
hardened OS image help protect the workload and the data that the workload
processes from an operator.
Confidential Space components
A Confidential Space system has three core components:
A workload: A containerized image containing code that processes the
protected resources. This is run on top of a
Confidential Space image, a
hardened OS based on
Container-Optimized OS.
This in turn runs on a Confidential VM, a cloud-based TEE
that offers hardware isolation and remote attestation capabilities. The
workload is typically located in a separate project from the protected
resources.
The attestation service: A remote attestation verifier that returns
attestation evidence in the form of
OpenID Connect
(OIDC) tokens. The tokens contain identification attributes for the workload.
The attestation service runs in the same region that the workload is running
in.
Protected resources: Managed resources that are protected by an
authentication and authorization system. If the resources are in Google Cloud,
they can be managed cloud resources such as Cloud Key Management Service (Cloud KMS)
keys or Cloud Storage buckets. If the resources aren't in Google Cloud (for
example, in an on-premises environment or in another cloud), then they can be
any resource.
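To make the attestation service's role concrete, the following is an illustrative sketch of a decoded OIDC attestation token payload and a policy gate a relying party might apply. The claim names are loosely modeled on Confidential Space attestation assertions, and the `is_authorized` function and digest values are hypothetical; real tokens are signed JWTs whose signatures must be verified before any claim is trusted.

```python
# Illustrative only: a decoded attestation token payload. Claim names are
# modeled loosely on Confidential Space attestation assertions.
claims = {
    "iss": "https://confidentialcomputing.googleapis.com",
    "swname": "CONFIDENTIAL_SPACE",  # identity of the hardened OS image
    "hwmodel": "GCP_AMD_SEV",        # TEE hardware platform
    "submods": {
        "container": {
            # Digest of the workload image that is actually running.
            "image_digest": "sha256:0a1b2c...",
        }
    },
}

def is_authorized(claims: dict, expected_digest: str) -> bool:
    """Hypothetical gate: release resources only to the agreed workload."""
    container = claims.get("submods", {}).get("container", {})
    return (
        claims.get("swname") == "CONFIDENTIAL_SPACE"
        and container.get("image_digest") == expected_digest
    )

print(is_authorized(claims, "sha256:0a1b2c..."))  # True
```

An unexpected image digest, or a token not produced by the Confidential Space image, fails the gate, which is the behavior the real authorization system enforces.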
Confidential Space roles
The components in a Confidential Space system are managed by people with three
distinct roles:
Data collaborators: The people or organizations who own the protected
resources being operated on by the workload. Data collaborators can access
their own data, set permissions on that data, and, depending on the workload's
output, might access results based on that data.
Data collaborators can't access each other's data or modify the workload code.
Workload author: The person who creates the application that accesses and
processes the confidential data. They add the application to a containerized
image, for example, using Docker, and upload the
image to a repository such as Artifact Registry.
The workload author has no access to the data or the results, and can't
control access to them either.
Workload operator: The person who runs the workload. The workload operator
typically has full administrative privileges on the project they run the
workload in. For example, they can manage resources in their project (such as
Compute Engine instances, disks, and networking rules) and can interact with any
Google Cloud API that acts on them.
The workload operator has no access to the data, and can't control access to
it either. They can't influence or modify the workload code or execution
environment.
See Deploy workloads to learn more
about the workload operator's role.
A person can assume one or more of these roles. For the highest security,
Confidential Space supports a trust model where data collaborators, workload
authors, and workload operators are separate, mutually distrusting parties. All
roles must collaborate with each other to get the results they need.
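The separation of duties described above can be summarized in a minimal sketch. The role and capability names here are chosen purely for illustration; the point is that no single role holds both access to the confidential data and control over the code that processes it.

```python
# Minimal sketch of the mutually distrusting trust model. Role and
# capability names are illustrative, not part of any API.
CAPABILITIES = {
    "data_collaborator": {"read_own_data", "set_data_permissions"},
    "workload_author":   {"write_workload_code"},
    "workload_operator": {"run_workload", "manage_project_resources"},
}

# No single role can both read confidential data and modify the workload
# code; the parties must cooperate to get results.
for role, caps in CAPABILITIES.items():
    assert not {"read_own_data", "write_workload_code"} <= caps, role

print("separation of duties holds")
```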
An example Confidential Space flow
Confidential Space makes use of multiple Google Cloud services to help ensure
that private information is operated on confidentially. In general, setting up
Confidential Space might look similar to the following process:
Multiple data collaborators store encrypted confidential data in their own
isolated Google Cloud projects, often in different organizations. They want
to compare and process that data without revealing it to each other or
external parties. The encrypted data might live in
Cloud Storage,
BigQuery, or another service.
The data collaborators create mock, non-confidential data for a test
workload to operate on. This data might be different files, or live in a
different place like a separate Cloud Storage bucket.
The data collaborators create a
workload identity pool (WIP).
Later, a workload running in a workload operator's separate, isolated
project can use that WIP to access the collaborators' confidential data.
The workload author writes code to process the confidential data. In a
project separate from the data collaborators and workload operator, they
build a containerized image with Docker and upload it to
Artifact Registry.
The workload operator creates a service account in an isolated project that
has read access to where the data collaborators' encrypted confidential data
is stored, and write access to a location (for example, a
Cloud Storage bucket) where the workload outputs the result of operating on
the decrypted data. Later, this service account is attached to a
Confidential VM that runs the workload.
The data collaborators add a
provider to their
workload identity pools. They add details to the provider like the
attestation service that must be used,
attribute mappings
to create an audit trail in logs, and
attribute conditions.
These details verify the
assertions made by
the Confidential Space image, the workload container, and the workload VM
instance. If the workload passes this verification, it's allowed to access
and decrypt the confidential data.
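In Google Cloud, the provider's attribute mappings and attribute conditions are written in CEL; the sketch below models the same ideas in plain Python for illustration. The assertion values and the pinned digest are placeholders, and the mapping keys mirror the `google.subject`/`attribute.*` naming convention used by workload identity federation.

```python
# Hedged sketch of what a workload identity pool provider expresses.
# Real mappings and conditions are CEL expressions, not Python.
assertion = {
    "sub": "example-vm-instance-identity",
    "swname": "CONFIDENTIAL_SPACE",
    "submods": {"container": {"image_digest": "sha256:0a1b2c..."}},
}

# Attribute mapping: copy selected token claims into named attributes,
# which creates an audit trail in logs.
attributes = {
    "google.subject": assertion["sub"],
    "attribute.image_digest": assertion["submods"]["container"]["image_digest"],
}

# Attribute condition: the Python analogue of a CEL expression such as
# assertion.swname == 'CONFIDENTIAL_SPACE', plus pinning the workload image.
condition_passes = (
    assertion["swname"] == "CONFIDENTIAL_SPACE"
    and attributes["attribute.image_digest"] == "sha256:0a1b2c..."
)
print(condition_passes)  # True
```

Only when the condition passes does the provider treat the workload's attestation token as a valid credential for the pool.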
The workload is tested on the non-confidential data by starting a
Confidential VM instance in the workload operator's project. The VM is based on
a debug version of the Confidential Space image, which loads the containerized
workload.
After the VM's attestations are verified and the workload passes the data
collaborators' conditions, the service account attached to the VM is allowed
to access the confidential data, process it, and output a result.
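Under the hood, this access follows the standard workload identity federation exchange: the VM's attestation token is traded at the Security Token Service for a short-lived access token. The sketch below builds the shape of that request without sending it; the field names follow the STS token-exchange REST API, while the project number, pool, provider, and token values are placeholders.

```python
# Conceptual sketch of the credential exchange performed on the VM's
# behalf. No network call is made here; values are placeholders.
attestation_token = "<signed OIDC attestation token>"

sts_request = {
    "audience": (
        "//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/"
        "workloadIdentityPools/POOL_ID/providers/PROVIDER_ID"
    ),
    "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
    "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
    "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
    "subjectToken": attestation_token,
}

# The response to this exchange carries a short-lived access token that
# the attached service account uses to read the collaborators' data and
# write the result.
```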
After monitoring and debugging are
complete, the workload is updated for production use. The data collaborators
update and lock down their workload identity providers further if required,
and they might choose to test the production workload on non-confidential
data first.
All parties sign off on the production workload, and it's ready to begin
working on confidential data.
Requirements
Confidential Space requires Confidential VM to work. Confidential VM instances must use
AMD SEV, Intel TDX, or Intel TDX with NVIDIA Confidential Computing (Preview) as the
Confidential Computing technology. See
Supported configurations to learn
about hardware configuration options and the locations where Confidential VM
instances can be created.
Last updated 2025-08-25 UTC.