Access control with IAM

This page describes permissions to control access to Container Registry.

After you have configured permissions, you can then configure authentication for Docker clients that you use to push and pull images.

If you use Artifact Analysis to work with container metadata, such as vulnerabilities found in images, see the Artifact Analysis documentation for information about granting access to view or manage metadata.

Before you begin

Verify that you have permissions to manage users. You must have permissions in one of the following roles:

  • Project IAM Admin (roles/resourcemanager.projectIamAdmin)
  • Security Admin (roles/iam.securityAdmin)

As an alternative to granting these roles, you can use a custom role or predefined role with the same permissions.
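
If you are not sure which roles you have, one way to check is to list your bindings in the project's IAM policy with the gcloud CLI. The following is a minimal sketch that assumes a hypothetical project ID my-project and the account you@example.com; roles inherited from groups, folders, or the organization do not appear in this output.

gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:user:you@example.com" \
    --format="table(bindings.role)"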

Permissions and roles

All users, service accounts, and other identities that interact with Container Registry must have the appropriate Identity and Access Management (IAM) permissions for Cloud Storage.

  • Google Cloud services that typically access Container Registry are configured with default permissions to registries in the same Google Cloud project. If the default permissions don't meet your needs, you must configure the appropriate permissions.
  • For other identities, you must configure the required permissions.

You control access to Container Registry hosts with Cloud Storage permissions. The following table lists the Cloud Storage roles that have the permissions required by Container Registry.

Some additional permissions are required to view Container Registry images in the Google Cloud console. See Common permissions required for using the Cloud console.

Required access: Pull images (read only) from an existing registry
Role: Storage Object Viewer (roles/storage.objectViewer)
Where to grant: Grant the role on the registry storage bucket.

Required access: Push (write) images to and pull (read) images from an existing registry host in a project
Role: Storage Legacy Bucket Writer (roles/storage.legacyBucketWriter)
Where to grant: Grant the role on the registry storage bucket. This role is only available at the bucket level; you cannot grant it at the project level.

Required access: Add registry hosts to Google Cloud projects and create the associated storage buckets
Role: Storage Admin (roles/storage.admin)
Where to grant: Grant the role at the project level.

Pushing images requires object read and write permissions as well as the storage.buckets.get permission. The Storage Legacy Bucket Writer role includes the required permissions in a single Cloud Storage role but does not grant full control over storage buckets and objects.
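
If you want to confirm exactly which permissions a predefined role grants before assigning it, you can describe the role with the gcloud CLI. For example, the following sketch lists the permissions in the Storage Legacy Bucket Writer role; the includedPermissions field in the output enumerates them.

gcloud iam roles describe roles/storage.legacyBucketWriter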

The Storage Admin role grants full control over storage buckets and objects. If you grant this permission at the project level, the principal has access to all storage buckets in the project, including buckets that are not used by Container Registry. Carefully consider which principals require this role.

  • By default, the Cloud Build service account has permissions in the Storage Admin role. This service account can therefore add registries to its parent project with the first push and push images to existing registries in its parent project.
  • If you use Docker or other tools to create and push images to a registry, consider adding registries to your project using an account with the more permissive Storage Admin role, and then grant Storage Legacy Bucket Writer or Storage Object Viewer roles to other accounts that need to push or pull images.

For more information about Cloud Storage roles and permissions, see the Cloud Storage documentation.

Grant IAM permissions

Container Registry uses Cloud Storage buckets as the underlying storage for container images. You control access to your images by granting permissions to the bucket for a registry.

The first image push to a hostname adds the registry host and its storage bucket to a project. For example, the first push to gcr.io/my-project adds the gcr.io registry host to the project with the project ID my-project and creates a storage bucket for the registry. The bucket name has one of the following formats:

  • artifacts.PROJECT-ID.appspot.com for images stored on the host gcr.io
  • STORAGE-REGION.artifacts.PROJECT-ID.appspot.com for images stored on other registry hosts

To successfully perform this first image push, the account performing the push must have permissions in the Storage Admin role.
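
As a sketch, the first push with Docker can look like the following, assuming a hypothetical local image named my-image, the project ID my-project, and a Docker client that is already authenticated to gcr.io:

docker tag my-image gcr.io/my-project/my-image:tag1
docker push gcr.io/my-project/my-image:tag1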

After the initial image push to a registry host, you then grant permissions on the registry storage bucket to control access to images in the registry:

  • Storage Legacy Bucket Writer to push and pull
  • Storage Object Viewer to pull only

You can grant permission for a bucket using the Google Cloud console or the Google Cloud CLI.

Limitations and restrictions

You can only grant permissions at the storage bucket level for Container Registry hosts.

  • Container Registry ignores permissions set on individual objects within a Cloud Storage bucket.
  • You cannot grant permissions on repositories within a registry. If you need more granular access control, Artifact Registry provides repository-level access control and might better suit your needs.
  • If you enable uniform bucket-level access for any Container Registry storage bucket, you must explicitly grant permissions to all users and service accounts that access your registries. In this case, the Owner and Editor roles by themselves might not grant the required permissions.
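
If you are not sure whether uniform bucket-level access is enabled on a registry bucket, you can describe the bucket and look for the uniform bucket-level access setting in the output. A minimal sketch, assuming the gcr.io bucket of a hypothetical project my-project:

gcloud storage buckets describe gs://artifacts.my-project.appspot.com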

Grant permissions

  1. If the registry host does not yet exist in the project, an account with permissions in the Storage Admin role must push the first image to the registry. This creates the storage bucket for the registry host.

    Cloud Build has the required permissions to perform the initial image push within the same project. If you are pushing images with another tool, verify the permissions for the Google Cloud account that you are using to authenticate with Container Registry.

    For more information on pushing the initial image with Docker, see Adding a registry.

  2. In the project with Container Registry, grant the appropriate permissions on the Cloud Storage bucket used by the registry host.

    Console

    1. Go to the Cloud Storage page in the Google Cloud console.
    2. Click the link artifacts.PROJECT-ID.appspot.com or STORAGE-REGION.artifacts.PROJECT-ID.appspot.com for the bucket.

      Replace PROJECT-ID with the Google Cloud project ID of the project that hosts Container Registry and STORAGE-REGION with the multi-region (asia, eu, or us) of the registry hosting the image.

    3. Select the Permissions tab.

    4. Click Add.

    5. In the Principals field, enter the email addresses of the accounts that require access, separated by commas. Each email address can be one of the following:

      • A Google account (for example, someone@example.com)
      • A Google group (for example, my-developer-team@googlegroups.com)
      • An IAM service account.

        See the list of Google Cloud services that typically access registries to find the email address of the associated service account. If the service is running in a different project than Container Registry, make sure that you use the email address of the service account in the other project.

    6. From the Select a role drop-down menu, select the Cloud Storage category, and then select the appropriate role:

      • Storage Object Viewer to pull images only
      • Storage Legacy Bucket Writer to push and pull images
    7. Click Add.

    gcloud

    1. Run the following command to list buckets in the project:

      gcloud storage ls
      

      The response looks like the following example:

      gs://[BUCKET_NAME1]/
      gs://[BUCKET_NAME2]/
      gs://[BUCKET_NAME3]/ ...
      

      Find the bucket for the registry host in the returned bucket list. The bucket that stores your images has a name in one of the following formats:

      • artifacts.PROJECT-ID.appspot.com for images stored on the host gcr.io
      • STORAGE-REGION.artifacts.PROJECT-ID.appspot.com for images stored on other registry hosts

      Where

      • PROJECT-ID is your Google Cloud project ID.
      • STORAGE-REGION is the location of the storage bucket:
        • us for registries in the host us.gcr.io
        • eu for registries in the host eu.gcr.io
        • asia for registries in the host asia.gcr.io
    2. Run the following command in your shell or terminal window:

      gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
      --member=TYPE:EMAIL-ADDRESS \
      --role=ROLE
      

      Where

      • BUCKET_NAME is the name of the Cloud Storage bucket in the form artifacts.PROJECT-ID.appspot.com or STORAGE-REGION.artifacts.PROJECT-ID.appspot.com
      • TYPE can be one of the following:
        • serviceAccount, if EMAIL-ADDRESS specifies a service account.
        • user, if the EMAIL-ADDRESS is a Google account.
        • group, if the EMAIL-ADDRESS is a Google group.
      • EMAIL-ADDRESS can be one of the following:

        • A Google account (for example, someone@example.com)
        • A Google group (for example, my-developer-team@googlegroups.com)
        • An IAM service account.

          See the list of Google Cloud services that typically access registries to find the email address of the associated service account. If the service is running in a different project than Container Registry, make sure that you use the email address of the service account in the other project.

      • ROLE is the Cloud Storage role to grant:

        • roles/storage.objectViewer to pull images
        • roles/storage.legacyBucketWriter to push and pull images

    For example, this command grants the service account my-account@my-project.iam.gserviceaccount.com permission to push and pull images in the bucket my-example-bucket:

    gcloud storage buckets add-iam-policy-binding gs://my-example-bucket \
      --member=serviceAccount:my-account@my-project.iam.gserviceaccount.com \
      --role=roles/storage.legacyBucketWriter
    

    The gcloud storage buckets add-iam-policy-binding command changes the IAM permissions of the storage bucket where the registry is hosted. Additional examples are in the gcloud CLI documentation.

  3. If you are configuring access for Compute Engine VMs or GKE nodes that will push images to Container Registry, see Configuring VMs and clusters for additional configuration steps.

Configure public access to images

Images in Container Registry are publicly accessible if the registry host's underlying storage bucket is publicly accessible. Within a project, all images stored in a registry host are either public or private; you cannot publicly serve only specific images from a host. If you have specific images that you want to make public:

  • Keep them in a separate registry host that you make public, or
  • Create a new project to hold publicly accessible images.

To serve container images publicly, make the underlying storage bucket publicly accessible by following these steps:

  1. Ensure you have pushed an image to Container Registry so that the underlying storage bucket exists.

  2. Find the name of the Cloud Storage bucket for that registry. To do so, list the buckets:

    gcloud storage ls
    

    Your Container Registry bucket URL will be listed as gs://artifacts.PROJECT-ID.appspot.com or gs://STORAGE-REGION.artifacts.PROJECT-ID.appspot.com, where:

    • PROJECT-ID is your Google Cloud project ID. Domain-scoped projects will have the domain name as part of the project ID.
    • STORAGE-REGION is the location of the storage bucket:
      • us for registries in the host us.gcr.io
      • eu for registries in the host eu.gcr.io
      • asia for registries in the host asia.gcr.io
  3. Make the registry's storage bucket publicly accessible by running the following command. This makes all images in the bucket publicly accessible.

    gcloud storage buckets add-iam-policy-binding gs://BUCKET-NAME \
        --member=allUsers --role=roles/storage.objectViewer
    

    where:

    • gs://BUCKET-NAME is the Container Registry's bucket URL
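
    To confirm that the bucket is now publicly readable, you can inspect its IAM policy and look for a binding that grants roles/storage.objectViewer to allUsers. A minimal sketch:

    gcloud storage buckets get-iam-policy gs://BUCKET-NAME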

Remove public access to images

Console

  1. Ensure you have pushed an image to Container Registry so that the underlying storage bucket exists.

  2. Open the Container Registry page in the Google Cloud console.

  3. On the left panel, click on Settings.

  4. On the Settings page under Public access, toggle the visibility to Private. This setting controls access to the underlying storage bucket.

gcloud storage

  1. Find the name of the Cloud Storage bucket for that registry. To do so, list the buckets:

    gcloud storage ls
    

    Your Container Registry bucket URL will be listed as gs://artifacts.PROJECT-ID.appspot.com or gs://STORAGE-REGION.artifacts.PROJECT-ID.appspot.com, where:

    • PROJECT-ID is your Google Cloud project ID. Domain-scoped projects will have the domain name as part of the project ID.
    • STORAGE-REGION is the location of the storage bucket:
      • us for registries in the host us.gcr.io
      • eu for registries in the host eu.gcr.io
      • asia for registries in the host asia.gcr.io
  2. To remove public access to your storage bucket, run the following command in your shell or terminal window:

    gcloud storage buckets remove-iam-policy-binding gs://BUCKET-NAME \
      --member=allUsers --role=roles/storage.objectViewer
    

    where:

    • BUCKET-NAME is the name of the desired bucket

Revoke permissions

Use the following steps to revoke IAM permissions.

Console

  1. Visit the Cloud Storage page in the Google Cloud console.
  2. Click the link artifacts.PROJECT-ID.appspot.com or STORAGE-REGION.artifacts.PROJECT-ID.appspot.com for the bucket. Here, PROJECT-ID is the Google Cloud project ID of the project that hosts Container Registry and STORAGE-REGION is the multi-region (asia, eu, or us) of the registry hosting the image.

  3. Select the Permissions tab.

  4. Click the trash icon next to any principal you wish to remove.

gcloud

Run the following command in your shell or terminal window:

gcloud storage buckets remove-iam-policy-binding gs://BUCKET-NAME \
    --member=PRINCIPAL --all

where:

  • BUCKET-NAME is the name of the desired bucket
  • PRINCIPAL can be one of the following:
    • user:EMAIL-ADDRESS for a Google account
    • serviceAccount:EMAIL-ADDRESS for an IAM service account
    • group:EMAIL-ADDRESS for a Google group.
    • allUsers for revoking public access
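
If you want to remove only a single role from a principal instead of all of that principal's roles on the bucket, you can specify the role explicitly. A sketch, assuming a hypothetical user and the gcr.io bucket of the project my-project:

gcloud storage buckets remove-iam-policy-binding gs://artifacts.my-project.appspot.com \
    --member=user:someone@example.com \
    --role=roles/storage.objectViewer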

Integrate with Google Cloud services

For most Google Cloud service accounts, configuring access to a registry only requires granting the appropriate IAM permissions.

Default permissions for Google Cloud services

Google Cloud services such as Cloud Build or Google Kubernetes Engine use a default service account or a service agent to interact with resources within the same project.

You must configure or modify permissions yourself if:

  • The Google Cloud service is in a different project than Container Registry.
  • The default permissions do not meet your needs. For example, the default Compute Engine service account has read-only access to storage in the same project. If you want to push an image from the VM to a registry, you must modify permissions for the VM service account or authenticate to the registry with an account that has write access to storage.
  • You are using a custom service account to interact with Container Registry

The following service accounts typically access Container Registry. The email address for the service account includes the Google Cloud project ID or project number of the project where the service is running.

Service: App Engine flexible environment
Service account: App Engine default service account
Email address: PROJECT-ID@appspot.gserviceaccount.com
Permissions: Editor role; can read and write to storage

Service: Compute Engine
Service account: Compute Engine default service account
Email address: PROJECT-NUMBER-compute@developer.gserviceaccount.com
Permissions: Editor role, limited to read-only access to storage

Service: Cloud Build
Service account: Cloud Build service account
Email address: PROJECT-NUMBER@cloudbuild.gserviceaccount.com
Permissions: Default permissions include creating storage buckets, and read and write access to storage

Service: Cloud Run
Service account: Compute Engine default service account (the default runtime service account for revisions)
Email address: PROJECT-NUMBER-compute@developer.gserviceaccount.com
Permissions: Editor role, limited to read-only access to storage

Service: GKE
Service account: Compute Engine default service account (the default service account for nodes)
Email address: PROJECT-NUMBER-compute@developer.gserviceaccount.com
Permissions: Editor role, limited to read-only access to storage
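
The PROJECT-NUMBER placeholders refer to the numeric project number, not the project ID. If you need it to identify one of these service accounts, you can look it up with the gcloud CLI; a sketch, assuming the project ID my-project:

gcloud projects describe my-project --format="value(projectNumber)"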

Configure VMs and clusters to push images

Compute Engine, and any Google Cloud service that uses Compute Engine, uses the Compute Engine default service account as its default identity.

Both IAM permissions and access scopes impact the ability of VMs to read and write to storage.

  • IAM permissions determine access to resources.
  • Access scopes determine the default OAuth scopes for requests made through the gcloud CLI and client libraries on a VM instance. As a result, access scopes can further limit access to API methods when authenticating with Application Default Credentials.
    • To pull a private image, the VM service account must have read permission to the image's storage bucket.
    • To push private images, the VM service account must have the read-write, cloud-platform, or full-control access scope to the image's storage bucket.

The Compute Engine default service account has the Editor role by default, which includes permissions to create and update resources for most Google Cloud services. However, whether a VM uses the default service account or a custom service account, the default access scope for storage buckets is read-only. This means that by default, VMs cannot push images.

If you only intend to deploy images to environments such as Compute Engine and GKE, then you do not need to modify the access scope. If you want to run applications in these environments that push images to the registry, you must perform additional configuration.
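
To see which access scopes a VM's service account currently has, you can describe the instance and inspect its serviceAccounts section. A minimal sketch, assuming a hypothetical VM named my-vm in the zone us-central1-a:

gcloud compute instances describe my-vm --zone=us-central1-a \
    --format="yaml(serviceAccounts)"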

The following setups require changes to either IAM permissions or access scope configuration.

  • Pushing images from a VM or cluster: the VM instance's service account must have the storage-rw scope instead of storage-ro.
  • The VM and Container Registry are in separate projects: you must grant the service account IAM permissions on the storage bucket used by Container Registry.
  • Running gcloud commands on VMs: the service account must have the cloud-platform scope. This scope grants permissions to push and pull images, as well as run gcloud commands.

Steps to configure scopes are in the following sections.

Configure scopes for VMs

To set access scopes when creating a VM, use the --scopes option.

gcloud compute instances create INSTANCE --scopes=SCOPE

Where

  • INSTANCE is the VM instance name.
  • SCOPE is the scope you want to configure for the VM service account:
    • Pull images: storage-ro
    • Pull and push images: storage-rw
    • Pull and push images, run gcloud commands: cloud-platform

To change scopes for an existing VM instance, set the access scope with the --scopes option:

  1. Stop the VM instance. See Stopping an instance.

  2. Change the access scope with the following command.

    gcloud compute instances set-service-account INSTANCE --scopes=SCOPE
    

    Where

    • INSTANCE is the VM instance name.
    • SCOPE is the scope you want to configure for the VM service account:
      • Pull images: storage-ro
      • Pull and push images: storage-rw
      • Pull and push images, run gcloud commands: cloud-platform
  3. Restart the VM instance. See Starting a stopped instance.

If you want to use a custom service account for VMs instead of the default service account, you can specify the service account and the access scopes to use when you create the VM or modify VM settings.
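
For example, the following sketch creates a VM that uses a hypothetical custom service account and has the scope required to push and pull images. The custom service account still needs the appropriate IAM role on the registry storage bucket, such as Storage Legacy Bucket Writer.

gcloud compute instances create my-instance \
    --service-account=my-builder@my-project.iam.gserviceaccount.com \
    --scopes=storage-rw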

Configure scopes for Google Kubernetes Engine clusters

By default, new GKE clusters are created with read-only permissions for Cloud Storage buckets.

To set the read-write storage scope when creating a Google Kubernetes Engine cluster, use the --scopes option. For example, the following command creates a cluster with the scopes bigquery, storage-rw, and compute-ro:

gcloud container clusters create example-cluster \
--scopes=bigquery,storage-rw,compute-ro

For more information about scopes you can set when creating a new cluster, refer to the documentation for the command gcloud container clusters create.
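
To check which scopes the nodes of an existing cluster were created with, you can describe the cluster and read the nodeConfig.oauthScopes field. A sketch, assuming a hypothetical zonal cluster named example-cluster in us-central1-a:

gcloud container clusters describe example-cluster --zone=us-central1-a \
    --format="value(nodeConfig.oauthScopes)"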

The Container Registry service account

The Container Registry Service Agent acts on behalf of Container Registry when interacting with Google Cloud services. The service agent has the minimum set of required permissions if you enabled the Container Registry API after October 5, 2020. The service agent previously had the Editor role. For more information about the service agent and modifying its permissions, see Container Registry service account.
