Set up the Connect gateway with third-party identities

This guide is for platform administrators who need to set up the Connect gateway in a project that contains users who don't have Google identities and don't belong to Google Workspace. In this guide, these identities are referred to as "third-party identities". Before reading this guide, you should be familiar with the concepts in the Connect gateway overview. To authorize individual Google accounts, see Setting up the Connect gateway. For Google Groups support, refer to Setting up the Connect gateway with Google Groups.

The setup in this guide lets users log in to fleet clusters using the Google Cloud CLI, the Connect gateway, and the Google Cloud console.

Supported cluster types

You can set up access control with third-party identities through the Connect gateway for the following registered cluster types:

If you need to upgrade on-premises clusters to use this feature, see Upgrading GKE Enterprise clusters on VMware and Upgrading GKE Enterprise clusters on bare metal.

If you have a use case for GKE cluster environments other than those listed above, contact Cloud Customer Care or the Connect gateway team.

How it works

As described in the overview, your users may have identity providers other than Google Workspace or Cloud Identity. With Workforce Identity Federation, users can use a third-party identity provider, such as Okta or Azure Active Directory, to access their clusters through the Connect gateway. Unlike Google accounts, third-party users are represented by an Identity and Access Management (IAM) principal that follows this format:

  • The WORKFORCE_POOL_ID is the name of the workforce pool that contains the relevant third-party identity provider.

  • The SUBJECT_VALUE is the mapping of the third-party identity to a Google subject.
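Putting the two placeholders together, a sketch of the user principal identifier, assuming the standard Workforce Identity Federation format (verify against the IAM documentation for your setup):

```
principal://iam.googleapis.com/locations/global/workforcePools/WORKFORCE_POOL_ID/subject/SUBJECT_VALUE
```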

For third-party groups, the IAM principal follows the format:


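For groups, a sketch of the principal identifier, again assuming the standard Workforce Identity Federation format:

```
principalSet://iam.googleapis.com/locations/global/workforcePools/WORKFORCE_POOL_ID/group/GROUP_ID
```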
The following diagram shows a typical flow for a third-party user authenticating to and running commands against a cluster with this service enabled. For this flow to be successful, a role-based access control (RBAC) policy needs to be applied on the cluster for either the user or a group.

For individual users, an RBAC policy that uses the full IAM principal name of the user must exist on the cluster.

If you're using group functionality, an RBAC policy that uses the full IAM principal name must exist on the cluster for a group that:

  1. Contains the user as a member.

  2. Is included in a mapping for an identity provider within a workforce pool in the user's Google Cloud organization.

Diagram showing the gateway third-party identity flow

  1. The user logs in to gcloud with their third-party identity, using the third-party browser-based sign-in flow. To use the cluster from the command line, the user gets the cluster's gateway kubeconfig as described in Using the Connect gateway.
  2. The user sends a request by running a kubectl command or opening the Google Kubernetes Engine Workloads or Object Browser pages in the Google Cloud console.
  3. The request is received by the Connect gateway, which handles the third-party authentication using Workforce Identity Federation.
  4. The Connect gateway performs an authorization check with IAM.
  5. The Connect service forwards the request to the Connect Agent running on the cluster. The request is accompanied by the user's credential information for use in authentication and authorization on the cluster.
  6. The Connect Agent forwards the request to the Kubernetes API server.
  7. The Kubernetes API server forwards the request to GKE Identity Service, which validates the request.
  8. GKE Identity Service returns the third-party user and group information to the Kubernetes API server. The Kubernetes API server can then use this information to authorize the request based on the cluster's configured RBAC policies.
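The first two steps of this flow can be sketched from the command line as follows; MEMBERSHIP_NAME and the login-config.json path are illustrative placeholders, not values from this guide:

```shell
# 1. Sign in with a third-party identity using the browser-based flow.
#    login-config.json is the workforce identity login configuration
#    created for your identity provider (path is illustrative).
gcloud auth login --login-config=login-config.json

# 2. Get a Connect gateway kubeconfig for the cluster's fleet membership.
gcloud container fleet memberships get-credentials MEMBERSHIP_NAME \
    --project=PROJECT_ID

# 3. Send requests through the gateway.
kubectl get namespaces
```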

Before you begin

  • Ensure that you have the following command line tools installed:

    • The latest version of the Google Cloud CLI, the command line tool for interacting with Google Cloud.
    • The Kubernetes command line tool, kubectl, for interacting with your clusters.

    If you are using Cloud Shell as your shell environment for interacting with Google Cloud, these tools are installed for you.

  • Ensure that you have initialized the gcloud CLI for use with your project.

  • This guide assumes that you have roles/owner in your project. If you are not a project owner, you may need additional permissions to perform some of the setup steps.

  • For clusters outside Google Cloud, GKE Identity Service needs to call Google APIs from your cluster to complete authentication. Check if your network policy requires outbound traffic to go through a proxy.

Set up third-party identity attribute mappings using Workforce Identity

Ensure there is a workforce pool and identity provider set up for your Google Cloud organization by following the instructions corresponding to your identity provider:

Enable APIs

To add the gateway to your project, enable the Connect gateway API and its required dependency APIs. If your users only want to authenticate to clusters using the Google Cloud console, you don't need to enable the Connect gateway API, but you do need to enable the remaining APIs.

gcloud services enable --project=PROJECT_ID \
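A hedged sketch of the full command, assuming the services commonly listed in the Connect gateway setup documentation (confirm the exact list for your cluster type):

```shell
gcloud services enable --project=PROJECT_ID \
    connectgateway.googleapis.com \
    gkehub.googleapis.com \
    gkeconnect.googleapis.com \
    cloudresourcemanager.googleapis.com
```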

Set up GKE Identity Service

Connect gateway's third-party identity support feature uses GKE Identity Service to get third-party group membership information from Google. You can find out more about GKE Identity Service in Introducing GKE Identity Service.

Ensure GKE Identity Service is installed

GKE Identity Service is installed by default on GKE clusters from version 1.7 onwards (though third-party identity support requires version 1.13 or higher). You can confirm that it is installed correctly on your cluster by running the following command:

kubectl --kubeconfig CLUSTER_KUBECONFIG get all -n anthos-identity-service

Replace CLUSTER_KUBECONFIG with the path to the cluster's kubeconfig.

Configure third-party identity support for groups

If your cluster or fleet is already configured for Google Groups support, there are no additional steps and you can skip to Grant IAM roles to third-party users and groups.

If you're using Google Distributed Cloud on VMware or bare metal, the way you set up GKE Identity Service determines how you need to configure the third-party groups feature.

If you're using GKE Identity Service for the first time, you can choose between configuring third-party groups support using Fleet APIs (recommended) or using kubectl.

If you're not a first-time user of GKE Identity Service, keep in mind the following:

  • If you have already set up GKE Identity Service for another identity provider at fleet level, the third-party groups feature is enabled for you by default. See the Fleet section below for more details and any additional setup you may require.
  • If you have already set up GKE Identity Service for another identity provider on a per-cluster basis, see the Kubectl section below for instructions to update your configuration for the third-party groups feature.

Fleet
You can use the Google Cloud console or command line to configure access to third-party groups using Fleet Feature APIs.

Console
If you have not previously set up GKE Identity Service for a fleet, follow the instructions in Configure clusters for GKE Identity Service.

Select clusters and update configuration

  1. In the Google Cloud console, go to the GKE Enterprise Features page.

    Go to GKE Enterprise Features

  2. In the Features table, click Details in the Identity Service row. Your project's cluster details are displayed.

  3. Click Update identity service to open the setup pane.

  4. Select the clusters you want to configure. You can choose individual clusters, or specify that you want all clusters to be configured with the same identity configuration.

  5. In the Configure Identity Providers section, you can choose to retain, add, update, or remove an identity provider.

  6. Click Continue to go to the next configuration step. If you've selected at least one eligible cluster for this setup, the Google Authentication section is displayed.

  7. Select Enable to enable Google authentication for the selected clusters. If you need to access the Google identity provider through a proxy, enter the Proxy details.

  8. Click Update Configuration. This applies the identity configuration on your selected clusters.

gcloud
If you have not previously set up GKE Identity Service for a fleet, follow the instructions in Configure clusters for GKE Identity Service. Specify only the following configuration in your auth-config.yaml file:

  - name: google-authentication-method
    google:
      disable: false

Configuring third-party groups access using a proxy

If you need to access the identity provider through a proxy, use a proxy field in your auth-config.yaml file. You might need to set this if, for example, your cluster is in a private network and needs to connect to a public identity provider. You must add this configuration even if you have already configured GKE Identity Service for another provider.

To configure the proxy, update the authentication section of your existing auth-config.yaml file as follows.

  - name: google-authentication-method
    google:
      disable: false
    proxy: PROXY_URL


  • disable (optional) denotes whether you want to opt out of the third-party groups feature for clusters. This value is set to false by default. To opt out of the feature, set it to true.

  • PROXY_URL (optional) is the proxy server address used to connect to the Google identity provider. For example: http://user:password@

Apply the configuration

To apply the configuration to a cluster, run the following command:

gcloud container fleet identity-service apply \
--membership=CLUSTER_NAME \


CLUSTER_NAME is your cluster's unique membership name within the fleet.
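A hedged sketch of the complete invocation, assuming the configuration from the previous section is saved as auth-config.yaml (the path is illustrative):

```shell
gcloud container fleet identity-service apply \
    --membership=CLUSTER_NAME \
    --config=/path/to/auth-config.yaml
```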

Once applied, this configuration is managed by the GKE Identity Service controller. Any local changes made to the GKE Identity Service client configuration are reconciled back by the controller to the configuration specified in this setup.

kubectl
To configure your cluster to use GKE Identity Service with the third-party groups feature, you need to update the cluster's GKE Identity Service ClientConfig. This is a Kubernetes custom resource definition (CRD) used for cluster configuration. Each GKE Enterprise cluster has a ClientConfig resource named default in the kube-public namespace that you update with your configuration details.

To edit the configuration, use the following command.

kubectl --kubeconfig CLUSTER_KUBECONFIG -n kube-public edit clientconfig default

If there are multiple contexts in the kubeconfig, the current context is used. You might need to reset the current context to the correct cluster before running the command.
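For example, you can inspect and switch contexts before editing the ClientConfig; CONTEXT_NAME is a placeholder for your cluster's context name:

```shell
# List available contexts and select the one for your cluster.
kubectl --kubeconfig CLUSTER_KUBECONFIG config get-contexts
kubectl --kubeconfig CLUSTER_KUBECONFIG config use-context CONTEXT_NAME
```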

Here's an example of how you can update the ClientConfig with a new authentication method having a configuration of type google to enable the third-party groups feature. If the internalServer field is empty, make sure it's set to https://kubernetes.default.svc, as shown below.

spec:
  authentication:
  - name: google-authentication-method
    google: {}
    proxy: PROXY_URL
  internalServer: https://kubernetes.default.svc


CLUSTER_IDENTIFIER (required) denotes the membership details of your cluster. You can retrieve your cluster's membership details using the command:

kubectl --kubeconfig CLUSTER_KUBECONFIG get memberships membership -o yaml


CLUSTER_KUBECONFIG is the path to the kubeconfig file for the cluster. In the response, refer to the field to retrieve the cluster's membership details.

Here's an example response showing a cluster's membership details:

id: //

which corresponds to the following format: //

Grant IAM roles to third-party users and groups

Third-party identities need the following additional Google Cloud roles to interact with connected clusters through the gateway:

  • roles/gkehub.gatewayAdmin. This role allows users to access the Connect gateway API.
    • If users only need read-only access to connected clusters, roles/gkehub.gatewayReader can be used instead.
    • If users need read/write access to connected clusters, roles/gkehub.gatewayEditor can be used instead.
  • roles/gkehub.viewer. This role allows users to view registered cluster memberships.

The following shows you how to add the necessary roles to individual identities and mapped groups:

Single identities

To grant the necessary roles to a single identity for project PROJECT_ID, run the following command:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --role=GATEWAY_ROLE \

gcloud projects add-iam-policy-binding PROJECT_ID \
    --role=roles/gkehub.viewer \


  • PROJECT_ID: the ID of the project.
  • GATEWAY_ROLE: one of roles/gkehub.gatewayAdmin, roles/gkehub.gatewayReader, or roles/gkehub.gatewayEditor.
  • WORKFORCE_POOL_ID: the workforce identity pool ID.
  • SUBJECT_VALUE: the user identity.
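Putting the placeholders together, a hedged sketch of the single-identity bindings, assuming the standard Workforce Identity Federation principal format:

```shell
gcloud projects add-iam-policy-binding PROJECT_ID \
    --role=GATEWAY_ROLE \
    --member="principal://iam.googleapis.com/locations/global/workforcePools/WORKFORCE_POOL_ID/subject/SUBJECT_VALUE"

gcloud projects add-iam-policy-binding PROJECT_ID \
    --role=roles/gkehub.viewer \
    --member="principal://iam.googleapis.com/locations/global/workforcePools/WORKFORCE_POOL_ID/subject/SUBJECT_VALUE"
```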

Groups
To grant the necessary roles to all identities within a specific group for project PROJECT_ID, run the following command:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --role=GATEWAY_ROLE \

gcloud projects add-iam-policy-binding PROJECT_ID \
    --role=roles/gkehub.viewer \


  • PROJECT_ID: the ID of the project.
  • GATEWAY_ROLE: one of roles/gkehub.gatewayAdmin, roles/gkehub.gatewayReader, or roles/gkehub.gatewayEditor.
  • WORKFORCE_POOL_ID: the workforce pool ID.
  • GROUP_ID: a group in the mapped google.groups claim.
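Similarly, a hedged sketch of the group bindings, assuming the standard principalSet format for workforce pool groups:

```shell
gcloud projects add-iam-policy-binding PROJECT_ID \
    --role=GATEWAY_ROLE \
    --member="principalSet://iam.googleapis.com/locations/global/workforcePools/WORKFORCE_POOL_ID/group/GROUP_ID"

gcloud projects add-iam-policy-binding PROJECT_ID \
    --role=roles/gkehub.viewer \
    --member="principalSet://iam.googleapis.com/locations/global/workforcePools/WORKFORCE_POOL_ID/group/GROUP_ID"
```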

Refer to the setup for your identity provider listed in Set up third-party identity attribute mappings using Workforce Identity for more customizations, such as specifying department attributes, when applying the RBAC policy.

You can find out more about granting IAM permissions and roles in Granting, changing, and revoking access to resources.

Configure role-based access control (RBAC) policies

Finally, each cluster's Kubernetes API server needs to be able to authorize kubectl commands that come through the gateway from your specified third-party users and groups. For each cluster, you need to add an RBAC permissions policy that specifies which permissions the subject has on the cluster.

The subjects in RBAC policies must use the same format as the IAM bindings, with third-party users starting with principal:// and third-party groups starting with principalSet://. If GKE Identity Service is not configured for the cluster, you need impersonation policies in addition to Roles/ClusterRoles for a third-party user. In that case, follow these RBAC setup steps, adding the third-party principal that starts with principal:// as the user.

The following example shows how to grant members of a third-party group cluster-admin permissions on a cluster where GKE Identity Service is configured. You can then save the policy file as /tmp/admin-permission.yaml and apply it to the cluster associated with the current context.

cat <<EOF > /tmp/admin-permission.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-cluster-admin-group
subjects:
- kind: Group
  name: "principalSet://"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF

# Apply the permission policy to the cluster.
kubectl apply --kubeconfig=KUBECONFIG_PATH -f /tmp/admin-permission.yaml

You can find out more about specifying RBAC permissions in Using RBAC authorization.

What's next?