Prepare to set up with the GKE Gateway API

The configuration described in this document is supported for Preview customers, but we do not recommend it for new Cloud Service Mesh users. For more information, see the Cloud Service Mesh overview.

This guide tells you how to prepare the environment for using the Google Kubernetes Engine Gateway API with Cloud Service Mesh. At a high level, you need to perform the following steps:

  1. Enable the required Google Cloud API services.
  2. Deploy a GKE cluster.
  3. Configure IAM permissions.
  4. Install the required custom resource definitions (CRDs).
  5. Register the cluster to a fleet.
  6. (Optional) Enable Multi-Cluster Service Discovery.
  7. Enable the service mesh.

If you are not using GKE, use the service routing APIs and create a Mesh resource.
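For non-GKE deployments, the Mesh resource can be sketched as follows. This is a sketch only: the mesh name `sidecar-mesh` is an illustrative placeholder, and the import command is shown in a comment because it requires the Network Services API to be enabled in your project.

```shell
# Sketch: write a minimal Mesh resource spec. "sidecar-mesh" is a
# placeholder name; choose your own.
cat > mesh.yaml <<'EOF'
name: sidecar-mesh
EOF

# The resource would then be imported with, for example:
#   gcloud network-services meshes import sidecar-mesh \
#       --source=mesh.yaml --location=global
cat mesh.yaml
```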

Before you begin

Make sure that the components of your deployment meet these requirements:

  • GKE must be version 1.20 or later.
  • Only data planes with the xDS version 3 API and later are supported.
    • Minimum Envoy version of 1.20.0
    • Minimum gRPC bootstrap generator version of v0.14.0
  • GKE clusters must be in VPC-native (Alias IP) mode.
  • Self-managed Kubernetes clusters on Compute Engine (as opposed to GKE) are not supported.
  • Any additional restrictions listed for the Gateway functionality on GKE apply to the Cloud Service Mesh integration with the GKE Gateway API.
  • The service account for your GKE nodes and Pods must have permission to access the Traffic Director API. For more information on the required permissions, see Enabling the service account to access the Traffic Director API.
  • Per-project resource usage and backend service quota limitations apply.
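As a quick sanity check against the minimum versions listed above, you can compare a deployed version string with the required minimum using a version-aware sort. This is a sketch: the `1.22.4` value is an example stand-in for your deployed Envoy version, and `sort -V` requires GNU coreutils.

```shell
MIN_VERSION="1.20.0"
ENVOY_VERSION="1.22.4"   # example value; substitute your deployed Envoy version

# If the minimum sorts first (or they are equal), the deployed version
# satisfies the requirement.
LOWEST="$(printf '%s\n%s\n' "$MIN_VERSION" "$ENVOY_VERSION" | sort -V | head -n1)"
if [ "$LOWEST" = "$MIN_VERSION" ]; then
  echo "Envoy ${ENVOY_VERSION} meets the ${MIN_VERSION} minimum"
fi
```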

Enable the required Google Cloud API services

  1. Run the following command to enable the required APIs, if they are not already enabled in your project:

    gcloud services enable --project=PROJECT_ID \
      container.googleapis.com \
      gkehub.googleapis.com \
      multiclusteringress.googleapis.com \
      trafficdirector.googleapis.com
  2. If you plan to include more than one cluster in your fleet, enable the multiclusterservicediscovery API:

    gcloud services enable multiclusterservicediscovery.googleapis.com \
      --project=PROJECT_ID

Deploy a GKE cluster

Use these instructions to deploy a GKE cluster.

  1. Create a GKE cluster called gke-1 in the us-west1-a zone:

    gcloud container clusters create gke-1 \
      --zone=us-west1-a \
      --enable-ip-alias \
      --workload-pool=PROJECT_ID.svc.id.goog \
      --scopes=https://www.googleapis.com/auth/cloud-platform \
      --enable-mesh-certificates \
      --release-channel=regular
    • --enable-ip-alias: This flag creates a VPC-native cluster and makes the Pods' IP addresses routable within the VPC network.
    • --workload-pool: This flag lets your cluster participate in the project's workload identity pool.
    • --scopes: This flag specifies the OAuth scopes assigned to the cluster nodes.
    • --release-channel: This flag designates the regular channel.
    • --enable-mesh-certificates: This flag enables Cloud Service Mesh's automatic mTLS feature, should it become available in the future.
  2. Get the cluster credentials:

    gcloud container clusters get-credentials gke-1 --zone=us-west1-a
  3. Rename the cluster context:

    kubectl config rename-context gke_PROJECT_ID_us-west1-a_gke-1 gke-1
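The default context name that `get-credentials` creates follows the pattern `gke_PROJECT-ID_ZONE_CLUSTER-NAME`, which is why the rename command refers to `gke_PROJECT_ID_us-west1-a_gke-1`. A sketch of that derivation, with `my-project` as a placeholder project ID:

```shell
PROJECT_ID="my-project"   # placeholder; use your project ID
ZONE="us-west1-a"
CLUSTER="gke-1"

# gcloud names the kubeconfig context gke_<project>_<zone>_<cluster>.
DEFAULT_CONTEXT="gke_${PROJECT_ID}_${ZONE}_${CLUSTER}"
echo "$DEFAULT_CONTEXT"
```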

Configure the IAM permissions for the data plane

For this demonstration deployment, you grant the Cloud Service Mesh client role roles/trafficdirector.client to all authenticated users, including all service accounts, in the GKE cluster. This IAM role is required to authorize Cloud Service Mesh clients in the data plane, such as Envoys, to receive configuration from Cloud Service Mesh.

If you do not want to grant the client role to all authenticated users and prefer to restrict the role to service accounts, see the GKE workload identity guide to set up a specialized Kubernetes service account with the role roles/trafficdirector.client for your services.

  1. Grant the client role to the service accounts:

    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member "group:PROJECT_ID.svc.id.goog:/allAuthenticatedUsers/" \
      --role "roles/trafficdirector.client"
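If you instead restrict the role to a single Kubernetes service account, as described above, the IAM member for a workload identity pool follows the pattern `serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]`. A sketch with placeholder names:

```shell
PROJECT_ID="my-project"   # placeholder project ID
NAMESPACE="default"       # placeholder Kubernetes namespace
KSA_NAME="my-ksa"         # placeholder Kubernetes service account

# Workload identity member format for a single Kubernetes service account.
MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${NAMESPACE}/${KSA_NAME}]"
echo "$MEMBER"
# This member would then be granted roles/trafficdirector.client with
# gcloud projects add-iam-policy-binding, in place of the group binding above.
```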

Install the required custom resource definitions

  1. Install the custom resource definitions (CRDs) required for using the Gateway API with Cloud Service Mesh:

    kubectl kustomize "" \
    | kubectl apply -f -
  2. Verify that required CRDs are installed automatically in the cluster by running the following command:

    kubectl get crds

    The output lists the Gateway API CRDs, along with other CRDs not related to the Gateway API, each with its creation date in a CREATED AT column.

The required custom resources are installed in the previous step.

The CRDs in the net.gke.io API group are specific to GKE. These resources are not part of the OSS Gateway API implementation, which uses the gateway.networking.k8s.io API group.

Register the cluster to a fleet

After the cluster is successfully created, you must register the cluster to a fleet. Registering your cluster to a fleet lets you selectively enable features on the registered cluster.

  1. Register the cluster to the fleet:

    gcloud container hub memberships register gke-1 \
      --gke-cluster us-west1-a/gke-1 \
      --location global \
      --enable-workload-identity
  2. Confirm that the cluster is registered with the fleet:

    gcloud container hub memberships list --project=PROJECT_ID

    The output is similar to the following:

    NAME   EXTERNAL_ID
    gke-1  657e835d-3b6b-4bc5-9283-99d2da8c2e1b

(Optional) Enable Multi-Cluster Service Discovery

The Multi-Cluster Service Discovery feature lets you export cluster-local services to all clusters registered to the fleet. This step is needed only if you plan to include more than one cluster in your fleet.
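Once enabled, a cluster-local Service is exported to the fleet by creating a ServiceExport object with the same name and namespace as the Service. A sketch, where the `store` namespace and Service name are placeholders:

```shell
# Sketch: a ServiceExport marks the same-named Service for export to the
# fleet. "store" (namespace and name) is a placeholder for your own Service.
cat > service-export.yaml <<'EOF'
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  namespace: store
  name: store
EOF

# Apply with: kubectl apply -f service-export.yaml
cat service-export.yaml
```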

  1. Enable Multi-Cluster Service Discovery:

    gcloud container hub multi-cluster-services enable \
      --project PROJECT_ID
  2. Grant the Identity and Access Management (IAM) role required for Multi-Cluster Service Discovery:

    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member "serviceAccount:PROJECT_ID.svc.id.goog[gke-mcs/gke-mcs-importer]" \
      --role "roles/compute.networkViewer"
  3. Confirm that Multi-Cluster Service Discovery is enabled for the registered cluster. It might take several minutes for all of the clusters to be displayed:

    gcloud container hub multi-cluster-services describe --project=PROJECT_ID

    You should see the memberships for gke-1, which are similar to the following:

    createTime: '2021-04-02T19:34:57.832055223Z'
    membershipStates:
      projects/PROJECT_NUM/locations/global/memberships/gke-1:
        state:
          code: OK
          description: Firewall successfully updated
          updateTime: '2021-05-27T11:03:07.770208064Z'
    name: projects/PROJECT_NUM/locations/global/features/multiclusterservicediscovery
    resourceState:
      state: ACTIVE
    spec: {}
    updateTime: '2021-04-02T19:34:58.983512446Z'

Enable the Cloud Service Mesh GKE service mesh

In this section, you enable the service mesh.

  1. Enable the Cloud Service Mesh GKE service mesh on the cluster that you registered with your fleet:

    gcloud container hub ingress enable \
      --config-membership=projects/PROJECT_ID/locations/global/memberships/gke-1
  2. Confirm that the feature is enabled:

    gcloud container hub ingress describe --project=PROJECT_ID

    You should see output similar to the following:

    createTime: '2021-05-26T13:27:37.460383111Z'
    membershipStates:
      projects/PROJECT_ID/locations/global/memberships/gke-1:
        state:
          code: OK
          updateTime: '2021-05-27T15:08:19.397896080Z'
    resourceState:
      state: ACTIVE
    spec:
      multiclusteringress:
        configMembership: projects/PROJECT_ID/locations/global/memberships/gke-1
    state:
      state:
        code: OK
        description: Ready to use
        updateTime: '2021-05-26T13:27:37.899549111Z'
    updateTime: '2021-05-27T15:08:19.397895711Z'
  3. Grant the following Identity and Access Management (IAM) roles, which are required by the Gateway API controller:

    export PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")
    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member "serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-multiclusteringress.iam.gserviceaccount.com" \
      --role "roles/container.developer"
    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member "serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-multiclusteringress.iam.gserviceaccount.com" \
      --role "roles/compute.networkAdmin"
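Because the same service agent needs both roles, the two bindings can also be scripted in a loop. This is a sketch that assumes the fleet Ingress/Gateway controller's service agent address (`service-PROJECT_NUMBER@gcp-sa-multiclusteringress.iam.gserviceaccount.com`); the gcloud invocation is shown in a comment.

```shell
PROJECT_NUMBER="123456789012"   # placeholder; use your project number
# Assumed service agent for the fleet Ingress/Gateway controller.
GATEWAY_SA="service-${PROJECT_NUMBER}@gcp-sa-multiclusteringress.iam.gserviceaccount.com"

for ROLE in roles/container.developer roles/compute.networkAdmin; do
  # Each iteration would run:
  #   gcloud projects add-iam-policy-binding PROJECT_ID \
  #       --member "serviceAccount:${GATEWAY_SA}" --role "${ROLE}"
  echo "serviceAccount:${GATEWAY_SA} -> ${ROLE}"
done
```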

What's next

To set up an example deployment, read these guides: