This document shows you how to accomplish the following tasks:
- Deploy globally distributed applications exposed through GKE Gateway and Cloud Service Mesh.
- Expose an application to multiple clients by combining Cloud Load Balancing with Cloud Service Mesh.
- Integrate load balancers with a service mesh deployed across multiple Google Cloud regions.
This deployment guide is intended for platform administrators and for advanced practitioners who run Cloud Service Mesh. The instructions also work for Istio on GKE.
Architecture
The following diagram shows the default ingress topology of a service mesh—an external TCP/UDP load balancer that exposes the ingress gateway proxies on a single cluster:
This deployment guide uses Google Kubernetes Engine (GKE) Gateway resources. Specifically, it uses a multi-cluster gateway to configure multi-region load balancing in front of multiple Autopilot clusters that are distributed across two regions.
The preceding diagram shows how data flows through cloud ingress and mesh ingress scenarios. For more information, see the explanation of the architecture diagram in the associated reference architecture document.
Objectives
- Deploy a pair of GKE Autopilot clusters on Google Cloud to the same fleet.
- Deploy an Istio-based Cloud Service Mesh to the same fleet.
- Configure a load balancer using GKE Gateway to terminate public HTTPS traffic.
- Direct public HTTPS traffic to applications hosted by Cloud Service Mesh that are deployed across multiple clusters and regions.
- Deploy the whereami sample application to both Autopilot clusters.
Cost optimization
In this document, you use the following billable components of Google Cloud:
- Google Kubernetes Engine
- Cloud Load Balancing
- Cloud Service Mesh
- Multi Cluster Ingress
- Google Cloud Armor
- Certificate Manager
- Cloud Endpoints
To generate a cost estimate based on your projected usage,
use the pricing calculator.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- In the Google Cloud console, activate Cloud Shell.
  You run all of the terminal commands for this deployment from Cloud Shell.
Set your default Google Cloud project:
export PROJECT=PROJECT_ID
export PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")
gcloud config set project PROJECT_ID

Replace PROJECT_ID with the ID of the project that you want to use for this deployment.

Create a working directory:
mkdir -p ${HOME}/edge-to-mesh-multi-region
cd ${HOME}/edge-to-mesh-multi-region
export WORKDIR=`pwd`
Create GKE clusters
In this section, you create GKE clusters to host the applications and supporting infrastructure, which you create later in this deployment guide.
In Cloud Shell, create a new kubeconfig file. This step ensures that you don't create a conflict with your existing (default) kubeconfig file.

touch edge2mesh_mr_kubeconfig
export KUBECONFIG=${WORKDIR}/edge2mesh_mr_kubeconfig
Define the environment variables that are used when creating the GKE clusters and the resources within them. Modify the default region choices to suit your purposes.
export CLUSTER_1_NAME=edge-to-mesh-01
export CLUSTER_2_NAME=edge-to-mesh-02
export CLUSTER_1_REGION=us-central1
export CLUSTER_2_REGION=us-east4
export PUBLIC_ENDPOINT=frontend.endpoints.PROJECT_ID.cloud.goog
Enable the Google Cloud APIs that are used throughout this guide:
gcloud services enable \
  container.googleapis.com \
  mesh.googleapis.com \
  gkehub.googleapis.com \
  multiclusterservicediscovery.googleapis.com \
  multiclusteringress.googleapis.com \
  trafficdirector.googleapis.com \
  certificatemanager.googleapis.com
Create a GKE Autopilot cluster with private nodes in CLUSTER_1_REGION. Use the --async flag to avoid waiting for the first cluster to provision and register to the fleet:

gcloud container clusters create-auto --async \
  ${CLUSTER_1_NAME} --region ${CLUSTER_1_REGION} \
  --release-channel rapid --labels mesh_id=proj-${PROJECT_NUMBER} \
  --enable-private-nodes --enable-fleet
Create and register a second Autopilot cluster in CLUSTER_2_REGION:

gcloud container clusters create-auto \
  ${CLUSTER_2_NAME} --region ${CLUSTER_2_REGION} \
  --release-channel rapid --labels mesh_id=proj-${PROJECT_NUMBER} \
  --enable-private-nodes --enable-fleet
Ensure that the clusters are running. It might take up to 20 minutes until all clusters are running:
gcloud container clusters list
The output is similar to the following:
NAME             LOCATION     MASTER_VERSION  MASTER_IP       MACHINE_TYPE  NODE_VERSION    NUM_NODES  STATUS
edge-to-mesh-01  us-central1  1.27.5-gke.200  34.27.171.241   e2-small      1.27.5-gke.200             RUNNING
edge-to-mesh-02  us-east4     1.27.5-gke.200  35.236.204.156  e2-small      1.27.5-gke.200             RUNNING
Gather the credentials for CLUSTER_1_NAME. You created CLUSTER_1_NAME asynchronously so you could run additional commands while the cluster provisioned.

gcloud container clusters get-credentials ${CLUSTER_1_NAME} \
  --region ${CLUSTER_1_REGION}
To clarify the names of the Kubernetes contexts, rename them to the names of the clusters:
kubectl config rename-context gke_PROJECT_ID_${CLUSTER_1_REGION}_${CLUSTER_1_NAME} ${CLUSTER_1_NAME}
kubectl config rename-context gke_PROJECT_ID_${CLUSTER_2_REGION}_${CLUSTER_2_NAME} ${CLUSTER_2_NAME}
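If you want to confirm that the renamed contexts are in place, you can list them. This optional check isn't part of the original steps:

kubectl config get-contexts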
Install a service mesh
In this section, you configure managed Cloud Service Mesh by using the fleet API. Using the fleet API to enable Cloud Service Mesh provides a declarative approach to provisioning a service mesh.
In Cloud Shell, enable Cloud Service Mesh on the fleet:
gcloud container fleet mesh enable
Enable automatic control plane and data plane management:
gcloud container fleet mesh update \
  --management automatic \
  --memberships ${CLUSTER_1_NAME},${CLUSTER_2_NAME}
Wait about 20 minutes. Then verify that the control plane status is ACTIVE:

gcloud container fleet mesh describe
The output is similar to the following:
createTime: '2023-11-30T19:23:21.713028916Z'
membershipSpecs:
  projects/603904278888/locations/us-central1/memberships/edge-to-mesh-01:
    mesh:
      management: MANAGEMENT_AUTOMATIC
  projects/603904278888/locations/us-east4/memberships/edge-to-mesh-02:
    mesh:
      management: MANAGEMENT_AUTOMATIC
membershipStates:
  projects/603904278888/locations/us-central1/memberships/edge-to-mesh-01:
    servicemesh:
      controlPlaneManagement:
        details:
        - code: REVISION_READY
          details: 'Ready: asm-managed-rapid'
        implementation: ISTIOD
        state: ACTIVE
      dataPlaneManagement:
        details:
        - code: OK
          details: Service is running.
        state: ACTIVE
    state:
      code: OK
      description: |-
        Revision ready for use: asm-managed-rapid.
        All Canonical Services have been reconciled successfully.
      updateTime: '2024-06-27T09:00:21.333579005Z'
  projects/603904278888/locations/us-east4/memberships/edge-to-mesh-02:
    servicemesh:
      controlPlaneManagement:
        details:
        - code: REVISION_READY
          details: 'Ready: asm-managed-rapid'
        implementation: ISTIOD
        state: ACTIVE
      dataPlaneManagement:
        details:
        - code: OK
          details: Service is running.
        state: ACTIVE
    state:
      code: OK
      description: |-
        Revision ready for use: asm-managed-rapid.
        All Canonical Services have been reconciled successfully.
      updateTime: '2024-06-27T09:00:24.674852751Z'
name: projects/e2m-private-test-01/locations/global/features/servicemesh
resourceState:
  state: ACTIVE
spec: {}
updateTime: '2024-06-04T17:16:28.730429993Z'
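If you'd rather not re-run the describe command manually, a small polling loop like the following sketch (not part of the original guide; it simply greps the describe output for the REVISION_READY code) waits until at least one membership reports a ready control plane. Confirm both memberships afterward with the full output:

until gcloud container fleet mesh describe --format=yaml | grep -q "REVISION_READY"; do
  # Control plane provisioning can take around 20 minutes; poll once a minute.
  echo "Waiting for the Cloud Service Mesh control plane to become ready..."
  sleep 60
done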
Deploy an external Application Load Balancer and create ingress gateways
In this section, you deploy an external Application Load Balancer through the
GKE Gateway controller and create ingress gateways for both
clusters. The Gateway and GatewayClass resources automate the provisioning
of the load balancer and backend health checking. To provide TLS termination on
the load balancer, you create
Certificate Manager
resources and attach them to the load balancer. Additionally, you use
Endpoints
to automatically provision a public DNS name for the application.
Install an ingress gateway on both clusters
As a security best practice, we recommend that you deploy the ingress gateway in a different namespace from the mesh control plane.
In Cloud Shell, create a dedicated asm-ingress namespace on each cluster:

kubectl --context=${CLUSTER_1_NAME} create namespace asm-ingress
kubectl --context=${CLUSTER_2_NAME} create namespace asm-ingress
Add a namespace label to the asm-ingress namespaces:

kubectl --context=${CLUSTER_1_NAME} label namespace asm-ingress istio-injection=enabled
kubectl --context=${CLUSTER_2_NAME} label namespace asm-ingress istio-injection=enabled
The output is similar to the following:
namespace/asm-ingress labeled
Labeling the asm-ingress namespaces with istio-injection=enabled instructs Cloud Service Mesh to automatically inject Envoy sidecar proxies when a pod is deployed.

Generate a self-signed certificate for future use:
openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 \
  -subj "/CN=frontend.endpoints.PROJECT_ID.cloud.goog/O=Edge2Mesh Inc" \
  -keyout ${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \
  -out ${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt
The certificate provides an additional layer of encryption between the load balancer and the service mesh ingress gateways. It also enables support for HTTP/2-based protocols like gRPC. Instructions about how to attach the self-signed certificate to the ingress gateways are provided later in Create external IP address, DNS record, and TLS certificate resources.
For more information about the requirements of the ingress gateway certificate, see Encryption from the load balancer to the backends.
Create a Kubernetes secret on each cluster to store the self-signed certificate:
kubectl --context ${CLUSTER_1_NAME} -n asm-ingress create secret tls \
  edge2mesh-credential \
  --key=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \
  --cert=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt
kubectl --context ${CLUSTER_2_NAME} -n asm-ingress create secret tls \
  edge2mesh-credential \
  --key=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \
  --cert=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt
To integrate with the external Application Load Balancer, create a kustomize variant that configures the ingress gateway resources:
mkdir -p ${WORKDIR}/asm-ig/base

cat <<EOF > ${WORKDIR}/asm-ig/base/kustomization.yaml
resources:
- github.com/GoogleCloudPlatform/anthos-service-mesh-samples/docs/ingress-gateway-asm-manifests/base
EOF

mkdir ${WORKDIR}/asm-ig/variant

cat <<EOF > ${WORKDIR}/asm-ig/variant/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: asm-ingressgateway
subjects:
- kind: ServiceAccount
  name: asm-ingressgateway
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/service-proto-type.yaml
apiVersion: v1
kind: Service
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  ports:
  - name: status-port
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http
    port: 80
    targetPort: 8080
    appProtocol: HTTP
  - name: https
    port: 443
    targetPort: 8443
    appProtocol: HTTP2
  type: ClusterIP
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*" # IMPORTANT: Must use wildcard here when using SSL, as SNI isn't passed from GFE
    tls:
      mode: SIMPLE
      credentialName: edge2mesh-credential
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/kustomization.yaml
namespace: asm-ingress
resources:
- ../base
- role.yaml
- rolebinding.yaml
patches:
- path: service-proto-type.yaml
  target:
    kind: Service
- path: gateway.yaml
  target:
    kind: Gateway
EOF
Apply the ingress gateway configuration to both clusters:
kubectl --context ${CLUSTER_1_NAME} apply -k ${WORKDIR}/asm-ig/variant
kubectl --context ${CLUSTER_2_NAME} apply -k ${WORKDIR}/asm-ig/variant
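To confirm that the ingress gateway Deployment and its ClusterIP Service were created in each cluster, you can optionally inspect the asm-ingress namespace. This is an illustrative check, not part of the original steps:

kubectl --context ${CLUSTER_1_NAME} -n asm-ingress get deploy,svc
kubectl --context ${CLUSTER_2_NAME} -n asm-ingress get deploy,svc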
Expose ingress gateway pods to the load balancer using a multi-cluster service
In this section, you export the ingress gateway pods through a ServiceExport custom resource. Exporting the pods in this way is necessary because it does the following:
- Allows the load balancer to address the ingress gateway pods across multiple clusters.
- Allows the ingress gateway pods to proxy requests to services running within the service mesh.
In Cloud Shell, enable multi-cluster Services (MCS) for the fleet:
gcloud container fleet multi-cluster-services enable
Grant MCS the required IAM permissions to the project or fleet:
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[gke-mcs/gke-mcs-importer]" \
  --role "roles/compute.networkViewer"
Create the ServiceExport YAML file:

cat <<EOF > ${WORKDIR}/svc_export.yaml
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
EOF
Apply the ServiceExport YAML file to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/svc_export.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/svc_export.yaml
If you receive the following error message, wait a few moments for the MCS custom resource definitions (CRDs) to install. Then re-run the commands to apply the ServiceExport YAML file to both clusters.

error: resource mapping not found for name: "asm-ingressgateway" namespace: "asm-ingress" from "svc_export.yaml": no matches for kind "ServiceExport" in version "net.gke.io/v1"
ensure CRDs are installed first
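After MCS processes the exports, a corresponding ServiceImport object appears in each cluster. As an optional check (propagation can take a few minutes), you can list the imports:

kubectl --context=${CLUSTER_1_NAME} -n asm-ingress get serviceimports
kubectl --context=${CLUSTER_2_NAME} -n asm-ingress get serviceimports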
Create external IP address, DNS record, and TLS certificate resources
In this section, you create networking resources that support the load-balancing resources that you create later in this deployment.
In Cloud Shell, reserve a static external IP address:
gcloud compute addresses create mcg-ip --global
A static IP address is used by the GKE Gateway resource. It lets the IP address remain the same, even if the external load balancer is recreated.
Get the static IP address and store it as an environment variable:
export MCG_IP=$(gcloud compute addresses describe mcg-ip --global --format "value(address)")
echo ${MCG_IP}
To create a stable, human-friendly mapping to your Gateway IP address, you must have a public DNS record.
You can use any DNS provider and automation scheme that you want. This deployment uses Endpoints instead of creating a managed DNS zone. Endpoints provides a free Google-managed DNS record for an external IP address.
Run the following command to create a YAML file named dns-spec.yaml:

cat <<EOF > ${WORKDIR}/dns-spec.yaml
swagger: "2.0"
info:
  description: "Cloud Endpoints DNS"
  title: "Cloud Endpoints DNS"
  version: "1.0.0"
paths: {}
host: "frontend.endpoints.PROJECT_ID.cloud.goog"
x-google-endpoints:
- name: "frontend.endpoints.PROJECT_ID.cloud.goog"
  target: "${MCG_IP}"
EOF
The dns-spec.yaml file defines the public DNS record in the form of frontend.endpoints.PROJECT_ID.cloud.goog, where PROJECT_ID is your unique project identifier.

Deploy the dns-spec.yaml file to create the DNS entry. This process takes a few minutes.

gcloud endpoints services deploy ${WORKDIR}/dns-spec.yaml
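Optionally, after the Endpoints deployment finishes, you can confirm that the DNS name resolves to the reserved address. DNS propagation might take a few more minutes, so treat this as an informal check:

dig +short frontend.endpoints.PROJECT_ID.cloud.goog
echo ${MCG_IP}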
Create a certificate using Certificate Manager for the DNS entry name you created in the previous step:
gcloud certificate-manager certificates create mcg-cert \
  --domains="frontend.endpoints.PROJECT_ID.cloud.goog"
A Google-managed TLS certificate is used to terminate inbound client requests at the load balancer.
Create a certificate map:
gcloud certificate-manager maps create mcg-cert-map
The load balancer references the certificate through the certificate map entry you create in the next step.
Create a certificate map entry for the certificate you created earlier in this section:
gcloud certificate-manager maps entries create mcg-cert-map-entry \
  --map="mcg-cert-map" \
  --certificates="mcg-cert" \
  --hostname="frontend.endpoints.PROJECT_ID.cloud.goog"
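As an optional check, you can describe the Certificate Manager resources. The certificate typically reports an ACTIVE state only after the load balancer starts serving the hostname:

gcloud certificate-manager certificates describe mcg-cert
gcloud certificate-manager maps entries describe mcg-cert-map-entry --map="mcg-cert-map"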
Create backend service policies and load balancer resources
In this section, you accomplish the following tasks:
- Create a Google Cloud Armor security policy with rules.
- Create a policy that lets the load balancer check the responsiveness of the ingress gateway pods through the ServiceExport YAML file you created earlier.
- Use the GKE Gateway API to create a load balancer resource.
- Use the GatewayClass custom resource to set the specific load balancer type.
- Enable multi-cluster load balancing for the fleet and designate one of the clusters as the configuration cluster for the fleet.
In Cloud Shell, create a Google Cloud Armor security policy:
gcloud compute security-policies create edge-fw-policy \
  --description "Block XSS attacks"
Create a rule for the security policy:
gcloud compute security-policies rules create 1000 \
  --security-policy edge-fw-policy \
  --expression "evaluatePreconfiguredExpr('xss-stable')" \
  --action "deny-403" \
  --description "XSS attack filtering"
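If you want to review the policy and its rule before attaching it to a backend, you can describe it. This optional check isn't part of the original steps:

gcloud compute security-policies describe edge-fw-policy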
Create a YAML file for the security policy, and reference the ServiceExport YAML file through a corresponding ServiceImport YAML file:

cat <<EOF > ${WORKDIR}/cloud-armor-backendpolicy.yaml
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: cloud-armor-backendpolicy
  namespace: asm-ingress
spec:
  default:
    securityPolicy: edge-fw-policy
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: asm-ingressgateway
EOF
Apply the Google Cloud Armor policy to both clusters:
kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
Create a custom YAML file that lets the load balancer perform health checks against the Envoy health endpoint (port 15021 on path /healthz/ready) of the ingress gateway pods in both clusters:

cat <<EOF > ${WORKDIR}/ingress-gateway-healthcheck.yaml
apiVersion: networking.gke.io/v1
kind: HealthCheckPolicy
metadata:
  name: ingress-gateway-healthcheck
  namespace: asm-ingress
spec:
  default:
    config:
      httpHealthCheck:
        port: 15021
        portSpecification: USE_FIXED_PORT
        requestPath: /healthz/ready
      type: HTTP
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: asm-ingressgateway
EOF
Apply the custom YAML file you created in the previous step to both clusters:
kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
Enable multi-cluster load balancing for the fleet, and designate CLUSTER_1_NAME as the configuration cluster:

gcloud container fleet ingress enable \
  --config-membership=${CLUSTER_1_NAME} \
  --location=${CLUSTER_1_REGION}
Grant IAM permissions for the Gateway controller in the fleet:
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-multiclusteringress.iam.gserviceaccount.com" \
  --role "roles/container.admin"
Create the load balancer YAML file through a Gateway custom resource that references the gke-l7-global-external-managed-mc GatewayClass and the static IP address you created earlier:

cat <<EOF > ${WORKDIR}/frontend-gateway.yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: external-http
  namespace: asm-ingress
  annotations:
    networking.gke.io/certmap: mcg-cert-map
spec:
  gatewayClassName: gke-l7-global-external-managed-mc
  listeners:
  - name: http # list the port only so we can redirect any incoming http requests to https
    protocol: HTTP
    port: 80
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
  addresses:
  - type: NamedAddress
    value: mcg-ip
EOF
Apply the frontend-gateway YAML file to both clusters. Only CLUSTER_1_NAME is authoritative unless you designate a different configuration cluster as authoritative:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-gateway.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/frontend-gateway.yaml
Create an HTTPRoute YAML file called default-httproute.yaml that instructs the Gateway resource to send requests to the ingress gateways:

cat << EOF > ${WORKDIR}/default-httproute.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: default-httproute
  namespace: asm-ingress
spec:
  parentRefs:
  - name: external-http
    namespace: asm-ingress
    sectionName: https
  rules:
  - backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: asm-ingressgateway
      port: 443
EOF
Apply the HTTPRoute YAML file you created in the previous step to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/default-httproute.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/default-httproute.yaml
To perform HTTP to HTTP(S) redirects, create an additional HTTPRoute YAML file called default-httproute-redirect.yaml:

cat << EOF > ${WORKDIR}/default-httproute-redirect.yaml
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: http-to-https-redirect-httproute
  namespace: asm-ingress
spec:
  parentRefs:
  - name: external-http
    namespace: asm-ingress
    sectionName: http
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
EOF
Apply the redirect HTTPRoute YAML file to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/default-httproute-redirect.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/default-httproute-redirect.yaml
Inspect the Gateway resource to check the progress of the load balancer deployment:
kubectl --context=${CLUSTER_1_NAME} describe gateway external-http -n asm-ingress
The output shows the information you entered in this section.
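Provisioning the global load balancer can take several minutes. As an optional check, you can compare the address reported by the Gateway resource with the static IP address you reserved earlier:

kubectl --context=${CLUSTER_1_NAME} -n asm-ingress get gateway external-http
echo ${MCG_IP}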
Deploy the whereami sample application
This guide uses whereami as a sample application to provide direct feedback about which clusters are replying to requests. The following section sets up two separate deployments of whereami across both clusters: a frontend deployment and a backend deployment.

The frontend deployment is the first workload to receive the request. It then calls the backend deployment. This model is used to demonstrate a multi-service application architecture. Both frontend and backend services are deployed to both clusters.
In Cloud Shell, create the namespaces for the whereami frontend and the whereami backend across both clusters, and enable namespace injection:

kubectl --context=${CLUSTER_1_NAME} create ns frontend
kubectl --context=${CLUSTER_1_NAME} label namespace frontend istio-injection=enabled
kubectl --context=${CLUSTER_1_NAME} create ns backend
kubectl --context=${CLUSTER_1_NAME} label namespace backend istio-injection=enabled
kubectl --context=${CLUSTER_2_NAME} create ns frontend
kubectl --context=${CLUSTER_2_NAME} label namespace frontend istio-injection=enabled
kubectl --context=${CLUSTER_2_NAME} create ns backend
kubectl --context=${CLUSTER_2_NAME} label namespace backend istio-injection=enabled
Create a kustomize variant for the whereami backend:

mkdir -p ${WORKDIR}/whereami-backend/base

cat <<EOF > ${WORKDIR}/whereami-backend/base/kustomization.yaml
resources:
- github.com/GoogleCloudPlatform/kubernetes-engine-samples/quickstarts/whereami/k8s
EOF

mkdir ${WORKDIR}/whereami-backend/variant

cat <<EOF > ${WORKDIR}/whereami-backend/variant/cm-flag.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: whereami
data:
  BACKEND_ENABLED: "False" # assuming you don't want a chain of backend calls
  METADATA: "backend"
EOF

cat <<EOF > ${WORKDIR}/whereami-backend/variant/service-type.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "whereami"
spec:
  type: ClusterIP
EOF

cat <<EOF > ${WORKDIR}/whereami-backend/variant/kustomization.yaml
nameSuffix: "-backend"
namespace: backend
commonLabels:
  app: whereami-backend
resources:
- ../base
patches:
- path: cm-flag.yaml
  target:
    kind: ConfigMap
- path: service-type.yaml
  target:
    kind: Service
EOF
Apply the whereami backend variant to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -k ${WORKDIR}/whereami-backend/variant
kubectl --context=${CLUSTER_2_NAME} apply -k ${WORKDIR}/whereami-backend/variant
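To confirm that the backend pods are running in both clusters before you deploy the frontend, you can optionally list them:

kubectl --context=${CLUSTER_1_NAME} -n backend get pods
kubectl --context=${CLUSTER_2_NAME} -n backend get pods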
Create a kustomize variant for the whereami frontend:

mkdir -p ${WORKDIR}/whereami-frontend/base

cat <<EOF > ${WORKDIR}/whereami-frontend/base/kustomization.yaml
resources:
- github.com/GoogleCloudPlatform/kubernetes-engine-samples/quickstarts/whereami/k8s
EOF

mkdir ${WORKDIR}/whereami-frontend/variant

cat <<EOF > ${WORKDIR}/whereami-frontend/variant/cm-flag.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: whereami
data:
  BACKEND_ENABLED: "True"
  BACKEND_SERVICE: "http://whereami-backend.backend.svc.cluster.local"
EOF

cat <<EOF > ${WORKDIR}/whereami-frontend/variant/service-type.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "whereami"
spec:
  type: ClusterIP
EOF

cat <<EOF > ${WORKDIR}/whereami-frontend/variant/kustomization.yaml
nameSuffix: "-frontend"
namespace: frontend
commonLabels:
  app: whereami-frontend
resources:
- ../base
patches:
- path: cm-flag.yaml
  target:
    kind: ConfigMap
- path: service-type.yaml
  target:
    kind: Service
EOF
Apply the whereami frontend variant to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -k ${WORKDIR}/whereami-frontend/variant
kubectl --context=${CLUSTER_2_NAME} apply -k ${WORKDIR}/whereami-frontend/variant
Create a VirtualService YAML file to route requests to the whereami frontend:

cat << EOF > ${WORKDIR}/frontend-vs.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: whereami-vs
  namespace: frontend
spec:
  gateways:
  - asm-ingress/asm-ingressgateway
  hosts:
  - 'frontend.endpoints.PROJECT_ID.cloud.goog'
  http:
  - route:
    - destination:
        host: whereami-frontend
        port:
          number: 80
EOF
Apply the frontend-vs YAML file to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-vs.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/frontend-vs.yaml
Now that you have deployed frontend-vs.yaml to both clusters, attempt to call the public endpoint for your clusters:

curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | jq
The output is similar to the following:
{ "backend_result": { "cluster_name": "edge-to-mesh-02", "gce_instance_id": "8396338201253702608", "gce_service_account": "e2m-mcg-01.svc.id.goog", "host_header": "whereami-backend.backend.svc.cluster.local", "metadata": "backend", "node_name": "gk3-edge-to-mesh-02-pool-2-675f6abf-645h", "pod_ip": "10.124.0.199", "pod_name": "whereami-backend-7cbdfd788-8mmnq", "pod_name_emoji": "📸", "pod_namespace": "backend", "pod_service_account": "whereami-backend", "project_id": "e2m-mcg-01", "timestamp": "2023-12-01T03:46:24", "zone": "us-east4-b" }, "cluster_name": "edge-to-mesh-01", "gce_instance_id": "1047264075324910451", "gce_service_account": "e2m-mcg-01.svc.id.goog", "host_header": "frontend.endpoints.e2m-mcg-01.cloud.goog", "metadata": "frontend", "node_name": "gk3-edge-to-mesh-01-pool-2-d687e3c0-5kf2", "pod_ip": "10.54.1.71", "pod_name": "whereami-frontend-69c4c867cb-dgg8t", "pod_name_emoji": "🪴", "pod_namespace": "frontend", "pod_service_account": "whereami-frontend", "project_id": "e2m-mcg-01", "timestamp": "2023-12-01T03:46:24", "zone": "us-central1-c" }
If you run the curl command a few times, you'll see that the responses (both from frontend and backend) come from different regions. The load balancer provides geo-routing: it routes each request from the client to the nearest healthy cluster. However, once a request enters the mesh, the ingress gateway can still send it to whereami pods in either cluster, so requests land somewhat randomly across regions. When requests occasionally cross from one region to another, latency and cost increase.
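To observe that distribution yourself, a short loop like the following sketch (it reuses the earlier curl command and extracts the two cluster names with jq) prints which clusters served the frontend and backend for several consecutive requests:

for i in $(seq 1 10); do
  # Print the cluster that served the frontend and the cluster that served the backend.
  curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | \
    jq -r '"frontend: \(.cluster_name)  backend: \(.backend_result.cluster_name)"'
done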
In the next section, you implement locality load balancing in the service mesh to keep requests local.
Enable and test locality load balancing for whereami
In this section, you implement locality load balancing in the service mesh to keep requests local. You also perform some tests to see how whereami handles various failure scenarios.
When you make a request to the whereami frontend service, the load balancer sends the request to the cluster with the lowest latency relative to the client. However, the ingress gateway pods within the mesh then load balance requests to whereami frontend pods across both clusters. This section addresses that issue by enabling locality load balancing within the mesh.
In Cloud Shell, create a DestinationRule YAML file that enables locality load balancing with regional failover for the frontend service:

cat << EOF > ${WORKDIR}/frontend-dr.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: frontend
  namespace: frontend
spec:
  host: whereami-frontend.frontend.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 0
    loadBalancer:
      simple: LEAST_REQUEST
      localityLbSetting:
        enabled: true
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 1m
EOF
The preceding code sample only enables local routing for the frontend service. You also need an additional configuration that handles the backend.

Apply the frontend-dr YAML file to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-dr.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/frontend-dr.yaml
Create a DestinationRule YAML file that enables locality load balancing with regional failover for the backend service:

cat << EOF > ${WORKDIR}/backend-dr.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend
  namespace: backend
spec:
  host: whereami-backend.backend.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 0
    loadBalancer:
      simple: LEAST_REQUEST
      localityLbSetting:
        enabled: true
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 1m
EOF
Apply the backend-dr YAML file to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/backend-dr.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/backend-dr.yaml
With both sets of DestinationRule YAML files applied to both clusters, requests remain local to the cluster that the request is routed to.

To test failover for the frontend service, reduce the number of replicas for the ingress gateway in your primary cluster. From the perspective of the multi-regional load balancer, this action simulates a cluster failure. It causes that cluster to fail its load balancer health checks. This example uses the cluster in CLUSTER_1_REGION. You should only see responses from the cluster in CLUSTER_2_REGION.

Reduce the number of replicas for the ingress gateway in your primary cluster to zero and call the public endpoint to verify that requests have failed over to the other cluster:
kubectl --context=${CLUSTER_1_NAME} -n asm-ingress scale --replicas=0 deployment/asm-ingressgateway
The output should resemble the following:
$ curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | jq
{
  "backend_result": {
    "cluster_name": "edge-to-mesh-02",
    "gce_instance_id": "2717459599837162415",
    "gce_service_account": "e2m-mcg-01.svc.id.goog",
    "host_header": "whereami-backend.backend.svc.cluster.local",
    "metadata": "backend",
    "node_name": "gk3-edge-to-mesh-02-pool-2-675f6abf-dxs2",
    "pod_ip": "10.124.1.7",
    "pod_name": "whereami-backend-7cbdfd788-mp8zv",
    "pod_name_emoji": "🏌🏽♀",
    "pod_namespace": "backend",
    "pod_service_account": "whereami-backend",
    "project_id": "e2m-mcg-01",
    "timestamp": "2023-12-01T05:41:18",
    "zone": "us-east4-b"
  },
  "cluster_name": "edge-to-mesh-02",
  "gce_instance_id": "6983018919754001204",
  "gce_service_account": "e2m-mcg-01.svc.id.goog",
  "host_header": "frontend.endpoints.e2m-mcg-01.cloud.goog",
  "metadata": "frontend",
  "node_name": "gk3-edge-to-mesh-02-pool-3-d42ddfbf-qmkn",
  "pod_ip": "10.124.1.142",
  "pod_name": "whereami-frontend-69c4c867cb-xf8db",
  "pod_name_emoji": "🏴",
  "pod_namespace": "frontend",
  "pod_service_account": "whereami-frontend",
  "project_id": "e2m-mcg-01",
  "timestamp": "2023-12-01T05:41:18",
  "zone": "us-east4-b"
}
To resume typical traffic routing, restore the ingress gateway replicas to the original value in the cluster:
kubectl --context=${CLUSTER_1_NAME} -n asm-ingress scale --replicas=3 deployment/asm-ingressgateway
Simulate a failure for the backend service by reducing the number of replicas in the primary region to 0:

kubectl --context=${CLUSTER_1_NAME} -n backend scale --replicas=0 deployment/whereami-backend
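To observe the result, call the public endpoint again. The following sketch reuses the earlier curl command with a jq filter that extracts only the two cluster names:

curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | \
  jq '{frontend_cluster: .cluster_name, backend_cluster: .backend_result.cluster_name}'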
Verify that the responses from the frontend service come from the us-central1 primary region through the load balancer, and that the responses from the backend service come from the us-east4 secondary region, as expected.

Restore the backend service replicas to the original value to resume typical traffic routing:
kubectl --context=${CLUSTER_1_NAME} -n backend scale --replicas=3 deployment/whereami-backend
You now have a global HTTP(S) load balancer serving as a frontend to your service-mesh-hosted, multi-region application.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this deployment, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
Delete the individual resources
If you want to keep the Google Cloud project you used in this deployment, delete the individual resources:
In Cloud Shell, delete the HTTPRoute resources:

kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/default-httproute-redirect.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/default-httproute-redirect.yaml
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/default-httproute.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/default-httproute.yaml
Delete the GKE Gateway resources:
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/frontend-gateway.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/frontend-gateway.yaml
Delete the policies:
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
Delete the service exports:
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/svc_export.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/svc_export.yaml
Delete the Google Cloud Armor resources:
gcloud --project=PROJECT_ID compute security-policies rules delete 1000 --security-policy edge-fw-policy --quiet
gcloud --project=PROJECT_ID compute security-policies delete edge-fw-policy --quiet
Delete the Certificate Manager resources:
gcloud --project=PROJECT_ID certificate-manager maps entries delete mcg-cert-map-entry --map="mcg-cert-map" --quiet
gcloud --project=PROJECT_ID certificate-manager maps delete mcg-cert-map --quiet
gcloud --project=PROJECT_ID certificate-manager certificates delete mcg-cert --quiet
Delete the Endpoints DNS entry:
gcloud --project=PROJECT_ID endpoints services delete "frontend.endpoints.PROJECT_ID.cloud.goog" --quiet
Delete the static IP address:
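Based on the mcg-ip address that you reserved earlier in this deployment, the following command (a sketch that assumes you kept the default name) deletes the reservation:

gcloud --project=PROJECT_ID compute addresses delete mcg-ip --global --quiet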