Migrate Istio ServiceEntry to GCPBackend for Compute Engine VMs
This page shows you how to migrate from ServiceEntry to GCPBackend, demonstrating how Istio's traffic management capabilities can ensure a smooth and safe transition.
Migrating to GCPBackend provides the following benefits:
- Endpoint Discovery - VM endpoints in the Backend Service are automatically updated when VM instances are added or removed.
- Centralized health checking - VM endpoints are health checked, and traffic is automatically routed away from unhealthy backends to healthy ones.
- Global load balancing and advanced load balancing algorithms - VM endpoints can be deployed in multiple regions. Use our load balancing algorithms to configure load balancing behavior across these regions. See https://cloud.google.com/service-mesh/docs/service-routing/advanced-load-balancing-overview for more details.
- Seamless migration: Utilize traffic splitting and mirroring to safely migrate traffic without disrupting the application.
- Enhanced manageability: Benefit from streamlined configuration and management.
Before you begin
The following sections assume that you have:
- A GKE Cluster with Cloud Service Mesh enabled.
- An Istio Service Entry.
- A configured GCPBackend resource for your Compute Engine VM.
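For reference, the ServiceEntry being migrated from might look like the following sketch. The name `vm-service`, the port, and the endpoint address are placeholders; substitute the values from your existing resource.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vm-service        # placeholder name
  namespace: NAMESPACE
spec:
  hosts:
  - service-entry.com
  location: MESH_EXTERNAL
  ports:
  - number: 80            # placeholder port
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.128.0.5   # placeholder VM internal IP
```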
Create or modify the existing VirtualService to include both the ServiceEntry and GCPBackend as destinations
You can use traffic splitting to gradually shift traffic from the ServiceEntry to the GCPBackend. You should start with a small percentage of traffic directed to the GCPBackend and gradually increase it while monitoring for any issues.
The following example describes migrating 10% of the requests to the GCPBackend.
cat <<EOF > virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gcpbackend-migration
  namespace: NAMESPACE
spec:
  hosts:
  - service-entry.com
  http:
  - route:
    - destination:
        host: gcpbackend.com
      weight: 10 # 10% traffic to the GCPBackend.
    - destination:
        host: service-entry.com
      weight: 90 # 90% traffic to the ServiceEntry.
EOF
kubectl apply -f virtual-service.yaml
Where:
- NAMESPACE is the namespace name.
In this example:
- VIRTUAL_SERVICE is gcpbackend-migration.
- SERVICE_ENTRY_HOSTNAME is service-entry.com.
- GCP_BACKEND_HOSTNAME is gcpbackend.com.
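After you validate the 10% split, you can raise the GCPBackend weight in stages (for example 10%, then 50%, then 100%), monitoring at each step. The following sketch shows the final cutover, with all traffic directed to the GCPBackend; the resource names match the example above.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gcpbackend-migration
  namespace: NAMESPACE
spec:
  hosts:
  - service-entry.com
  http:
  - route:
    - destination:
        host: gcpbackend.com
      weight: 100 # All traffic to the GCPBackend.
```

Once the cutover is stable, you can remove the ServiceEntry.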
(Optional) Configure VirtualService for Traffic Mirroring
To further ensure a smooth transition, you can configure traffic mirroring to send a copy of the traffic to the GCPBackend while still primarily directing traffic to the ServiceEntry. This allows for testing and validation of the GCPBackend configuration without impacting the primary traffic flow. For more information, see the Istio Virtual Service API.
Validate functionality
Refer to your application logs or Cloud Service Mesh metrics to check the error rate of requests to $SERVICE_ENTRY_HOSTNAME. There should be no errors.
To test outside of your application, you can deploy a curl client. If the request is routed using the GCPBackend API, it doesn't need an IAM token explicitly attached, because Cloud Service Mesh attaches one automatically.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: testcurl
  namespace: default
spec:
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sleep", "3000"]
EOF
kubectl exec testcurl -c curl -- curl "$SERVICE_ENTRY_HOSTNAME"
The output should be a valid HTTP 200 response.