Configuration updates for modernization
This document describes configuration updates you may need to make to your managed Cloud Service Mesh before modernizing your mesh from the ISTIOD control plane to the TRAFFIC_DIRECTOR control plane.
For more information on the modernization workflow, see the Managed control plane modernization page.
Migrate from Istio secrets to multicluster_mode
Multi-cluster secrets are not supported when a cluster is using the TRAFFIC_DIRECTOR control plane. This section describes how to migrate from Istio multi-cluster secrets to the multicluster_mode setting.
Istio secrets versus declarative API overview
Open source Istio multi-cluster endpoint discovery works by using istioctl or other tools to create a Kubernetes Secret in a cluster. This secret allows a cluster to load balance traffic to another cluster in the mesh. The ISTIOD control plane then reads this secret and begins routing traffic to that other cluster.
Cloud Service Mesh has a declarative API to control multi-cluster traffic instead of directly creating Istio secrets. This API treats Istio secrets as an implementation detail and is more reliable than creating Istio secrets manually. Future Cloud Service Mesh features will depend on the declarative API, and you won't be able to use those new features with Istio secrets directly. The declarative API is the only supported path forward.
If you are using Istio secrets, migrate to the declarative API as soon as possible. Note that the multicluster_mode setting causes each cluster to send traffic to every other cluster in the mesh. Secrets allow a more flexible configuration: for each cluster, you can choose which other clusters in the mesh it sends traffic to.
For a full list of the differences between the supported features of the declarative API and Istio secrets, see Supported features using Istio APIs.
Migrate from Istio secrets to declarative API
If you provisioned Cloud Service Mesh using automatic management with the fleet feature API, you don't need to follow these instructions. These steps apply only if you onboarded using asmcli --managed.
Note: this process changes the secrets that point to a cluster. During the process, endpoints are removed and then re-added. While the endpoints are removed, traffic briefly reverts to routing locally instead of load balancing to other clusters. For more information, see the GitHub issue.
To migrate from Istio secrets to the declarative API, follow these steps. Execute them at the same time or in close succession:
Enable the declarative API for each cluster in the fleet where you want multi-cluster endpoint discovery by setting multicluster_mode=connected. Note that you need to explicitly set multicluster_mode=disconnected if you don't want the cluster to be discoverable.
Use the following command to opt a cluster in to multi-cluster endpoint discovery:
kubectl patch configmap/asm-options -n istio-system --type merge -p '{"data":{"multicluster_mode":"connected"}}'
Use the following command to opt a cluster out of endpoint discovery:
kubectl patch configmap/asm-options -n istio-system --type merge -p '{"data":{"multicluster_mode":"disconnected"}}'
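After patching, you can read the value back from the asm-options ConfigMap to confirm the setting took effect. This is an optional spot check against a live cluster, not part of the official procedure:

```shell
# Read the multicluster_mode value back from the asm-options ConfigMap.
# Prints "connected" or "disconnected" depending on the cluster's setting.
kubectl get configmap asm-options -n istio-system \
  -o jsonpath='{.data.multicluster_mode}'
```

Run this against each cluster context before moving on to deleting the old secrets.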
Delete old secrets.
After setting multicluster_mode=connected on your clusters, each cluster will have a new secret generated for every other cluster that also has multicluster_mode=connected set. The secret is placed in the istio-system namespace and has the following format:
istio-remote-secret-projects-PROJECT_NAME-locations-LOCATION-memberships-MEMBERSHIPS
Each secret will also have the label istio.io/owned-by: mesh.googleapis.com applied. Once the new secrets are created, you can delete any secrets that were manually created with istioctl create-remote-secret:
kubectl delete secret SECRET_NAME -n istio-system
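To identify which generated secret corresponds to which cluster, it can help to construct the expected name from your fleet details. A minimal sketch, using hypothetical project, location, and membership values that you would replace with your own:

```shell
# Hypothetical values; substitute your own project, location, and membership.
PROJECT_NAME="my-project"
LOCATION="us-central1"
MEMBERSHIP="my-cluster"

# Build the expected secret name following the documented format.
SECRET_NAME="istio-remote-secret-projects-${PROJECT_NAME}-locations-${LOCATION}-memberships-${MEMBERSHIP}"
echo "${SECRET_NAME}"

# To list the mesh-managed secrets on a live cluster, filter by the label
# the document describes:
#   kubectl get secrets -n istio-system -l istio.io/owned-by=mesh.googleapis.com
```

Any secret in istio-system that does not carry the istio.io/owned-by: mesh.googleapis.com label and was created with istioctl create-remote-secret is a candidate for deletion.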
Once migrated, check your request metrics to make sure traffic is routed as expected.
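In addition to request metrics, one way to spot-check cross-cluster discovery is to inspect the endpoints a workload's Envoy proxy knows about. A hedged example using istioctl, with placeholder pod and namespace names (POD_NAME and NAMESPACE are placeholders, not values from this document):

```shell
# List the endpoints the workload's sidecar has been programmed with.
# Once multi-cluster endpoint discovery is working, endpoints from the
# other connected clusters should appear in this list.
istioctl proxy-config endpoints POD_NAME -n NAMESPACE
```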