Last updated (UTC): 2025-08-19.

# Uninstall Cloud Service Mesh

| **Note:** This guide only supports Cloud Service Mesh with Istio APIs and does not support Google Cloud APIs. For more information, see [Cloud Service Mesh overview](/service-mesh/docs/overview).

This page explains how to uninstall Cloud Service Mesh if you are using the Istio APIs. If you are using Compute Engine APIs, no steps are necessary. See the [Cloud Service Mesh overview](/service-mesh/docs/overview) to understand the differences.

Following these instructions to uninstall Cloud Service Mesh removes all configurations regardless of control plane type (in-cluster or managed). If you are migrating from in-cluster to managed, follow the [Migration guide](/service-mesh/docs/tutorials/migrate-in-cluster-to-managed-on-new-cluster) instead.

Uninstall Cloud Service Mesh
----------------------------

Use the following commands to uninstall all Cloud Service Mesh components. These commands also delete the `istio-system` namespace and all custom resource definitions (CRDs), including any CRDs that you applied.

| **Warning:** If you created CRDs, make sure you have copies of them before proceeding through the following steps.

1. To prevent interruption of application traffic:

   - Downgrade any `STRICT` mTLS policies to `PERMISSIVE`.
   - Remove any `AuthorizationPolicy` resources that may block traffic.
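As a minimal sketch of the downgrade (not taken from this guide), the following applies a `PERMISSIVE` mesh-wide `PeerAuthentication`. It assumes your strict policy is the conventional `default` resource in `istio-system`; adjust the name and namespace to match the policies actually present in your mesh.

```shell
# Assumption: the STRICT policy is the mesh-wide "default" PeerAuthentication.
# List your real policies first with: kubectl get peerauthentication -A
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: PERMISSIVE
EOF
```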
2. Disable automatic management on this cluster (whether you applied it directly or by using the fleet-default configuration):

       gcloud container fleet mesh update \
           --management manual \
           --memberships MEMBERSHIP_NAME \
           --project FLEET_PROJECT_ID \
           --location MEMBERSHIP_LOCATION

   Replace the following:

   - `MEMBERSHIP_NAME`: the membership name listed when you verified that your cluster was registered to the fleet.
   - `FLEET_PROJECT_ID`: the ID of your fleet host project.
   - `MEMBERSHIP_LOCATION`: the location of your membership (either a region, or `global`).

3. Disable sidecar auto-injection on your namespaces, if it is enabled. Run the following command to display namespace labels:

       kubectl get namespace YOUR_NAMESPACE --show-labels

   The output is similar to the following:

       NAME   STATUS   AGE     LABELS
       demo   Active   4d17h   istio.io/rev=asm-181-5

   If you see `istio.io/rev=` in the output under the `LABELS` column, remove it:

       kubectl label namespace YOUR_NAMESPACE istio.io/rev-

   If you see `istio-injection` in the output under the `LABELS` column, remove it:

       kubectl label namespace YOUR_NAMESPACE istio-injection-

   If you don't see either the `istio.io/rev` or `istio-injection` label, auto-injection wasn't enabled on the namespace.

4. Restart the workloads that have injected sidecars to remove the proxies.
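If you have many namespaces, checking labels one namespace at a time is tedious. The following helper (our own sketch; the `filter_injection_ns` name is not part of any product) filters `kubectl get namespaces --show-labels` output down to namespaces that still carry an injection label:

```shell
# filter_injection_ns: read `kubectl get ns --show-labels` output on stdin and
# print the names of namespaces whose LABELS column contains either
# istio.io/rev=... or istio-injection=...
filter_injection_ns() {
  awk 'NR > 1 && ($NF ~ /istio\.io\/rev=/ || $NF ~ /istio-injection=/) {print $1}'
}

# Live usage (requires cluster access):
# kubectl get namespaces --show-labels | filter_injection_ns
```

Any namespace it prints still needs the label-removal commands from the previous step.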
5. If you're using managed Cloud Service Mesh, check which [control plane implementation](/service-mesh/docs/supported-features-managed#identify_control_plane_implementation) you have in your cluster. This helps you delete the relevant resources in later steps.

6. If you're using managed Cloud Service Mesh, remove all `controlplanerevision` resources in the cluster:

       kubectl delete controlplanerevision asm-managed asm-managed-rapid asm-managed-stable -n istio-system --ignore-not-found=true

7. Delete webhooks from your cluster, if they exist.

   ### In-cluster Cloud Service Mesh

   Delete the `validatingwebhookconfiguration` and `mutatingwebhookconfiguration` resources:

       kubectl delete validatingwebhookconfiguration,mutatingwebhookconfiguration -l operator.istio.io/component=Pilot,istio.io/owned-by!=mesh.googleapis.com

   ### Managed Cloud Service Mesh

   A. Delete the `validatingwebhookconfiguration`:

       kubectl delete validatingwebhookconfiguration istiod-istio-system-mcp

   B. Delete all `mutatingwebhookconfiguration` resources.

   **Note:** Use the `RELEASE_CHANNEL` that matches the [release channel](/service-mesh/docs/managed/release-channels) you provisioned: Regular (`istiod-asm-managed`), Rapid (`istiod-asm-managed-rapid`), or Stable (`istiod-asm-managed-stable`).

       kubectl delete mutatingwebhookconfiguration istiod-RELEASE_CHANNEL

8. Once all workloads have come up and no proxies are observed, you can safely delete the in-cluster control plane to stop billing.

   To remove the in-cluster control plane, run the following command:

       istioctl uninstall --purge

   If there are no other control planes, you can delete the `istio-system` namespace to remove all Cloud Service Mesh resources. Otherwise, delete only the services corresponding to the Cloud Service Mesh revisions. This avoids deleting shared resources, such as CRDs.
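Before running `istioctl uninstall --purge`, you may want to confirm that no pod still runs an `istio-proxy` sidecar. A minimal sketch (the `pods_with_sidecars` helper name is our own, not part of `kubectl` or `istioctl`):

```shell
# pods_with_sidecars: read lines of "namespace/pod container1 container2 ..."
# on stdin and print the pods that still include an istio-proxy container.
pods_with_sidecars() {
  awk '{for (i = 2; i <= NF; i++) if ($i == "istio-proxy") {print $1; next}}'
}

# Live usage (requires cluster access): emit "namespace/pod containers..."
# lines with a jsonpath template, then filter them:
# kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{" "}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}' \
#   | pods_with_sidecars
```

If the helper prints nothing, no injected proxies remain and the control plane can be removed.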
9. Delete the `istio-system` and `asm-system` namespaces:

       kubectl delete namespace istio-system asm-system --ignore-not-found=true

10. Check that the deletions were successful:

        kubectl get ns

    The output should show a `Terminating` status for both namespaces, as follows. Otherwise, you might have to manually delete any remaining resources in the namespaces and try again.

        NAME           STATUS        AGE
        istio-system   Terminating   71m
        asm-system     Terminating   71m

11. If you will delete your clusters, or have already deleted them, ensure that each cluster is [unregistered](/anthos/fleet-management/docs/unregister) from your fleet.

12. If you enabled the managed Cloud Service Mesh fleet-default configuration and want to disable it for future clusters, disable it. You can skip this step if you're only uninstalling from a single cluster.

        gcloud container hub mesh disable --fleet-default-member-config --project FLEET_PROJECT_ID

    Where `FLEET_PROJECT_ID` is the ID of your fleet host project.

13. If you plan to stop using Cloud Service Mesh at the fleet level, disable the service mesh feature for your fleet host project:

        gcloud container hub mesh disable --project FLEET_PROJECT_ID

    Where `FLEET_PROJECT_ID` is the ID of your fleet host project.

14. If you enabled managed Cloud Service Mesh, check for managed resources and delete them if they are present:

    | **Note:** If your cluster is an Autopilot cluster, you cannot delete resources from the `kube-system` namespace. If you're uninstalling managed Cloud Service Mesh and you're keeping your cluster, [contact Support](/service-mesh/docs/getting-support) to ensure that these resources are deleted.

    1. Delete the `mdp-controller` deployment:

           kubectl delete deployment mdp-controller -n kube-system
    2. If you have the `TRAFFIC_DIRECTOR` control plane implementation, clean up Transparent Health Check resources. Normally these are removed automatically, but you can make sure they are cleaned up by doing the following:

       1. Delete the `snk` daemonset:

              kubectl delete daemonset snk -n kube-system

       2. Delete the firewall rule:

              gcloud compute firewall-rules delete gke-csm-thc-FIRST_8_CHARS_OF_CLUSTER_ID

          Where `FIRST_8_CHARS_OF_CLUSTER_ID` is the first 8 characters of the cluster ID for your specific cluster.

       3. Check whether the `istio-cni-plugin-config` configmap is present:

              kubectl get configmap istio-cni-plugin-config -n kube-system

          If it is present, delete it:

              kubectl delete configmap istio-cni-plugin-config -n kube-system

          | **Warning:** The network reliability of your GKE cluster may be impacted if you don't delete this configmap.

       4. Delete the `istio-cni-node` daemonset:

              kubectl delete daemonset istio-cni-node -n kube-system

15. If you're uninstalling managed Cloud Service Mesh, [contact Support](/service-mesh/docs/getting-support) to ensure that all Google Cloud resources are cleaned up. If you skip this step, the `istio-system` namespace and configmaps may continue to be recreated.

Upon completion of these steps, all Cloud Service Mesh components, including proxies, in-cluster certificate authorities, and RBAC roles and bindings, are removed from the cluster. During the installation process, a Google-owned service account is granted the necessary permissions to establish the service mesh resources within the cluster. These uninstall instructions don't revoke those permissions, allowing for a seamless re-activation of Cloud Service Mesh in the future.
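As a final sanity check (our own sketch, not a step from this guide), you can confirm that the Istio CRDs were actually removed by filtering the cluster's API resource names for Istio-owned groups:

```shell
# leftover_istio_kinds: read resource names (one per line, as produced by
# `kubectl api-resources -o name`) and print those in an *.istio.io API group.
leftover_istio_kinds() {
  awk '/\.istio\.io$/'
}

# Live usage (requires cluster access):
# kubectl api-resources -o name | leftover_istio_kinds
```

If the helper prints nothing, no Istio CRDs survived the uninstall; any names it does print still need to be deleted.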
| **Note:** Some resources (for example, Network Endpoint Groups, or NEGs) will be orphaned after the removal of these components. To err on the side of caution, Cloud Service Mesh must *fail open* during cleanup. This ensures that uninstalling managed Cloud Service Mesh never has a negative impact on other networking resources in your project or on your workloads. To have these orphaned resources removed for hygiene or "house cleaning" purposes, contact [Google Cloud Support](/service-mesh/docs/getting-support).