# Prevent config drift

Last updated (UTC): 2025-09-01.

Config Sync reduces the risk of "shadow ops" through automatic self-healing,
periodic re-sync, and optional drift prevention. When Config Sync detects
drift between the cluster and the source of truth, the drift can either be
allowed and quickly reverted or completely rejected.

Self-healing watches managed resources, detects drift from the source of truth,
and reverts that drift. Self-healing is always enabled.

Periodic re-sync automatically syncs an hour after the last successful sync,
even if no change has been made to the source of truth. Periodic re-sync is
always enabled.

While self-healing and periodic re-sync help remediate drift, drift prevention
intercepts requests to change managed objects and validates whether the change
should be allowed. If the change doesn't match the source of truth, the change
is rejected. Drift prevention is disabled by default. When enabled, drift
prevention protects `RootSync` objects by default, and it can also be configured
to protect `RepoSync` objects.

To use drift prevention, you must enable the
[`RootSync` and `RepoSync` APIs](/kubernetes-engine/enterprise/config-sync/docs/reference/rootsync-reposync-fields).

| **Note:** All of these features use [server-side apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/), which means they only prevent or revert changes to fields explicitly specified in the source of truth. Unspecified fields are allowed to drift.
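For example, consider a Deployment declared in the source of truth. This manifest is purely illustrative (the names and image are hypothetical), but it shows which fields drift prevention and self-healing act on:

```yaml
# Hypothetical manifest in the source of truth. A manual change to
# spec.replicas is rejected or reverted, because that field is declared
# here. A field that is NOT declared here (for example, an annotation
# added by another controller at runtime) is left alone and may drift.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app        # illustrative name
  namespace: default
spec:
  replicas: 3            # declared, so protected from drift
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello
        image: us-docker.pkg.dev/example/hello-app:1.0  # illustrative image
```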
| Allowing unspecified fields to drift is required because many Kubernetes controllers update `metadata` and `spec` fields at runtime, not just `status` fields.

Enable drift prevention
-----------------------

1. If you used the Google Cloud console or the gcloud CLI to install Config
   Sync, enable drift prevention by using the gcloud CLI.
   Make sure to [update your gcloud CLI](/sdk/gcloud/reference/components/update)
   to the latest version.
   Set the `spec.configSync.preventDrift` field of the gcloud
   config file to `true`, and then apply the gcloud config file.

2. Wait until the Config Sync `ValidatingWebhookConfiguration` object is created
   by the ConfigManagement Operator:

       kubectl get validatingwebhookconfiguration admission-webhook.configsync.gke.io

   You should see output similar to the following example:

       NAME                                  WEBHOOKS   AGE
       admission-webhook.configsync.gke.io   0          2m15s

3. Commit a new change to the source of truth so that the `root-reconciler`
   Deployment adds webhooks to the Config Sync
   `ValidatingWebhookConfiguration` object. Alternatively, delete the
   `root-reconciler` Deployment to trigger a reconciliation; the new
   `root-reconciler` Deployment then updates the Config Sync
   `ValidatingWebhookConfiguration` object.

4. Wait until the webhook server is ready. The Config Sync admission webhook
   Deployment log should include `serving webhook server`.
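If you prefer to script this wait rather than checking the log manually, one possible sketch follows. The `wait_for_log` helper is hypothetical (not part of Config Sync); with Config Sync you would pass it the `kubectl logs` command shown in this step:

```shell
# wait_for_log: hypothetical helper that polls a command until its output
# contains a marker string, or gives up after a number of one-second tries.
wait_for_log() {
  cmd=$1; marker=$2; tries=${3:-300}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # Re-run the command each time and look for the marker in its output.
    if eval "$cmd" 2>/dev/null | grep -q "$marker"; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for: $marker" >&2
  return 1
}

# Example invocation against the admission webhook (assumes kubectl access):
# wait_for_log 'kubectl logs -n config-management-system -l app=admission-webhook --tail=-1' 'serving webhook server'
```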
This can take several minutes.

       kubectl logs -n config-management-system -l app=admission-webhook --tail=-1 | grep "serving webhook server"

   You should see output similar to the following example:

       I1201 18:05:41.805531       1 deleg.go:130] controller-runtime/webhook "level"=0 "msg"="serving webhook server" "host"="" "port"=10250
       I1201 18:07:04.626199       1 deleg.go:130] controller-runtime/webhook "level"=0 "msg"="serving webhook server" "host"="" "port"=10250

Disable drift prevention
------------------------

If you installed Config Sync by using the Google Cloud console or the gcloud CLI,
disable drift prevention by using the gcloud CLI.
Make sure to [update your gcloud CLI](/sdk/gcloud/reference/components/update)
to the latest version.
Set the `spec.configSync.preventDrift` field of the gcloud
config file to `false`, or remove the field, and then apply the
gcloud config file.

This deletes all of the Config Sync admission webhook resources.
Because the Config Sync `ValidatingWebhookConfiguration` object no longer exists,
the Config Sync reconcilers no longer generate the webhook configs for
managed resources.

Enable the admission webhook in namespace-scoped sources
--------------------------------------------------------

Namespace-scoped sources of truth are not fully protected by the webhook. The
Config Sync reconciler for each namespace-scoped source does not have permission
to read or update `ValidatingWebhookConfiguration` objects at the cluster level.

This lack of permission results in an error in the namespace reconciler logs
similar to the following example:

    Failed to update admission webhook: KNV2013: applying changes to
    admission webhook: Insufficient permission.
To fix, make sure the reconciler has
    sufficient permissions.:
    validatingwebhookconfigurations.admissionregistration.k8s.io
    "admission-webhook.configsync.gke.io" is forbidden: User
    "system:serviceaccount:config-management-system:ns-reconciler-NAMESPACE"
    cannot update resource "validatingwebhookconfigurations" in API group
    "admissionregistration.k8s.io" at the cluster scope

You can ignore this error if you don't want to use webhook protection for your
namespace-scoped source of truth. However, if you want to use the webhook, grant
permission to the reconciler for each namespace-scoped source of truth after you
have [configured syncing from more than one source of truth](/kubernetes-engine/enterprise/config-sync/docs/how-to/multiple-repositories).
You might not need to perform these steps if a RoleBinding for
`ns-reconciler-NAMESPACE` already exists with ClusterRole `cluster-admin`
permissions.

| **Caution:** Granting a namespace-scoped source of truth permission to read or update `ValidatingWebhookConfiguration` objects means that the owners of that source can change the Config Sync admission webhook configs in the cluster.

1. In the root source of truth, declare a new ClusterRole configuration that
   grants permission to the Config Sync admission webhook. This ClusterRole
   only needs to be defined once per cluster:

       # ROOT_SOURCE/cluster-roles/webhook-role.yaml
       apiVersion: rbac.authorization.k8s.io/v1
       kind: ClusterRole
       metadata:
         name: admission-webhook-role
       rules:
       - apiGroups: ["admissionregistration.k8s.io"]
         resources: ["validatingwebhookconfigurations"]
         resourceNames: ["admission-webhook.configsync.gke.io"]
         verbs: ["get", "update"]

2.
   For each namespace-scoped source where the admission webhook permission
   needs to be granted, declare a ClusterRoleBinding configuration to grant
   access to the admission webhook:

       # ROOT_SOURCE/NAMESPACE/sync-webhook-rolebinding.yaml
       kind: ClusterRoleBinding
       apiVersion: rbac.authorization.k8s.io/v1
       metadata:
         name: syncs-webhook
       subjects:
       - kind: ServiceAccount
         name: ns-reconciler-NAMESPACE
         namespace: config-management-system
       roleRef:
         kind: ClusterRole
         name: admission-webhook-role
         apiGroup: rbac.authorization.k8s.io

   Replace NAMESPACE with the namespace that you created your namespace-scoped
   source in.

3. Commit the changes to the root source of truth. For example, if you're
   syncing from a Git repository:

       git add .
       git commit -m 'Grant the namespace repository permission to update the admission webhook.'
       git push

4. To verify, use `kubectl get` to make sure that the ClusterRole and
   ClusterRoleBinding were created:

       kubectl get clusterrole admission-webhook-role
       kubectl get clusterrolebindings syncs-webhook

Disable drift prevention for abandoned resources
------------------------------------------------

| **Note:** Starting in version 1.21.0, Config Sync drift prevention doesn't reject changes to abandoned resources. This section only applies to versions earlier than 1.21.0.

When you delete a `RootSync` or `RepoSync` object, by default Config Sync
doesn't modify the resources previously managed by that object. This can leave
behind several [labels and annotations](/kubernetes-engine/enterprise/config-sync/docs/reference/labels-and-annotations)
that Config Sync uses to track these resource objects.
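As a sketch, the unmanage step described in this section looks like the following when applied to a hypothetical Deployment declared in the source of truth (the object names are illustrative; the annotation is the one this section describes):

```yaml
# Hypothetical example: unmanage a Deployment before deleting its RootSync
# or RepoSync object. Setting this annotation to "disabled" tells Config
# Sync to remove its tracking labels and annotations without deleting the
# object itself.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # illustrative name
  namespace: my-namespace   # illustrative namespace
  annotations:
    configmanagement.gke.io/managed: disabled
```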
If drift prevention is enabled, these leftover labels and annotations can cause
any changes to the previously managed resources to be rejected.

If you didn't use [deletion propagation](/kubernetes-engine/enterprise/config-sync/docs/how-to/managing-objects#bulk_delete_objects),
the resource objects left behind might still retain the labels and annotations
added by Config Sync.

If you want to keep these managed resources, unmanage them before deleting the
`RootSync` or `RepoSync` object by setting the
`configmanagement.gke.io/managed` annotation to `disabled` on every managed
resource declared in the source of truth. This tells Config Sync to remove
its labels and annotations from the managed resources without deleting the
resources. After the sync is complete, you can remove the `RootSync` or
`RepoSync` object.

If you want to delete these managed resources, you have two options:

- Delete the managed resources from the source of truth. Config Sync then deletes the managed objects from the cluster. After the sync is complete, you can remove the `RootSync` or `RepoSync` object.
- Enable deletion propagation on the `RootSync` or `RepoSync` object before deleting it. Config Sync then deletes the managed objects from the cluster.

If the `RootSync` or `RepoSync` object is deleted before its managed resources
are unmanaged or deleted, you can recreate the object, and it adopts the
resources on the cluster that match the source of truth. You can then unmanage
or delete the resources, wait for the changes to sync, and delete the
`RootSync` or `RepoSync` object again.

What's next
-----------

- Learn how to [troubleshoot the webhook](/kubernetes-engine/enterprise/config-sync/docs/troubleshooting/webhook).