This page lists known issues for supported versions of Config Sync.
Many of the issues listed here have been fixed. The Fixed version column indicates the version in which the fix was introduced. To receive this fix, upgrade to the listed version or later.
Category | Identified version | Fixed version | Issue and workaround |
---|---|---|---|
Component health | 1.15.0 | | **Reconciler unschedulable.** Config Sync reconcilers require varying amounts of resources, depending on the configuration of the RootSync or RepoSync. Certain configurations require more resources than others. If a reconciler is unschedulable, it might be requesting more resources than are available on your nodes. On GKE Standard clusters, the reconciler resource requests are set very low. This setting was chosen to allow scheduling, even if it leads to throttling and slow performance, so that Config Sync works on small clusters and small nodes. On GKE Autopilot clusters, the reconciler requests are set higher to more realistically represent usage while syncing. **Workaround**: With GKE Autopilot, or GKE Standard with node auto-provisioning enabled, the cluster can see how many resources are requested and create appropriately sized nodes to allow scheduling. However, if you manually configure the nodes or node instance sizes, you might need to adjust those settings to accommodate the reconciler Pod resource requirements (a hedged RootSync override sketch follows this table). |
Metrics | 1.15.0 | 1.17.2 | **Fixed: Exporting failed: Unrecognized metric labels.** In version 1.15.0, Config Sync added … |
Metrics | 1.15.0 | | **Exporting failed. Permission denied.** By default, when the reconciler-manager detects Application Default Credentials, the otel-collector is configured to export metrics to Prometheus, Cloud Monitoring, and Monarch. **Workaround**: … |
Metrics | 1.15.0 | | **otel-collector crashing with custom config.** If you try to modify or delete one of the default ConfigMaps, … **Workaround**: To customize the metrics export configuration, create a ConfigMap named … |
nomos cli | 1.15.0 | 1.17.2 | **Fixed:** … |
Remediation | | | **Config Sync fighting with itself.** Config Sync might appear to be in a controller fight with itself. This issue occurs if you set the default value for an optional field of a resource in the Git repository. For example, setting … **Workaround**: Remove the field from the resource declaration (a hedged Deployment example follows this table). |
Remediation | | | **Config Sync fighting with Config Connector resources.** Config Sync might appear to be fighting with Config Connector over a resource, for example a StorageBucket. This issue occurs if you don't set the value of an optional field of a resource … **Workaround**: You can avoid this issue by adding the … (a hedged StorageBucket sketch follows this table). |
Source of truth | 1.17.3 | 1.18.3 | **Fixed: Git SSH Authentication Failure with GitHub.** The error message from git is: … **Workaround**: Use a different authentication method (a hedged token-based RootSync sketch follows this table). |
Source of truth | 1.15.0 | 1.18.0 | **Fixed: Periodically invalid authentication credentials for Cloud Source Repositories.** Config Sync can error periodically when the authentication token expires for Cloud Source Repositories. This issue is caused by the token refresh waiting until expiration before refreshing the token. In version 1.18.0 and later, the token is refreshed on the first request within five minutes of token expiration. This prevents the invalid authentication credentials error unless the credentials are actually invalid. |
Source of truth | 1.15.0 | 1.17.0 | **Fixed: Error syncing repository: context deadline exceeded.** In versions earlier than 1.17.0, Config Sync checked out the full Git repository history by default, which could cause the fetch request to time out on large repositories with many commits. In version 1.17.0 and later, the Git fetch is performed with … If you're still experiencing this issue after upgrading, it's likely that your source of truth has many files, your Git server is responding slowly, or there is some other networking problem. |
Source of truth | 1.13.0 | | **Unable to generate access token for OCI source.** When Config Sync is configured to use OCI as the source of truth and authenticate with Workload Identity Federation for GKE, Config Sync might occasionally encounter temporary … This issue is caused by the oauth2 library only refreshing the auth token after the token has already expired. The error message might include the following text: … **Workaround**: The error should resolve itself the next time Config Sync attempts to fetch from the source of truth. When Config Sync has errored multiple times, retries become less frequent. To force Config Sync to retry sooner, delete the reconciler Pod, which causes Config Sync to recreate the reconciler Pod and immediately fetch from the source of truth: `kubectl delete pod -n config-management-system RECONCILER_NAME`. Replace `RECONCILER_NAME` with the reconciler name of the RootSync or RepoSync object. |
Source of truth | 1.19.0 | 1.20.0 | **Fixed: Lingering Git lock file.** If you see an error similar to the following from the …: `KNV2004: error in the git-sync container: ... fatal: Unable to create '/repo/source/.git/shallow.lock': File exists. ...` **Workaround**: Restart the impacted reconciler Pod to give it a new ephemeral volume: `kubectl delete pod -n config-management-system RECONCILER_NAME`. Replace `RECONCILER_NAME` with the reconciler name of the RootSync or RepoSync object. |
Syncing | 1.15.0 | | **High number of ineffective …** |
Syncing | 1.17.0 | 1.17.3 | **Fixed: Config Sync fails to pull the latest commit from a branch.** In Config Sync versions 1.17.0, 1.17.1, and 1.17.2, you might encounter a problem where Config Sync fails to pull the latest commit from the HEAD of a specific branch when the same branch is referenced in multiple remotes and they are out of sync. For example, the … The following example shows what this issue might look like: `git ls-remote -q [GIT_REPOSITORY_URL] main main^{}` returns `244999b795d4a7890f237ef3c8035d68ad56515d refs/heads/main` (the latest commit) and `be2c0aec052e300028d9c6d919787624290505b6 refs/remotes/upstream/main` (the commit Config Sync pulls from). In version 1.17.3 and later, the git-sync dependency was updated with a different fetch mechanism. If you can't upgrade, you can set your Git revision (…); a hedged revision-pinning sketch follows this table. |
Private registry | 1.19.0 | | **Config Sync doesn't use private registry for reconciler Deployments.** Config Sync should replace the images for all Deployments when a private registry is configured. However, Config Sync does not replace the image registry for images in the reconciler Deployments. **Workaround**: Configure the image registry mirror in containerd. |
Syncing | 1.17.0 | 1.18.3 | **Fixed: Config Sync reconciler is crashlooping.** In Config Sync versions 1.17.0 and later, you might encounter a problem where the reconciler fails to create a rest config on some Kubernetes providers. The following example shows what this issue might look like in the reconciler logs: `Error creating rest config: failed to build rest config: reading local kubeconfig: loading REST config from "/.kube/config": stat /.kube/config: no such file or directory` |
Terraform | Terraform version 5.41.0 | | **Config Sync can't be installed or upgraded using Terraform.** Terraform version 5.41.0 introduced a new field to the … **Workaround**: … |
Google Cloud console | | | **Config Sync dashboard missing data errors in the Google Cloud console.** You might see errors such as "missing data" or "invalid cluster credentials" for Config Sync clusters on dashboards in the Google Cloud console. This issue can occur when you're not logged in to your GDC (VMware) or GDC (bare metal) clusters. **Workaround**: If you see these types of errors in the Google Cloud console for your GDC (VMware) or GDC (bare metal) clusters, ensure that you're logged in to your clusters with GKE Identity Service or the connect gateway. |
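The following sketches expand on some of the workarounds in the preceding table. They are illustrative only: any names, repository URLs, and values that don't appear in the table are placeholders or assumptions to verify against your Config Sync version.

For the reconciler unschedulable issue, if you would rather adjust the reconciler's resource requests than resize nodes, a minimal RootSync sketch follows. It assumes that your Config Sync version supports the `spec.override.resources` field; verify the field and container names against your version's RootSync reference.

```yaml
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example/config-repo   # placeholder repository
    branch: main
    auth: none
  override:
    resources:
    # Assumption: spec.override.resources is supported by your version.
    - containerName: reconciler   # the reconciler container in the reconciler Deployment
      cpuRequest: "500m"          # example values; size these for your nodes
      memoryRequest: "512Mi"
```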
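For the Config Sync fighting with itself issue, the example in the table is truncated. The following hypothetical Deployment (not the original example) illustrates the pattern: `revisionHistoryLimit` defaults to 10, so declaring the default explicitly is the kind of optional-field declaration the workaround says to remove. Whether a given field actually triggers a fight depends on the controller that owns it.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app   # hypothetical resource for illustration
spec:
  revisionHistoryLimit: 10   # explicit default value of an optional field; remove per the workaround
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0   # placeholder image
```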
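For the Config Sync fighting with Config Connector issue, the annotation named by the workaround is truncated in the table. The sketch below uses the Config Connector annotation `cnrm.cloud.google.com/state-into-spec: absent`, which prevents Config Connector from writing observed state back into the resource's `spec`; treat the choice of annotation as an assumption and confirm it against the Config Connector documentation.

```yaml
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: example-bucket   # placeholder bucket name
  annotations:
    # Assumption: this is the annotation the truncated workaround refers to.
    cnrm.cloud.google.com/state-into-spec: absent
spec:
  location: US
```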
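For the Git SSH authentication failure with GitHub, one alternative authentication method is a personal access token over HTTPS. A minimal sketch follows; it assumes a Secret holding the token (here named `git-creds`, a placeholder) already exists in the `config-management-system` namespace, and the repository URL is a placeholder.

```yaml
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example/config-repo.git   # HTTPS URL instead of the SSH URL
    branch: main
    auth: token          # token authentication instead of ssh
    secretRef:
      name: git-creds    # placeholder Secret name containing the token
```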
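For the issue where Config Sync fails to pull the latest commit from a branch, the field named by the workaround is truncated in the table. A minimal sketch of pinning the sync target to an exact commit follows; it assumes that `spec.git.revision` accepts a commit SHA in your version. The repository URL is a placeholder and the SHA is the example commit from the table.

```yaml
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example/config-repo.git   # placeholder repository
    # Pin to an exact commit so syncing doesn't depend on branch resolution.
    revision: 244999b795d4a7890f237ef3c8035d68ad56515d
    auth: none
```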
What's next
- If you need additional support, reach out to Cloud Customer Care.