This page lists known issues for GKE. It is intended for admins and architects who manage the lifecycle of the underlying technology infrastructure and who respond to alerts and pages when service-level objectives (SLOs) aren't met or applications fail.
Category | Identified version(s) | Fixed version(s) | Issue and workaround |
---|---|---|---|
Operation | 1.28, 1.29 | | **Image streaming fails because of symbolic links**: A bug in the Image streaming feature might cause containers to fail to start. Containers running on a node with Image streaming enabled on specific GKE versions might fail to be created with the following error: "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to mount [PATH]: too many levels of symbolic links". If you are impacted by this issue, check the affected image for empty or duplicate layers. If you can't remove the empty or duplicate layers, disable Image streaming (a short sketch of both steps follows the table). |
Operation | 1.27, 1.28, 1.29 | | **Image streaming fails because of missing files**: A bug in the Image streaming feature might cause containers to fail because one or more files are missing. Containers running on a node with Image streaming enabled on the affected versions might fail to start, or might run with errors indicating that certain files don't exist. If you are impacted by this issue, you can disable Image streaming. |
Networking, Upgrades and updates | 1.28 | | **Gateway TLS configuration error**: We've identified an issue with configuring TLS for Gateways in clusters running GKE version 1.28.4-gke.1083000. This affects TLS configurations that use either an SSLCertificate or a CertificateMap. If you're upgrading a cluster with existing Gateways, updates made to the Gateway will fail. For new Gateways, the load balancers won't be provisioned. This issue will be fixed in an upcoming GKE 1.28 patch version. |
Upgrades and updates | 1.27 | 1.27.8 or later | **GPU device plugin issue**: Clusters that run GPUs and are upgraded from 1.26 to a 1.27 patch version earlier than 1.27.8 might experience issues with their nodes' GPU device plugins. A diagnostic sketch for checking device plugin health follows the table. |
Networking | 1.27, 1.28, 1.29 | | **Intermittent connection establishment failures**: Clusters on control plane versions 1.26.6-gke.1900 and later might encounter intermittent connection establishment failures. The chance of failure is low, and not all clusters are affected. The failures should stop completely a few days after the symptoms first appear. |
Operation | 1.27, 1.28 | | **Autoscaling for all workloads stops**: HorizontalPodAutoscaler (HPA) and VerticalPodAutoscaler (VPA) might stop autoscaling all workloads in a cluster if the cluster contains misconfigured autoscaling objects. Workaround: correct the misconfigured objects (a short inspection sketch follows the table). For more details on how to configure these objects, see the GKE autoscaling documentation. |
Networking | 1.27, 1.28, 1.29 | | **DNS resolution issues with Container-Optimized OS**: Workloads running on GKE clusters with Container-Optimized OS-based nodes might experience DNS resolution issues (a basic DNS check is sketched after the table). |
Operation | 1.28, 1.29 | | **Container Threat Detection fails to deploy**: Container Threat Detection might fail to deploy on Autopilot clusters running the affected GKE versions. |
Networking | 1.28 | 1.28.3-gke.1090000 or later | **Network Policy drops a connection due to incorrect connection tracking lookup**: For clusters with GKE Dataplane V2 enabled, when a client Pod connects to itself using a Service or the virtual IP address of an internal passthrough Network Load Balancer, the reply packet is not identified as part of an existing connection because of an incorrect conntrack lookup in the dataplane. This means that a Network Policy that restricts ingress traffic for the Pod is incorrectly enforced on the packet. The impact depends on the number of configured Pods for the Service: for example, if the Service has one backend Pod, the connection always fails; if the Service has two backend Pods, the connection fails 50% of the time (a reproduction sketch follows the table). Workaround: the issue can be mitigated through configuration; upgrading to a fixed version (1.28.3-gke.1090000 or later) resolves it. |
Networking | 1.27, 1.28 | | **Packet drops for hairpin connection flows**: For clusters with GKE Dataplane V2 enabled, when a Pod creates a TCP connection to itself using a Service, such that the Pod is both the source and destination of the connection, GKE Dataplane V2 eBPF connection tracking incorrectly tracks the connection states, leading to leaked conntrack entries. When a connection tuple (protocol, source/destination IP, and source/destination port) has been leaked, new connections using the same connection tuple might result in return packets being dropped. Workaround: several workarounds are available for this issue. |
Networking | Earlier than 1.31.0-gke.1506000 | 1.31.0-gke.1506000 and later | **Device-typed network in GKE multi-network fails with long network names**: Cluster creation fails with an error. Workaround: limit the length of device-typed network object names to 41 characters or less. The full path of each UNIX domain socket is composed using the corresponding network name, and Linux limits socket path lengths to fewer than 107 bytes. After accounting for the directory, the filename prefix, and the socket file extension, the network name is limited to 41 characters (a quick length check is sketched after the table). |
Networking, Upgrades | 1.27, 1.28, 1.29, 1.30 | | Connectivity issues for |
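
For the Image streaming issues above, the workaround involves checking the affected image for empty or duplicate layers and, if they can't be removed, disabling Image streaming on the node pool. The following is a minimal sketch of both steps; the image, node pool, cluster, and location names are placeholders, and the `--no-enable-image-streaming` flag is assumed from the documented enable flag.

```sh
# Inspect the image's layer history locally; 0B entries indicate empty layers
# (placeholder image name; assumes Docker is installed and authenticated).
docker history --no-trunc us-docker.pkg.dev/PROJECT_ID/my-repo/my-app:latest

# Disable Image streaming on the affected node pool (placeholder names;
# the flag is assumed from the standard gcloud Image streaming options).
gcloud container node-pools update my-node-pool \
    --cluster=my-cluster \
    --location=us-central1 \
    --no-enable-image-streaming
```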
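For the GPU device plugin issue, a quick way to check whether a node's device plugin is healthy is to look at the device plugin Pods in `kube-system` and confirm that the node still advertises allocatable GPUs. This is only a diagnostic sketch; the `grep` patterns and the NODE_NAME placeholder are assumptions.

```sh
# Look for GPU device plugin Pods in kube-system (the exact DaemonSet
# name can vary; matching on "gpu" is an assumption).
kubectl get pods -n kube-system -o wide | grep -i gpu

# Confirm the node still reports allocatable GPUs (NODE_NAME is a placeholder).
kubectl describe node NODE_NAME | grep "nvidia.com/gpu"
```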
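For the autoscaling issue, the workaround is to find and correct the misconfigured objects. A minimal inspection sketch, assuming kubectl access and that vertical Pod autoscaling is enabled in the cluster (otherwise the VPA resource type won't exist):

```sh
# List all HPA and VPA objects across namespaces to spot misconfigured entries.
kubectl get horizontalpodautoscalers --all-namespaces
kubectl get verticalpodautoscalers --all-namespaces

# Inspect a suspect object's spec, conditions, and recent events
# (NAME and NAMESPACE are placeholders).
kubectl describe hpa NAME -n NAMESPACE
kubectl describe vpa NAME -n NAMESPACE
```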
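For the Container-Optimized OS DNS issue, a simple way to confirm whether workloads are hitting DNS resolution failures is to run a short-lived test Pod. A minimal sketch; the busybox image tag and the names being resolved are arbitrary choices:

```sh
# Resolve an in-cluster name from a throwaway Pod.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
    nslookup kubernetes.default.svc.cluster.local

# Resolve an external name as well, to see whether only external lookups fail.
kubectl run dns-test-ext --rm -it --restart=Never --image=busybox:1.36 -- \
    nslookup example.com
```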
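For the Dataplane V2 connection tracking issues, the failure mode can be observed by having a backend Pod call its own Service repeatedly: with N backend Pods, roughly 1/N of the attempts hairpin back to the caller and may fail on affected versions. A reproduction sketch, assuming the container image includes wget; POD_NAME, NAMESPACE, and my-service are placeholders:

```sh
# From a backend Pod, call the Pod's own Service ten times and report each result.
for i in $(seq 1 10); do
  if kubectl exec -n NAMESPACE POD_NAME -- \
      wget -q -T 3 -O /dev/null http://my-service.NAMESPACE.svc.cluster.local; then
    echo "attempt $i: ok"
  else
    echo "attempt $i: failed"
  fi
done
```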
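For the device-typed network naming issue, the workaround is simply to keep network object names at 41 characters or fewer. A quick length check, assuming the GKE multi-networking Network CRD (`networks.networking.gke.io`) is installed; the example name is a placeholder:

```sh
# Check the length of a planned network name before creating the object.
printf '%s' "my-device-network" | wc -c

# Print the length of each existing multi-network object name.
kubectl get networks.networking.gke.io -o custom-columns=NAME:.metadata.name --no-headers \
  | awk '{print length($0), $0}'
```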