Google Distributed Cloud air-gapped 1.9.3 release notes

April 28, 2023 [GDC 1.9.3]


Google Distributed Cloud air-gapped 1.9.3 is now released.

See the product overview to learn about the features of Google Distributed Cloud air-gapped.


The Google Distributed Cloud air-gapped 1.9.3 audit logging (AL) operable component introduces an enhancement for AuditLoggingTargets. Kubernetes objects created by an AuditLoggingTarget CR are now self-healed: if they are modified or deleted, they are automatically restored to their declared state.
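The self-healing behavior follows the standard Kubernetes reconcile pattern. The sketch below is illustrative only; the object names, fields, and logic are invented for this example and are not the actual AuditLoggingTarget controller implementation:

```python
# Minimal sketch of the self-healing (reconcile) pattern, assuming a
# hypothetical desired-state map; not the product's actual controller code.

desired = {"audit-sink": {"endpoint": "https://logs.internal:8443"}}

def reconcile(actual):
    """Restore any managed object that was deleted or drifted from its spec."""
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actual[name] = dict(spec)  # re-create the object or revert the edit
    return actual

cluster = {}                 # managed object was deleted
reconcile(cluster)           # object is re-created from the desired spec

cluster["audit-sink"]["endpoint"] = "https://example.invalid"  # manual edit
reconcile(cluster)           # edit is reverted on the next reconcile pass
```

Each reconcile pass compares the live objects against the spec declared by the CR and re-creates or reverts anything that diverged, which is why manual edits and deletions no longer persist.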


Updated Google Distributed Cloud version to 1.14.4-gke.4 to apply the latest security patches and important updates.

See the Google Distributed Cloud release notes for the latest information.


The Google Distributed Cloud air-gapped 1.9.3 user interface (UI) now displays a message that informs you of the maximum number of worker nodes supported by the current control plane setting and pod CIDR option when you create a cluster.



In the Google Distributed Cloud air-gapped 1.9.3 user interface (UI) component, VM creation through the UI now allows disks restored from a snapshot to serve as boot disks.


In the Google Distributed Cloud air-gapped 1.9.3 identity and access management (IAM) component, predefined roles that are enabled for upgrades no longer retain manual changes; the predefined role manifests override any manual changes.


Google Distributed Cloud air-gapped 1.9.3 resolves an issue in the firewall (FW) component where the firewall admin account was locked out while rotating the admin credentials.


Google Distributed Cloud air-gapped 1.9.3 fixes an issue in the logging (LOG) component where operational logs failed to export to Splunk.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where role-based access control (RBAC) and schema settings in the VM manager prevent users from starting VM backup and restore processes.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where the vm-runtime addon gets stuck during the upgrade of the gpu-org-system-cluster from 1.9.1 to 1.9.2 because the kubevm-gpu-driver-daemonset pods are in the CrashLoopBackOff state.


Google Distributed Cloud air-gapped 1.9.3 resolves an internal load balancer (ILB) services issue in the UNET component. Releases 1.9.0 through 1.9.2 contained a bug where ILB services were assigned an external IP address instead of an internal one. As a result, the external load balancer IP pool was depleted more quickly, because ILB services took addresses from that pool. However, the IPs assigned to ILB services were not advertised outside of the org, so those services remained internal to the org. This bug is fixed in 1.9.3, and ILB services are now assigned internal IPs.
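The pool-depletion effect described above can be sketched with a hypothetical IP allocator. The pool ranges, names, and allocation logic below are illustrative only, not the product's actual implementation:

```python
import ipaddress

# Hypothetical address pools; real pool ranges are deployment-specific.
external_pool = list(ipaddress.ip_network("203.0.113.0/29").hosts())  # 6 usable IPs
internal_pool = list(ipaddress.ip_network("10.0.0.0/29").hosts())     # 6 usable IPs

def assign_ip(service_type, buggy=False):
    """Pop an address from the pool matching the service type.

    With buggy=True, ILB (internal) services are incorrectly served from
    the external pool, mirroring the 1.9.0 - 1.9.2 behavior.
    """
    pool = external_pool if (service_type == "external" or buggy) else internal_pool
    if not pool:
        raise RuntimeError(f"{service_type} pool exhausted")
    return pool.pop(0)

# With the bug, every ILB service drains the external pool,
# even though no external service was created.
for _ in range(3):
    assign_ip("internal", buggy=True)
print(len(external_pool))  # → 3
print(len(internal_pool))  # → 6
```

With the fix, the `buggy` branch goes away and internal services draw only from the internal pool, so the external pool is consumed only by genuinely external services.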


Google Distributed Cloud air-gapped 1.9.3 has a known issue where a user cluster does not become ready in time.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where an add-on installation fails.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where an OrganizationUpgrade status does not get updated.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where a user cluster upgrade fails to call webhooks.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where a fleet admin controller gets stuck in a crash loop with the "Fleet admin controller manager stopped: failed to wait for auditloggingtarget caches to sync: timed out waiting for cache to be synced" error in the logs.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where a system cluster does not become ready in time.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where AddOn selector labels cannot be set for the root admin cluster.


Google Distributed Cloud air-gapped 1.9.3 has a known issue in the UI that lets you select an incompatible combination of GPU and VM type.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where VMs with memory greater than 32 GB require a memory override due to an incorrect QEMU overhead calculation.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where the kube-state-metrics deployment crash loops.


Google Distributed Cloud air-gapped 1.9.3 has a known issue where alerts in organization system clusters don't reach the ticketing system.