Google Distributed Cloud (software only) for bare metal 1.32 release notes

This document lists production updates to Google Distributed Cloud (software only) for bare metal (formerly known as Google Distributed Cloud Virtual, previously known as Anthos clusters on bare metal). Check this page periodically for any new announcements.

You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or programmatically access release notes in BigQuery.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly.

May 06, 2025

Release 1.32.0-gke.1087

Google Distributed Cloud for bare metal 1.32.0-gke.1087 is now available for download. To upgrade, see Upgrade clusters. Google Distributed Cloud for bare metal 1.32.0-gke.1087 runs on Kubernetes v1.32.3-gke.1000.

After a release, it takes approximately 7 to 14 days for the version to become available for installations or upgrades with the GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

If you use a third-party storage vendor, check the Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of Google Distributed Cloud for bare metal.

Version 1.29 end of life: In accordance with the Version Support Policy, version 1.29 (all patch releases) of Google Distributed Cloud for bare metal has reached its end of life and is no longer supported.

  • GA: Added support for a new diagnostic utility for GKE Identity Service, which provides diagnostic information related to the login flow. This makes it easier to troubleshoot login and OIDC configuration issues. For more information, see GKE Identity Service diagnostic utility.

  • GA: For high availability control planes, Google Distributed Cloud automatically configures the Keepalived Virtual Router Redundancy Protocol (VRRP) settings to make failover behavior deterministic and to prevent interleaving of ARP replies with different MAC addresses:

    • By default, each Keepalived instance is configured with a different priority value.
    • Each Keepalived instance is configured with nopreempt to avoid elections when a non-master instance is restarted.
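As an illustration, the Keepalived configuration generated for one control plane node might resemble the following sketch. The instance name, interface, router ID, priority value, and VIP are hypothetical; Google Distributed Cloud generates the actual configuration automatically:

```
vrrp_instance VIP_1 {
  state BACKUP
  interface ens192       # control plane network interface (hypothetical)
  virtual_router_id 51   # hypothetical router ID
  priority 150           # each instance gets a distinct priority value
  nopreempt              # no new election when a non-master instance restarts
  virtual_ipaddress {
    10.0.0.8             # control plane VIP (hypothetical)
  }
}
```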
  • GA: Added support for a new field, controlPlane.loadBalancer.keepalivedVRRPGARPMasterRepeat, in the cluster configuration file that maps to the vrrp_garp_master_repeat setting for Keepalived. This field specifies the number of gratuitous ARP (GARP) messages to send at a time after a control plane node transitions to the role of the master server. The default value is 5.
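For example, to send a larger burst of GARP messages after a failover, you might set the field in the cluster configuration file as in this sketch (the value 10 is illustrative; omitting the field keeps the default of 5):

```yaml
spec:
  controlPlane:
    loadBalancer:
      # Number of GARP messages sent at a time after a node becomes master
      keepalivedVRRPGARPMasterRepeat: 10
```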

  • GA: Added a new field, controlPlane.loadBalancer.mode, for Layer 2 load balancing. This field lets you separate control plane load balancing from data plane load balancing:

    • At cluster creation, if you set controlPlane.loadBalancer.mode to bundled and loadBalancer.nodePoolSpec is configured, the control plane load balancer runs in the control plane node pool and the data plane load balancer runs in the load balancer node pool.
    • For an existing cluster where controlPlane.loadBalancer.mode isn't set and loadBalancer.nodePoolSpec isn't specified, both the control plane load balancer and the data plane load balancer run in the control plane node pool. You can migrate the data plane load balancer to a load balancer node pool by updating the cluster spec to specify a load balancer node pool (loadBalancer.nodePoolSpec) and to add controlPlane.loadBalancer.mode set to bundled.
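For a cluster with separated load balancing, the relevant parts of the cluster configuration file would look something like the following sketch (node addresses are placeholders):

```yaml
spec:
  controlPlane:
    loadBalancer:
      mode: bundled      # control plane LB runs in the control plane node pool
  loadBalancer:
    mode: bundled
    nodePoolSpec:        # data plane LB runs in this load balancer node pool
      nodes:
      - address: 192.0.2.10
      - address: 192.0.2.11
```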
  • Upgraded etcd to v3.4.33-0-gke.3.

  • Upgraded containerd to version 1.7.

  • Upgraded the SR-IOV operator, sriov-network-operator, to version 1.4.

  • Added Compress=yes to /etc/systemd/journald.conf to ensure that objects larger than 512 bytes are compressed before they are written to the file system.
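This corresponds to the standard systemd-journald option in /etc/systemd/journald.conf:

```ini
[Journal]
# Compress journal objects larger than the 512-byte threshold before
# they are written to the file system.
Compress=yes
```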

  • Added new, default periodic health checks to ensure that Kubernetes cluster resources are configured correctly and functioning properly.

  • Added more namespaces to the default snapshot scenarios.

  • Added preflight check for kernel fsnotify settings for Red Hat Enterprise Linux (RHEL) 8.x.

  • Removed the leading timestamp from the bmctl version response. As a temporary measure, the -t and --timestamps flags are available to revert to the old format.

  • Added a check for the FailedCgroupRemoval node condition to the node problem detector (NPD) to look for orphan container processes on nodes. By default, a new plugin for NPD automatically fixes this condition on the node.

  • Updated the cluster delete process to delete worker node pools prior to removing any control plane nodes. This change applies to supported cluster deletion flows, including bmctl, the Google Cloud CLI, and the Google Cloud console.

  • Updated the log entries in the backup.log file created by the bmctl backup command to improve readability.

  • Updated the cluster upgrade operation to keep only the three latest kubeadm backups of etcd and configuration information for a node. Previously, kubeadm kept node backups for every attempted upgrade.

  • Added the kubelet config, CPU Manager state, and Memory Manager state to node snapshots.

  • Fixed an issue that resulted in excessive creation of periodic kube-proxy-cleanup jobs on cluster nodes with high pod utilization.

  • Fixed an issue that caused cluster creation to fail because kubelet restarted before required static pods were running.

  • Fixed an issue where node upgrades failed due to a missing super-admin.conf file.

  • Fixed an issue where the bmctl update cluster command failed for user clusters that were created with the cloudOperationsServiceAccountKeyPath setting in the header section of the cluster configuration file.

  • Fixed an issue where prompting during bmctl update cluster prevented use of automation. You can now use the --quiet flag to skip prompting.

  • Fixed an issue where node machines didn't update when the registry mirror hosts field was updated.

The 1.32.0-gke.1087 release includes many vulnerability fixes. For a list of all vulnerabilities fixed in this release, see Vulnerability fixes.

For information about the latest known issues, see Google Distributed Cloud for bare metal known issues in the Troubleshooting section.