Google Distributed Cloud (software only) for VMware release notes

This page documents production updates to Google Distributed Cloud (software only) for VMware. You can periodically check this page for announcements about new or updated features, bug fixes, known issues, and deprecated features.

You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or programmatically access release notes in BigQuery.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly.

February 13, 2025

Google Distributed Cloud (software only) for VMware 1.31.200-gke.58 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.31.200-gke.58 runs on Kubernetes v1.31.5-gke.300.

If you are using a third-party storage vendor, check the GDC Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

The following issue is fixed in 1.31.200-gke.58:

  • Fixed an issue that caused runtime: out of memory errors after running gkeadm to create or upgrade clusters.

The 1.31.200-gke.58 release includes many vulnerability fixes. For a list of all vulnerabilities fixed in this release, see Vulnerability fixes.

February 05, 2025

Google Distributed Cloud (software only) for VMware 1.31.100-gke.136 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.31.100-gke.136 runs on Kubernetes v1.31.4-gke.900.

If you are using a third-party storage vendor, check the GDC Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

The following functional change was made in 1.31.100-gke.136:

  • Removed support in the Konnectivity server (konnectivity-server) for the following weak cryptographic cipher suites: TLS_RSA_WITH_AES_256_GCM_SHA384 and TLS_RSA_WITH_AES_128_GCM_SHA256
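
If you want to verify that a Konnectivity endpoint no longer accepts the removed suites, a client-side probe is one option. The following is a minimal sketch, not an official procedure: SERVER and PORT are placeholders for the konnectivity-server address, and AES256-GCM-SHA384 and AES128-GCM-SHA256 are the OpenSSL names for the two removed suites.

    # Offer only the removed RSA cipher suites over TLS 1.2.
    # A handshake failure indicates the server no longer accepts them.
    openssl s_client -connect SERVER:PORT -tls1_2 \
        -cipher 'AES256-GCM-SHA384:AES128-GCM-SHA256' </dev/null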

The following issues are fixed in 1.31.100-gke.136:

  • Fixed an issue so that add-on node IP addresses are no longer checked for HA admin clusters, which have three control-plane nodes and no add-on nodes.

  • Fixed an issue where DNS and NTP servers weren't checked for HA admin clusters or for user clusters configured for Controlplane V2.

  • Fixed an issue where the VM template used for HA admin control plane node repair wasn't refreshed in vCenter after an upgrade.

  • Fixed an issue where a race condition during migration caused admin add-on nodes to get stuck at a NotReady status.

The 1.31.100-gke.136 release includes many vulnerability fixes. For a list of all vulnerabilities fixed in this release, see Vulnerability fixes.

Google Distributed Cloud (software only) for VMware 1.30.500-gke.126 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.30.500-gke.126 runs on Kubernetes v1.30.8-gke.200.

If you are using a third-party storage vendor, check the GDC Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

The following issues are fixed in 1.30.500-gke.126:

  • Fixed an issue that caused non-HA cluster upgrades to get stuck creating or updating cluster control plane workloads.

  • Fixed an issue where DNS and NTP servers weren't checked for HA admin clusters or for user clusters configured for Controlplane V2.

  • Fixed an issue where a race condition during migration caused admin add-on nodes to get stuck at a NotReady status.

  • Fixed an issue where customer workloads with high resource requests triggered irrelevant resource validation warnings.

  • Fixed an issue where the VM template used for HA admin control plane node repair wasn't refreshed in vCenter after an upgrade.

The 1.30.500-gke.126 release includes many vulnerability fixes. For a list of all vulnerabilities fixed in this release, see Vulnerability fixes.

February 04, 2025

Google Distributed Cloud (software only) for VMware 1.29.1000-gke.94 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.1000-gke.94 runs on Kubernetes v1.29.12-gke.800.

If you are using a third-party storage vendor, check the GDC Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

The following issues are fixed in 1.29.1000-gke.94:

  • Fixed an issue so that add-on node IP addresses are no longer checked for HA admin clusters, which have three control-plane nodes and no add-on nodes.

  • Fixed an issue where DNS and NTP servers weren't checked for HA admin clusters or for user clusters configured for Controlplane V2.

The 1.29.1000-gke.94 release includes many vulnerability fixes. For a list of all vulnerabilities fixed in this release, see Vulnerability fixes.

January 09, 2025

Google Distributed Cloud (software only) for VMware 1.29.900-gke.181 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.900-gke.181 runs on Kubernetes v1.29.11-gke.300.

If you are using a third-party storage vendor, check the GDC Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

Added support for configuring the GKE Identity Service to enforce a minimum transport layer security (TLS) version of 1.2 for HTTPS connections. By default, the GKE Identity Service allows TLS 1.1 and higher connections. If you require enforcement for a minimum of TLS 1.2, reach out to Cloud Customer Care for assistance.
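
If you ask Cloud Customer Care to enforce the TLS 1.2 minimum and want to verify the behavior, a client-side probe can help. A minimal sketch, assuming network access to the GKE Identity Service endpoint; HOST and PORT are placeholders.

    # Offer only TLS 1.1; with enforcement active, the handshake should fail.
    openssl s_client -connect HOST:PORT -tls1_1 </dev/null

    # Offer TLS 1.2; this handshake should still succeed.
    openssl s_client -connect HOST:PORT -tls1_2 </dev/null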

The following issues are fixed in 1.29.900-gke.181:

  • Fixed an issue where customer workloads with high resource requests triggered irrelevant resource validation warnings.

  • Fixed an issue where a race condition during migration caused admin add-on nodes to get stuck at a NotReady status.

  • Fixed an issue where the VM template used for HA admin control plane node repair wasn't refreshed in vCenter after an upgrade.

The following high-severity container vulnerabilities are fixed in 1.29.900-gke.181:

January 07, 2025

The original 1.31.0-gke.889 release notes incorrectly stated that, by default, the GKE Identity Service allows TLS 1.1 and higher connections. Here is the correct change description:

  • GKE Identity Service now requires that all HTTPS connections use transport layer security (TLS) 1.2 or 1.3. For versions 1.31.0 and higher, TLS 1.1 is disabled by default; however, it can be re-enabled if needed.

    If you require support for TLS 1.1, reach out to Cloud Customer Care for assistance.

The 1.31.0-gke.889 release notes have been updated to reflect the correct default behavior.

December 18, 2024

Google Distributed Cloud (software only) for VMware 1.31.0-gke.889 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.31.0-gke.889 runs on Kubernetes v1.31.3-gke.100.

If you are using a third-party storage vendor, check the GDC Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

Announcing an early look at two preview features:

  • A new architecture called advanced clusters. When advanced clusters are enabled, the underlying Google Distributed Cloud software deploys controllers that allow for a more extensible architecture. Enabling advanced clusters gives you access to new features and capabilities, such as topology domains.

  • A topology domain is a pool of cluster nodes that are considered to be part of the same logical or physical grouping. Topology domains correspond to some underlying hardware or software that has the possibility of correlated failure, like networking equipment in a rack. As part of setting up a topology domain, you create a topology label that is set on all the nodes in the topology domain during cluster creation. This label lets you set up Pod Topology Spread Constraints.
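
For example, once the nodes in each topology domain carry the topology label, a standard Kubernetes topology spread constraint can spread replicas across domains. The following is an illustrative sketch only; topology.example.com/rack is a hypothetical label key, not a name defined by the product.

    # Spread replicas evenly across racks, assuming each node is labeled
    # with the hypothetical key topology.example.com/rack.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rack-spread-demo
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: rack-spread-demo
      template:
        metadata:
          labels:
            app: rack-spread-demo
        spec:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.example.com/rack
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app: rack-spread-demo
          containers:
          - name: app
            image: registry.k8s.io/pause:3.9
    EOF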

Note the following limitations of the preview:

Upgrade changes:

  • Dataplane V2 is required for all user clusters. Before upgrading a user cluster to 1.31, follow the steps in Enable Dataplane V2; a quick check of the current setting is sketched after this list.

  • To upgrade clusters to 1.31, you must upgrade your admin cluster first and then user clusters. For more information, see Version rules.
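
For the Dataplane V2 requirement, a quick way to see whether a user cluster configuration file already has the setting is to look for the enableDataplaneV2 field. A minimal sketch; the file path is a placeholder, and Enable Dataplane V2 remains the authoritative procedure.

    # Check the current Dataplane V2 setting in the user cluster config file.
    grep -n 'enableDataplaneV2' user-cluster.yaml
    # Expected output when Dataplane V2 is enabled:
    #   enableDataplaneV2: true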

Version changes:

Other changes:

  • (Edited: January 7, 2025) GKE Identity Service now requires that all HTTPS connections use transport layer security (TLS) 1.2 or 1.3. For versions 1.31.0 and higher, TLS 1.1 is disabled by default; however, it can be re-enabled if needed.

    If you require support for TLS 1.1, reach out to Cloud Customer Care for assistance.

  • Removed TLS/SSL weak message authentication code cipher suites in the vSphere cloud controller manager.

The following issues are fixed in 1.31.0-gke.889:

  • Fixed the issue where additional manual steps were needed after disabling always-on secrets encryption with gkectl update cluster.
  • Fixed the known issue that caused migrating a user cluster to Controlplane V2 to fail if secrets encryption had ever been enabled on the user cluster, even if it was later disabled.
  • Fixed the known issue where the gkectl upgrade command returned an incorrect error about the netapp storageclass.
  • Fixed the known issue where updating DataplaneV2 ForwardMode doesn't automatically trigger anetd DaemonSet restart.

The following Ubuntu vulnerabilities are fixed in 1.31.0-gke.889:

Additional Ubuntu vulnerabilities fixed in 1.31.0-gke.889:

December 17, 2024

The following critical container vulnerabilities are fixed in 1.31.0-gke.889:

December 10, 2024

Google Distributed Cloud (software only) for VMware 1.30.400-gke.133 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.30.400-gke.133 runs on Kubernetes v1.30.6-gke.300.

If you are using a third-party storage vendor, check the GDC Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

Added support for configuring the GKE Identity Service to enforce a minimum transport layer security (TLS) version of 1.2 for HTTPS connections. By default, the GKE Identity Service allows TLS 1.1 and higher connections. If you require enforcement for a minimum of TLS 1.2, reach out to Cloud Customer Care for assistance.

The following vulnerabilities are fixed in 1.30.400-gke.133:

High-severity container vulnerabilities:

Container-optimized OS vulnerabilities:

Ubuntu vulnerabilities:

November 22, 2024

Google Distributed Cloud (software only) for VMware 1.30.300-gke.84 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.30.300-gke.84 runs on Kubernetes v1.30.5-gke.600.

If you are using a third-party storage vendor, check the GDC Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

The following issues are fixed in 1.30.300-gke.84:

  • Fixed the issue where additional manual steps were needed after disabling always-on secrets encryption with gkectl update cluster.
  • Fixed the known issue that caused gkectl to display false warnings on admin cluster version skew.

The following vulnerabilities are fixed in 1.30.300-gke.84:

High-severity container vulnerabilities:

Container-optimized OS vulnerabilities:

November 14, 2024

Google Distributed Cloud (software only) for VMware 1.29.800-gke.108 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.800-gke.108 runs on Kubernetes 1.29.10-gke.100.

If you are using a third-party storage vendor, check the GDC Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

Added support for configuring the GKE Identity Service to enforce a minimum transport layer security (TLS) version of 1.2 for HTTPS connections. By default, the GKE Identity Service allows TLS 1.1 and higher connections. If you require enforcement for a minimum of TLS 1.2, reach out to Cloud Customer Care for assistance.

The following issue is fixed in 1.29.800-gke.108:

  • Fixed the issue where additional manual steps were needed after disabling always-on secrets encryption with gkectl update cluster.

The following vulnerabilities are fixed in 1.29.800-gke.108:

Container-optimized OS vulnerabilities:

Ubuntu vulnerabilities:

November 07, 2024

Google Distributed Cloud (software only) for VMware 1.28.1200-gke.83 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.28.1200-gke.83 runs on Kubernetes v1.28.14-gke.700.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

The following issue is fixed in 1.28.1200-gke.83:

  • Fixed the issue where additional manual steps were needed after disabling always-on secrets encryption with gkectl update cluster.

The following vulnerabilities are fixed in 1.28.1200-gke.83:

Container-optimized OS vulnerabilities:

October 25, 2024

Google Distributed Cloud (software only) for VMware 1.29.700-gke.110 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.700-gke.110 runs on Kubernetes v1.29.8-gke.1800.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

The following issues are fixed in 1.29.700-gke.110:

  • Fixed the known issue that caused gkectl to display false warnings on admin cluster version skew.
  • Fixed the known issue that caused migrating a user cluster to Controlplane V2 to fail if secrets encryption had ever been enabled on the user cluster, even if it was later disabled.
  • Fixed the known issue that caused migrating an admin cluster from non-HA to HA to fail if secrets encryption was enabled at version 1.14 or earlier and the cluster was upgraded all the way from that version.

The following vulnerabilities are fixed in 1.29.700-gke.110:

High-severity container vulnerabilities:

Container-optimized OS vulnerabilities:

October 17, 2024

Google Distributed Cloud (software only) for VMware 1.28.1100-gke.91 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.28.1100-gke.91 runs on Kubernetes v1.28.14-gke.200.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

The following issue is fixed in 1.28.1100-gke.91:

  • Fixed the known issue that caused gkectl to display false warnings on admin cluster version skew.

The following vulnerabilities are fixed in 1.28.1100-gke.91:

Critical container vulnerabilities:

Container-optimized OS vulnerabilities:

Ubuntu vulnerabilities:

October 10, 2024

Google Distributed Cloud (software only) for VMware 1.30.200-gke.101 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.30.200-gke.101 runs on Kubernetes v1.30.4-gke.1800.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

Removed TLS/SSL weak message authentication code cipher suites in the vSphere cloud controller manager.

The following issues are fixed in 1.30.200-gke.101:

  • Fixed the known issue that caused migrating a user cluster to Controlplane V2 to fail if secrets encryption had ever been enabled.
  • Fixed the known issue that caused migrating an admin cluster from non-HA to HA to fail if secrets encryption was enabled.
  • Fixed the issue where the pre-upgrade tool blocked upgrading a user cluster to version 1.30 or higher because of an incorrect storage driver validator check.

The following vulnerabilities are fixed in 1.30.200-gke.101:

Critical container vulnerabilities:

High-severity container vulnerabilities:

Container-optimized OS vulnerabilities:

Ubuntu vulnerabilities:

October 03, 2024

Google Distributed Cloud (software only) for VMware 1.29.600-gke.109 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.600-gke.109 runs on Kubernetes v1.29.8-gke.1800.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

Removed TLS/SSL weak message authentication code cipher suites in the vSphere cloud controller manager.

Fixed the following vulnerabilities in 1.29.600-gke.109:

Critical container vulnerabilities:

High-severity container vulnerabilities:

Container-optimized OS vulnerabilities:

Ubuntu vulnerabilities:

October 02, 2024

Google Distributed Cloud (software only) for VMware 1.30.100-gke.96 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.30.100-gke.96 runs on Kubernetes v1.30.4-gke.1800.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

Fixed the following issues in 1.30.100-gke.96:

  • Fixed the known issue where updating dataplaneV2.forwardMode didn't automatically trigger anetd DaemonSet restart.

Fixed the following vulnerabilities in 1.30.100-gke.96:

September 25, 2024

Google Distributed Cloud (software only) for VMware 1.28.1000-gke.59 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.28.1000-gke.59 runs on Kubernetes v1.28.13-gke.600.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

Removed TLS/SSL weak message authentication code cipher suites in the vSphere cloud controller manager.

Fixed the following vulnerabilities in 1.28.1000-gke.59:

High-severity container vulnerabilities:

Container-optimized OS vulnerabilities:

September 23, 2024

A security issue was discovered in Kubernetes clusters with Windows nodes where BUILTIN\Users may be able to read container logs and AUTHORITY\Authenticated Users may be able to modify container logs. For more information, see the GCP-2024-054 security bulletin.

September 19, 2024

Google Distributed Cloud (software only) for VMware 1.29.500-gke.160 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.500-gke.160 runs on Kubernetes v1.29.7-gke.1200.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

Fixed the following issues in 1.29.500-gke.160:

  • Fixed the known issue where updating DataplaneV2 ForwardMode didn't automatically trigger anetd DaemonSet restart.
  • Fixed the known issue where the credential.yaml file regenerated incorrectly during admin workstation upgrade.

Fixed the following vulnerabilities in 1.29.500-gke.160:

High-severity container vulnerabilities:

Container-optimized OS vulnerabilities:

Ubuntu vulnerabilities:

September 09, 2024

Google Distributed Cloud (software only) for VMware 1.28.900-gke.113 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.28.900-gke.113 runs on Kubernetes v1.28.12-gke.1100.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

The following issues are fixed in 1.28.900-gke.113:

  • Fixed the known issue where updating DataplaneV2 ForwardMode doesn't automatically trigger anetd DaemonSet restart.
  • Fixed the known issue where the credential.yaml file was regenerated incorrectly during an admin workstation upgrade.
  • Fixed the known issue where the etcdctl command was not found during cluster upgrade at the admin cluster backup stage.

Fixed the following vulnerabilities in 1.28.900-gke.113:

High-severity container vulnerabilities:

Ubuntu vulnerabilities:

August 29, 2024

Google Distributed Cloud (software only) for VMware 1.30.0-gke.1930 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.30.0-gke.1930 runs on Kubernetes v1.30.3-gke.200.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for installations or upgrades with the GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

  • For admin and user clusters created at 1.30 and later versions, loadBalancer.kind needs to be set to either MetalLB or ManualLB; see the configuration sketch after this list.
  • For user clusters created at 1.30 and later versions, enableControlplaneV2 needs to be set to true.
  • The featureGates.GMPForSystemMetrics field in the stackdriver CR is now always on and can't be disabled. It has been enabled by default since 1.16. If you have manually turned it off, this upgrade is a breaking change for the format of some system metrics. For information on changing this field, see Enabling and disabling Managed Service for Prometheus.
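
Taken together, a minimal 1.30-era user cluster configuration excerpt might look like the following sketch. Field names follow the cluster configuration file schema; the file name is a placeholder.

    # Hypothetical excerpt of a user cluster configuration file for 1.30+.
    cat <<'EOF' > user-cluster-excerpt.yaml
    enableControlplaneV2: true
    loadBalancer:
      kind: MetalLB    # or ManualLB
    EOF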

Version changes in 1.30.0-gke.1930:

  • Existing Seesaw load balancers now require TLS 1.2.
  • Upgraded COS to m109.
  • Updated Dataplane V2 to use Cilium 1.13.

Other changes in 1.30.0-gke.1930:

  • Enhanced the upgrade process to include an automatic pre-upgrade check. Before you upgrade your admin or user cluster, the system runs this check to detect known issues. The check also provides guidance to ensure a smooth upgrade experience.
  • Ingress node ports are optional for ControlplaneV2 clusters.
  • Admin clusters created in 1.30 will use Dataplane V2, Google's Container Network Interface (CNI) implementation, which is based on Cilium.
  • Admin clusters upgraded to 1.30 from 1.29 will use Dataplane V2.
  • Removed mTLS on system metrics scrape endpoints, which makes it easier to integrate with 3rd party monitoring systems.
  • Stopped bundling cert-manager and removed the monitoring-operator because system components no longer depend on them. Cert-manager from existing 1.29 clusters will continue running, but it stops being managed by Google after upgrading to 1.30. If you don't use cert-manager, you can delete it after the upgrade. New clusters in 1.30 and higher won't come with cert-manager; if you rely on the bundled cert-manager for your own use case, install your own copy in new clusters.
  • The implementation of the preview feature usage metering has changed. Clusters using this feature will continue to function, but we recommend that you use the predefined dashboard, Anthos Cluster Utilization Metering, to understand resource usage at different levels.

The following issues were fixed in 1.30.0-gke.1930:

  • Fixed the known issue where cluster creation failed due to the control plane VIP in a different subnet.
  • Fixed the known issue where a user cluster with Binary Authorization failed to come up.
  • Fixed the known issue that caused the Connect Agent to lose connection to Google Cloud after a non-HA to HA admin cluster migration.
  • Fixed the known issue where the admin cluster upgrade failed for clusters created on versions 1.10 or earlier.
  • Fixed the known issue where the Docker bridge IP used 172.17.0.1/16 for COS cluster control plane nodes.
  • Fixed the known issue where the HA admin cluster installation preflight check reported the wrong number of required static IPs.
  • Fixed the known issue where multiple network interfaces with the standard CNI didn't work.
  • Fixed a gkeadm preflight check that wasn't validating the VM folder.

The following vulnerabilities were fixed in 1.30.0-gke.1930:

Critical container vulnerabilities:

High-severity container vulnerabilities:

Container-optimized OS vulnerabilities:

Ubuntu vulnerabilities:

August 26, 2024

The following vulnerabilities were discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes:

August 13, 2024

Google Distributed Cloud for VMware 1.29.400-gke.81 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.400-gke.81 runs on Kubernetes v1.29.6-gke.1800.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

Existing Seesaw load balancers now require TLS 1.2.

The following vulnerabilities are fixed in 1.29.400-gke.81:

High-severity container vulnerabilities:

Ubuntu vulnerabilities:

August 07, 2024

Google Distributed Cloud for VMware 1.28.800-gke.109 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.28.800-gke.109 runs on Kubernetes v1.28.11-gke.2200.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

Existing Seesaw load balancers now require TLS 1.2.

The following vulnerabilities are fixed in 1.28.800-gke.109:

High-severity container vulnerabilities:

Ubuntu vulnerabilities:

August 01, 2024

Google Distributed Cloud for VMware 1.16.11-gke.25 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.16.11-gke.25 runs on Kubernetes v1.27.15-gke.1200.

If you are using a third-party storage vendor, check the GDC Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

Existing Seesaw load balancers now require TLS 1.2.

The following vulnerabilities are fixed in 1.16.11-gke.25:

July 26, 2024

Google Distributed Cloud for VMware 1.29.300-gke.184 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.300-gke.184 runs on Kubernetes v1.29.6-gke.600.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

The following vulnerabilities are fixed in 1.29.300-gke.184:

July 17, 2024

The following vulnerabilities were discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes:

  • CVE-2024-26925

For more details, see the GCP-2024-045 security bulletin.

July 16, 2024

The following vulnerabilities were discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes:

  • CVE-2024-26921
  • CVE-2024-36972

For more details, see the GCP-2024-043 and GCP-2024-044 security bulletins.

July 15, 2024

The following vulnerabilities were discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes:

For more details, see the GCP-2024-042 security bulletin.

July 09, 2024

Google Distributed Cloud for VMware 1.29.200-gke.245 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.200-gke.245 runs on Kubernetes v1.29.5-gke.800.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

The following vulnerabilities are fixed in 1.29.200-gke.245:

Google Distributed Cloud for VMware 1.28.700-gke.151 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.28.700-gke.151 runs on Kubernetes v1.28.10-gke.2100.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

The following issues are fixed in 1.28.700-gke.151:

  • Fixed the known issue where the Binary Authorization webhook blocked the CNI plugin, which caused user cluster creation to stall.

  • Fixed the known issue that caused cluster creation to fail because the control plane VIP was in a different subnet from other cluster nodes.

The following vulnerabilities are fixed in 1.28.700-gke.151:

Google Distributed Cloud for VMware 1.16.10-gke.36 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.16.10-gke.36 runs on Kubernetes v1.27.14-gke.1600.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

The following vulnerabilities are fixed in 1.16.10-gke.36:

July 08, 2024

The following vulnerabilities were discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes:

For more information, see the GCP-2024-041 security bulletin.

July 03, 2024

A remote code execution vulnerability, CVE-2024-6387, was recently discovered in OpenSSH. The vulnerability exploits a race condition that can be used to obtain access to a remote shell, enabling attackers to gain root access. At the time of publication, exploitation is believed to be difficult and take several hours per machine being attacked. We are not aware of any exploitation attempts. This vulnerability has a Critical severity.

For mitigation steps and more details, see the GCP-2024-040 security bulletin.

July 01, 2024

A vulnerability (CVE-2024-26923) was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

For more information, see the GCP-2024-039 security bulletin.

June 27, 2024

Google Distributed Cloud for VMware 1.29.200-gke.242 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.200-gke.242 runs on Kubernetes v1.29.5-gke.800.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

The following issues are fixed in 1.29.200-gke.242:

  • Fixed the known issue that caused cluster creation to fail because the control plane VIP was in a different subnet from other cluster nodes.
  • Fixed the known issue where the Binary Authorization webhook blocked the CNI plugin, which caused user cluster creation to stall.
  • Fixed the known issue that caused the Connect Agent to lose connection to Google Cloud after a non-HA to HA admin cluster migration.
  • Fixed the known issue that caused an admin cluster upgrade to fail for clusters created on versions 1.10 or earlier.
  • Added back the CNI binaries to the OS image so that multiple network interfaces with standard CNI will work (see this known issue).

The following vulnerabilities are fixed in 1.29.200-gke.242:

High-severity container vulnerabilities:

Container-optimized OS vulnerabilities:

Ubuntu vulnerabilities:

June 26, 2024

A vulnerability (CVE-2024-26924) was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

For more information, see the GCP-2024-038 security bulletin.

June 18, 2024

A vulnerability, CVE-2024-26584, was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS nodes.

For more information, see the GCP-2024-036 security bulletin.

June 12, 2024

A vulnerability, CVE-2022-23222, was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS nodes.

For more information, see the GCP-2024-033 security bulletin.

A vulnerability, CVE-2024-26584, was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

For more information, see the GCP-2024-035 security bulletin.

Google Distributed Cloud for VMware 1.28.600-gke.154 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.28.600-gke.154 runs on Kubernetes v1.28.9-gke.1800.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

The following issues are fixed in 1.28.600-gke.154:

  • Fixed the known issue that caused admin cluster upgrades to fail for clusters created on versions 1.10 or earlier.
  • Fixed the known issue where the Docker bridge IP uses 172.17.0.1/16 for COS cluster control plane nodes.

The following vulnerabilities are fixed in 1.28.600-gke.154:

June 10, 2024

A vulnerability (CVE-2022-23222) was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS nodes.

For more information, see the GCP-2024-033 security bulletin.

June 03, 2024

Google Distributed Cloud for VMware 1.16.9-gke.40 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.16.9-gke.40 runs on Kubernetes v1.27.13-gke.500.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

The following vulnerabilities are fixed in 1.16.9-gke.40:

May 28, 2024

A new vulnerability (CVE-2024-4323) has been discovered in Fluent Bit that could result in remote code execution. Fluent Bit versions 2.0.7 through 3.0.3 are affected.

Google Distributed Cloud doesn't use a vulnerable version of Fluent Bit and is unaffected.

May 16, 2024

Release 1.29.100-gke.248

Google Distributed Cloud for VMware 1.29.100-gke.248 is now available for download. To upgrade, see Upgrade a cluster or a node pool. Google Distributed Cloud 1.29.100-gke.248 runs on Kubernetes v1.29.4-gke.200.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

Updated Dataplane V2 to use Cilium 1.13.

May 15, 2024

A vulnerability (CVE-2023-52620) was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

For more information, see the GCP-2024-030 security bulletin.

May 14, 2024

A vulnerability (CVE-2024-26642) was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

For more information, see the GCP-2024-029 security bulletin.

May 13, 2024

A vulnerability (CVE-2024-26581) was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

For more information, see the GCP-2024-028 security bulletin.

May 09, 2024

GKE on VMware 1.28.500-gke.121 is now available. To upgrade, see Upgrading GKE on VMware. GKE on VMware 1.28.500-gke.121 runs on Kubernetes v1.28.8-gke.2000.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

May 08, 2024

A vulnerability (CVE-2024-26643) was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

For more information, see the GCP-2024-026 security bulletin.

A vulnerability (CVE-2024-26808) was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

For more information, see the GCP-2024-027 security bulletin.

April 29, 2024

GKE on VMware 1.29.0-gke.1456 is now available. To upgrade, see Upgrade a cluster or a node pool. GKE on VMware 1.29.0-gke.1456 runs on Kubernetes v1.29.3-gke.600.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

Server-side preflight checks are enabled by default for admin and user cluster create, update, and upgrade. Server-side preflight checks require the following additional firewall rules from your admin cluster control-plane nodes:

  • Admin cluster F5 BIG-IP API (only if using the F5 BIG-IP load balancer)
  • User cluster F5 BIG-IP API (only if using the F5 BIG-IP load balancer)
  • Admin cluster NTP servers
  • User cluster NTP servers
  • Admin cluster DNS servers
  • User cluster DNS servers
  • User cluster on-premises local Docker registry (if your user cluster is configured to use a local private Docker registry instead of gcr.io)
  • Admin cluster nodes
  • User cluster nodes
  • Admin cluster Load Balancer VIPs
  • User cluster Load Balancer VIPs
  • User cluster worker nodes

For the complete list of firewall rules required for server-side preflight checks, see Firewall rules for admin clusters and search for "Preflight checks".
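
If a server-side preflight check fails, testing reachability of each destination from an admin cluster control-plane node can help narrow down the problem. A rough sketch, assuming netcat is available on the node; all addresses are placeholders, and UDP results from nc are best-effort.

    # DNS (TCP and UDP 53) and NTP (UDP 123) reachability.
    nc -vz  DNS_SERVER 53
    nc -vzu DNS_SERVER 53
    nc -vzu NTP_SERVER 123

    # Private registry reachability (only if you use one instead of gcr.io).
    nc -vz REGISTRY_HOST 443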

Version changes in GKE on VMware 1.29.0-gke.1456:

  • Updated Dataplane V2 to use Cilium 1.13.
  • Bumped the AIS version to hybrid_identity_charon_20240331_0730_RC00.

Other changes in GKE on VMware 1.29.0-gke.1456:

  • The gkectl create cluster command prompts for confirmation if the cluster configuration file enables legacy features.
  • The gkectl prepare command always prepares cgroup v2 images.
  • Cluster configuration files are prepopulated with ubuntu_cgv2 (cgroupv2) as the osImageType.
  • The gkeadm tool isn't supported on macOS and Windows.
  • A lightweight version of gkectl diagnose snapshot is available for both admin and user clusters.
  • User cluster upgrades: the --dry-run flag for gkectl upgrade cluster runs preflight checks but doesn't start the upgrade process.
  • The --async flag for gkectl upgrade cluster, which runs an asynchronous upgrade, is now supported for admin clusters; see the sketch after this list.
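
Both flags attach to the existing upgrade commands. A minimal sketch of possible invocations; the --kubeconfig and --config values are placeholders, and the admin-side command form is an assumption based on the standard gkectl upgrade commands.

    # Run only the preflight checks for a user cluster upgrade;
    # the upgrade itself is not started.
    gkectl upgrade cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
        --config USER_CLUSTER_CONFIG --dry-run

    # Start an admin cluster upgrade asynchronously and return immediately.
    gkectl upgrade admin --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
        --config ADMIN_CLUSTER_CONFIG --async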

The following issues are fixed in 1.29.0-gke.1456:

  • Fixed the issue where the admin cluster backup retried non-idempotent operations.
  • Fixed the known issue where the controlPlaneNodePort field defaults to 30968 when the manualLB spec is empty.
  • Fixed the known issue that caused the preflight check to fail when the hostname wasn't in the IP block file.
  • Fixed the known issue that caused Kubelet to be flooded with logs stating that "/etc/kubernetes/manifests" does not exist on the worker nodes.
  • Fixed the manual load balancer issue where the IngressIP is overwritten with the Spec.LoadBalancerIP even if it is empty.
  • Fixed the issue where preflight jobs might be stuck in the pending state.
  • Fixed an issue where egress NAT erroneously broke long-lived connections.
  • Fixed Seesaw crashing on duplicated service IP.
  • Fixed a warning in the storage preflight check.

Fixed the following vulnerabilities in GKE on VMware 1.29.0-gke.1456:

April 26, 2024

A vulnerability (CVE-2024-26585) was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

For more information, see the GCP-2024-024 security bulletin.

April 25, 2024

GKE on VMware 1.16.8-gke.19 is now available. To upgrade, see Upgrading GKE on VMware. GKE on VMware 1.16.8-gke.19 runs on Kubernetes v1.27.12-gke.1000.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

April 17, 2024

GKE on VMware 1.28.400-gke.75 is now available. To upgrade, see Upgrading GKE on VMware. GKE on VMware 1.28.400-gke.75 runs on Kubernetes v1.28.7-gke.1700.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following vulnerabilities are fixed in 1.28.400-gke.75:

April 09, 2024

GKE on VMware 1.16.7-gke.46 is now available. To upgrade, see Upgrading GKE on VMware. GKE on VMware 1.16.7-gke.46 runs on Kubernetes v1.27.10-gke.500.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issue is fixed in 1.16.7-gke.46:

  • Fixed the known issue where the controlPlaneNodePort field defaults to 30968 when the manualLB spec is empty.

The following vulnerabilities are fixed in 1.16.7-gke.46:

April 03, 2024

A Denial-of-Service (DoS) vulnerability (CVE-2023-45288) was recently discovered in multiple implementations of the HTTP/2 protocol, including the golang HTTP server used by Kubernetes. The vulnerability could lead to a DoS of the Google Kubernetes Engine (GKE) control plane. For more information, see the GCP-2024-022 security bulletin.

March 27, 2024

GKE on VMware 1.15.10-gke.32 is now available. To upgrade, see Upgrading GKE on VMware. GKE on VMware 1.15.10-gke.32 runs on Kubernetes v1.26.13-gke.1100.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issue is fixed in 1.15.10-gke.32:

  • Fixed the known issue where the controlPlaneNodePort field defaults to 30968 when the manualLB spec is empty.

The following vulnerabilities are fixed in 1.15.10-gke.32:

March 21, 2024

GKE on VMware 1.28.300-gke.123 is now available. To upgrade, see Upgrading GKE on VMware. GKE on VMware 1.28.300-gke.123 runs on Kubernetes v1.28.4-gke.1400.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

  • Increased the default memory limit for node-exporter.
  • Updated the AIS version to hybrid_identity_charon_20240228_0730_RC00.

The following issues are fixed in 1.28.300-gke.123:

  • Fixed the issue where the admin cluster backup retried non-idempotent operations.
  • Fixed the known issue where the controlPlaneNodePort field defaulted to 30968 when the manualLB spec was empty.
  • Fixed the known issue that caused the preflight check to fail when the hostname wasn't in the IP block file.
  • Fixed the known issue that caused Kubelet to be flooded with logs stating that "/etc/kubernetes/manifests" does not exist on the worker nodes.

The following vulnerabilities are fixed in 1.28.300-gke.123:

February 29, 2024

GKE on VMware 1.16.6-gke.40 is now available. To upgrade, see Upgrading GKE on VMware. GKE on VMware 1.16.6-gke.40 runs on Kubernetes v1.27.8-gke.1500.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issues are fixed in 1.16.6-gke.40:

  • Fixed the known issue that caused kubelet to be flooded with logs stating that /etc/kubernetes/manifests does not exist on the worker nodes.
  • Fixed the known issue that caused a preflight check to fail when the hostname isn't in the IP block file.
  • Fixed the manual load balancer issue where the IngressIP is overwritten with the Spec.LoadBalancerIP even if it is empty.
  • Fixed the known issue where a 1.15 user master machine encountered an unexpected recreation when the user cluster controller was upgraded to 1.16.

The following vulnerabilities are fixed in 1.16.6-gke.40:

February 27, 2024

The following vulnerabilities were discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes:

GKE on VMware 1.15.9-gke.20 is now available. To upgrade, see Upgrading GKE on VMware. GKE on VMware 1.15.9-gke.20 runs on Kubernetes v1.26.10-gke.2000.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

February 26, 2024

GKE on VMware 1.28.200-gke.111 is now available. To upgrade, see Upgrading Anthos clusters on VMware. GKE on VMware 1.28.200-gke.111 runs on Kubernetes v1.28.4-gke.1400.

If you are using a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issues are fixed in 1.28.200-gke.111:

  • Fixed the known issue that caused a preflight check to fail when the hostname isn't in the IP block file.
  • Fixed the known issue where the storage policy field is missing in the admin cluster configuration template.
  • Fixed the manual load balancer issue where the IngressIP is overwritten with the Spec.LoadBalancerIP even if it is empty.
  • Fixed the issue where preflight jobs might be stuck in the pending state.
  • Fixed the known issue where nfs-common is missing from the Ubuntu OS image.

The following vulnerabilities are fixed in 1.28.200-gke.111:

February 24, 2024

The following vulnerabilities were discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes:

  • CVE-2024-0193

For more information, see the GCP-2024-013 security bulletin.

February 16, 2024

The following vulnerability was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes:

  • CVE-2023-6932

For more information, see the GCP-2024-011 security bulletin.

February 14, 2024

The following vulnerability was discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes:

  • CVE-2023-6931

For more information, see the GCP-2024-010 security bulletin.

February 01, 2024

GKE on VMware 1.15.8-gke.41 is now available. To upgrade, see Upgrading Anthos clusters on VMware. GKE on VMware 1.15.8-gke.41 runs on Kubernetes v1.26.10-gke.2000.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

Upgraded etcd to v3.4.27-0-gke.1.

The following issues are fixed in 1.15.8-gke.41:

  • Fixed Seesaw crashing on duplicated service IP.
  • Fixed a warning in the storage preflight check.

The following vulnerabilities are fixed in 1.15.8-gke.41:

January 31, 2024

A security vulnerability, CVE-2024-21626, has been discovered in runc where a user with permission to create Pods on Container-Optimized OS and Ubuntu nodes might be able to gain full access to the node filesystem.

For instructions and more details, see the GCP-2024-005 security bulletin.

January 25, 2024

GKE on VMware 1.28.100-gke.131 is now available. To upgrade, see Upgrading GKE on VMware. GKE on VMware 1.28.100-gke.131 runs on Kubernetes v1.28.3-gke.1600.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issues are fixed in 1.28.100-gke.131:

  • Fixed an issue where duplicate Service IP addresses caused the Seesaw load balancer to fail.

  • Fixed an issue where egress NAT erroneously broke long-lived connections.

The following vulnerabilities are fixed in 1.28.100-gke.131:

GKE on VMware 1.16.5-gke.28 is now available. To upgrade, see Upgrading GKE on VMware. GKE on VMware 1.16.5-gke.28 runs on Kubernetes 1.27.6-gke.2500.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issues are fixed in 1.16.5-gke.28:

  • Fixed an issue where duplicate Service IP addresses caused the Seesaw load balancer to fail.

The following vulnerabilities are fixed in 1.16.5-gke.28:

There is an issue that affects upgrading from 1.16.x to 1.28.100. If the 1.16.x cluster relies on an NFS volume, the upgrade will fail. Clusters that don't use an NFS volume are not affected.
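
To check whether a 1.16.x cluster is affected before upgrading, you can list PersistentVolumes that declare an NFS source. A minimal sketch using kubectl's jsonpath filter:

    # Print the names of PersistentVolumes backed by NFS; no output
    # means the cluster has no NFS volumes of this kind.
    kubectl get pv -o jsonpath='{range .items[?(@.spec.nfs)]}{.metadata.name}{"\n"}{end}'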

December 18, 2023

GKE on VMware, formerly Anthos clusters on VMware, is a component of Google Distributed Cloud Virtual, software that brings Google Kubernetes Engine (GKE) to on-premises data centers. We are in the process of updating documentation and the Google Cloud Console UI with the new name.

GKE on VMware 1.28.0-gke.651 is now available. GKE on VMware 1.28.0-gke.651 runs on Kubernetes v1.28.3-gke.700. To upgrade, see Upgrading GKE on VMware clusters.

For easier identification of the Kubernetes version for a given release, we are aligning GKE on VMware version numbering with GKE version numbering. This change starts with the December 2023 minor release, which is version 1.28. Additionally, GKE on VMware patch versions (z in the semantic version numbering scheme x.y.z-gke.N) will increment by 100.

Example version numbers for GKE on VMware:

  • Minor release: 1.28.0-gke.651
  • First patch release (example): 1.28.100-gke.27
  • Second patch release (example): 1.28.200-gke.19

This change affects numbering only. Upgrades from 1.16 to 1.28 follow the same process as upgrades between prior minor releases.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

New features in GKE on VMware 1.28.0-gke.651:

Breaking changes in GKE on VMware 1.28.0-gke.651:

  • Cloud Monitoring now requires projects to enable the kubernetesmetadata.googleapis.com API and grant the kubernetesmetadata.publisher IAM role to the logging-monitoring service account. This applies both to creating new 1.28 clusters and to upgrading existing clusters to 1.28. If your organization has set up an allowlist that lets traffic from Google APIs and other addresses pass through your proxy server, add kubernetesmetadata.googleapis.com to the allowlist. A gcloud sketch of this setup follows this list.

  • The admin workstation must have at least 100 GB of disk space. If you are upgrading to 1.28, check the adminWorkstation.diskGB field in the admin workstation configuration file and make sure that the specified size is at least 100.
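
For the Cloud Monitoring requirement, the API enablement and role grant can be scripted with gcloud. A minimal sketch; PROJECT_ID and the logging-monitoring service account email are placeholders for your own values.

    # Enable the required API in the project.
    gcloud services enable kubernetesmetadata.googleapis.com --project=PROJECT_ID

    # Grant the publisher role to the logging-monitoring service account.
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:LOGGING_MONITORING_SA_EMAIL" \
        --role="roles/kubernetesmetadata.publisher"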

Version changes in GKE on VMware 1.28.0-gke.651:

  • Bumped etcd to version v3.4.27-0-gke.1.
  • Bumped istio-ingress to version 1.19.3.
  • Bumped the AIS version to hybrid_identity_charon_20230830_0730_RC00.

Other changes in GKE on VMware 1.28.0-gke.651:

  • Creating non-HA admin clusters isn't allowed. New admin clusters are required to be highly available. Non-HA admin clusters have 1 control plane node and 2 add-on nodes. An HA admin cluster has 3 control plane nodes with no add-on nodes, so the number of VMs that a new admin cluster requires hasn't changed.
  • HA admin clusters now have a long-running controller to perform reconciliation periodically.
  • The command gkectl repair admin-master --restore-from-backup now supports restoration of etcd data for HA admin clusters.
  • When upgrading user clusters to version 1.28, we validate all changes made in the configuration file, and return an error for unsupported changes. See Remove unsupported changes to unblock upgrade.
  • The vSphere cloud controller manager is enabled in Controlplane V2 user clusters.
  • We now always write the local Kubernetes audit log file, even when Cloud audit logging is enabled. This allows for easier third-party logging system integration.
  • MetalLB will be the default load balancer for 1.29 user and admin clusters. The ability to use Seesaw as a load balancer will be removed in 1.29. We recommend migrating to the MetalLB load balancer. Upgrades from existing Seesaw clusters will continue to work for a few more releases.
  • The loadBalancer.manualLB.addonsNodePort field is deprecated. The field was used for the in-cluster Prometheus and Grafana add-ons, which were deprecated in version 1.16.
  • The loadBalancer.vips.addonsVIP field is deprecated. The field was used for the in-cluster Prometheus and Grafana add-ons, which were deprecated in version 1.16.
  • yq is no longer pre-installed on the admin workstation.
  • Control-plane nodes now have the node-role.kubernetes.io/control-plane taint; see the toleration sketch after this list.
  • In-tree GlusterFS is removed from Kubernetes 1.27. Added storage validation to detect in-tree GlusterFS volumes.
  • Metrics data are now gzip compressed when sent to Cloud Monitoring.
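
For the control-plane taint, any workload that genuinely must run on control-plane nodes needs a matching toleration. A minimal sketch that patches a hypothetical Deployment; most workloads should not tolerate this taint.

    # Add a toleration for the control-plane taint to DEPLOYMENT_NAME.
    kubectl patch deployment DEPLOYMENT_NAME --type='json' -p='[
      {"op": "add", "path": "/spec/template/spec/tolerations", "value": [
        {"key": "node-role.kubernetes.io/control-plane",
         "operator": "Exists", "effect": "NoSchedule"}
      ]}
    ]'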

The following issues are fixed in 1.28.0-gke.651:

  • Fixed an issue where disable_bundled_ingress failed user cluster load balancer validation.
  • Fixed an issue where the cluster-health-controller sometimes leaked vSphere sessions.
  • Fixed an etcd hostname mismatch issue when using FQDN.
  • Fixed a known issue where admin cluster update or upgrade failed if the projects or locations of add-on services didn't match each other.
  • Fixed a known issue where the CSI workload preflight check failed due to a Pod startup failure.
  • Fixed an issue where deleting a user cluster with a volume attached might get stuck.
  • Fixed a known issue where deleting a Controlplane V2 user cluster might get stuck.
  • Fixed a logrotate error on the ubuntu_containerd image.
  • Fixed a disk full issue on Seesaw VMs due to no log rotation for fluent-bit.
  • Fixed a known issue where Seesaw didn't set the target IP in GARP replies.
  • Fixed a flaky SSH error on non-HA admin control-plane nodes after update/upgrade.

The following vulnerabilities are fixed in 1.28.0-gke.651:

There is an issue that affects upgrading from 1.16.x to 1.28.0: if the 1.16.x cluster relies on an NFS volume, the upgrade fails. Clusters that don't use an NFS volume are not affected. The sketch below shows one way to check for NFS volumes before you upgrade.
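
A minimal sketch for checking whether a 1.16.x cluster uses NFS PersistentVolumes before attempting the upgrade; it assumes kubectl access to the cluster and jq installed on the workstation:

    # Any output means the cluster relies on an NFS volume and is
    # affected by the 1.16.x to 1.28.0 upgrade issue.
    kubectl get pv -o json --kubeconfig USER_CLUSTER_KUBECONFIG \
      | jq -r '.items[] | select(.spec.nfs != null) | .metadata.name'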

Anthos clusters on VMware 1.16.4-gke.37 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.16.4-gke.37 runs on Kubernetes 1.27.6-gke.2500.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issues are fixed in 1.16.4-gke.37:

  • Fixed a warning in the storage preflight check.
  • Fixed an issue where control plane creation failed for a user cluster when using an FQDN hostname for an HA admin cluster.
  • Fixed an issue where the cluster-health-controller might leak vSphere sessions.
  • Fixed an issue where disable_bundled_ingress failed user cluster load balancer validation.

The following vulnerabilities are fixed in 1.16.4-gke.37:

December 12, 2023

Anthos clusters on VMware 1.15.7-gke.40 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.15.7-gke.40 runs on Kubernetes 1.26.9-gke.700.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issues are fixed in 1.15.7-gke.40:

  • Fixed the etcd hostname mismatch issue when using an FQDN.
  • Fixed an issue where the cluster-health-controller might leak vSphere sessions.
  • Fixed the known issue where the CSI workload preflight check fails due to Pod startup failure.

The following vulnerabilities are fixed in 1.15.7-gke.40:

December 04, 2023

The StatefulSet CSI Migration Tool is now available. To learn how to migrate stateful workloads from an in-tree vSphere volume plugin to the vSphere CSI Driver, see Using the StatefulSet CSI Migration Tool.

November 22, 2023

A vulnerability (CVE-2023-5717) has been discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

For more information, see the GCP-2023-046 security bulletin.

November 20, 2023

Anthos clusters on VMware 1.14.10-gke.35 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.10-gke.35 runs on Kubernetes v1.25.13-gke.200.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issues are fixed in 1.14.10-gke.35:

  • Fixed the etcd hostname mismatch issue when using an FQDN.
  • Fixed an issue where deleting a user cluster with a volume attached stalled, leaving the cluster unusable and undeletable.

The following vulnerabilities are fixed in 1.14.10-gke.35:

November 16, 2023

Anthos clusters on VMware 1.16.3-gke.45 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.16.3-gke.45 runs on Kubernetes 1.27.4-gke.1600.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The Prometheus and Grafana add-ons field, loadBalancer.vips.addonsVIP, is deprecated. This change was made because Google Managed Service for Prometheus replaced the Prometheus and Grafana add-ons.

The following issues are fixed in 1.16.3-gke.45:

  • Fixed a Cilium issue causing egress NAT to erroneously break long-lived connections.
  • Fixed the etcd hostname mismatch issue when using an FQDN.
  • Fixed the known issue that caused admin cluster updates or upgrades to fail if the projects or locations of add-on services didn't match each other.
  • Fixed an issue where an external cluster snapshot wasn't taken after gkectl update admin failed.
  • Fixed an issue that caused the CSI workload preflight check to fail when Istio is enabled.
  • Fixed an issue where deleting a user cluster with a volume attached might get stuck indefinitely.
  • Fixed the known issue that caused user cluster deletion to fail when using a user-managed admin workstation.

The following vulnerabilities are fixed in 1.16.3-gke.45:

November 13, 2023

The following vulnerabilities were discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes.

  • CVE-2023-4147

For more information, see the GCP-2023-042 security bulletin.

November 08, 2023

A vulnerability (CVE-2023-4004) has been discovered in the Linux kernel that can lead to a privilege escalation on Container-Optimized OS and Ubuntu nodes. For more information, see the GCP-2023-041 security bulletin.

October 31, 2023

Anthos clusters on VMware 1.15.6-gke.25 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.15.6-gke.25 runs on Kubernetes 1.26.9-gke.700.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following vulnerabilities are fixed in 1.15.6-gke.25:

October 19, 2023

Anthos clusters on VMware 1.16.2-gke.28 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.16.2-gke.28 runs on Kubernetes 1.27.4-gke.1600.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issue is fixed in 1.16.2-gke.28:

  • Fixed the known issue where a non-HA Controlplane V2 cluster was stuck at node deletion until it timed out.

The following vulnerabilities are fixed in 1.16.2-gke.28:

Anthos clusters on VMware 1.14.9-gke.21 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.9-gke.21 runs on Kubernetes 1.25.13-gke.200.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issues are fixed in 1.14.9-gke.21:

  • Fixed the known issue where a non-HA Controlplane V2 cluster was stuck at node deletion until it timed out.

The following vulnerabilities are fixed in 1.14.9-gke.21:

October 12, 2023

Anthos clusters on VMware 1.15.5-gke.41 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.15.5-gke.41 runs on Kubernetes 1.26.7-gke.2500.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issues are fixed in 1.15.5-gke.41:

  • Fixed an issue where server-side preflight checks failed to validate container registry access on clusters with a private network and no private registry.
  • Fixed the known issue where a non-HA Controlplane V2 cluster was stuck at node deletion until it timed out.
  • Fixed the known issue where upgrading or updating an admin cluster with a CA version greater than 1 failed.
  • Fixed an issue where the Controlplane V1 stackdriver operator had --is-kubeception-less=true specified by mistake.
  • Fixed the known issue that caused the secrets encryption key to be regenerated when upgrading the admin cluster from 1.14 to 1.15, resulting in the upgrade being blocked.

The following vulnerabilities are fixed in 1.15.5-gke.41:

October 02, 2023

Upgrading an admin cluster with always-on secrets encryption enabled might fail.

An admin cluster upgrade from 1.14.x to 1.15.0 - 1.15.4 with always-on secrets encryption enabled might fail depending on whether the feature was enabled during cluster creation or during cluster update.

We recommend that you don't upgrade your admin cluster until a fix is available in 1.15.5. If you must upgrade to 1.15.0-1.15.4, do the steps in Preventing the upgrade failure before upgrading the cluster.

For information on working around an admin cluster failure because of this issue, see Upgrading an admin cluster with always-on secrets encryption enabled fails. Note that the workaround relies on you having the old encryption key backed up. If the old key is no longer available, you will have to recreate the admin cluster and all user clusters.

September 29, 2023

Anthos clusters on VMware 1.16.1-gke.45 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.16.1-gke.45 runs on Kubernetes 1.27.4-gke.1600.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The Prometheus and Grafana add-ons field, loadBalancer.vips.addonsVIP, is deprecated in 1.16 and later. This change was made because Google Managed Service for Prometheus replaced the Prometheus and Grafana add-ons in 1.16. The sketch below shows the field in context.
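
A minimal user cluster configuration sketch showing the deprecated field in context; the VIP addresses are illustrative placeholders:

    loadBalancer:
      vips:
        controlPlaneVIP: "192.0.2.3"
        ingressVIP: "192.0.2.4"
        # Deprecated in 1.16 and later; omit from new configurations.
        # addonsVIP: "192.0.2.5"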

The following issues are fixed in 1.16.1-gke.45:

  • Fixed the known issue where gkectl repair admin-master returned a kubeconfig unmarshal error.
  • Fixed the known issue where the GARP reply sent by Seesaw didn't set the target IP.
  • Fixed the known issue where the Seesaw VM might be broken due to low disk space.
  • Fixed the known issue where false warnings might be generated against persistent volume claims.
  • Fixed the known issue that caused CNS attachvolume tasks to appear every minute for in-tree PVCs/PVs after upgrading to Anthos 1.15 or later.

The following vulnerabilities are fixed in 1.16.1-gke.45:

Anthos clusters on VMware 1.14.8-gke.37 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.8-gke.37 runs on Kubernetes 1.25.12-gke.2400.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

The following issues are fixed in 1.14.8-gke.37:

  • Fixed the known issue where missing log rotation for fluent-bit caused the Seesaw VM to run out of disk space.

The following vulnerabilities are fixed in 1.14.8-gke.37:

September 14, 2023

A standalone tool that you run before upgrading an admin or user cluster is now available. The pre-upgrade tool is supported for Anthos clusters on VMware versions 1.9 through 1.13. The tool runs the applicable preflight checks for the version that you are upgrading to and also checks for specific known issues. Before upgrading a 1.9 - 1.13 cluster, we recommend that you run the pre-upgrade tool.

For details on running the tool, see the documentation for the version that you are upgrading to:

September 01, 2023

Anthos clusters on VMware 1.15.4-gke.37 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.15.4-gke.37 runs on Kubernetes 1.26.7-gke.2500.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

Upgrading an admin cluster with always-on secrets encryption enabled might fail.

An admin cluster upgrade from 1.14.x to 1.15.0 - 1.15.4 with always-on secrets encryption enabled might fail depending on whether the feature was enabled during cluster creation or during cluster update.

We recommend that you don't upgrade your admin cluster until a fix is available in 1.15.5. If you must upgrade to 1.15.0-1.15.4, do the steps in Preventing the upgrade failure before upgrading the cluster.

For information on working around an admin cluster failure because of this issue, see Upgrading an admin cluster with always-on secrets encryption enabled fails. Note that the workaround relies on you having the old encryption key backed up. If the old key is no longer available, you will have to recreate the admin cluster and all user clusters.

The following issues are fixed in 1.15.4-gke.37:

  • Fixed a known issue where incorrect log rotation configuration for fluent-bit caused low disk space on the Seesaw VM.

  • Fixed a known issue where the GARP reply sent by Seesaw didn't set the target IP.

  • Fixed an issue where /etc/vsphere/certificate/ca.crt wasn't updated after vSphere CA rotation on Controlplane V2 user cluster control plane machines.

  • Fixed a known issue where the admin SSH public key had an error after an admin cluster upgrade or update.

The following vulnerabilities are fixed in 1.15.4-gke.37:

August 23, 2023

Anthos clusters on VMware 1.16.0-gke.669 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.16.0-gke.669 runs on Kubernetes 1.27.4-gke.1600.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

Version changes:

  • Upgraded VMware vSphere Container Storage Plug-in from 3.0 to 3.0.2.
  • The crictl command-line tool was updated to 1.27.
  • The containerd config was updated to version 2.

Other changes:

  • The output of the gkectl diagnose cluster command has been updated to provide a summary that customers can copy and paste when opening support cases.
  • In-tree GlusterFS is removed from Kubernetes 1.27. Added a storage validation to detect in-tree GlusterFS volumes.

  • Metrics data are now gzip compressed when they are sent to Cloud Monitoring.

  • The stackdriver-log-forwarder (fluent-bit) now sends logs to Cloud Logging with gzip compression to reduce egress bandwidth usage.

  • Prometheus and Grafana are no longer bundled for in-cluster monitoring; they are replaced by Google Cloud Managed Service for Prometheus.

  • The following flags in the stackdriver custom resource are deprecated and changes to their values aren't honored:

    • scalableMonitoring
    • enableStackdriverForApplications (replaced by enableGMPForApplications and enableCloudLoggingForApplications)
    • enableCustomMetricsAdapter
  • Deploying the vSphere cloud controller manager in both admin and user clusters, and enabling it for admin and kubeception user clusters, is now supported.

  • The audit-proxy now sends audit logs to Cloud Audit Logging with gzip compression to reduce egress bandwidth usage.

  • Removed accounts.google.com from the internet preflight check requirement.

  • The predefined dashboards are created automatically when the corresponding metrics are present.

  • Enabled auto repair for the ReadonlyFilesystem node condition.

  • Added support for the d (day) unit with the --log-since flag when taking a cluster snapshot. For example: gkectl diagnose snapshot --log-since=1d

  • A new CSI Workload preflight check was added to verify that workloads using vSphere PVs can work through CSI.

  • Preflight check failures for gkectl prepare now block install and upgrade operations.

  • The kubelet readonly port is now disabled by default for security enhancement. See Enable kubelet readonly port for instructions if you need to re-enable it for legacy reasons.

  • AIS Pods are now scheduled to run on control plane nodes instead of worker nodes.

The following issues are fixed in 1.16.0-gke.669:

  • Fixed the known issue that caused intermittent SSH errors on the non-HA admin master after an update or upgrade.
  • Fixed the known issue where upgrading an enrolled admin cluster could fail due to a membership update failure.
  • Fixed an issue where the CPv1 stackdriver operator had --is-kubeception-less=true specified by mistake.

  • Fixed the issue where clusters used the non-high-availability (HA) Connect Agent after an upgrade to 1.15.

  • Fixed the known issue of Cloud Audit Logging failure due to permission denied.

  • Fixed a known issue where the update operation couldn't be fulfilled because of a KSA signing key version mismatch.

  • Fixed a known issue where $ in the private registry username caused admin control plane machine startup failure.

  • Fixed a known issue where gkectl diagnose snapshot failed to limit the time window for journalctl commands running on the cluster nodes when you take a cluster snapshot with the --log-since flag.

  • Fixed a known issue where node ID verification failed to handle hostnames with dots.

  • Fixed continuous increase of logging agent memory.

  • Fixed the issue that caused gcloud to fail to update the platform when the required-platform-version is already the current platform version.

  • Fixed an issue where cluster-api-controllers in a high-availability admin cluster had no Pod anti-affinity. This could allow the three clusterapi-controllers Pods not to be scheduled on different control-plane nodes.

  • Fixed the wrong admin cluster resource link annotation key that could cause the cluster to be enrolled again by mistake.

  • Fixed a known issue where node pool creation failed because of duplicated VM-Host affinity rules.

  • The preflight check for StorageClass parameter validations now throws a warning instead of a failure on ignored parameters after CSI Migration. StorageClass parameter diskformat=thin is now allowed and does not generate a warning.

  • Fixed a false error message for gkectl prepare when using a high-availability admin cluster.

  • Fixed an issue during the migration from the Seesaw load balancer to MetalLB that caused DeprecatedKubeception to always show up in the diff.

  • Fixed a known issue where some cluster nodes couldn't access the HA control plane when the underlying network performs ARP suppression.

  • Removed unused Pod disruption budgets (such as kube-apiserver-pdb, kube-controller-manager-pdb, and kube-etcd-pdb) for Controlplane V2 user clusters.

The following vulnerabilities are fixed in 1.16.0-gke.669:

August 17, 2023

Anthos clusters on VMware 1.14.7-gke.42 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.7-gke.42 runs on Kubernetes 1.25.10-gke.2100.

Upgraded VMware vSphere Container Storage Plug-in from 2.7.0 to 2.7.2.

The following issues are fixed in 1.14.7-gke.42:

  • Fixed a known issue where the admin SSH public key had an error after an admin cluster upgrade or update.
  • Fixed a known issue where the GARP reply sent by Seesaw didn't set the target IP.
  • Fixed an issue where /etc/vsphere/certificate/ca.crt wasn't updated after vSphere CA rotation on Controlplane V2 user cluster control plane machines.
  • Fixed an issue where the CPv1 stackdriver operator had --is-kubeception-less=true specified by mistake.

The following vulnerabilities are fixed in 1.14.7-gke.42:

August 10, 2023

Anthos clusters on VMware 1.15.3-gke.47 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.15.3-gke.47 runs on Kubernetes 1.26.5-gke.2100.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

Upgrading an admin cluster with always-on secrets encryption enabled might fail.

An admin cluster upgrade from 1.14.x to 1.15.0 - 1.15.4 with always-on secrets encryption enabled might fail depending on whether the feature was enabled during cluster creation or during cluster update.

We recommend that you don't upgrade your admin cluster until a fix is available in 1.15.5. If you must upgrade to 1.15.0-1.15.4, do the steps in Preventing the upgrade failure before upgrading the cluster.

For information on working around an admin cluster failure because of this issue, see Upgrading an admin cluster with always-on secrets encryption enabled fails. Note that the workaround relies on you having the old encryption key backed up. If the old key is no longer available, you will have to recreate the admin cluster and all user clusters.

Anthos clusters on VMware 1.15.3 supports adding the gkeOnPremAPI section to your admin cluster configuration file and user cluster configuration file to enroll the clusters in the Anthos On-Prem API.
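
A minimal sketch of the gkeOnPremAPI section as it might appear in an admin or user cluster configuration file; the region is an illustrative placeholder:

    gkeOnPremAPI:
      # Enroll the cluster in the Anthos On-Prem API.
      enabled: true
      # Google Cloud region in which the Anthos On-Prem API runs.
      location: us-central1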

Upgraded VMware vSphere Container Storage Plug-in from 3.0 to 3.0.2. For more information, see the Plug-in release notes.

The following issues are fixed in 1.15.3-gke.47:

  • Fixed a known issue that caused upgrading an admin cluster enrolled in the Anthos On-Prem API to fail.
  • Fixed an issue where audit logs were duplicated into an offline buffer even when they were successfully sent to Cloud Audit Logging.

The following vulnerabilities are fixed in 1.15.3-gke.47:

July 20, 2023

Anthos clusters on VMware 1.13.10-gke.42 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.10-gke.42 runs on Kubernetes 1.24.14-gke.2100.

  • Upgraded VMware vSphere Container Storage Plug-in from 2.6.2 to 2.7.2.
  • Added short names for Volume Snapshot CRDs.

The following issues are fixed in 1.13.10-gke.42:

  • Fixed an issue where the CPv1 stackdriver operator had --is-kubeception-less=true specified by mistake.
  • Fixed an issue where /etc/vsphere/certificate/ca.crt wasn't updated after vSphere CA rotation on Controlplane V2 user cluster control plane machines.
  • Fixed an issue where audit logs were duplicated into an offline buffer even when they were successfully sent to Cloud Audit Logs.
  • Fixed a known issue where $ in the private registry username caused admin control plane machine startup failure.
  • Fixed a known issue where the update operation couldn't be fulfilled because of a KSA signing key version mismatch.

The following vulnerabilities are fixed in 1.13.10-gke.42:

July 10, 2023

Anthos clusters on VMware 1.15.2-gke.44 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.15.2-gke.44 runs on Kubernetes 1.26.2-gke.1001.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

Upgrading an admin cluster with always-on secrets encryption enabled might fail.

An admin cluster upgrade from 1.14.x to 1.15.0 - 1.15.4 with always-on secrets encryption enabled might fail depending on whether the feature was enabled during cluster creation or during cluster update.

We recommend that you don't upgrade your admin cluster until a fix is available in 1.15.5. If you must upgrade to 1.15.0-1.15.4, do the steps in Preventing the upgrade failure before upgrading the cluster.

For information on working around an admin cluster failure because of this issue, see Upgrading an admin cluster with always-on secrets encryption enabled fails. Note that the workaround relies on you having the old encryption key backed up. If the old key is no longer available, you will have to recreate the admin cluster and all user clusters.

The following issues are fixed in 1.15.2-gke.44:

  • Fixed a bug where after an upgrade to 1.15, clusters used the non-high-availability (HA) Connect Agent.
  • Fixed a known issue where $ in the private registry username caused admin control plane machine startup failure.
  • Fixed a known issue where user cluster update failed after KSA signing key rotation.
  • Fixed a known issue where gkectl diagnose snapshot failed to limit the time window for journalctl commands running on the cluster nodes when you take a cluster snapshot with the --log-since flag.

The following vulnerabilities are fixed in 1.15.2-gke.44:

July 06, 2023

Anthos clusters on VMware 1.14.6-gke.23 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.6-gke.23 runs on Kubernetes 1.25.10-gke.1200.

The following issues are fixed in 1.14.6-gke.23:

  • Fixed a known issue where $ in the private registry username caused admin control plane machine startup failure.
  • Fixed a known issue where gkectl diagnose snapshot failed to limit the time window for journalctl commands running on the cluster nodes when you take a cluster snapshot with the --log-since flag.
  • Fixed a known issue where user cluster update failed after KSA signing key rotation.

The following vulnerabilities are fixed in 1.14.6-gke.23:

High-severity container vulnerabilities:

June 27, 2023

Security bulletin

A number of vulnerabilities have been discovered in Envoy, which is used in Anthos Service Mesh (ASM). These were reported separately as GCP-2023-002.

For more information, see the GCP-2023-016 security bulletin.

Security bulletin

With CVE-2023-31436, an out-of-bounds memory access flaw was found in the Linux kernel's traffic control (QoS) subsystem: a user can trigger the qfq_change_class function with an incorrect MTU value of the network device used as lmax. This flaw allows a local user to crash the system or potentially escalate their privileges.

For more information, see the GCP-2023-017 security bulletin.

Security bulletin

A new vulnerability (CVE-2023-2235) has been discovered in the Linux kernel that can lead to a privilege escalation on the node. For more information, see the GCP-2023-018 security bulletin.

June 20, 2023

Security bulletin

A new vulnerability, CVE-2023-0468, has been discovered in the Linux kernel that could allow an unprivileged user to escalate privileges to root: io_poll_get_ownership keeps increasing req->poll_refs on every io_poll_wake until the count overflows to 0, which causes req->file to be fput twice and leads to a struct file refcount issue. GKE clusters, including Autopilot clusters, with Container-Optimized OS using Linux kernel version 5.15 are affected. GKE clusters using Ubuntu images or using GKE Sandbox are unaffected.

For more information, see the GCP-2023-015 security bulletin.

June 16, 2023

Security bulletin

Two new security issues were discovered in Kubernetes where users may be able to launch containers that bypass policy restrictions when using ephemeral containers and either ImagePolicyWebhook (CVE-2023-2727) or the ServiceAccount admission plugin (CVE-2023-2728).

For more information, see the GCP-2023-014 security bulletin.

June 14, 2023

Anthos clusters on VMware 1.14.5-gke.41 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.5-gke.41 runs on Kubernetes 1.25.8-gke.1500.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.15, 1.14, and 1.13.

The component access service account key for an admin cluster that uses a private registry can be updated in 1.14.5 and later. See Rotating service account keys for details; a hypothetical invocation sketch follows.
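
A hypothetical invocation sketch for the key rotation; the subcommand and flags are assumptions based on the gkectl update credentials family, so follow Rotating service account keys for the authoritative steps:

    # Assumed syntax; verify against the documentation before use.
    gkectl update credentials componentaccess \
        --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
        --config ADMIN_CLUSTER_CONFIG \
        --admin-cluster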

The following issues are fixed in 1.14.5-gke.41:

  • Fixed a known issue where the kind cluster downloads container images from docker.io. These container images are now preloaded in the kind cluster container image.
  • Fixed a bug where disks might be out of order during the first boot, causing node bootstrap failure.
  • Fixed a known issue where node ID verification failed to handle hostnames with dots.
  • Fixed an issue where gcloud fails to update the platform when the required-platform-version is already the current platform version.
  • Fixed an Anthos Config Management gcloud issue where the Policy Controller state might be falsely reported as pending.
  • Fixed continuously increasing memory usage of the logging agent stackdriver-log-forwarder.
  • Fixed the wrong admin cluster resource link annotation key that can cause the cluster to be enrolled in the Anthos On-Prem API again by mistake.
  • Fixed a known issue where some cluster nodes couldn't access the HA control plane when the underlying network performs ARP suppression.
  • Fixed a known issue where vsphere-csi-secret wasn't updated during gkectl update credentials vsphere for an admin cluster.

The following vulnerabilities are fixed in 1.14.5-gke.41:

Anthos clusters on VMware 1.13.9-gke.29 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.9-gke.29 runs on Kubernetes 1.24.11-gke.1200.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.15, 1.14, and 1.13.

The following issues are fixed in 1.13.9-gke.29:

  • Fixed a known issue where the kind cluster downloads container images from docker.io. These container images are now preloaded in the kind cluster container image.
  • Fixed the issue where gkectl failed to limit the time window for journalctl commands running on the cluster nodes when you take a cluster snapshot with the --log-since flag.
  • Fixed an issue where gcloud fails to update the platform when the required-platform-version is already the current platform version.
  • Fixed a known issue where nodes fail to register if the configured hostname contains a period.
  • Fixed the wrong admin cluster resource link annotation key that can cause the cluster to be enrolled again by mistake.

The following high-severity container vulnerabilities are fixed in 1.13.9-gke.29:

June 06, 2023

Security bulletin

A new vulnerability (CVE-2023-2878) has been discovered in the secrets-store-csi-driver where an actor with access to the driver logs could observe service account tokens. These tokens could then potentially be exchanged with external cloud providers to access secrets stored in cloud vault solutions. The severity of this Security Bulletin is None. For more information, see the GCP-2023-009 security bulletin.

June 05, 2023

Known issue

If you create a version 1.13.8 or version 1.14.4 admin cluster, or upgrade an admin cluster to version 1.13.8 or 1.14.4, the kind cluster pulls the following container images from docker.io:

  • docker.io/kindest/kindnetd
  • docker.io/kindest/local-path-provisioner
  • docker.io/kindest/local-path-helper

If docker.io isn't accessible from your admin workstation, the admin cluster creation or upgrade fails to bring up the kind cluster. See the reachability sketch at the end of this entry.

This issue affects the following versions of Anthos clusters on VMware:

  • 1.14.4
  • 1.13.8

For more information, including a workaround, see kind cluster pulls container images from docker.io on the Known issues page.
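
A minimal reachability sketch to run from the admin workstation before creating or upgrading an affected version; the expected status code is an assumption (Docker Hub normally returns 401 for unauthenticated requests):

    # A 401 still proves the registry endpoint is reachable; a timeout
    # or connection error suggests this known issue will be triggered.
    curl -s -o /dev/null -w "%{http_code}\n" https://registry-1.docker.io/v2/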

Security bulletin

A new vulnerability (CVE-2023-1872) has been discovered in the Linux kernel that can lead to a privilege escalation to root on the node. For more information, see the GCP-2023-008 security bulletin.

June 01, 2023

Anthos clusters on VMware 1.15.1-gke.40 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.15.1-gke.40 runs on Kubernetes 1.26.2-gke.1001.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.15, 1.14, and 1.13.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

Upgrading an admin cluster with always-on secrets encryption enabled might fail.

An admin cluster upgrade from 1.14.x to 1.15.0 - 1.15.4 with always-on secrets encryption enabled might fail depending on whether the feature was enabled during cluster creation or during cluster update.

We recommend that you don't upgrade your admin cluster until a fix is available in 1.15.5. If you must upgrade to 1.15.0-1.15.4, do the steps in Preventing the upgrade failure before upgrading the cluster.

For information on working around an admin cluster failure because of this issue, see Upgrading an admin cluster with always-on secrets encryption enabled fails. Note that the workaround relies on you having the old encryption key backed up. If the old key is no longer available, you will have to recreate the admin cluster and all user clusters.

The following issues are fixed in 1.15.1-gke.40:

  • Fixed a known issue where node ID verification failed to handle hostnames with dots.

  • Fixed continuous increase of logging agent memory.

  • Fixed an issue where cluster-api-controllers in a high-availability admin cluster had no Pod anti-affinity. This could allow the three clusterapi-controllers Pods not to be scheduled on different control-plane nodes.

  • Fixed the wrong admin cluster resource link annotation key that can cause the cluster to be enrolled again by mistake.

  • Fixed a known issue where node pool creation failed because of duplicated VM-Host affinity rules.

  • The preflight check for StorageClass parameter validations now throws a warning instead of a failure on ignored parameters after CSI Migration. StorageClass parameter diskformat=thin is now allowed and does not generate a warning.

  • Fixed an issue where gkectl repair admin-master might fail with Failed to repair: failed to delete the admin master node object and reboot the admin master VM.

  • Fixed a race condition where some cluster nodes couldn't access the high-availability control plane when the underlying network performed ARP suppression.

  • Fixed a false error message for gkectl prepare when using a high-availability admin cluster.

  • Fixed an issue where, during user cluster update, DeprecatedKubeception always showed up in the diff.

  • Fixed an issue where there were leftover Pods with failed status due to Predicate NodeAffinity failed during node re-creation.

Fixed the following vulnerabilities:

May 18, 2023

Security bulletin

Two new vulnerabilities (CVE-2023-1281, CVE-2023-1829) have been discovered in the Linux kernel that can lead to a privilege escalation to root on the node. For more information, see the GCP-2023-005 security bulletin.

May 15, 2023

Anthos clusters on VMware 1.13.8-gke.42 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.8-gke.42 runs on Kubernetes 1.24.11-gke.1200.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.15, 1.14, and 1.13.

The following issues are fixed in 1.13.8-gke.42:

  • Fixed a race condition where some cluster nodes couldn't access the HA control plane when the underlying network performed ARP suppression.

  • Fixed an issue where vsphere-csi-secret was not updated during gkectl update credentials vsphere for an admin cluster.

  • Disabled motd news on the ubuntu_containerd image to avoid unexpected connections to Canonical.

  • Fixed an issue where the Connect Agent continued using the older image after registry credential update.

  • Fixed an issue where cluster autoscaler ClusterRoleBindings in the admin cluster were accidentally deleted upon user cluster deletion. This fix removes dependency on ClusterRole, ClusterRoleBinding and ServiceAccount objects in the admin cluster.

  • Fixed an issue where Connect Agent in admin clusters might fail to be upgraded during cluster upgrade.

  • Fixed an issue where a cluster might not be registered when the initial membership creation attempt failed.

Fixed the following vulnerabilities:

May 02, 2023

Anthos clusters on VMware 1.15.0-gke.581 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.15.0-gke.581 runs on Kubernetes 1.26.2-gke.1001.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.15, 1.14, and 1.13.

If you use a third-party storage vendor, check the GDCV Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of GKE on VMware.

Upgrading an admin cluster with always-on secrets encryption enabled might fail.

An admin cluster upgrade from 1.14.x to 1.15.0 - 1.15.4 with always-on secrets encryption enabled might fail depending on whether the feature was enabled during cluster creation or during cluster update.

We recommend that you don't upgrade your admin cluster until a fix is available in 1.15.5. If you must upgrade to 1.15.0-1.15.4, do the steps in Preventing the upgrade failure before upgrading the cluster.

For information on working around an admin cluster failure because of this issue, see Upgrading an admin cluster with always-on secrets encryption enabled fails. Note that the workaround relies on you having the old encryption key backed up. If the old key is no longer available, you will have to recreate the admin cluster and all user clusters.

  • CSI migration for the vSphere storage driver is enabled by default. A new storage preflight check and a new CSI workload preflight check verify that PersistentVolumes that used the old in-tree vSphere storage driver will continue to work with the vSphere CSI driver. There is a known issue during admin cluster upgrade: if you see a preflight check failure about a StorageClass diskformat parameter, you can use --skip-validation-cluster-health to skip the check (see the command sketch at the end of this list). This issue will be fixed in a future release.

  • The minimum required version of vCenter and ESXi is 7.0 Update 2.

  • Preview: Support for vSphere 8.0

  • Preview: Support for VM-Host affinity for user cluster node pools

  • Preview: Support for a high-availability control plane for admin clusters

  • Preview: Support for system metrics collection using Google Cloud Managed Service for Prometheus

  • Preview: You can now filter application logs by namespace, Pod labels, and content regex.

  • Preview: Support for storage policy in user clusters

  • Preview: You can now use gkectl diagnose snapshot --upload=true to upload a snapshot. gkectl also generates the Cloud Storage bucket, with the format gs://anthos-snapshot[uuid]/vmware/$snapshot-name.

  • GA: Support for upgrade and rollback of node pool version

  • GA: gkectl get-config is a new command that locally generates cluster configuration files from an existing admin or user cluster.

  • GA: Support for multi-line parsing of Go and Java logs

  • GA: Support for manual load balancing in user clusters that enable ControlplaneV2

  • GA: Support for update of private registry credentials

  • GA: Metrics and logs in the bootstrap cluster are now uploaded to Google Cloud through Google Cloud's operations suite to provide better observability on admin cluster operations.

  • GA: vSphere CSI is now enabled for Windows node pools.

  • Fully managed Cloud Monitoring Integration dashboards. The new Integration Dashboard is automatically installed. You cannot make changes to the following dashboards, because they are fully managed by Google. However, you can make a copy of a dashboard and customize the copied version:

    • Anthos Cluster Control Plane Uptime
    • Anthos Cluster Node Status
    • Anthos Cluster Pod Status
    • Anthos Cluster Utilization Metering
    • Anthos Cluster on VMware VM Status
  • Admin cluster update operations are now managed by an admin cluster controller.

  • The Connect Agent now runs in high availability mode.

  • The metrics server now runs in high-availability mode.

  • Upgraded the VMware vSphere Container Storage Plug-in from 2.7 to 3.0. This includes support for Kubernetes version 1.26. For more information, see the plug-in release notes.

  • Upgraded Anthos Identity Service to hybrid_identity_charon_20230313_0730_RC00.

  • Switched the node selector from node-role.kubernetes.io/master to node-role.kubernetes.io/control-plane and added toleration node-role.kubernetes.io/control-plane to system components.

  • Controlplane V2 is now the default for new user clusters.

  • When you delete a Controlplane V2 user cluster, the data disk is now automatically deleted.

  • Cluster DNS now supports ordering policy for upstream servers.

  • Added admin cluster CA certificate validation to the admin cluster upgrade preflight check.

  • Upgraded Anthos Network Gateway to 1.4.4.

  • Updated anthos-multinet.

  • When you use gkectl diagnose snapshot to upload and share a snapshot with the Google Support team service account (service-[GOOGLE_CLOUD_PROJECT_NUMBER]@gcp-sa-anthossupport.iam.gserviceaccount.com), gkectl provisions the service account automatically.

  • Upgraded node-exporter from 1.0.1 to 1.4.1.

  • Upgraded Managed Service for Prometheus for application metrics from 0.4 to 0.6.

  • We now allow storage DRS to be enabled in manual mode.

  • GKE connect is now required for admin clusters, and you cannot skip the corresponding validation. You can register existing admin clusters by using gkectl update admin.

  • We no longer silently skip saving empty files in diagnose snapshots, but instead collect the names of those files in a new empty_snapshots file in the snapshot tarball.

  • We now mount /opt/data using the disk label data.

  • In the vSphere CSI driver, enabled improved-csi-idempotency and async-query-volume, and disabled trigger-csi-fullsync. This enhances the vSphere CSI driver to ensure volume operations are idempotent.

  • Changed the relative file path fields in the admin cluster configuration file to use absolute paths.

  • Removed kubectl describe events from cluster snapshots for a better user experience: kubectl describe events fails when the target event expires, whereas kubectl get events survives and provides enough debugging information.
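
A minimal sketch of skipping the known StorageClass diskformat preflight check during an admin cluster upgrade, as noted in the CSI migration item above; the file names are placeholders:

    # --skip-validation-cluster-health bypasses the check affected by
    # the known issue (assumption: other preflight checks still run).
    gkectl upgrade admin \
        --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
        --config ADMIN_CLUSTER_CONFIG \
        --skip-validation-cluster-health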

Deprecations

  • Support for gkeadm on macOS and Windows is deprecated.

  • The enableWindowsDataplaneV2 field in the user cluster configuration file is deprecated.

  • The gkectl enroll cluster command is deprecated. Use gcloud to enroll a user cluster instead.

  • The following dashboards in the Cloud Monitoring Sample Library will be deprecated in a future release:

    • Anthos cluster control plane uptime
    • Anthos cluster node status
    • Anthos cluster pod status
    • Anthos utilization metering
    • GKE on-prem node status
    • GKE on-prem control plane uptime
    • GKE on-prem pod status
    • GKE on-prem vSphere vm health status
  • In a future release, the following customized dashboards will not be created when you create a new cluster:

    • GKE on-prem node status
    • GKE on-prem control plane uptime
    • GKE on-prem pod status
    • GKE on-prem vSphere vm health status
    • GKE on-prem Windows pod status
    • GKE on-prem Windows node status

The following issues are fixed in 1.15.0-gke.581:

  • Fixed the false error message generated by the cluster autoscaler about a missing ClusterRoleBinding. After a user cluster is deleted, that ClusterRoleBinding is no longer needed.

  • Fixed an issue where gkectl check-config failed (nil pointer error) during validation for Manual load balancing.

  • Fixed an issue where the cluster autoscaler did not work when Controlplane V2 was enabled.

  • Fixed an issue where using gkectl update to enable Cloud Audit Logs did not work.

  • Fixed an issue where a preflight check for Seesaw load balancer creation failed if the Seesaw group file already existed.

  • We now backfill the OnPremAdminCluster OSImageType field to prevent an unexpected diff during update.

  • Fixed an issue where disks might be out of order during the first boot.

  • Fixed an issue where the private registry credentials file for the user cluster could not be loaded.

  • Fixed an issue where the user-cluster node options and startup script used the cluster version instead of the node pool version.

  • Fixed an issue where gkectl diagnose cluster didn't check the health of control-plane Pods for kubeception user clusters.

  • Fixed an issue where KSASigningKeyRotation always showed as an unsupported change during user cluster update.

  • Fixed an issue where a cluster might not be registered when the initial membership creation attempt failed.

  • Fixed an issue where user cluster data disk validation used the cluster-level vCenter.datastore instead of masterNode.vsphere.datastore.

  • Fixed an issue where component-access-sa-key was missing in the admin-cluster-creds Secret after admin cluster upgrade.

  • Fixed an issue where during user cluster upgrade, the cluster state indicated that upgrade had completed before CA rotation had completed.

  • Fixed an issue where advanced networking components were evicted or not scheduled on nodes because of Pod priority.

  • Fixed a known issue where the calico-node Pod was unable to renew the auth token in the calico CNI kubeconfig file.

  • Fixed Anthos Identity Service metric exporting issues.

  • During preflight checks and cluster diagnosis, we now skip PersistentVolumes and PersistentVolumeClaims that use non-vSphere drivers.

  • Fixed a known issue where CIDR ranges could not be used in the IP block file.

  • Fixed an issue where auto resizing of CPU and memory for an admin cluster add-on node got reset by an admin cluster controller.

  • anet-operator can now be scheduled to a Windows node in a user cluster that has Controlplane V2 enabled.

May 01, 2023

Anthos clusters on VMware 1.14.4-gke.54 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.4-gke.54 runs on Kubernetes 1.25.8-gke.1500.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

Added admin cluster CA certificate validation to the admin cluster upgrade preflight check.

  • Fixed an issue where the Connect Agent continued using the older image after registry credential update.

  • Fixed an issue where the cluster autoscaler did not work when Controlplane V2 was enabled.

  • Fixed an issue where a cluster might not be registered when the initial membership creation attempt failed.

  • Fixed an issue where ClusterRoleBindings in the admin cluster were accidentally deleted upon user cluster deletion. This fix removes dependency on ClusterRole, ClusterRoleBinding and ServiceAccount objects in the admin cluster.

  • Fixed an issue where a preflight check for Seesaw load balancer creation failed if the Seesaw group file already existed.

  • Disabled motd news on the ubuntu_containerd image.

  • Fixed an issue where gkectl check-config failed at Manual LB slow validation with a nil pointer error.

  • Fixed an issue where enabling Cloud Audit Logs with gkectl update did not work.

Fixed the following vulnerabilities:

April 13, 2023

Anthos clusters on VMware 1.12.7-gke.20 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.7-gke.20 runs on Kubernetes 1.23.17-gke.900.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

  • Added admin cluster CA certificate validation to the admin cluster upgrade preflight check.

  • We now allow storage DRS to be enabled in manual mode.

  • Fixed an issue where using gkectl update to enable Cloud Audit Logs did not work.

  • We now backfill the OnPremAdminCluster OSImageType field to prevent an unexpected diff during update.

  • Fixed an issue where a preflight check for Seesaw load balancer creation failed if the Seesaw group file already existed.

April 12, 2023

Kubernetes image registry redirect

As of March 21, 2023, traffic to k8s.gcr.io is redirected to registry.k8s.io, following the community announcement. This change is happening gradually to reduce disruption, and should be transparent for most Anthos clusters.

To check for edge cases and mitigate potential impact to your clusters, follow the step-by-step guidance in k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know.
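
A minimal sketch, in the spirit of that guidance, for listing every container image in use and surfacing any still pulled from k8s.gcr.io; it assumes kubectl access to the cluster:

    # Images matched here depend on the redirect continuing to work.
    kubectl get pods --all-namespaces \
      -o jsonpath="{range .items[*]}{range .spec.containers[*]}{.image}{'\n'}{end}{end}" \
      | sort -u | grep k8s.gcr.io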

April 11, 2023

1.13.7 patch release

Anthos clusters on VMware 1.13.7-gke.29 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.7-gke.29 runs on Kubernetes 1.24.11-gke.1200.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

Fixed for 1.13.7

  • Fixed an issue where gkectl check-config failed at Manual LB slow validation with a nil pointer error.

  • Fixed a bug where enabling Cloud Audit Logs with gkectl update did not work.

  • Fixed an issue where a preflight check for Seesaw load balancer creation failed if the Seesaw group file already existed.

  • We now backfill the OnPremAdminCluster OSImageType field to prevent an unexpected diff during update.

Security bulletin

Two new vulnerabilities, CVE-2023-0240 and CVE-2023-23586, have been discovered in the Linux kernel that could allow an unprivileged user to escalate privileges. For more information, see the GCP-2023-003 security bulletin.

1.12.7-gke.19 bad release

Anthos clusters on VMware 1.12.7-gke.19 is a bad release and you should not use it. The artifacts have been removed from the Cloud Storage bucket.

April 03, 2023

Anthos clusters on VMware 1.14.3-gke.25 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.3-gke.25 runs on Kubernetes 1.25.5-gke.100.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

We now allow storage DRS to be enabled in manual mode.

  • We now backfill the OnPremAdminCluster OSImageType field to prevent an unexpected diff during cluster update.

  • Fixed an issue where gkectl diagnose cluster didn't check the health of control-plane Pods for kubeception user clusters.

  • Fixed an issue where the user-cluster node options and startup script used the cluster version instead of the node pool version.

Fixed the following vulnerabilities:

March 17, 2023

Anthos clusters on VMware 1.13.6-gke.32 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.6-gke.32 runs on Kubernetes 1.24.10-gke.2200.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

  • Fixed an issue with Anthos Identity Service to better scale and handle concurrent authentication requests.

  • Fixed an issue where component-access-sa-key was missing in the admin-cluster-creds Secret after admin cluster upgrade.

Fixed the following vulnerabilities:

March 07, 2023

Anthos clusters on VMware 1.14.2-gke.37 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.2-gke.37 runs on Kubernetes 1.25.5-gke.100.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

We no longer silently skip saving empty files in diagnose snapshots, but instead collect the names of those files in a new empty_snapshots file in the snapshot tarball.

  • Fixed an issue where user cluster data disk validation used the cluster-level datastore vsphere.datastore instead of masterNode.vsphere.datastore.

  • Fixed an issue with Anthos Identity Service to better scale and handle concurrent authentication requests.

  • Fixed an issue where component-access-sa-key was missing in the admin-cluster-creds Secret after admin cluster upgrade.

  • Fixed an issue where user cluster upgrade triggered through the Google Cloud console might flap between ready and non-ready states until CA rotation fully completes.

  • Fixed an issue where gkectl diagnose cluster might generate false failure signals with non-vSphere CSI drivers.

  • Fixed an issue where admin cluster update didn't wait for user control-plane machines to be re-created when using Controlplane V2.

Fixed the following vulnerabilities:

March 06, 2023

Cluster lifecycle improvements versions 1.13.1 and later

You can use the Google Cloud console or the gcloud CLI to upgrade user clusters managed by the Anthos On-Prem API. The upgrade steps differ depending on your admin cluster version. For more information, see the version of the documentation that corresponds to your admin cluster version:

1.12.6 patch release

Anthos clusters on VMware 1.12.6-gke.35 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.6-gke.35 runs on Kubernetes v1.23.16-gke.2400.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

  • Fixed a bug where KSASigningKeyRotation always showed as an unsupported change during user cluster update.
  • Fixed an issue with Anthos Identity Service to better scale and handle concurrent authentication requests.

  • Fixed an issue where component-access-sa-key was missing in the admin-cluster-creds Secret after admin cluster upgrade.

Fixed the following vulnerabilities:

March 01, 2023

A new vulnerability (CVE-2022-4696) has been discovered in the Linux kernel that can lead to a privilege escalation on the node. Anthos clusters on VMware running v1.12 and v1.13 are impacted. Anthos clusters on VMware running v1.14 or later are not affected.

For instructions and more details, see the Anthos clusters on VMware security bulletin.

February 13, 2023

Anthos clusters on VMware 1.13.5-gke.27 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.5-gke.27 runs on Kubernetes 1.24.9-gke.2500.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

  • Updated the Ubuntu image to ubuntu-gke-op-2004-1-13-v20230201, which uses node kernel version 5.4.0.1062.60.

  • Instead of ignoring snapshot files with empty content, we save their names in a new file named empty_snapshots.

  • During preflight checks and cluster diagnosis, we now skip PVs and PVCs that use non-vSphere drivers.

Fixed the following vulnerabilities:

January 31, 2023

Anthos clusters on VMware 1.14.1-gke.39 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.1-gke.39 runs on Kubernetes 1.25.5-gke.100.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

  • In the admin cluster configuration file, gkeadm now prepopulates caCertPath and the service account key paths with absolute paths instead of relative paths.

  • In the vSphere CSI driver, enabled improved-csi-idempotency and async-query-volume, and disabled trigger-csi-fullsync. This enhances the vSphere CSI driver to ensure volume operations are idempotent.

  • Fixed a known issue where the calico-node Pod was unable to renew the auth token in the calico CNI kubeconfig file.

  • Fixed a known issue where CIDR ranges couldn't be used in the IP block file.

Fixed the following vulnerabilities:

January 26, 2023

Anthos clusters on VMware 1.12.5-gke.34 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.5-gke.34 runs on Kubernetes 1.23.15-gke.2400.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

In the vSphere CSI driver, enabled improved-csi-idempotency and async-query-volume, and disabled trigger-csi-fullsync. This enhances the vSphere CSI driver to ensure volume operations are idempotent.

  • If you specify a CIDR range (subnet) in the IP block file for your cluster nodes, the broadcast IP of the subnet, the network CIDR IP, and the network gateway IP are excluded from the pool of addresses that get assigned to nodes. See the sketch after this list.

  • Fixed a known issue where CIDR ranges couldn't be used in the IP block file.

  • Fixed a bug where CA rotation appeared as an unsupported change during admin cluster update.
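
A minimal IP block file sketch using a CIDR range, as described in the list above. The field names follow the IP block file format, but the values are illustrative: with 172.16.20.0/24 and gateway 172.16.20.1, the network IP (172.16.20.0), broadcast IP (172.16.20.255), and gateway are excluded, leaving 253 assignable node addresses:

    blocks:
      - netmask: 255.255.255.0
        gateway: 172.16.20.1
        ips:
          # A CIDR range in place of individual host entries.
          - ip: 172.16.20.0/24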

Fixed the following vulnerabilities:

January 25, 2023

Anthos clusters on VMware version 1.14.0 has a known issue where the calico-node Pod is unable to renew the auth token in the calico CNI kubeconfig file. For more information, see Pod create or delete errors due to Calico CNI service account auth token issue.

Because of this issue, you cannot use Anthos On-Prem API clients (Google Cloud console and gcloud CLI) to create and manage 1.14.0 clusters.

January 12, 2023

Anthos clusters on VMware 1.13.4-gke.19 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.4-gke.19 runs on Kubernetes 1.24.9-gke.100.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

  • In the vSphere CSI driver, enabled improved-csi-idempotency and async-query-volume, and disabled trigger-csi-fullsync. This ensures that volume operations in the vSphere CSI driver are idempotent.

  • In the admin cluster configuration file, gkeadm now prepopulates caCertPath and the service account key paths with absolute paths instead of relative paths.

  • If you specify a CIDR range (subnet) in the IP block file for your cluster nodes, the broadcast IP of the subnet, the network CIDR IP, and the network gateway IP will be excluded from the pool of addresses that get assigned to nodes.
  • Fixed a bug where CIDR ranges cannot be used in an IP block file.

December 22, 2022

A new vulnerability (CVE-2022-2602) has been discovered in the io_uring subsystem in the Linux kernel that can allow an attacker to potentially execute arbitrary code.

For more information, see the GCP-2022-025 security bulletin.

December 21, 2022

Anthos clusters on VMware 1.14.0-gke.430 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.0-gke.430 runs on Kubernetes 1.25.5-gke.100.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

  • Support for user cluster creation with Controlplane V2 enabled is now generally available. For more details on how to create a user cluster with this model, see Create a user cluster with Controlplane V2.
  • Preview: You can now roll back node pools to a previous working version if you detect an issue in the new version after a cluster upgrade. For more information, see Rolling back a node pool after an upgrade.
  • Preview: The following private registry updates are now available:
    • Support for private registry credentials using prepared Secrets is now available as a preview feature. A new privateRegistry field has been added in the Secrets configuration file.
    • Added a new privateRegistry section in the user cluster configuration file. You can use different private registry credentials for the user cluster and admin cluster. You can also use a different private registry address for user clusters with Controlplane V2 enabled.
    • You can also update private registry credentials for an admin cluster or user cluster with the gkectl update credentials command. For more information, see Update private registry credentials.
  • Cluster names are now included in kubeconfig files when creating a new admin cluster or user cluster. If you are upgrading your existing cluster to 1.14.0 or higher, the existing kubeconfig file is updated with the cluster name.
  • cluster-health-controller is now integrated with health-check-exporter to emit metrics based on the periodic health check results, making it easy to monitor and detect cluster health problems.
  • GA: The node pool update policy is generally available. With this feature, you can set maximumConcurrentNodePoolUpdate in the user cluster configuration file to 1, which limits the number of additional nodes spawned during a cluster upgrade or update and can help you avoid two issues: hitting resource quota limits and PDB deadlocks. For more information, see Configure node pool update policy.
  • Support for vSphere cluster/host/network/datastore folders is generally available. You can use folders to group objects of the same type for easier management. For more information, see Specify vSphere folders in cluster configuration and the relevant sections in the admin cluster and user cluster configuration files.
  • Added a feature enabling cluster administrators to configure RBAC policies based on Azure Active Directory (AD) groups. Group information for users belonging to more than 200 groups can now be retrieved.
  • Upgraded Kubernetes from 1.24 to 1.25:
    • Migrated PDB API version from policy/v1beta1 to policy/v1. You must ensure that any workload PDB API version is updated to policy/v1 before upgrading your cluster to 1.14.0 (see the example manifest after this list).
    • Migrated autoscaling/v2beta1 to autoscaling/v2.
    • Disabled CSI Migration for vSphere as this is enabled by default in Kubernetes 1.25.
  • Added storage validation that checks if in-use Kubernetes PersistentVolumes (PV) have disks present in the configured datastore, and if node.Status.VolumesAttached is consistent with the actual PV/disk attachment states during admin and user cluster upgrade preflight checks.
  • Updated gcloud version to 410.0.0 on the admin workstation.
  • Upgraded VMware vSphere Container Storage Plug-in from 2.5 to 2.7. This version bump includes support for Kubernetes version 1.25. For more information, see VMware vSphere Container Storage Plug-in 2.7 Release Notes.
  • In the generated user cluster configuration template, the prepopulated value for enableDataplaneV2 is now true.
  • Removed unnecessary RBAC policies for managing the lifecycle of user clusters in the Google Cloud console.
  • Updated the parser of container logs to extract severity level.
  • Simplified the cluster snapshot upload process by automatically retrieving the GKE connect-register service account key and making the --service-account-key-file flag optional. If the cluster is not registered correctly and no service account key file is passed in through the flag, the gkectl diagnose snapshot command uses the GOOGLE_APPLICATION_CREDENTIALS environment variable to authenticate the request.
  • Upgraded Container-Optimized OS to m101.
  • In the admin cluster and user cluster configuration file templates, the loadbalancer.kind field is now prepopulated with MetalLB.

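For example, a minimal PodDisruptionBudget manifest on the policy/v1 API version looks like this (the workload names are illustrative):

    apiVersion: policy/v1          # was policy/v1beta1, which Kubernetes 1.25 no longer serves
    kind: PodDisruptionBudget
    metadata:
      name: example-pdb
    spec:
      minAvailable: 1
      selector:
        matchLabels:
          app: example
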
A known issue has been discovered. See the January 25, 2023 release note.

December 20, 2022

Anthos clusters on VMware 1.12.4-gke.42 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.4-gke.42 runs on Kubernetes 1.23.13-gke.1700.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.

  • Changed the relative file path fields in the admin cluster configuration file to use absolute paths.
  • Added the yq tool to the admin workstation.

December 15, 2022

Anthos clusters on VMware 1.13.3-gke.26 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.3-gke.26 runs on Kubernetes 1.24.7-gke.1700.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.

  • Added the yq tool to the admin workstation to simplify troubleshooting.
  • Upgraded VMware vSphere Container Storage Plug-in from 2.5 to 2.6.2. This version bump includes support for Kubernetes version 1.24. For more information, see VMware vSphere Container Storage Plug-in 2.6 Release Notes.
  • Added storage validation that checks Kubernetes PersistentVolumes and vSphere virtual disks as part of admin and user cluster upgrade preflight checks.
  • Fixed an issue where anet-operator could be scheduled to a Windows node with enableControlplaneV2: true.
  • Fixed OOM events associated with monitoring-operator Pods by increasing the memory limit to 1 GB.
  • Fixed the issue where deleting a user cluster also deleted cluster-health-controller and vsphere-metrics-exporter ClusterRole objects.
  • Fixed multiple vulnerabilities.

December 08, 2022

Anthos clusters on VMware 1.11.6-gke.18 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.11.6-gke.18 runs on Kubernetes 1.22.15-gke.3300.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.

November 17, 2022

Anthos clusters on VMware 1.13.2-gke.26 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.2-gke.26 runs on Kubernetes 1.24.7-gke.1400.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.

  • Fixed a validation error where the GKE Hub membership is not found when using a gcloud version that is not bundled with the admin workstation.
  • Fixed the issue where the admin cluster might fail to register due to naming conflicts.
  • Fixed the issue where the Connect Agent in the admin cluster does not upgrade after a failure to upgrade nodes in the user cluster control plane.
  • Fixed a bug where running gkectl diagnose snapshot using system scenario did not capture Cluster API resources in the default namespace.
  • Fixed the issue during admin cluster creation where gkectl check-config fails due to missing OS images, if gkectl prepare is not run first.
  • Fixed the unspecified Internal Server error in ClientConfig when using the Anthos Identity Service (AIS) hub feature to manage the OpenID Connect (OIDC) configuration.
  • Fixed the issue of /var/log/audit/ filling up disk space on the admin workstation.
  • Fixed an issue where cluster deletion may be stuck at node draining when the user cluster control plane and node pools are on different datastores.
  • Fixed the issue where nodes fail to register if the configured hostname in the IP block file contains one or more periods.
  • Fixed multiple vulnerabilities.

November 10, 2022

Anthos clusters on VMware 1.11.5-gke.14 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.11.5-gke.14 runs on Kubernetes 1.22.15-gke.2200.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.

November 09, 2022

Two new vulnerabilities, CVE-2022-2585 and CVE-2022-2588, have been discovered in the Linux kernel that can lead to a full container break out to root on the node.

For more information, see the GCP-2022-024 security bulletin.

November 07, 2022

A security vulnerability, CVE-2022-39278, has been discovered in Istio, which is used in Anthos Service Mesh, that allows a malicious attacker to crash the control plane.

For instructions and more details, see the Anthos clusters on VMware security bulletin.

November 01, 2022

Anthos clusters on VMware 1.13.1-gke.35 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.1-gke.35 runs on Kubernetes 1.24.2-gke.1900.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.

  • Increased logging granularity for the cluster backup operation, including the status of each step of the process.

October 28, 2022

A new vulnerability, CVE-2022-20409, has been discovered in the Linux kernel that could allow an unprivileged user to escalate to system execution privilege.

For instructions and more details, see the Anthos clusters on VMware security bulletin.

October 27, 2022

A new vulnerability, CVE-2022-3176, has been discovered in the Linux kernel that can lead to local privilege escalation. This vulnerability allows an unprivileged user to achieve full container breakout to root on the node.

For instructions and more details, see the Anthos clusters on VMware security bulletin.

October 25, 2022

Anthos clusters on VMware 1.12.3-gke.23 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.3-gke.23 runs on Kubernetes 1.23.8-gke.1900.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.

  • Fixed the issue of a race condition that blocks the deletion of an old machine object during cluster upgrade or update.
  • Fixed an issue for clusters enabled with Anthos Network Gateway where the NetworkGatewayGroup object may erroneously report nodes as having NotHealthy status.
  • Fixed an issue where creating or updating NetworkGatewayGroup objects fails because of a webhook IP conflict error.
  • Fixed multiple vulnerabilities.

October 13, 2022

Anthos clusters on VMware 1.11.4-gke.32 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.11.4-gke.32 runs on Kubernetes 1.22.8-gke.204.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.

October 12, 2022

The Connect Agent version used in Anthos clusters on VMware versions 1.8 and earlier is no longer supported. If you upgrade a user cluster that runs one of these versions, the gkectl upgrade cluster command may fail. If you encounter this issue and need further assistance, contact Google Support.

October 11, 2022

If you use gcloud anthos version 1.4.2, and authenticate an Anthos cluster on VMware with gcloud anthos auth, the command fails with the following error:

Decryption failed, no keys in the current key set could decrypt the payload.

To resolve this, you must upgrade gcloud anthos to 1.4.3 or above (gcloud SDK 397.0.0 or above) to authenticate clusters with gcloud anthos auth.

September 29, 2022

Anthos clusters on VMware 1.13.0-gke.525 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.0-gke.525 runs on Kubernetes 1.24.2-gke.1900.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.

vSphere versions below 7.0 Update 1 are no longer supported in Anthos clusters on VMware. You must upgrade vSphere (both ESXi and vCenter) to version 7.0 Update 1 or above before you can upgrade to Anthos clusters on VMware 1.13.0. If you want to use the vSphere Container Storage Interface driver or NFSv3, then you must upgrade to vSphere 7.0 Update 2 or a later update of version 7.0.

Cluster lifecycle improvements:

  • GA: A new asynchronous variation of the user cluster upgrade is now supported. With this variation, the gkectl upgrade cluster command starts the upgrade and then returns; you don't need to watch the output of the command for the entire duration of the upgrade. For more details, see Upgrade a user cluster.
  • Preview: You can now update node pools sequentially, or keep the default parallel behavior, by specifying the value of maximumConcurrentNodePoolUpdate in your user cluster configuration file (see the configuration sketch after this list). Setting the value to 1 makes node pool updates sequential, which can help you avoid two issues: hitting resource quota limits and PDB deadlocks.
  • Introduced an admin cluster controller for managing the admin cluster lifecycle.
  • Added new preflight checks:
    • Check that node IPs are in the subnet for IPAM.
    • A new preflight check validates the clusterLocation field under stackdriver and cloudAuditLogging (see the configuration sketch after this list). This check requires the component access service account to have the compute.viewer role, and compute.googleapis.com to be allowlisted in the HTTP proxy and firewall settings. If the clusterLocation value is invalid, the preflight check fails. To correct an invalid clusterLocation, remove the stackdriver and/or cloudAuditLogging sections from the admin or user cluster configuration files, apply the changes with gkectl update, and then add the corrected configurations back. Alternatively, you can use --skip-validation-gcp to skip the check. Note that an invalid clusterLocation causes log and metric exports to fail.
    • For a cluster in static IP mode, check that you have one IP address for each node plus one additional IP address, which is used for a temporary node during cluster update, upgrade, and auto repair.
    • Validate that node IP addresses are not in the Docker IP range in IPAM mode.
    • Check that there is no node port collision among different user clusters in manual load balancing mode.
    • Check the datastore size to ensure that it has enough capacity for the surge machine.
    • Check for an available IP address for creating the Windows VM template in IPAM mode.
    • PDB preflight check to prevent multiple PDBs from matching the same Pod.
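
As a sketch of where these settings live, the relevant fragments of a user cluster configuration file might look like the following (field placement and all values are illustrative assumptions, not a definitive template):

    # User cluster configuration file (fragment)
    maximumConcurrentNodePoolUpdate: 1    # preview: makes node pool updates sequential
    stackdriver:
      projectID: "my-project"             # hypothetical project ID
      clusterLocation: "us-central1"      # must be valid; checked by the new preflight check
      serviceAccountKeyPath: "/keys/logging-monitoring-key.json"    # illustrative path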

Platform enhancements:

  • GA: Support for the cos OS image type on admin cluster nodes is now generally available. You can update the admin node image type with the gkectl update admin command (see the sketch after this list).
  • Preview: A new user cluster deployment model with support for multi-vCenter deployments is available as a preview feature. For more details on how to create a user cluster with this new model, see Create a user cluster with a new installation model.
  • Preview: vSphere CSI volume snapshot is now available as a preview feature. This feature provides the ability to create volume snapshots and restore volumes from snapshots using VMware Cloud Native Storage. To use this feature, you must update both vCenter Server and ESXi to version 7.0 Update 3 or later.
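
A sketch of the corresponding admin cluster configuration change (the field value and command arguments are illustrative, not a definitive procedure):

    # Admin cluster configuration file (fragment)
    osImageType: "cos"    # switch admin cluster nodes to Container-Optimized OS
    # Apply the change, for example:
    #   gkectl update admin --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config ADMIN_CLUSTER_CONFIG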

Security enhancements:

  • GA: Support for storing credentials for user clusters as Kubernetes Secrets is generally available.

    • With this feature, you can prepare credentials for the user cluster and store them as Kubernetes Secrets in the admin cluster before the user cluster is created. After preparing the credentials, you can delete the Secrets configuration file, which contains the user cluster credentials, from the admin workstation. When you create the user cluster, the prepared credentials are used. For more details, see Configure prepared credentials for user clusters.
  • Kubernetes service account (KSA) signing key rotation is supported on user clusters. For more details, see Rotate KSA signing keys.

  • GA: Component access SA key rotation for both admin and user clusters is generally available.

  • GA: You can set up Connect gateway to use Google Group membership for authorization. For more information, see Set up the Connect gateway with Google Groups.

  • Changed kube-scheduler, kube-etcd, kube-apiserver and Key Management Service (KMS) components to run in rootless mode in the user cluster.

Simplify day-2 operations:

  • Preview: Added support for multi-line parsing of Go and Java logs.
  • GA: Launched support for Google Cloud Managed Service for Prometheus to track metrics in Anthos on vSphere clusters, and introduced two separate flags to enable logging and monitoring for user applications independently: enableCloudLoggingForApplications and enableGMPForApplications. With Google-managed Prometheus, you can monitor and alert on your applications using Prometheus without managing and operating Prometheus yourself. Set enableGMPForApplications in the Stackdriver spec to enable Managed Service for Prometheus for application metrics; the Google-managed Prometheus components are then set up automatically, with no other manual steps (see the sketch at the end of this release entry). For details, see Enable Managed Service for Prometheus for user applications.

  • Added a new Anthos Utilization Metering dashboard in Cloud Monitoring to monitor resource utilization in your clusters. The dashboard shows CPU and memory utilization by namespace and Pod labels.

  • Upgraded to Ubuntu 20.04 and containerd 1.6.
  • The connectgateway.googleapis.com API is now required to create new clusters in 1.13.0.
  • Updated the gcloud version in the admin workstation to 401.0.0.
  • Increased the default boot disk size for the admin workstation to 100 GB.
  • Simplified gkectl diagnose snapshot scenario usage. The --scenario flag is no longer needed for the admin cluster snapshot. Use the system (default) or all value to specify the scenario for a user cluster snapshot. For more details, see Diagnosing cluster issues.
  • Improved gkectl diagnose cluster to detect and diagnose two general issues:
    • Node draining issues that can block cluster upgrades.
    • Accidental modification of Kubernetes Cluster API resources that are managed by the Anthos clusters on VMware bundle, which can cause system component failures or cluster upgrade and update failures.
  • Enforced admin cluster registration with preflight checks.

    • This also applies to admin clusters to be upgraded to 1.13. You can run gkectl update admin to register existing 1.12 admin clusters.
    • You can skip this check with the --skip-validation-config flag if you cannot register admin clusters for certain reasons.
  • Configuration for Logging and Monitoring is now enforced in admin and user cluster configuration files by the creation preflight checks. You can run gkectl update cluster and gkectl update admin to enable Logging and Monitoring in existing 1.12 user or admin clusters before upgrading to 1.13; otherwise, the upgrade preflight checks emit a warning. You can skip these checks with the --skip-validation-stackdriver flag if you cannot enable Logging and Monitoring for some reason. However, enabling Logging and Monitoring is strongly recommended for better Google support, and there is no charge for this service on Anthos.

  • When Logging and Monitoring is enabled, the gkeConnect.projectID, stackdriver.projectID, and cloudAuditLogging.projectID fields must all be the same in the cluster configuration files. Otherwise, cluster creation preflight checks fail with an error, and upgrade preflight checks emit a warning. You can skip these checks with the --skip-validation-stackdriver flag, but this is not recommended, because using different project IDs for stackdriver and gkeConnect can cause friction during support and fleet management. Note that you can still send logs and metrics to a different project through Cloud Logging sinks and metrics scoping.

  • Migrated metrics-server and addon-resizer to a new namespace: gke-managed-metrics-server.

  • Refined kube-state-metrics so that only core metrics are collected by default. Fewer resources are needed to collect this optimized set of metrics, which improves overall performance and scalability.

  • Fixed the issue of the cloud-init log not showing in the serial console for Ubuntu.
  • Fixed the issue where user cluster check-config fails when the admin cluster uses cos as the osImageType.
  • Updated the virtual hardware version to version 15 for creating VMs in Anthos clusters on VMware 1.13.0.
  • Fixed the issue of two missing metrics, scheduler and controller-manager, in the admin and user cluster.
  • Fixed the issue of an empty CPU readiness chart in OOTB dashboards that was caused by deprecated metrics.
  • Fixed the issue where you may not be able to add a new user cluster if a user cluster is stuck in the deletion process, and your admin cluster is set up with a MetalLB load balancer configuration.
  • Fixed multiple vulnerabilities.

Known issues:

  • In the configuration file template generated by gkectl create-config cluster, the prepopulated value for the commented field kubeception is shown as false, while the actual default value is true.
  • In the configuration file template generated by gkectl create-config admin, gkeConnect is shown as an optional section; however, it is actually required.
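
As a sketch of the Managed Service for Prometheus setting described above (the apiVersion and the fields around enableGMPForApplications are assumptions; only that field is named in these notes):

    # Stackdriver object in the cluster (illustrative sketch)
    apiVersion: addons.gke.io/v1alpha1    # assumed API group/version
    kind: Stackdriver
    metadata:
      name: stackdriver
      namespace: kube-system
    spec:
      projectID: my-project               # illustrative
      clusterName: my-user-cluster        # illustrative
      clusterLocation: us-central1        # illustrative
      enableGMPForApplications: true      # enables Google-managed Prometheus for application metrics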

September 28, 2022

Anthos clusters on VMware 1.12.2-gke.21 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.2-gke.21 runs on Kubernetes 1.23.8-gke.1900.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.

  • Fixed the issue where you may not be able to add a new user cluster if a user cluster is stuck in the deletion process, and your admin cluster is set up with a MetalLB load balancer configuration.
  • Fixed an issue where istiod starts up very slowly when connectivity to the Google Cloud metadata service is partially broken.
  • Fixed the issue where the admin control plane VM template is deleted after a resumed admin cluster upgrade attempt.
  • Fixed the issue where user cluster check-config fails when the admin cluster uses cos as the osImageType.
  • Fixed multiple vulnerabilities.

September 08, 2022

Anthos clusters on VMware 1.10.7-gke.15 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.10.7-gke.15 runs on Kubernetes 1.21.14-gke.2100.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.

This release includes bug fixes for v1.10.7.

Anthos clusters on VMware 1.11.3-gke.45 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.11.3-gke.45 runs on Kubernetes 1.22.8-gke.204.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.

The gkectl diagnose cluster command automatically runs when gkectl diagnose snapshot is run, and the output is saved in a new folder in the snapshot called /diagnose-report.

This release includes bug fixes for v1.11.3.

August 25, 2022

Anthos clusters on VMware 1.12.1-gke.57 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.1-gke.57 runs on Kubernetes 1.23.5-gke.1505.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.

  • GA: You can now have your GKE clusters in separate vSphere clusters. With this feature, you can deploy the admin cluster in one vSphere cluster, and a user cluster in a different vSphere cluster.
  • Fixed the issue where mounting an emptyDir volume with the exec option on Container-Optimized OS (COS) nodes fails with a permission error.
  • Fixed the issue where enabling and disabling the cluster autoscaler sometimes prevents node pool replicas from being updated.
  • Fixed the manual node repair issue where manually adding the onprem.cluster.gke.io/repair-machine Machine annotation can trigger VM recreation without deleting the Machine object.
  • Switched back to cgroup v1 (hybrid) for Container-Optimized OS (COS) nodes because cgroup v2 (unified) could potentially cause instability for your workloads in a COS cluster.
  • Fixed the issue where running gkectl repair admin-master after a failed admin cluster upgrade attempt caused subsequent admin upgrade attempts to fail. A preflight check has been added for gkectl repair admin-master to prevent the process from using a template that doesn't match the admin cluster checkpoint.
  • Fixed the issue where kubectl describe might fail or time out if the number of resources is too high during a cluster snapshot.
  • Fixed multiple vulnerabilities.

August 12, 2022

Anthos clusters on VMware 1.10.6-gke.36 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.10.6-gke.36 runs on Kubernetes 1.21.14-gke.2100.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.

  • Fixed the issue where mounting an emptyDir volume with the exec option on Container-Optimized OS (COS) nodes fails with a permission error.
  • Fixed the issue where enabling and disabling the cluster autoscaler sometimes prevents node pool replicas from being updated.
  • Fixed multiple vulnerabilities.

August 02, 2022

A new vulnerability, CVE-2022-2327, has been discovered in the Linux kernel that can lead to local privilege escalation. This vulnerability allows an unprivileged user to achieve a full container breakout to root on the node.

For more information, see the GCP-2022-018 security bulletin.

July 27, 2022

Anthos clusters on VMware 1.11.2-gke.53 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.11.2-gke.53 runs on Kubernetes 1.22.8-gke.204.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.

  • Fixed a known issue in which the cluster backup feature affected the inclusion of always-on secrets encryption keys in the backup.
  • Fixed a known issue of high resource usage when AIDE runs as a cron job by disabling AIDE by default. This fix affects compliance with CIS L1 Server benchmark 1.4.2: Ensure filesystem integrity is regularly checked. Customers can opt in to re-enable AIDE if needed. To re-enable the AIDE cron job, see Configure AIDE cron job.
  • Fixed a known issue where gke-metrics-agent DaemonSet has frequent CrashLoopBackOff errors by upgrading to gke-metrics-agent v1.1.0-anthos.14.
  • Fixed multiple vulnerabilities.

July 19, 2022

Anthos clusters on VMware 1.9.7-gke.8 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.9.7-gke.8 runs on Kubernetes 1.21.5-gke.1200.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.

  • Fixed a known issue in which the cluster backup feature affected the inclusion of always-on secrets encryption keys in the backup.
  • Fixed a known issue of high resource usage when AIDE runs as a cron job by disabling AIDE by default. This fix affects compliance with CIS L1 Server benchmark 1.4.2: Ensure filesystem integrity is regularly checked. Customers can opt in to re-enable AIDE if needed. To re-enable the AIDE cron job, see Configure AIDE cron job.
  • Fixed multiple vulnerabilities.

July 07, 2022

Anthos clusters on VMware v1.12.0-gke.446 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware v1.12.0-gke.446 runs on Kubernetes v1.23.5-gke.1504.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.

Announcements

  • vSphere versions earlier than 7.0 Update 2 are deprecated in Kubernetes 1.24. VMware's General Support for vSphere 6.7 ends on October 15, 2022. We recommend that you upgrade vSphere (both ESXi and vCenter) to version 7.0 Update 2 or above. vSphere versions earlier than 7.0 Update 2 will no longer be supported in an upcoming version of Anthos clusters on VMware. You must upgrade vSphere to 7.0 Update 2 or above before you can upgrade to Anthos clusters on VMware 1.13.0.

  • Beta versions of VolumeSnapshot CRDs are deprecated in Kubernetes v1.20 and are unsupported in the Kubernetes v1.24 release. The upcoming Anthos clusters on VMware version 1.13 release will no longer serve v1beta1 VolumeSnapshot CRDs. Make sure that you migrate manifests and API clients to the snapshot.storage.k8s.io/v1 API version, which has been available since Kubernetes v1.20 (see the example manifest after this list). All existing persisted objects remain accessible through the new snapshot.storage.k8s.io/v1 APIs.

  • The dockershim component in Kubernetes enables cluster nodes to use the Docker Engine container runtime. However, Kubernetes 1.24 removed the dockershim component. Starting from Anthos clusters on VMware version 1.12.0, you cannot create new clusters that use the Docker Engine container runtime; all new clusters must use the default container runtime, containerd. A cluster update is also blocked if you switch a node pool from containerd to Docker Engine or add new Docker Engine node pools. For existing version 1.11.x clusters with Docker Engine node pools, you can continue to upgrade to version 1.12.0, but you must update the node pools to use containerd before you can upgrade to version 1.13.0.
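
For example, a VolumeSnapshot manifest migrated to the v1 API looks like the following (the class and claim names are illustrative):

    apiVersion: snapshot.storage.k8s.io/v1    # was snapshot.storage.k8s.io/v1beta1
    kind: VolumeSnapshot
    metadata:
      name: example-snapshot
    spec:
      volumeSnapshotClassName: example-snapclass
      source:
        persistentVolumeClaimName: example-pvc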

Breaking changes:

In Kubernetes 1.23, the rbac.authorization.k8s.io/v1alpha1 API version is removed. Use the rbac.authorization.k8s.io/v1 API instead (see the example below). For more information, see the Kubernetes 1.23.5 release notes.
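
For example, a Role that previously declared rbac.authorization.k8s.io/v1alpha1 needs only its apiVersion updated (the resource names are illustrative):

    apiVersion: rbac.authorization.k8s.io/v1    # was rbac.authorization.k8s.io/v1alpha1
    kind: Role
    metadata:
      name: example-pod-reader
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]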

Platform enhancements:

  • General Availability (GA): Separate vSphere data centers for the admin cluster and the user clusters are supported.
  • GA: Anthos Identity Service LDAP authentication is supported.
  • GA: User cluster control-plane node and admin cluster add-on node auto sizing is supported.

Security enhancements: