This page describes how to delete a Google Distributed Cloud user cluster. User cluster deletion unregisters the cluster from the fleet and deletes the workloads, node pools, control plane nodes, and the corresponding resources, like VMs, F5 partitions, and data disks.
Choose a tool to delete a cluster
If you created the cluster with enableAdvancedCluster set to true (which is required for setting up topology domains), you must use gkectl to delete the cluster.
Otherwise, how you delete a user cluster depends on whether the cluster is enrolled in the GKE On-Prem API. A user cluster is enrolled in the GKE On-Prem API if one of the following is true:
The cluster was created by using the Google Cloud console, the Google Cloud CLI (gcloud CLI), or Terraform, which automatically enrolls the cluster in the GKE On-Prem API. Collectively, these tools are referred to as GKE On-Prem API clients.
The cluster was created using gkectl, but it was enrolled in the GKE On-Prem API.
If the cluster is enrolled in the GKE On-Prem API, use a GKE On-Prem API client to delete the cluster. If the cluster isn't enrolled in the GKE On-Prem API, use gkectl on the admin workstation to delete the cluster.
To find all user clusters that are enrolled in the GKE On-Prem API in a specific project, run the following command:
gcloud container vmware clusters list \
    --project=PROJECT_ID \
    --location=-
The output is similar to the following:
NAME                     LOCATION  VERSION         ADMIN_CLUSTER            STATE
example-user-cluster-1a  us-west1  1.31.0-gke.889  example-admin-cluster-1  RUNNING
Setting --location=- lists all clusters in all regions. To narrow the list, set --location to a specific region.
If the cluster is listed, it is enrolled in the GKE On-Prem API. If you aren't a project owner, at minimum you must be granted the Identity and Access Management (IAM) role roles/gkeonprem.admin on the project to delete enrolled clusters. For details on the permissions included in this role, see GKE On-Prem API roles in the IAM documentation.
Delete stateful workloads
Before you delete your cluster, delete your stateful workloads, PersistentVolumeClaims (PVCs), and PersistentVolumes (PVs) by running kubectl delete.
Delete a user cluster
gkectl
You can use gkectl to delete clusters that aren't enrolled in the GKE On-Prem API. If your organization's proxy and firewall rules allow traffic to reach gkeonprem.googleapis.com and gkeonprem.mtls.googleapis.com (the service names for the GKE On-Prem API), gkectl can also delete enrolled clusters.
Run the following command on the admin workstation to delete the cluster:
gkectl delete cluster \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --cluster CLUSTER_NAME
Replace the following:
ADMIN_CLUSTER_KUBECONFIG: The path to the admin cluster's kubeconfig file.
CLUSTER_NAME: The name of the user cluster that you want to delete.
If deletion fails with a message similar to the following:
Exit with error: ... failed to unenroll user cluster CLUSTER_NAME failed to create GKE On-Prem API client
the cluster is enrolled, but gkectl was unable to reach the GKE On-Prem API. In this case, the simplest approach is to use a GKE On-Prem API client to delete the cluster.
If deleting the user cluster fails partway through, you can run gkectl with the --force flag to ignore the error and continue the deletion.
gkectl delete cluster \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --cluster CLUSTER_NAME \
    --force
If the deleted cluster used the Seesaw load balancer, delete the Seesaw VMs in the vSphere user interface.
Console
If the user cluster is managed by the GKE On-Prem API, follow these steps to delete the cluster:
In the console, go to the Google Kubernetes Engine clusters overview page.
Select the Google Cloud project that the user cluster is in.
In the list of clusters, locate the cluster that you want to delete. If the Type is external, the cluster was created using gkectl and wasn't enrolled in the GKE On-Prem API. In this case, follow the steps in the gkectl tab to delete the cluster.
If the icon in the Status column indicates a problem, follow the steps in the gcloud CLI tab to delete the cluster. You need to add the --ignore-errors flag to the delete command.
Click the name of the cluster that you want to delete.
In the Details panel, near the top of the window, click Delete.
When prompted to confirm, enter the name of the cluster and click Remove.
gcloud CLI
If the user cluster is managed by the GKE On-Prem API, do the following on a computer that has the gcloud CLI installed:
Update components:
gcloud components update
Use the following command to delete the cluster:
gcloud container vmware clusters delete USER_CLUSTER_NAME \
    --project=PROJECT_ID \
    --location=LOCATION \
    --force \
    --allow-missing
Replace the following:
USER_CLUSTER_NAME: The name of the user cluster to delete.
PROJECT_ID: The ID of the project that the cluster is registered to.
LOCATION: The Google Cloud location associated with the user cluster.
The --force flag lets you delete a cluster that has node pools. Without the --force flag, you have to delete the node pools first and then delete the cluster.
The --allow-missing flag is a standard Google API flag. When you include this flag, the command returns success if the cluster isn't found.
If the command returns an error that contains the text failed connecting to the cluster's control plane, this indicates connectivity issues with the admin cluster, the Connect Agent, or the on-premises environment. If you think the connectivity issue is transient, for example because of network problems, wait and retry the command.
If retrying the command continues to fail, see Collecting Connect Agent logs to troubleshoot issues with the Connect Agent.
If you know that the admin cluster has been deleted, or if the VMs for the admin or user cluster have been shut down or are otherwise inaccessible, include the --ignore-errors flag and retry the command.
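For example, the retry with the flag added looks like the following (same placeholders as the earlier delete command):

```shell
gcloud container vmware clusters delete USER_CLUSTER_NAME \
    --project=PROJECT_ID \
    --location=LOCATION \
    --force \
    --allow-missing \
    --ignore-errors
```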
For information about other flags, see the gcloud CLI reference.
Clean up resources
If there were issues when you deleted the cluster, some F5 or vSphere resources might be left over. The following sections explain how to clean up these leftover resources.
Clean up a user cluster's VMs in vSphere
To verify that the user cluster's VMs are deleted, perform the following steps:
From the vSphere Web Client's left-hand Navigator menu, click the Hosts and Clusters menu.
Find the resource pool for your admin cluster. This is the value of vCenter.resourcePool in your admin cluster configuration file.
Under the resource pool, locate VMs prefixed with the name of your user cluster. These are the control-plane nodes for your user cluster. There will be one or three of them, depending on whether your user cluster has a high-availability control plane.
Find the resource pool for your user cluster. This is the value of vCenter.resourcePool in your user cluster configuration file. If your user cluster configuration file doesn't specify a resource pool, it is inherited from the admin cluster.
Under the resource pool, locate VMs prefixed with the name of a node pool in your user cluster. These are the worker nodes in your user cluster.
For each control-plane node and each worker node:
From the vSphere Web Client, right-click the VM and select Power > Power Off.
After the VM is powered off, right-click the VM and select Delete from Disk.
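If you prefer the command line over the vSphere Web Client, the same power-off and delete steps can be scripted with govc, the CLI from the VMware govmomi project. This is a sketch, not part of the documented procedure; VM_NAME is a placeholder for each node VM that you identified in the previous steps:

```shell
# Assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are set for your vCenter.
govc vm.power -off -force VM_NAME   # power off the node VM
govc vm.destroy VM_NAME             # delete the VM from disk
```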
Clean up a user cluster's F5 partition
If there are any entries remaining in the user cluster's partition, perform the following steps:
- From the F5 BIG-IP console, in the top-right corner of the console, switch to the user cluster partition you want to clean up.
- Select Local Traffic > Virtual Servers > Virtual Server List.
- In the Virtual Servers menu, remove all the virtual IPs.
- Select Pools, then delete all the pools.
- Select Nodes, then delete all the nodes.
After you have finished
After the cluster is deleted, you can delete the user cluster's kubeconfig.