This page describes how to delete a GKE on-prem user cluster.
Overview
GKE on-prem supports deleting user clusters with gkectl.
If the cluster is unhealthy (for example, if its control plane is unreachable or
the cluster failed to bootstrap), refer to Deleting an unhealthy user cluster.
Deleting a user cluster
To delete a user cluster, run the following command:
gkectl delete cluster \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --cluster [CLUSTER_NAME]
where [ADMIN_CLUSTER_KUBECONFIG] is the admin cluster's kubeconfig file, and [CLUSTER_NAME] is the name of the user cluster you want to delete.
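For example, if the admin cluster's kubeconfig file is at /home/ubuntu/kubeconfig and the user cluster is named my-user-cluster (both placeholder values used only for illustration), the command would look like this:
gkectl delete cluster \
    --kubeconfig /home/ubuntu/kubeconfig \
    --cluster my-user-cluster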
Known issue
In version 1.1.2, there is a known issue that results in this error if you are using a vSAN datastore:
Error deleting machine object xxx; Failed to delete machine xxx: failed to ensure disks detached: failed to convert disk path "" to UUID path: failed to convert full path "ds:///vmfs/volumes/vsan:52ed29ed1c0ccdf6-0be2c78e210559c7/": ServerFaultCode: A general system error occurred: Invalid fault
See the workaround in the release notes.
Deleting an unhealthy user cluster
You can pass in --force to delete a user cluster if the cluster is unhealthy. A user cluster might be unhealthy if its control plane is unreachable, if the cluster fails to bootstrap, or if gkectl delete cluster fails to delete the cluster.
To force delete a cluster:
gkectl delete cluster \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --cluster [CLUSTER_NAME] \
    --force
where [ADMIN_CLUSTER_KUBECONFIG] is the admin cluster's kubeconfig file, and [CLUSTER_NAME] is the name of the user cluster you want to delete.
Cleaning up external resources
After a forced deletion, some resources might be left over in F5 or vSphere. The following sections explain how to clean up these leftover resources.
Cleaning up a user cluster's VMs in vSphere
To verify that the user cluster's VMs are deleted, perform the following steps:
- From the vSphere Web Client's left-hand Navigator menu, click the Hosts and Clusters menu.
- Find your Resource Pool.
- Verify whether there are VMs prefixed with your user cluster's name.
If there are user cluster VMs remaining, perform the following steps for each VM:
- From the vSphere Web Client, right-click the user cluster VM and select Power > Power Off.
- Once the VM is powered off, right-click the VM and select Delete from Disk.
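If you prefer to clean up from the command line, the following is a rough sketch that uses the open-source govc CLI instead of the vSphere Web Client. It assumes govc is installed and configured with your vCenter URL and credentials (for example, through the GOVC_URL environment variable); [CLUSTER_NAME] and [VM_NAME] below are placeholders:
# List VMs whose names start with the user cluster's name.
govc find / -type m -name "[CLUSTER_NAME]-*"
# Power off and delete a leftover VM.
govc vm.power -off -force [VM_NAME]
govc vm.destroy [VM_NAME]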
Cleaning up a user cluster's F5 partition
If there are any entries remaining in the user cluster's partition, perform the following steps:
- From the F5 BIG-IP console, in the top-right corner, switch to the user cluster partition that you want to clean up.
- Select Local Traffic > Virtual Servers > Virtual Server List.
- In the Virtual Servers menu, remove all the virtual IPs.
- Select Pools, then delete all the pools.
- Select Nodes, then delete all the nodes.
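As an alternative way to verify the cleanup, you can list the remaining objects from the BIG-IP command line with tmsh. This is only a sketch; it assumes you have SSH access to the BIG-IP system and that [PARTITION_NAME] is the user cluster's partition:
tmsh -c "cd /[PARTITION_NAME]; list ltm virtual; list ltm pool; list ltm node"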
After you have finished
After gkectl finishes deleting the user cluster, you can delete the user cluster's kubeconfig.
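For example, if the user cluster's kubeconfig was written to its own file (the path below is a placeholder), you can simply remove that file:
rm [USER_CLUSTER_KUBECONFIG]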
Troubleshooting
For more information, refer to Troubleshooting.
Diagnosing cluster issues using gkectl
Use gkectl diagnose commands to identify cluster issues and share cluster information with Google. See Diagnosing cluster issues.
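For example, you might diagnose the admin cluster, or a specific user cluster, with a command like the following. The flag names here are assumptions based on common gkectl diagnose usage; run gkectl diagnose --help to confirm them for your version:
gkectl diagnose cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]
gkectl diagnose cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --cluster-name [CLUSTER_NAME]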
Running gkectl commands verbosely
To increase the verbosity of a gkectl command's logging, pass -v5.
Logging gkectl errors to stderr
To log gkectl errors to stderr in addition to the log file, pass --alsologtostderr.
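For example, you could combine both flags with the delete command (using the same placeholders as above):
gkectl delete cluster \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --cluster [CLUSTER_NAME] \
    -v5 --alsologtostderr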
Locating gkectl logs in the admin workstation
Even if you don't pass in its debugging flags, you can view gkectl logs in the following admin workstation directory:
/home/ubuntu/.config/gke-on-prem/logs
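For example, to find and inspect the most recent log file on the admin workstation, you could run something like the following ([LOG_FILE_NAME] is a placeholder for the file you want to read):
ls -lt /home/ubuntu/.config/gke-on-prem/logs
tail -n 100 /home/ubuntu/.config/gke-on-prem/logs/[LOG_FILE_NAME]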
Locating Cluster API logs in the admin cluster
If a VM fails to start after the admin control plane has started, you can debug the issue by inspecting the Cluster API controllers' logs in the admin cluster:
- Find the name of the Cluster API controllers Pod in the kube-system namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers
- Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use grep or a similar tool to search for errors:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager
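For example, to narrow the controller logs down to error messages, you could pipe the output through grep (the search pattern is only an illustration):
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager | grep -i error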