Delete a node pool

This page shows you how to delete node pools in GKE on AWS.

Before you begin

This page assumes you are familiar with the cluster autoscaler. For more information, see Cluster autoscaler.

Delete a node pool

To delete a node pool, use the Google Cloud console or the gcloud CLI.

Console
  1. In the Google Cloud console, go to the Google Kubernetes Engine clusters overview page.

    Go to GKE clusters

  2. Select the Google Cloud project that the cluster is in.

  3. In the cluster list, select the name of the cluster, and then select View details in the side panel.

  4. Select the Nodes tab to see a list of all the node pools.

  5. Select a node pool from the list.

  6. Near the top of the window, click Delete.

    If the delete fails, follow the steps in the gcloud tab and add the --ignore-errors flag to the gcloud container aws node-pools delete command.


gcloud

  1. Get a list of your node pools:

    gcloud container aws node-pools list \
      --cluster CLUSTER_NAME \
      --location GOOGLE_CLOUD_LOCATION

    Replace the following:

    • CLUSTER_NAME: the name of the cluster that the node pool is attached to
    • GOOGLE_CLOUD_LOCATION: the Google Cloud location hosting the node pool
  2. Delete each node pool with the following command:

    gcloud container aws node-pools delete NODE_POOL_NAME \
      --cluster CLUSTER_NAME \
      --location GOOGLE_CLOUD_LOCATION

    Replace the following:

    • NODE_POOL_NAME: the name of the node pool to delete

    If the command returns an error and the delete fails, you can force the deletion by running the command again with the --ignore-errors flag. This flag is available in version 1.29 and later.
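If you have several node pools, the list and delete commands above can be combined. The following sketch is a hypothetical helper, not part of this page: it reads node pool names and prints one delete command per pool so you can review them before running anything. `my-cluster` and `us-west1` are placeholder values.

```shell
# Hypothetical helper: reads node pool names on stdin (for example, the
# output of `gcloud container aws node-pools list --format="value(name)"`)
# and prints one delete command per pool for review before you run them.
print_delete_commands() {
  local cluster=$1 location=$2 pool
  while IFS= read -r pool; do
    printf 'gcloud container aws node-pools delete %s --cluster %s --location %s\n' \
      "$pool" "$cluster" "$location"
  done
}

# Placeholder values, not real resources:
printf 'pool-a\npool-b\n' | print_delete_commands my-cluster us-west1
```

Printing the commands instead of running them directly lets you confirm each pool name before starting a deletion that can't be undone.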

How GKE on AWS protects workloads during node pool deletion

During node pool deletion, GKE on AWS gracefully shuts down each node without honoring PodDisruptionBudget objects. It takes the following steps:

  1. Disables the cluster autoscaler, if it's enabled.
  2. Sets a deadline for the draining process. After this deadline, GKE on AWS stops draining, even if Pod objects still exist, and proceeds to delete the underlying virtual machines. The default deadline is 5 minutes, with 5 more minutes added for every 10 additional nodes.
  3. Cordons all the nodes in the node pool.
  4. Before the deadline is met, deletes Pod objects in the node pool on a best-effort basis.
  5. Deletes all the underlying compute resources.
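The drain deadline in step 2 can be sketched as a small calculation. This reading of the rule, 5 minutes by default plus 5 minutes for every additional 10 nodes, is an interpretation of the prose above, not an official formula:

```shell
# Sketch of the drain deadline described above: 5 minutes by default,
# plus 5 minutes for every additional 10 nodes. The rounding behavior
# (integer division) is an assumption, not documented behavior.
drain_deadline_minutes() {
  local nodes=$1
  echo $(( 5 + 5 * (nodes / 10) ))
}

drain_deadline_minutes 8    # → 5
drain_deadline_minutes 25   # → 15
```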

What's next