Manage node pools
A node pool is a group of nodes within a Kubernetes cluster that all have the same
configuration. Node pools use a NodePool specification. Each node in the pool
has a Kubernetes node label, which has the name of the node pool as its value.
By default, all new node pools run the same version of Kubernetes as the control
plane.
When you create a Kubernetes cluster, the number and type of nodes that you
specify create the cluster's first node pool. You can add node pools of
different sizes and types to your cluster. All nodes in any given node pool
are identical to one another.
Custom node pools are useful when scheduling pods that require more resources
than others, such as more memory or local disk space. You can use node taints if
you need more control over scheduling the pods.
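For example, a pod can target a custom node pool with a node selector on the pool's node label and, if the pool is tainted, a matching toleration. The following is a minimal sketch: the label key cluster.gdc.goog/node-pool and the taint key dedicated are placeholder assumptions, since this page only states that each node carries a label whose value is the node pool name:

    # Illustrative sketch only. The node label key and taint key below are
    # placeholders, not documented GDC names; substitute the keys your nodes
    # actually carry.
    apiVersion: v1
    kind: Pod
    metadata:
      name: memory-heavy-workload
    spec:
      nodeSelector:
        cluster.gdc.goog/node-pool: high-memory-pool   # hypothetical key; value is the node pool name
      tolerations:
      - key: dedicated           # matches a taint applied to the pool (hypothetical)
        operator: Equal
        value: high-memory
        effect: NoSchedule
      containers:
      - name: app
        image: registry.example.com/app:latest   # placeholder image
        resources:
          requests:
            memory: "16Gi"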
You can create and delete node pools individually without affecting the whole
cluster. You cannot configure a single node in a node pool. Any configuration
changes affect all nodes in the node pool.
You can resize node pools
in a cluster by upscaling or downscaling the pool. Downscaling a node pool is an
automated process where you decrease the pool size and the
GDC system automatically drains and evicts an arbitrary
node. You cannot select a specific node to remove when downscaling a node pool.
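As a sketch of what a resize amounts to, assuming the nodePools fields shown in the API instructions later on this page, decreasing nodeCount in the Cluster custom resource downscales the pool:

    # Hypothetical excerpt of a Cluster custom resource. Decreasing nodeCount
    # from 5 to 4 downscales the pool; GDC drains and evicts one arbitrary node.
    nodePools:
    - machineTypeName: n2-standard-2-gdc   # example machine type from this page
      name: nodepool-1
      nodeCount: 4                         # was 5; you cannot pick which node is removed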
Before you begin
To view and manage node pools in a Kubernetes cluster, you must have the following
roles:
User Cluster Admin (user-cluster-admin)
User Cluster Node Viewer (user-cluster-node-viewer)
These roles are not bound to a namespace.
Add a node pool
When creating a Kubernetes cluster from the GDC console, you
can customize the default node pool and create additional node pools before
cluster creation starts. To add a node pool to an existing Kubernetes
cluster, complete the following steps:
Console
In the navigation menu, select Kubernetes Engine > Clusters.
Click the cluster from the cluster list. The Cluster details page is
displayed.
Select Node pools > Add node pool.
Assign a name for the node pool. You cannot modify the name after you create
the node pool.
Specify the number of worker nodes to create in the node pool.
Select the machine class that best suits your workload requirements. Each
machine class displays the following settings:
Machine type
vCPU
Memory
Click Save.
API
Open the Cluster custom resource spec with the kubectl CLI using the
interactive editor:
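    kubectl edit clusters.cluster.gdc.goog/KUBERNETES_CLUSTER_NAME -n platform \
        --kubeconfig MANAGEMENT_API_SERVER

Replace the following:

KUBERNETES_CLUSTER_NAME: The name of the cluster.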
MANAGEMENT_API_SERVER: The zonal API server's
kubeconfig path where the Kubernetes cluster is hosted. If you have
not yet generated a kubeconfig file for the API server in your
targeted zone, see
Sign in for details.
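Add a new entry in the nodePools section:

    nodePools:
    ...
    - machineTypeName: MACHINE_TYPE
      name: NODE_POOL_NAME
      nodeCount: NUMBER_OF_WORKER_NODES
      taints: TAINTS
      labels: LABELS
      acceleratorOptions:
        gpuPartitionScheme: GPU_PARTITION_SCHEME

Replace the following: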
MACHINE_TYPE: The machine type for the
worker nodes of the node pool. See the
available machine types
that you can configure.
NODE_POOL_NAME: The name of the node pool.
NUMBER_OF_WORKER_NODES: The number of worker
nodes to provision in the node pool.
TAINTS: The taints to apply to the nodes of
this node pool. This is an optional field.
LABELS: The labels to apply to the nodes of
this node pool. It contains a list of key-value pairs. This is an
optional field. For an illustrative format of both fields, see the
sketch after these steps.
GPU_PARTITION_SCHEME: The GPU partitioning
scheme, if you're running GPU workloads. For example, mixed-2. The
GPU is not partitioned if this field is not set. For available
Multi-Instance GPU (MIG) profiles, see
Supported MIG profiles.
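Note: You cannot edit node configurations, such as GPU partitioning, after the node pool is created.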
Save the file and exit the editor.
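This page leaves the TAINTS and LABELS values as placeholders and does not show their concrete schema in the Cluster resource. As an illustration only, assuming they follow standard Kubernetes conventions for node taints and labels, the values might look like the following; the actual GDC schema may differ:

    # Illustration using standard Kubernetes conventions; the exact GDC
    # Cluster schema for these fields may differ.
    taints:
    - key: dedicated
      value: high-memory
      effect: NoSchedule
    labels:
      environment: production
      team: data-platform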
View node pools
To view existing node pools in a Kubernetes cluster, complete the following steps:
Console
In the navigation menu, select Kubernetes Engine > Clusters.
Click the cluster from the cluster list. The Cluster details page is
displayed.
Select Node pools.
The list of node pools running in the cluster is displayed. You can manage
the node pools of the cluster from this page.
API
View the node pools of a specific Kubernetes cluster:
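    kubectl get clusters.cluster.gdc.goog/KUBERNETES_CLUSTER_NAME -n platform \
        -o json --kubeconfig MANAGEMENT_API_SERVER | \
        jq .status.workerNodePoolStatuses

The output is similar to the following:

    [
      {
        "conditions": [
          {
            "lastTransitionTime": "2023-08-31T22:16:17Z",
            "message": "",
            "observedGeneration": 2,
            "reason": "NodepoolReady",
            "status": "True",
            "type": "Ready"
          },
          {
            "lastTransitionTime": "2023-08-31T22:16:17Z",
            "message": "",
            "observedGeneration": 2,
            "reason": "ReconciliationCompleted",
            "status": "False",
            "type": "Reconciling"
          }
        ],
        "name": "worker-node-pool",
        "readyNodes": 3,
        "readyTimestamp": "2023-08-31T18:59:46Z",
        "reconcilingNodes": 0,
        "stalledNodes": 0,
        "unknownNodes": 0
      }
    ]

Delete a node pool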
Deleting a node pool deletes the nodes and the routes to them. Any pods
running on these nodes are evicted and rescheduled. If the pods have specific
node selectors, they might remain in a non-schedulable condition if no other
node in the cluster satisfies the criteria.
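Important: Control plane node pools and load balancer node pools are critical to a cluster's function and consequently can't be removed from a cluster. You can only delete worker node pools.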
Before deleting a node pool, ensure your cluster still has at least three
worker nodes so that it retains enough compute capacity to run effectively.
To delete a node pool, complete the following steps:
Console
In the navigation menu, select Kubernetes Engine > Clusters.
Click the cluster that is hosting the node pool you want to delete.
Select Node pools.
Click Delete next to the node pool you want to delete.
API
Open the Cluster custom resource spec with the kubectl CLI using the
interactive editor:
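    kubectl edit clusters.cluster.gdc.goog/KUBERNETES_CLUSTER_NAME -n platform \
        --kubeconfig MANAGEMENT_API_SERVER

Replace the following:

KUBERNETES_CLUSTER_NAME: The name of the cluster.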
MANAGEMENT_API_SERVER: The zonal API server's
kubeconfig path where the Kubernetes cluster is hosted. If you have
not yet generated a kubeconfig file for the API server in your
targeted zone, see
Sign in for details.
Remove the node pool entry from the nodePools section. For example, in
the following snippet, you must remove the machineTypeName, name, and
nodeCount fields:
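    nodePools:
    ...
    - machineTypeName: n2-standard-2-gdc
      name: nodepool-1
      nodeCount: 3

Be sure to remove all fields for the node pool you are deleting.

Save the file and exit the editor.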
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-29 UTC."],[[["\u003cp\u003eNode pools are groups of identically configured nodes within a Kubernetes cluster, all sharing the same \u003ccode\u003eNodePool\u003c/code\u003e specification and Kubernetes node label.\u003c/p\u003e\n"],["\u003cp\u003eYou can create multiple node pools within a cluster, each with different node sizes and types, which is useful for scheduling pods with specific resource requirements.\u003c/p\u003e\n"],["\u003cp\u003eNode pools can be added, deleted, or resized individually without affecting the entire cluster, and any configuration changes apply to all nodes within the pool.\u003c/p\u003e\n"],["\u003cp\u003eManaging node pools requires specific roles: User Cluster Admin (\u003ccode\u003euser-cluster-admin\u003c/code\u003e) and User Cluster Node Viewer (\u003ccode\u003euser-cluster-node-viewer\u003c/code\u003e).\u003c/p\u003e\n"],["\u003cp\u003eDeleting a worker node pool removes the nodes and routes to them, potentially leaving pods unschedulable if their node selectors are not met by other nodes.\u003c/p\u003e\n"]]],[],null,["# Manage node pools\n\nA *node pool* is a group of nodes within a Kubernetes cluster that all have the same\nconfiguration. Node pools use a `NodePool` specification. Each node in the pool\nhas a Kubernetes node label, which has the name of the node pool as its value.\nBy default, all new node pools run the same version of Kubernetes as the control\nplane.\n\nWhen you create a Kubernetes cluster, the number of nodes and type of nodes that you\nspecify create the first node pool of the cluster. You can add additional node\npools of different sizes and types to your cluster. All nodes in any given node\npool are identical to one another.\n\nCustom node pools are useful when scheduling pods that require more resources\nthan others, such as more memory or local disk space. You can use node taints if\nyou need more control over scheduling the pods.\n\nYou can create and delete node pools individually without affecting the whole\ncluster. You cannot configure a single node in a node pool. Any configuration\nchanges affect all nodes in the node pool.\n\nYou can [resize node pools](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/cluster#resize-node-pools)\nin a cluster by upscaling or downscaling the pool. Downscaling a node pool is an\nautomated process where you decrease the pool size and the\nGDC system automatically drains and evicts an arbitrary\nnode. You cannot select a specific node to remove when downscaling a node pool.\n\nBefore you begin\n----------------\n\nTo view and manage node pools in a Kubernetes cluster, you must have the following\nroles:\n\n- User Cluster Admin (`user-cluster-admin`)\n- User Cluster Node Viewer (`user-cluster-node-viewer`)\n\nThese roles are not bound to a namespace.\n\nAdd a node pool\n---------------\n\nWhen creating a Kubernetes cluster from the GDC console, you\ncan customize the default node pool and create additional node pools before the\ncluster creation initializes. 
If you must add a node pool to an existing Kubernetes\ncluster, complete the following steps: \n\n### Console\n\n1. In the navigation menu, select **Kubernetes Engine \\\u003e Clusters**.\n2. Click the cluster from the cluster list. The **Cluster details** page is displayed.\n3. Select **Node pools \\\u003e Add node pool**.\n4. Assign a name for the node pool. You cannot modify the name after you create the node pool.\n5. Specify the number of worker nodes to create in the node pool.\n6. Select your machine class that best suits your workload requirements. The machine classes show in the following settings:\n - Machine type\n - vCPU\n - Memory\n7. Click **Save**.\n\n### API\n\n1. Open the `Cluster` custom resource spec with the `kubectl` CLI using the\n interactive editor:\n\n kubectl edit clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_NAME\u003c/var\u003e -n platform \\\n --kubeconfig \u003cvar translate=\"no\"\u003eMANAGEMENT_API_SERVER\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_NAME\u003c/var\u003e: The name of the cluster.\n - \u003cvar translate=\"no\"\u003eMANAGEMENT_API_SERVER\u003c/var\u003e: The zonal API server's kubeconfig path where the Kubernetes cluster is hosted. If you have not yet generated a kubeconfig file for the API server in your targeted zone, see [Sign in](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/iam/sign-in#cli) for details.\n2. Add a new entry in the `nodePools` section:\n\n nodePools:\n ...\n - machineTypeName: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eMACHINE_TYPE\u003c/span\u003e\u003c/var\u003e\n name: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eNODE_POOL_NAME\u003c/span\u003e\u003c/var\u003e\n nodeCount: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eNUMBER_OF_WORKER_NODES\u003c/span\u003e\u003c/var\u003e\n taints: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eTAINTS\u003c/span\u003e\u003c/var\u003e\n labels: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eLABELS\u003c/span\u003e\u003c/var\u003e\n acceleratorOptions:\n gpuPartitionScheme: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eGPU_PARTITION_SCHEME\u003c/span\u003e\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eMACHINE_TYPE\u003c/var\u003e: The machine type for the worker nodes of the node pool. View the [available machine types](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/cluster-node-machines#available-machine-types) for what is available to configure.\n - \u003cvar translate=\"no\"\u003eNODE_POOL_NAME\u003c/var\u003e: The name of the node pool.\n - \u003cvar translate=\"no\"\u003eNUMBER_OF_WORKER_NODES\u003c/var\u003e: The number of worker nodes to provision in the node pool.\n - \u003cvar translate=\"no\"\u003eTAINTS\u003c/var\u003e: The taints to apply to the nodes of this node pool. This is an optional field.\n - \u003cvar translate=\"no\"\u003eLABELS\u003c/var\u003e: The labels to apply to the nodes of this node pool. It contains a list of key-value pairs. 
This is an optional field.\n - \u003cvar translate=\"no\"\u003eGPU_PARTITION_SCHEME\u003c/var\u003e: The GPU partitioning scheme, if you're running GPU workloads. For example, `mixed-2`. The GPU is not partitioned if this field is not set. For available Multi-Instance GPU (MIG) profiles, see [Supported MIG profiles](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/cluster-node-machines#mig-profiles).\n\n | **Note:** You cannot edit node configurations, such as GPU partitioning, after the node pool is created.\n3. Save the file and exit the editor.\n\nView node pools\n---------------\n\nTo view existing node pools in a Kubernetes cluster, complete the following steps: \n\n### Console\n\n1. In the navigation menu, select **Kubernetes Engine \\\u003e Clusters**.\n2. Click the cluster from the cluster list. The **Cluster details** page is displayed.\n3. Select **Node pools**.\n\nThe list of node pools running in the cluster is displayed. You can manage\nthe node pools of the cluster from this page.\n\n### API\n\n- View the node pools of a specific Kubernetes cluster:\n\n kubectl get clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_NAME\u003c/var\u003e -n platform \\\n -o json --kubeconfig \u003cvar translate=\"no\"\u003eMANAGEMENT_API_SERVER\u003c/var\u003e | \\\n jq .status.workerNodePoolStatuses\n\n The output is similar to the following: \n\n [\n {\n \"conditions\": [\n {\n \"lastTransitionTime\": \"2023-08-31T22:16:17Z\",\n \"message\": \"\",\n \"observedGeneration\": 2,\n \"reason\": \"NodepoolReady\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastTransitionTime\": \"2023-08-31T22:16:17Z\",\n \"message\": \"\",\n \"observedGeneration\": 2,\n \"reason\": \"ReconciliationCompleted\",\n \"status\": \"False\",\n \"type\": \"Reconciling\"\n }\n ],\n \"name\": \"worker-node-pool\",\n \"readyNodes\": 3,\n \"readyTimestamp\": \"2023-08-31T18:59:46Z\",\n \"reconcilingNodes\": 0,\n \"stalledNodes\": 0,\n \"unknownNodes\": 0\n }\n ]\n\nDelete a node pool\n------------------\n\nDeleting a node pool deletes the nodes and routes to them. These nodes evict and\nreschedule any pods running on them. If the pods have specific node selectors,\nthe pods might remain in a non-schedulable condition if no other node in the\ncluster satisfies the criteria.\n| **Important:** Control plane node pools and load balancer node pools are critical to a cluster's function and consequently can't be removed from a cluster. You can only delete worker node pools.\n\nEnsure you have at least three worker nodes before deleting a node pool to\nensure your cluster has enough compute space to run effectively.\n\nTo delete a node pool, complete the following steps: \n\n### Console\n\n1. In the navigation menu, select **Kubernetes Engine \\\u003e Clusters**.\n\n2. Click the cluster that is hosting the node pool you want to delete.\n\n3. Select **Node pools**.\n\n4. Click *delete* **Delete** next to the node\n pool to delete.\n\n### API\n\n1. 
Open the `Cluster` custom resource spec with the `kubectl` CLI using the\n interactive editor:\n\n kubectl edit clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_NAME\u003c/var\u003e -n platform \\\n --kubeconfig \u003cvar translate=\"no\"\u003eMANAGEMENT_API_SERVER\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_NAME\u003c/var\u003e: The name of the cluster.\n - \u003cvar translate=\"no\"\u003eMANAGEMENT_API_SERVER\u003c/var\u003e: The zonal API server's kubeconfig path where the Kubernetes cluster is hosted. If you have not yet generated a kubeconfig file for the API server in your targeted zone, see [Sign in](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/iam/sign-in#cli) for details.\n2. Remove the node pool entry from the `nodePools` section. For example, in\n the following snippet, you must remove the `machineTypeName`, `name`, and\n `nodeCount` fields:\n\n nodePools:\n ...\n - machineTypeName: n2-standard-2-gdc\n name: nodepool-1\n nodeCount: 3\n\n Be sure to remove all fields for the node pool you are deleting.\n3. Save the file and exit the editor."]]