A node pool is a group of nodes within a Kubernetes cluster that all have the same configuration. Node pools use a NodePool specification. Each node in the pool has a Kubernetes node label whose value is the name of the node pool. By default, all new node pools run the same version of Kubernetes as the control plane.
When you create a Kubernetes cluster, the number and type of nodes that you specify create the first node pool of the cluster. You can add more node pools of different sizes and types to your cluster. All nodes in any given node pool are identical to one another.
Custom node pools are useful when scheduling pods that require more resources than others, such as more memory or local disk space. You can use node taints if you need more control over where pods are scheduled.
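For example, the following sketch combines a node selector with a toleration so that a memory-heavy pod lands only on nodes of a dedicated pool. The label key NODE_POOL_LABEL_KEY, the taint key-value pair dedicated=high-mem, the pool name high-mem-pool, and the image are illustrative placeholders, not values defined by this page:

    # Sketch only: a pod that tolerates a hypothetical taint applied to the nodes
    # of a custom node pool and selects those nodes by their node pool label.
    apiVersion: v1
    kind: Pod
    metadata:
      name: high-mem-workload
    spec:
      nodeSelector:
        NODE_POOL_LABEL_KEY: high-mem-pool   # placeholder label key; the label value is the pool name
      tolerations:
      - key: dedicated        # illustrative taint key
        operator: Equal
        value: high-mem       # illustrative taint value
        effect: NoSchedule
      containers:
      - name: app
        image: IMAGE          # placeholder container image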
You can create and delete node pools individually without affecting the whole cluster. You cannot configure a single node in a node pool; any configuration change affects all nodes in the node pool.
You can resize node pools in a cluster by upscaling or downscaling the pool. Downscaling a node pool is an automated process: you decrease the pool size, and the GDC system automatically drains and evicts an arbitrary node. You cannot select a specific node to remove when downscaling a node pool.
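As a rough sketch, assuming the pool size is driven by the nodeCount field of the nodePools entries shown later on this page, downscaling a pool from five nodes to three would amount to lowering that field:

    nodePools:
    - machineTypeName: MACHINE_TYPE
      name: NODE_POOL_NAME
      nodeCount: 3   # previously 5; the system drains and evicts two arbitrary nodes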
Before you begin
To view and manage node pools in a Kubernetes cluster, you must have the following roles:
User Cluster Admin (user-cluster-admin)
User Cluster Node Viewer (user-cluster-node-viewer)
These roles are not bound to a namespace.
Add a node pool
When creating a Kubernetes cluster from the GDC console, you can customize the default node pool and create additional node pools before cluster creation initializes. If you must add a node pool to an existing Kubernetes cluster, complete the following steps:
Console
In the navigation menu, select Kubernetes Engine > Clusters.
Click the cluster from the cluster list. The Cluster details page is displayed.
Select Node pools > Add node pool.
Assign a name for the node pool. You cannot modify the name after you create the node pool.
Specify the number of worker nodes to create in the node pool.
Select the machine class that best suits your workload requirements. The machine classes are shown in the following settings:
Machine type
vCPU
Memory
Click Save.
API
Open the Cluster custom resource spec with the kubectl CLI using the interactive editor:
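    kubectl edit clusters.cluster.gdc.goog/KUBERNETES_CLUSTER_NAME -n platform \
        --kubeconfig MANAGEMENT_API_SERVER

Replace the following:
KUBERNETES_CLUSTER_NAME: The name of the cluster.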
MANAGEMENT_API_SERVER: The zonal API server's kubeconfig path where the Kubernetes cluster is hosted. If you have not yet generated a kubeconfig file for the API server in your targeted zone, see Sign in for details.
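Add a new entry in the nodePools section:

    nodePools:
    ...
    - machineTypeName: MACHINE_TYPE
      name: NODE_POOL_NAME
      nodeCount: NUMBER_OF_WORKER_NODES
      taints: TAINTS
      labels: LABELS
      acceleratorOptions:
        gpuPartitionScheme: GPU_PARTITION_SCHEME

Replace the following: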
MACHINE_TYPE: The machine type for the worker nodes of the node pool. See the available machine types for the configurations that are available.
NODE_POOL_NAME: The name of the node pool.
NUMBER_OF_WORKER_NODES: The number of worker nodes to provision in the node pool.
TAINTS: The taints to apply to the nodes of this node pool. This is an optional field.
LABELS: The labels to apply to the nodes of this node pool. It contains a list of key-value pairs. This is an optional field.
GPU_PARTITION_SCHEME: The GPU partitioning scheme, if you're running GPU workloads. For example, mixed-2. The GPU is not partitioned if this field is not set. For available Multi-Instance GPU (MIG) profiles, see Supported MIG profiles.
Note: You cannot edit node configurations, such as GPU partitioning, after the node pool is created.
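The exact formats of the TAINTS and LABELS values are not defined on this page. As an illustration only, assuming taints take the standard Kubernetes key, value, and effect form and labels are plain key-value pairs, a filled-in entry might look like the following:

    - machineTypeName: MACHINE_TYPE      # placeholder machine type
      name: high-mem-pool                # illustrative pool name
      nodeCount: 3
      taints:
      - key: dedicated                   # illustrative taint
        value: high-mem
        effect: NoSchedule
      labels:
        workload-tier: high-mem          # illustrative label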
Save the file and exit the editor.
View node pools
To view existing node pools in a Kubernetes cluster, complete the following steps:
Console
In the navigation menu, select Kubernetes Engine > Clusters.
Click the cluster from the cluster list. The Cluster details page is displayed.
Select Node pools.
The list of node pools running in the cluster is displayed. You can manage the node pools of the cluster from this page.
API
View the node pools of a specific Kubernetes cluster:
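    kubectl get clusters.cluster.gdc.goog/KUBERNETES_CLUSTER_NAME -n platform \
        -o json --kubeconfig MANAGEMENT_API_SERVER | \
        jq .status.workerNodePoolStatuses

The output is similar to the following:

    [
      {
        "conditions": [
          {
            "lastTransitionTime": "2023-08-31T22:16:17Z",
            "message": "",
            "observedGeneration": 2,
            "reason": "NodepoolReady",
            "status": "True",
            "type": "Ready"
          },
          {
            "lastTransitionTime": "2023-08-31T22:16:17Z",
            "message": "",
            "observedGeneration": 2,
            "reason": "ReconciliationCompleted",
            "status": "False",
            "type": "Reconciling"
          }
        ],
        "name": "worker-node-pool",
        "readyNodes": 3,
        "readyTimestamp": "2023-08-31T18:59:46Z",
        "reconcilingNodes": 0,
        "stalledNodes": 0,
        "unknownNodes": 0
      }
    ]

Replace KUBERNETES_CLUSTER_NAME and MANAGEMENT_API_SERVER as described in the previous section.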
Delete a node pool
Deleting a node pool deletes the nodes and the routes to them. These nodes evict and reschedule any pods running on them. If the pods have specific node selectors, the pods might remain in an unschedulable condition if no other node in the cluster satisfies the criteria.
Control plane node pools and load balancer node pools are critical to a cluster's function and can't be removed from a cluster. You can only delete worker node pools.
Ensure you have at least three worker nodes before deleting a node pool to ensure that your cluster has enough compute space to run effectively.
To delete a node pool, complete the following steps:
Console
In the navigation menu, select Kubernetes Engine > Clusters.
Click the cluster that is hosting the node pool you want to delete.
Select Node pools.
Click Delete next to the node pool that you want to delete.
API
Open the Cluster custom resource spec with the kubectl CLI using the interactive editor:
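    kubectl edit clusters.cluster.gdc.goog/KUBERNETES_CLUSTER_NAME -n platform \
        --kubeconfig MANAGEMENT_API_SERVER

Replace the following:
KUBERNETES_CLUSTER_NAME: The name of the cluster.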
MANAGEMENT_API_SERVER: The zonal API server's kubeconfig path where the Kubernetes cluster is hosted. If you have not yet generated a kubeconfig file for the API server in your targeted zone, see Sign in for details.
Remove the node pool entry from the nodePools section. For example, in the following snippet, you must remove the machineTypeName, name, and nodeCount fields:
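    nodePools:
    ...
    - machineTypeName: n2-standard-2-gdc
      name: nodepool-1
      nodeCount: 3

Be sure to remove all fields for the node pool you are deleting.
Save the file and exit the editor.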
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-04 UTC."],[[["\u003cp\u003eNode pools are groups of identically configured nodes within a Kubernetes cluster, all sharing the same \u003ccode\u003eNodePool\u003c/code\u003e specification and Kubernetes node label.\u003c/p\u003e\n"],["\u003cp\u003eYou can create multiple node pools within a cluster, each with different node sizes and types, which is useful for scheduling pods with specific resource requirements.\u003c/p\u003e\n"],["\u003cp\u003eNode pools can be added, deleted, or resized individually without affecting the entire cluster, and any configuration changes apply to all nodes within the pool.\u003c/p\u003e\n"],["\u003cp\u003eManaging node pools requires specific roles: User Cluster Admin (\u003ccode\u003euser-cluster-admin\u003c/code\u003e) and User Cluster Node Viewer (\u003ccode\u003euser-cluster-node-viewer\u003c/code\u003e).\u003c/p\u003e\n"],["\u003cp\u003eDeleting a worker node pool removes the nodes and routes to them, potentially leaving pods unschedulable if their node selectors are not met by other nodes.\u003c/p\u003e\n"]]],[],null,["# Manage node pools\n\nA *node pool* is a group of nodes within a Kubernetes cluster that all have the same\nconfiguration. Node pools use a `NodePool` specification. Each node in the pool\nhas a Kubernetes node label, which has the name of the node pool as its value.\nBy default, all new node pools run the same version of Kubernetes as the control\nplane.\n\nWhen you create a Kubernetes cluster, the number of nodes and type of nodes that you\nspecify create the first node pool of the cluster. You can add additional node\npools of different sizes and types to your cluster. All nodes in any given node\npool are identical to one another.\n\nCustom node pools are useful when scheduling pods that require more resources\nthan others, such as more memory or local disk space. You can use node taints if\nyou need more control over scheduling the pods.\n\nYou can create and delete node pools individually without affecting the whole\ncluster. You cannot configure a single node in a node pool. Any configuration\nchanges affect all nodes in the node pool.\n\nYou can [resize node pools](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/cluster#resize-node-pools)\nin a cluster by upscaling or downscaling the pool. Downscaling a node pool is an\nautomated process where you decrease the pool size and the\nGDC system automatically drains and evicts an arbitrary\nnode. You cannot select a specific node to remove when downscaling a node pool.\n\nBefore you begin\n----------------\n\nTo view and manage node pools in a Kubernetes cluster, you must have the following\nroles:\n\n- User Cluster Admin (`user-cluster-admin`)\n- User Cluster Node Viewer (`user-cluster-node-viewer`)\n\nThese roles are not bound to a namespace.\n\nAdd a node pool\n---------------\n\nWhen creating a Kubernetes cluster from the GDC console, you\ncan customize the default node pool and create additional node pools before the\ncluster creation initializes. 
If you must add a node pool to an existing Kubernetes\ncluster, complete the following steps: \n\n### Console\n\n1. In the navigation menu, select **Kubernetes Engine \\\u003e Clusters**.\n2. Click the cluster from the cluster list. The **Cluster details** page is displayed.\n3. Select **Node pools \\\u003e Add node pool**.\n4. Assign a name for the node pool. You cannot modify the name after you create the node pool.\n5. Specify the number of worker nodes to create in the node pool.\n6. Select your machine class that best suits your workload requirements. The machine classes show in the following settings:\n - Machine type\n - vCPU\n - Memory\n7. Click **Save**.\n\n### API\n\n1. Open the `Cluster` custom resource spec with the `kubectl` CLI using the\n interactive editor:\n\n kubectl edit clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_NAME\u003c/var\u003e -n platform \\\n --kubeconfig \u003cvar translate=\"no\"\u003eMANAGEMENT_API_SERVER\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_NAME\u003c/var\u003e: The name of the cluster.\n - \u003cvar translate=\"no\"\u003eMANAGEMENT_API_SERVER\u003c/var\u003e: The zonal API server's kubeconfig path where the Kubernetes cluster is hosted. If you have not yet generated a kubeconfig file for the API server in your targeted zone, see [Sign in](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/iam/sign-in#cli) for details.\n2. Add a new entry in the `nodePools` section:\n\n nodePools:\n ...\n - machineTypeName: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eMACHINE_TYPE\u003c/span\u003e\u003c/var\u003e\n name: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eNODE_POOL_NAME\u003c/span\u003e\u003c/var\u003e\n nodeCount: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eNUMBER_OF_WORKER_NODES\u003c/span\u003e\u003c/var\u003e\n taints: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eTAINTS\u003c/span\u003e\u003c/var\u003e\n labels: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eLABELS\u003c/span\u003e\u003c/var\u003e\n acceleratorOptions:\n gpuPartitionScheme: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eGPU_PARTITION_SCHEME\u003c/span\u003e\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eMACHINE_TYPE\u003c/var\u003e: The machine type for the worker nodes of the node pool. View the [available machine types](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/cluster-node-machines#available-machine-types) for what is available to configure.\n - \u003cvar translate=\"no\"\u003eNODE_POOL_NAME\u003c/var\u003e: The name of the node pool.\n - \u003cvar translate=\"no\"\u003eNUMBER_OF_WORKER_NODES\u003c/var\u003e: The number of worker nodes to provision in the node pool.\n - \u003cvar translate=\"no\"\u003eTAINTS\u003c/var\u003e: The taints to apply to the nodes of this node pool. This is an optional field.\n - \u003cvar translate=\"no\"\u003eLABELS\u003c/var\u003e: The labels to apply to the nodes of this node pool. It contains a list of key-value pairs. 
This is an optional field.\n - \u003cvar translate=\"no\"\u003eGPU_PARTITION_SCHEME\u003c/var\u003e: The GPU partitioning scheme, if you're running GPU workloads. For example, `mixed-2`. The GPU is not partitioned if this field is not set. For available Multi-Instance GPU (MIG) profiles, see [Supported MIG profiles](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/cluster-node-machines#mig-profiles).\n\n | **Note:** You cannot edit node configurations, such as GPU partitioning, after the node pool is created.\n3. Save the file and exit the editor.\n\nView node pools\n---------------\n\nTo view existing node pools in a Kubernetes cluster, complete the following steps: \n\n### Console\n\n1. In the navigation menu, select **Kubernetes Engine \\\u003e Clusters**.\n2. Click the cluster from the cluster list. The **Cluster details** page is displayed.\n3. Select **Node pools**.\n\nThe list of node pools running in the cluster is displayed. You can manage\nthe node pools of the cluster from this page.\n\n### API\n\n- View the node pools of a specific Kubernetes cluster:\n\n kubectl get clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_NAME\u003c/var\u003e -n platform \\\n -o json --kubeconfig \u003cvar translate=\"no\"\u003eMANAGEMENT_API_SERVER\u003c/var\u003e | \\\n jq .status.workerNodePoolStatuses\n\n The output is similar to the following: \n\n [\n {\n \"conditions\": [\n {\n \"lastTransitionTime\": \"2023-08-31T22:16:17Z\",\n \"message\": \"\",\n \"observedGeneration\": 2,\n \"reason\": \"NodepoolReady\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastTransitionTime\": \"2023-08-31T22:16:17Z\",\n \"message\": \"\",\n \"observedGeneration\": 2,\n \"reason\": \"ReconciliationCompleted\",\n \"status\": \"False\",\n \"type\": \"Reconciling\"\n }\n ],\n \"name\": \"worker-node-pool\",\n \"readyNodes\": 3,\n \"readyTimestamp\": \"2023-08-31T18:59:46Z\",\n \"reconcilingNodes\": 0,\n \"stalledNodes\": 0,\n \"unknownNodes\": 0\n }\n ]\n\nDelete a node pool\n------------------\n\nDeleting a node pool deletes the nodes and routes to them. These nodes evict and\nreschedule any pods running on them. If the pods have specific node selectors,\nthe pods might remain in a non-schedulable condition if no other node in the\ncluster satisfies the criteria.\n| **Important:** Control plane node pools and load balancer node pools are critical to a cluster's function and consequently can't be removed from a cluster. You can only delete worker node pools.\n\nEnsure you have at least three worker nodes before deleting a node pool to\nensure your cluster has enough compute space to run effectively.\n\nTo delete a node pool, complete the following steps: \n\n### Console\n\n1. In the navigation menu, select **Kubernetes Engine \\\u003e Clusters**.\n\n2. Click the cluster that is hosting the node pool you want to delete.\n\n3. Select **Node pools**.\n\n4. Click *delete* **Delete** next to the node\n pool to delete.\n\n### API\n\n1. 
Open the `Cluster` custom resource spec with the `kubectl` CLI using the\n interactive editor:\n\n kubectl edit clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_NAME\u003c/var\u003e -n platform \\\n --kubeconfig \u003cvar translate=\"no\"\u003eMANAGEMENT_API_SERVER\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_NAME\u003c/var\u003e: The name of the cluster.\n - \u003cvar translate=\"no\"\u003eMANAGEMENT_API_SERVER\u003c/var\u003e: The zonal API server's kubeconfig path where the Kubernetes cluster is hosted. If you have not yet generated a kubeconfig file for the API server in your targeted zone, see [Sign in](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/iam/sign-in#cli) for details.\n2. Remove the node pool entry from the `nodePools` section. For example, in\n the following snippet, you must remove the `machineTypeName`, `name`, and\n `nodeCount` fields:\n\n nodePools:\n ...\n - machineTypeName: n2-standard-2-gdc\n name: nodepool-1\n nodeCount: 3\n\n Be sure to remove all fields for the node pool you are deleting.\n3. Save the file and exit the editor."]]