[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-29。"],[[["\u003cp\u003eNode pools are groups of nodes within a cluster with identical configurations, allowing for the allocation of pods with specific resource needs to dedicated nodes.\u003c/p\u003e\n"],["\u003cp\u003eThe best practice is to create two distinct node pools: \u003ccode\u003eapigee-data\u003c/code\u003e for stateful Cassandra pods and \u003ccode\u003eapigee-runtime\u003c/code\u003e for all other stateless runtime pods.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003enodeSelector\u003c/code\u003e feature assigns pods to the correct node pools, and setting \u003ccode\u003erequiredForScheduling\u003c/code\u003e to \u003ccode\u003etrue\u003c/code\u003e enforces that pods are only scheduled on matching node pools.\u003c/p\u003e\n"],["\u003cp\u003eYou can customize node pool names and assign specific components to custom node pools, and this can also be configured at the individual component level.\u003c/p\u003e\n"],["\u003cp\u003eIf you choose to label nodes manually instead of having them auto-labeled by node pools, you must properly differentiate between data and runtime node pools to achieve the desired pod scheduling.\u003c/p\u003e\n"]]],[],null,["# Configuring dedicated node pools\n\n| You are currently viewing version 1.13 of the Apigee hybrid documentation. For more information, see [Supported versions](/apigee/docs/hybrid/supported-platforms#supported-versions).\n\nAbout node pools\n----------------\n\nA [node pool](https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools)\nis a group of nodes within a cluster that all have the same configuration.\nTypically, you define separate node pools when you have pods with differing resource requirements.\nFor example, the `apigee-cassandra` pods require persistent storage, while\nthe other Apigee hybrid pods do not.\n\n\nThis topic discusses how to configure dedicated node pools for a hybrid installation.\n\nUsing the default nodeSelectors\n-------------------------------\n\n\nThe best practice is to set up two dedicated node pools: one for the Cassandra\npods and one for all the other runtime pods. Using default\n[nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) configurations, the\ninstaller will assign the Cassandra pods to a *stateful* node pool named `apigee-data` and all\nthe other pods to a *stateless* node pool named `apigee-runtime`. All you have to do is\ncreate node pools with these names, and Apigee hybrid handles the pod scheduling details\nfor you:\n\n\nFollowing is the default `nodeSelector` configuration. The `apigeeData`\nproperty specifies a node pool for the Cassandra pods. The `apigeeRuntime` specifies the node\npool for all the other pods. 
Following is the default `nodeSelector` configuration. The `apigeeData` property specifies the
node pool for the Cassandra pods, and the `apigeeRuntime` property specifies the node pool for
all the other pods. You can override these default settings in your overrides file, as explained
later in this topic:

```yaml
nodeSelector:
  requiredForScheduling: true
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-runtime"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-data"
```

To ensure your pods are scheduled on the correct nodes, create two node pools named `apigee-data` and `apigee-runtime`.

The requiredForScheduling property
----------------------------------

The `nodeSelector` config section has a property called `requiredForScheduling`:

```yaml
nodeSelector:
  requiredForScheduling: false
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-runtime"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-data"
```

If set to `false`, pods are scheduled whether or not node pools with the required names exist.
This means that if you forget to create node pools, or if you accidentally give a node pool a
name other than `apigee-runtime` or `apigee-data`, the hybrid runtime installation will still
succeed and Kubernetes will decide where to run your pods.

If you set `requiredForScheduling` to `true` (the default), the installation will fail unless
there are node pools that match the configured `nodeSelector` keys and values.

> **Note:** The best practice is to set `requiredForScheduling` to `true` for a production environment.

Using custom node pool names
----------------------------

If you don't want to use node pools with the default names, you can create node pools with
custom names and specify those names in the `nodeSelector` stanza. For example, the following
configuration assigns the Cassandra pods to the pool named `my-cassandra-pool` and all other
pods to the pool named `my-runtime-pool`:

```yaml
nodeSelector:
  requiredForScheduling: false
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "my-runtime-pool"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "my-cassandra-pool"
```

Overriding the node pool for specific components on GKE
--------------------------------------------------------

You can also override node pool configurations at the individual component level. For example,
the following configuration assigns the node pool with the value `apigee-custom` to the
`runtime` component:

```yaml
runtime:
  nodeSelector:
    key: cloud.google.com/gke-nodepool
    value: apigee-custom
```

You can specify a custom node pool on any of these components:

- `istio`
- `mart`
- `synchronizer`
- `runtime`
- `cassandra`
- `udca`
- `logger`

GKE and Google Distributed Cloud node pool configuration
---------------------------------------------------------

On GKE and Google Distributed Cloud (GDC) platforms, node pools must have a unique name that you
provide when you create the pools, and GKE/GDC automatically labels each node with the following:

```
cloud.google.com/gke-nodepool=THE_NODE_POOL_NAME
```

As long as you create node pools named `apigee-data` and `apigee-runtime`, no further
configuration is required. If you want to use custom node pool names, see
[Using custom node pool names](#using-custom-node-pool-names).
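As a quick check, you can display each node's pool assignment as an extra column. This is a sketch that assumes the `cloud.google.com/gke-nodepool` label shown above is present, as it is on GKE and GDC:

```
kubectl get nodes -L cloud.google.com/gke-nodepool
```

Nodes in the `apigee-data` and `apigee-runtime` pools should show those values in the added column.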
Node pool configuration on other Kubernetes platforms
------------------------------------------------------

See your Kubernetes platform documentation for information about labeling and managing node pools.

Although node pools automatically label their worker nodes by default, you can optionally label
the worker nodes manually with the following steps:

1. Run the following command to get a list of the worker nodes in your cluster:

   ```
   kubectl -n APIGEE_NAMESPACE get nodes
   ```

2. Label each worker node with the key and value configured for the corresponding node pool in
   your `nodeSelector` stanza. (An example `kubectl label` command is sketched at the end of
   this topic.)

If you are using custom node pool labels, make sure each key-value pair is unique. For example:

```yaml
nodeSelector:
  requiredForScheduling: true
  apigeeRuntime:
    key: "pool1-key"
    value: "pool1-label"
  apigeeData:
    key: "pool2-key"
    value: "pool2-label"
```

Overriding the node pool for specific components
-------------------------------------------------

You can also override node pool configurations at the individual component level. For example,
the following configuration assigns the node pool with the value `apigee-custom` to the
`runtime` component:

```yaml
runtime:
  nodeSelector:
    key: apigee.com/apigee-nodepool
    value: apigee-custom
```

You can specify a custom node pool on any of these components:

- `apigeeingressgateway`
- `cassandra`
- `logger`
- `mart`
- `metrics`
- `runtime`
- `synchronizer`
- `udca`
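For example, to apply the custom `pool1-key`/`pool2-key` labels from the sample configuration above by hand, you could run commands like the following. This is only a sketch: `RUNTIME_NODE_NAME` and `DATA_NODE_NAME` are hypothetical placeholders for node names returned by `kubectl get nodes`.

```
# Label a node for the stateless runtime pods (matches the apigeeRuntime selector above).
kubectl label nodes RUNTIME_NODE_NAME pool1-key=pool1-label

# Label a node for the stateful Cassandra pods (matches the apigeeData selector above).
kubectl label nodes DATA_NODE_NAME pool2-key=pool2-label

# Confirm that the labels were applied.
kubectl get nodes --show-labels
```

Repeat the `kubectl label` command for every worker node that should belong to each pool.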