[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-21。"],[[["\u003cp\u003eNode pools are groups of nodes within a cluster with the same configuration, typically used to accommodate pods with varying resource needs.\u003c/p\u003e\n"],["\u003cp\u003eFor Apigee hybrid installations, it's best practice to use two dedicated node pools: \u003ccode\u003eapigee-data\u003c/code\u003e for Cassandra pods and \u003ccode\u003eapigee-runtime\u003c/code\u003e for all other pods.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003enodeSelector\u003c/code\u003e configuration allows specifying node pool assignments, with \u003ccode\u003erequiredForScheduling\u003c/code\u003e determining whether the installation succeeds even without matching node pools.\u003c/p\u003e\n"],["\u003cp\u003eCustom node pool names can be used instead of the defaults, by specifying the custom names in the \u003ccode\u003enodeSelector\u003c/code\u003e configuration, and individual components can override node pool configurations as well.\u003c/p\u003e\n"],["\u003cp\u003eGKE node pools will be labeled automatically with a specific key and value, whereas you can manually label worker nodes as well, using a \u003ccode\u003ekubectl\u003c/code\u003e command.\u003c/p\u003e\n"]]],[],null,["# Configuring dedicated node pools\n\n| You are currently viewing version 1.14 of the Apigee hybrid documentation. For more information, see [Supported versions](/apigee/docs/hybrid/supported-platforms#supported-versions).\n\nAbout node pools\n----------------\n\nA [node pool](https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools)\nis a group of nodes within a cluster that all have the same configuration.\nTypically, you define separate node pools when you have pods with differing resource requirements.\nFor example, the `apigee-cassandra` pods require persistent storage, while\nthe other Apigee hybrid pods do not.\n\n\nThis topic discusses how to configure dedicated node pools for a hybrid installation.\n\nUsing the default nodeSelectors\n-------------------------------\n\n\nThe best practice is to set up two dedicated node pools: one for the Cassandra\npods and one for all the other runtime pods. Using default\n[nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) configurations, the\ninstaller will assign the Cassandra pods to a *stateful* node pool named `apigee-data` and all\nthe other pods to a *stateless* node pool named `apigee-runtime`. All you have to do is\ncreate node pools with these names, and Apigee hybrid handles the pod scheduling details\nfor you:\n\n\nFollowing is the default `nodeSelector` configuration. The `apigeeData`\nproperty specifies a node pool for the Cassandra pods. The `apigeeRuntime` specifies the node\npool for all the other pods. 
See:

- [`nodeSelector.apigeeRuntime.key`](/apigee/docs/hybrid/v1.14/config-prop-ref#nodeselector-apigeeruntime-key)
- [`nodeSelector.apigeeRuntime.value`](/apigee/docs/hybrid/v1.14/config-prop-ref#nodeselector-apigeeruntime-value)
- [`nodeSelector.apigeeData.key`](/apigee/docs/hybrid/v1.14/config-prop-ref#nodeselector-apigeedata-key)
- [`nodeSelector.apigeeData.value`](/apigee/docs/hybrid/v1.14/config-prop-ref#nodeselector-apigeedata-value)

The requiredForScheduling property
----------------------------------

The `nodeSelector` config section has a property called `requiredForScheduling`:

```yaml
nodeSelector:
  requiredForScheduling: false
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-runtime"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-data"
```

If set to `false`, the pods will be scheduled whether or not node pools with the required names exist. This means that if you forget to create the node pools, or if you accidentally give a node pool a name other than `apigee-runtime` or `apigee-data`, the hybrid runtime installation will still succeed, and Kubernetes will decide where to run your pods.

If you set `requiredForScheduling` to `true` (the default), the installation will fail unless there are node pools that match the configured `nodeSelector` keys and values.

| **Note:** The best practice is to set `requiredForScheduling: true` for a production environment.

See [`nodeSelector.requiredForScheduling`](/apigee/docs/hybrid/v1.14/config-prop-ref#nodeselector-requiredforscheduling).

Using custom node pool names
----------------------------

If you don't want to use node pools with the default names, you can create node pools with custom names and specify those names in the `nodeSelector` stanza. For example, the following configuration assigns the Cassandra pods to the pool named `my-cassandra-pool` and all other pods to the pool named `my-runtime-pool`:

```yaml
nodeSelector:
  requiredForScheduling: false
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "my-runtime-pool"
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "my-cassandra-pool"
```
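Whichever names you use, it can help to confirm that the node labels match your configured `nodeSelector` values before you install. Here is a minimal check; the standard `-L` flag prints the value of the given label key as an extra column:

```
kubectl get nodes -L cloud.google.com/gke-nodepool
```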
Overriding the node pool for specific components on GKE
-------------------------------------------------------

You can also override the node pool configuration at the individual component level. For example, the following configuration assigns the node pool with the value `apigee-custom` to the `runtime` component:

```yaml
runtime:
  nodeSelector:
    key: cloud.google.com/gke-nodepool
    value: apigee-custom
```

You can specify a custom node pool on any of these components:

- `istio`
- `mart`
- `synchronizer`
- `runtime`
- `cassandra`
- `udca`
- `logger`

GKE node pool configuration
---------------------------

In GKE, node pools must have a unique name that you provide when you create the pools, and GKE automatically labels each node with the following:

```
cloud.google.com/gke-nodepool=THE_NODE_POOL_NAME
```

As long as you create node pools named `apigee-data` and `apigee-runtime`, no further configuration is required. If you want to use custom node pool names, see [Using custom node pool names](#using-custom-node-pool-names).

Manually labeling nodes
-----------------------

While node pools label the worker nodes automatically by default, you can optionally label the worker nodes manually with the following steps:

1. Run the following command to get a list of the worker nodes in your cluster (nodes are cluster-scoped, so no namespace flag is needed):

   ```
   kubectl get nodes
   ```

   Example output:

   ```
   NAME                   STATUS   ROLES    AGE   VERSION
   apigee-092d639a-4hqt   Ready    <none>   7d    v1.14.6-gke.2
   apigee-092d639a-ffd0   Ready    <none>   7d    v1.14.6-gke.2
   apigee-109b55fc-5tjf   Ready    <none>   7d    v1.14.6-gke.2
   apigee-c2a9203a-8h27   Ready    <none>   7d    v1.14.6-gke.2
   apigee-c70aedae-t366   Ready    <none>   7d    v1.14.6-gke.2
   apigee-d349e89b-hv2b   Ready    <none>   7d    v1.14.6-gke.2
   ```

2. Label each node to differentiate between runtime nodes and data nodes.

   **Note:** Be sure to choose the nodes so that they are distributed equally among availability zones (AZs).

   Use this command to label the nodes:

   ```
   kubectl label node NODE_NAME KEY=VALUE
   ```

   For example:

   ```
   kubectl label node apigee-092d639a-4hqt apigee.com/apigee-nodepool=apigee-runtime
   kubectl label node apigee-092d639a-ffd0 apigee.com/apigee-nodepool=apigee-runtime
   kubectl label node apigee-109b55fc-5tjf apigee.com/apigee-nodepool=apigee-runtime
   kubectl label node apigee-c2a9203a-8h27 apigee.com/apigee-nodepool=apigee-data
   kubectl label node apigee-c70aedae-t366 apigee.com/apigee-nodepool=apigee-data
   kubectl label node apigee-d349e89b-hv2b apigee.com/apigee-nodepool=apigee-data
   ```

Overriding the node pool for specific components on Anthos GKE
--------------------------------------------------------------

You can also override the node pool configuration at the individual component level for an Anthos GKE installation. For example, the following configuration assigns the node pool with the value `apigee-custom` to the `runtime` component:

```yaml
runtime:
  nodeSelector:
    key: apigee.com/apigee-nodepool
    value: apigee-custom
```

You can specify a custom node pool on any of these components:

- `istio`
- `mart`
- `synchronizer`
- `runtime`
- `cassandra`
- `udca`
- `logger`
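Component-level overrides like the one above only take effect if the nodes actually carry the `apigee.com/apigee-nodepool` label. If you labeled nodes manually as shown in [Manually labeling nodes](#manually-labeling-nodes), point the top-level `nodeSelector` stanza in your overrides file at the same key. The following is a sketch, assuming the label values `apigee-runtime` and `apigee-data` from that example:

```yaml
nodeSelector:
  requiredForScheduling: true
  apigeeRuntime:
    key: "apigee.com/apigee-nodepool"
    value: "apigee-runtime"
  apigeeData:
    key: "apigee.com/apigee-nodepool"
    value: "apigee-data"
```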
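After you apply your overrides, you can confirm where the pods actually landed; the standard `-o wide` flag adds a NODE column to the pod listing:

```
kubectl -n APIGEE_NAMESPACE get pods -o wide
```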