About custom compute classes


This page describes how you can use custom compute classes to control the properties of the nodes that Google Kubernetes Engine (GKE) provisions when autoscaling your cluster. This document is intended for platform administrators who want to declaratively define autoscaling profiles for nodes, so that specific workloads run on hardware that meets their requirements.

Compute classes overview

In GKE, a compute class is a profile that consists of a set of node attributes that GKE uses to provision the nodes that run your workloads. Compute classes can target specific optimizations, like provisioning high-performance nodes or prioritizing cost-optimized configurations for cheaper running costs. Custom compute classes let you define profiles that GKE then uses to provision nodes that closely meet the requirements of specific workloads.

Custom compute classes are available to use in GKE Autopilot mode and GKE Standard mode in version 1.30.3-gke.1451000 and later, and offer a declarative approach to defining node attributes and autoscaling priorities. Custom compute classes are available to configure and use in all eligible GKE clusters by default.

Benefits of custom compute classes

Custom compute classes offer the following benefits:

  • Fallback compute priorities: Define a hierarchy of node configurations in each compute class for GKE to prioritize. If the most preferred configuration is unavailable, GKE automatically chooses the next configuration in the hierarchy. This fallback model ensures that even when your preferred compute resources are unavailable, your workloads still run on optimized hardware with minimal scheduling delays.
  • Granular autoscaling control: Define node configurations that are best suited for specific workloads. GKE prioritizes those configurations when creating nodes during scaling.
  • Declarative infrastructure configuration: Adopt a declarative approach to infrastructure management so that GKE automatically creates nodes for you that match your specific workload requirements.
  • Active migration: If compute resources for a more preferred machine configuration become available in your location, GKE automatically migrates your workloads to new nodes that use the preferred configuration.
  • Cost optimization: Prioritize cost-efficient node types like Spot VMs to reduce your cluster expenses.
  • Default compute classes for namespaces: Set a default compute class in each Kubernetes namespace, so that workloads in that namespace run on optimized hardware even if they don't request a specific compute class.
  • Custom node consolidation thresholds: Define custom resource usage thresholds for nodes. If a specific node's resource usage falls below your threshold, GKE attempts to consolidate the workloads into a similar, available node and scales down the underutilized node.

Use cases for custom compute classes

Consider using custom compute classes in scenarios like the following:

  • You want to run your AI/ML workloads on specific GPU or TPU configurations.
  • You want to set default hardware configurations for the workloads that specific teams run, so that application operators don't have to manage hardware selection themselves.
  • You run workloads that perform optimally on specific Compute Engine machine series or hardware configurations.
  • You want to declare hardware configurations that meet specific business requirements, like high performance, cost optimized, or high availability.
  • You want GKE to fall back through a hierarchy of specific hardware configurations when compute resources are unavailable, so that your workloads always run on machines that suit their requirements.
  • You want to centrally decide on the optimal configurations across your enterprise's fleet, so that your costs are more predictable and your workloads run more reliably.
  • You want to centrally specify which of your Compute Engine capacity reservations GKE should use to provision new nodes for specific workloads.

How custom compute classes work

A custom compute class is a Kubernetes custom resource that provisions Google Cloud infrastructure. You define a ComputeClass object in the cluster, and then request that compute class in workloads or set that compute class as the default for a Kubernetes namespace. When you deploy a workload that requests the compute class, GKE attempts to place the Pods on nodes that meet the compute class requirements.
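
The following is a minimal ComputeClass sketch that prioritizes N2 Spot VMs and falls back to on-demand N2 nodes. It uses only fields that are described later in this document; the class name is illustrative.

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: cost-optimized   # Illustrative name
spec:
  priorities:
  - machineFamily: n2
    spot: true           # Most preferred: N2 Spot VMs
  - machineFamily: n2    # Fallback: on-demand N2 VMs
  nodePoolAutoCreation:
    enabled: true        # Let node auto-provisioning create node pools as needed
  whenUnsatisfiable: ScaleUpAnyway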

To ensure that your custom compute classes are optimized for your fleet, consider the following guidelines:

  • Understand the compute requirements of your fleet, including any application-specific hardware requirements.
  • Decide on a theme that guides the design of each compute class. For example, a performance-optimized compute class might have a fallback strategy that uses only high-CPU machine types.
  • Decide on the Compute Engine machine family and machine series that most closely fit your workloads. For details, see Machine families resource and comparison guide.
  • Plan a fallback strategy within each compute class so that workloads always run on nodes that use similar machine configurations. For example, if the N4 machine series isn't available, you can fall back to N2 machines, as shown in the sketch after this list.
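
For example, the following sketch expresses that N4-to-N2 fallback as priority rules, assuming the N4 machine series is available for your cluster version and location:

priorities:
- machineFamily: n4   # Most preferred machine series
- machineFamily: n2   # Fallback if N4 capacity isn't available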

View the complete custom resource definition

To view the complete custom resource definition (CRD) for the ComputeClass custom resource, run the following command:

kubectl describe crd computeclasses.cloud.google.com

The output shows you the entire CRD, including all supported fields and relationships between fields. To better understand custom compute classes, refer to this definition while you read this document.
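
To print the raw CRD schema as YAML instead, you can run a command like the following:

kubectl get crd computeclasses.cloud.google.com -o yaml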

Plan a custom compute class

To effectively plan, deploy, and use a custom compute class in your cluster, complete the following steps:

  1. Choose your fallback compute priorities: Define a series of rules that govern the properties of the nodes that GKE creates for the compute class.
  2. Configure GKE Standard node pools and compute classes: For Standard mode clusters, perform required configuration steps to use the compute class with your node pools.
  3. Define scaling behavior when no priority rules apply: Optionally, tell GKE what to do if nodes that meet your priority rules can't be provisioned.
  4. Set autoscaling parameters for node consolidation: Tell GKE when to consolidate workloads and remove underutilized nodes.
  5. Configure active migration to higher priority nodes: Optionally, tell GKE to move workloads to more preferred nodes as hardware becomes available.
  6. Consume Compute Engine reservations: Optionally, tell GKE to consume existing Compute Engine zonal reservations when creating new nodes.

Choose your fallback compute priorities

The primary advantage of using a custom compute class is to have control over the fallback strategy when your preferred nodes are unavailable due to factors like resource exhaustion and quota limitations.

You create a fallback strategy by defining a list of priority rules in your custom compute class. When a cluster needs to scale up, GKE prioritizes creating nodes that match the first priority rule. If GKE can't create those nodes, it falls back to the next priority rule, repeating this process until GKE successfully scales up the cluster or exhausts all the rules. If all the rules are exhausted, GKE creates nodes based on the default or specified behavior described in Define scaling behavior when no priority rules apply.

Priority rules

You define priority rules in the spec.priorities field of the ComputeClass custom resource. Each rule in the priorities field describes the properties of the nodes to provision. GKE processes the priorities field in order, which means that the first item in the field is the highest priority for node provisioning.

Depending on the type of priority rule, you can specify additional machine properties, like Spot VMs or minimum CPU capacity, for GKE to use when provisioning nodes. The priorities field supports the following priority rule types:

  • machineFamily: Defines nodes using a Compute Engine machine series, like n2 or c3.
  • machineType: Defines nodes using a predefined Compute Engine machine type, like n2-standard-4.
  • nodepools: In GKE Standard clusters, provides a list of manually created node pools that are associated with the compute class and in which GKE should provision nodes.

Additionally, you can specify TPUs in your priority rules without explicitly specifying a machine series or a machine type. The TPU type and configuration that you specify implicitly determine the most suitable machine type.

machineFamily rule type

The machineFamily field accepts a Compute Engine machine series like n2 or c3. If unspecified, the default is e2. You can use the following fields alongside the machineFamily rule type:

  • spot: Spot VMs. The default value is false.
  • minCores: Minimum vCPUs per node. The default value is 0.
  • minMemoryGb: Minimum memory per node. The default value is 0.
  • storage.bootDiskKMSKey: Path to Cloud Key Management Service key to use for boot disk encryption.

The following example shows the machineFamily priority rule:

priorities:
- machineFamily: n2
  spot: true
  minCores: 16
  minMemoryGb: 64
  storage:
    bootDiskKMSKey: projects/example/locations/us-central1/keyRings/example/cryptoKeys/key-1

machineType rule type

The machineType field accepts a Compute Engine predefined machine type, like n2-standard-32. The machine type must support any GPUs that you specify.

You can use the following fields alongside the machineType rule type:

  • spot: Use Spot VMs. Default is false.
  • storage: Configure node storage.
    • storage.bootDiskType: Boot disk type.
    • storage.bootDiskKMSKey: Path to Cloud KMS key to use for boot disk encryption.
    • storage.bootDiskSize: Size in GB for the node boot disk.
    • storage.localSSDCount: Number of local SSDs to attach to the node. If specified, must be at least 1.
  • gpu: Configure GPUs.

The following example shows a machineType rule for n2-standard-32 machine types:

priorities:
- machineType: n2-standard-32
  spot: true
  storage:
    bootDiskType: pd-balanced
    bootDiskSize: 250
    localSSDCount: 2
    bootDiskKMSKey: projects/example/locations/us-central1/keyRings/example/cryptoKeys/key-1

The following example shows a machineType rule for GPUs:

priorities:
- machineType: g2-standard-16
  spot: false
  gpu:
    type: nvidia-l4
    count: 1

TPU rules

Requires GKE version 1.31.2-gke.1518000 or later

To select TPUs in your priority rules, specify the type, count, and topology of the TPU in the tpu field of a priority rule. The following fields are required:

  • tpu.type: The TPU type, such as tpu-v5p-slice.
  • tpu.count: The number of TPU chips per node.
  • tpu.topology: The TPU topology to use, such as 4x4x4.

You can also specify any of the following fields alongside the tpu field in your priority rule:

  • spot: Use Spot VMs. Default is false.
  • storage: Configure node storage.
    • storage.bootDiskType: Boot disk type.
    • storage.bootDiskKMSKey: Path to Cloud KMS key to use for boot disk encryption.
    • storage.bootDiskSize: Size in GB for the node boot disk.
  • reservations: Use a Compute Engine reservation. For details, see the Consume Compute Engine reservations section.

The following example shows a rule for TPUs:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: tpu-class
spec:
  priorities:
  - tpu:
      type: tpu-v5p-slice
      count: 4
      topology: 4x4x4
    reservations:
      specific:
      - name: tpu-reservation
        project: reservation-project
      affinity: Specific
  - spot: true
    tpu:
      type: tpu-v5p-slice
      count: 4
      topology: 4x4x4
  nodePoolAutoCreation:
    enabled: true

This example defines the following fallback behavior:

  1. GKE attempts to provision a 16-node multi-host TPU v5p slice by consuming a shared Compute Engine reservation named tpu-reservation from the reservation-project project.
  2. If the reservation has no available TPUs, GKE attempts to provision a 16-node multi-host TPU v5p slice running on Spot VMs.
  3. If none of the preceding rules can be satisfied, GKE follows the logic in the Define scaling behavior when no priority rules apply section.

After you deploy a TPU custom compute class to your cluster, select that custom compute class in your workload.
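
The following is a minimal sketch of a workload that selects the tpu-class compute class from the preceding example. The nodeSelector key matches the node label that GKE uses for compute classes; the Pod name, container image, and TPU resource request are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: tpu-workload                             # Illustrative name
spec:
  nodeSelector:
    cloud.google.com/compute-class: tpu-class    # Request the custom compute class
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9             # Placeholder image
    resources:
      limits:
        google.com/tpu: 4                        # Assumed TPU chip request per node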

nodepools rule type

The nodepools field takes a list of existing node pools on which GKE attempts to place pending Pods. GKE doesn't process the values in this field in order. You can't specify other machine properties alongside this field in the same priority rule item. This field is only supported in GKE Standard mode. For usage details, see Target specific node pools in a compute class definition.

How GKE creates nodes using priority rules

When you deploy a workload that requests a compute class and a new node is needed, GKE processes the list of rules in the priorities field of the ComputeClass specification in order.

For example, consider the following specification:

spec:
  ...
  priorities:
  - machineFamily: n2
    spot: true
    minCores: 64
  - machineFamily: n2
    spot: true
  - machineFamily: n2
    spot: false

When you deploy a workload that requests a compute class with these priority rules, GKE matches nodes as follows:

  1. GKE places Pods on any existing nodes that are associated with this compute class.
  2. If existing nodes can't accommodate the Pods, GKE provisions new nodes that use the N2 machine series, are Spot VMs, and have at least 64 vCPU.
  3. If N2 Spot VMs with at least 64 vCPU aren't available in the region, GKE provisions new nodes that use N2 Spot VMs that can fit the Pods, regardless of the number of cores.
  4. If no N2 Spot VMs are available in the region, GKE provisions new on-demand N2 VMs.
  5. If none of the preceding rules can be satisfied, GKE follows the logic in the Define scaling behavior when no priority rules apply section.

GKE Standard node pools and compute classes

If you use GKE Standard mode, you might have to perform manual configuration to ensure that your compute class Pods schedule as expected.

Configure manually-created node pools for compute class use

If your GKE Standard clusters have node pools that you manually created without node auto-provisioning, you must configure those node pools to associate them with specific compute classes. GKE only schedules Pods that request a specific compute class on nodes in node pools that you associate with that compute class. In GKE Autopilot mode, and in Standard mode node pools that node auto-provisioning creates, GKE performs this configuration for you automatically.

To associate a manually created node pool with a compute class, you add node labels and node taints to the node pool during creation or during an update by specifying the --node-labels flag and the --node-taints flag, as follows:

  • Node label: cloud.google.com/compute-class=COMPUTE_CLASS
  • Taint: cloud.google.com/compute-class=COMPUTE_CLASS:NoSchedule

In these attributes, COMPUTE_CLASS is the name of your custom compute class.

For example, the following command updates an existing node pool and associates it with the dev-class compute class:

gcloud container node-pools update dev-pool \
    --cluster=example-cluster \
    --node-labels="cloud.google.com/compute-class=dev-class" \
    --node-taints="cloud.google.com/compute-class=dev-class:NoSchedule"
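
You can apply the same label and taint when you create a node pool. The following sketch assumes the same cluster and compute class names; the node pool name and machine type are illustrative:

gcloud container node-pools create dev-pool-n2 \
    --cluster=example-cluster \
    --machine-type=n2-standard-4 \
    --node-labels="cloud.google.com/compute-class=dev-class" \
    --node-taints="cloud.google.com/compute-class=dev-class:NoSchedule"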

You can associate each node pool in your cluster with one custom compute class. Pods that GKE schedules on these manually-created node pools only trigger node creation inside those node pools during autoscaling events.

Node auto-provisioning and compute classes

You can use node auto-provisioning with a custom compute class to let GKE automatically create and delete node pools based on your priority rules.

To use node auto-provisioning with a compute class, you must do the following:

  1. Ensure that you have node auto-provisioning enabled in your cluster.
  2. Add the nodePoolAutoCreation field with the enabled: true value to your ComputeClass specification.

GKE can then place Pods that use compute classes that configure node auto-provisioning on new node pools. GKE decides whether to scale up an existing node pool or create a new node pool based on factors like the size of the clusters and Pod requirements. Pods with compute classes that don't configure node auto-provisioning continue to only scale up existing node pools.

You can use compute classes that interact with node auto-provisioning alongside compute classes that interact with manually-created node pools in the same cluster.

Consider the following interactions with node auto-provisioning:

  • You can't use the machine family or the Spot VMs node selectors because these selectors conflict with compute class behavior. GKE rejects any Pods that request a compute class and also request Spot VMs or specific machine series.
  • You can configure node auto-provisioning for compute classes that use the nodepools field to reference existing node pools. Node auto-provisioning processes the priorities in order and attempts to scale the existing node pools up to place your Pods.

Consider the following example for a cluster that has both manually-created node pools and node auto-provisioning:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: my-class
spec:
  priorities:
  - nodepools: [manually-created-pool]
  - machineFamily: n2
  - machineFamily: n2d
  nodePoolAutoCreation:
    enabled: true

In this example, GKE attempts to do the following:

  1. Create new nodes in the manually-created-pool node pool.
  2. Provision N2 nodes, either in existing N2 node pools or by creating a new node pool.
  3. If GKE can't create N2 nodes, it attempts to scale up existing N2D node pools or create new N2D node pools.

Target specific node pools in a compute class definition

In GKE Standard clusters that use cluster autoscaling, the priorities.nodepools field lets you specify a list of manually created node pools on which GKE attempts to schedule Pods, in no specific order. This field only supports a list of node pools; you can't specify additional machine properties like the machine series in the same priority rule. When you deploy a workload that requests a compute class that has named node pools, GKE attempts to schedule the pending Pods in those node pools. GKE might create new nodes in those node pools to place the Pods.

The node pools that you specify in the priorities.nodepools field must be associated with that compute class by using node labels and node taints, as described in the Configure manually created node pools for compute classes section.

The list of node pools that you specify in the nodepools field has no priority. To configure a fallback order for named node pools, you must specify multiple separate priorities.nodepools items. For example, consider the following specification:

spec:
  ...
  priorities:
  - nodepools: [pool1, pool2]
  - nodepools: [pool3]

In this example, GKE first attempts to place pending Pods that request this compute class on existing nodes in node pools that are labeled with the compute class. If existing nodes aren't available, GKE tries to provision new nodes in pool1 or pool2. If GKE can't provision new nodes in those node pools, GKE attempts to provision new nodes in pool3.

Define scaling behavior when no priority rules apply

The ComputeClass custom resource lets you specify what GKE should do if there are no nodes that can meet any of the priority rules. The whenUnsatisfiable field in the specification supports the following values:

  • ScaleUpAnyway: Create a new node that uses the cluster's default machine configuration. This is the default behavior.
    • In Autopilot clusters, GKE places the Pod on a new or existing node, regardless of the node machine configuration.
    • In Standard clusters that don't use node auto-provisioning, GKE tries to scale up any manually created node pool that defines a label and taint matching a given compute class.
    • In Standard clusters that use node auto-provisioning, GKE might create a new node pool that uses the default E2 machine series to place the Pod.
  • DoNotScaleUp: Leave the Pod in the Pending status until a node that meets the compute class requirements is available, as shown in the sketch after this list.
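
For example, the following minimal sketch keeps Pods in the Pending state instead of falling back to the cluster's default machine configuration. The class name is illustrative:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: strict-n2                     # Illustrative name
spec:
  priorities:
  - machineFamily: n2                 # Only N2 nodes satisfy this class
  whenUnsatisfiable: DoNotScaleUp     # Keep Pods Pending instead of using default machines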

Set autoscaling parameters for node consolidation

By default, GKE removes nodes that are underutilized by running workloads and consolidates those workloads onto other nodes that have capacity. This behavior applies to all compute classes, because every cluster that uses compute classes either uses the cluster autoscaler or is an Autopilot cluster. During node consolidation, GKE drains an underutilized node, recreates the workloads on another node, and then deletes the drained node.

The timing and criteria for node removal depend on the autoscaling profile. You can fine-tune the resource underutilization thresholds that trigger node removal and workload consolidation by using the autoscalingPolicy section in your custom compute class definition. You can fine-tune the following parameters:

  • consolidationDelayMinutes: The number of minutes after which GKE removes underutilized nodes.
  • consolidationThreshold: The utilization threshold for CPU and memory as a percentage of the node's available resources. GKE only considers nodes for removal if the resource utilization is less than this threshold.
  • gpuConsolidationThreshold: The utilization threshold for GPU as a percentage of the node's available resources. GKE only considers nodes for removal if the resource utilization is less than this threshold. Consider setting this to 100 or to 0 so that GKE consolidates any nodes that don't have 100% utilization of attached GPUs.

Consider the following example:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: my-class
spec:
  priorities:
  - machineFamily: n2
  - machineFamily: n2d
  autoscalingPolicy:
    consolidationDelayMinutes: 5
    consolidationThreshold: 70

In this configuration, GKE removes underutilized nodes after five minutes, and nodes only become candidates for consolidation if both their CPU utilization and memory utilization are less than 70%.

Configure active migration to higher priority nodes

Active migration is an optional autoscaling feature in custom compute classes that automatically replaces existing nodes that are lower in a compute class fallback priority list with new nodes that are higher in that priority list. This ensures that all your running Pods eventually run on your most preferred nodes for that compute class, even if GKE originally had to run those Pods on less preferred nodes.

When an active migration occurs, GKE creates new nodes based on the compute class priority rules, and then drains and deletes the obsolete lower priority nodes. The migration happens gradually to minimize workload disruption. Active migration has the following considerations:

  • If you've enabled node auto-provisioning on your Standard clusters, active migration might trigger the creation of new node pools if existing node pools don't meet the higher-priority criteria defined in your custom compute class.
  • To avoid critical workload disruptions, active migration doesn't move the following Pods:
    • Pods that are covered by a PodDisruptionBudget, if moving them would violate that PodDisruptionBudget.
    • Pods that have the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation.

Consider the following example compute class specification, which prioritizes N2 nodes over N2D nodes:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: my-class
spec:
  priorities:
  - machineFamily: n2
  - machineFamily: n2d
  activeMigration:
    optimizeRulePriority: true

If N2 nodes were unavailable when you deployed a Pod with this compute class, GKE would have used N2D nodes as a fallback option. If N2 nodes become available to provision later, like if your quota increases or if N2 VMs become available in your location, GKE creates a new N2 node and gradually migrates the Pod from the existing N2D node to the new N2 node. GKE then deletes the obsolete N2D node.

Consume Compute Engine reservations

Available in GKE version 1.31.1-gke.2105000 and later

If you use Compute Engine capacity reservations to get a higher level of assurance of hardware availability in specific Google Cloud zones, you can configure each fallback priority in your custom compute class so that GKE consumes reservations when creating new nodes.

Consuming reservations in custom compute classes has the following requirements:

  • Standard mode clusters must use node auto-provisioning for GKE to use reservations to create new nodes. For details, see the Node auto-provisioning and compute classes section. You can also continue to consume reservations when you manually create node pools in your cluster.
  • Compute classes that configure local SSDs must use the machineType priority rule, not machineFamily. For details, see the machineType rule type section.

Consider the following example compute class specification, which selects a specific shared reservation for use when provisioning n2 instances for nodes:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: shared-specific-reservations
spec:
  nodePoolAutoCreation:
    enabled: true
  priorities:
  - machineFamily: n2
    reservations:
      specific:
      - name: n2-shared-reservation
        project: reservation-project
      affinity: Specific
  - machineFamily: n2
    spot: true
  - machineFamily: n2
  whenUnsatisfiable: DoNotScaleUp

If you deploy a Pod that uses the shared-specific-reservations compute class, GKE attempts to consume the n2-shared-reservation reservation when creating new n2 instances to run the Pod. If the reservation doesn't have available capacity, GKE falls back to the next priority rules: first n2 Spot VMs, and then on-demand n2 instances. If none of the rules can be satisfied, the Pod remains Pending because whenUnsatisfiable is set to DoNotScaleUp.

You can consume the following types of reservations:

  • Specific single-project reservations: Configure the following fields:
    • reservations.specific.name: The reservation name.
    • reservations.affinity: Must be Specific.
  • Specific shared reservations: Configure the following fields:
    • reservations.specific.name: The reservation name.
    • reservations.specific.project: The project ID of the project that owns the reservation.
    • reservations.affinity: Must be Specific.
  • Any matching reservations: Configure the following fields, as shown in the sketch after this list:
    • reservations.affinity: Must be AnyBestEffort.
    • Don't set a reservation name or project.
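
For example, the following sketch shows a priority rule that consumes any matching reservation and, per the behavior described after this list, falls back to on-demand capacity if no reservation has room:

priorities:
- machineFamily: n2
  reservations:
    affinity: AnyBestEffort   # Consume any matching reservation if capacity is available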

If GKE can't find available capacity in a reservation, the resulting behavior depends on the type of reservation being selected in the compute class priority rule, as follows:

  • Specific reservations: GKE tries the next priority rule in the compute class.
  • Any matching reservations: GKE tries to provision an on-demand node that meets the requirements of that priority rule. If GKE can't provision an on-demand node, GKE tries the next priority rule in the compute class.

If GKE can't meet the requirements of any of the priority rules for the compute class, GKE follows the behavior described in Define scaling behavior when no priority rules apply.

Request compute classes in workloads

To use a custom compute class after you finish designing it, your Pod must explicitly request that compute class in the Pod specification. You can optionally set a compute class as the default for a specific Kubernetes namespace, in which case Pods in that namespace use that compute class unless they request a different one.
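
The following is a minimal sketch of a Pod that explicitly requests the my-class compute class from the earlier examples. The nodeSelector key matches the node label that GKE associates with compute classes; the Pod name and container image are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: compute-class-workload                  # Illustrative name
spec:
  nodeSelector:
    cloud.google.com/compute-class: my-class    # Request the custom compute class
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9            # Placeholder image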

For instructions to request and use compute classes in GKE, see Control autoscaled node attributes with custom compute classes.

What's next