Google Kubernetes Engine (GKE) node memory swap lets GKE nodes use disk space as virtual memory when physical memory is under pressure. Node memory swap can help improve application resilience and prevent out-of-memory (OOM) errors for certain workloads.
When to use node memory swap
Use node memory swap to provide a buffer against OOM errors for memory-intensive applications, especially during unexpected usage spikes. Node memory swap can help improve your workload resilience when you need to do the following:
- Run workloads that have unpredictable memory usage patterns.
- Reduce the risk of applications crashing because of node memory exhaustion.
- Optimize costs by avoiding the need to overprovision memory for occasional peaks.
How node memory swap works
When you enable node memory swap, GKE configures the node's operating system to use disk space as virtual memory. This process provides a buffer for applications that experience temporary memory pressure.
GKE calculates container swap limits based on the container's memory resource limits and the node's total memory.
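For a purely illustrative sense of what a proportional calculation can look like, the following sketch assumes a scheme similar to the upstream Kubernetes LimitedSwap behavior, which scales a container's available swap by its share of node memory. It is not the exact GKE formula.

# Illustrative only; not the exact GKE formula.
# container swap limit ≈ (container memory / node memory) * node swap space
# Example: a container sized at 4 GiB on a 16 GiB node with 8 GiB of swap:
#   (4 GiB / 16 GiB) * 8 GiB = 2 GiB of swap available to that container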
You can configure swap on different storage types to balance performance and cost:
- Boot disk: uses the node's boot disk for swap space.
- Ephemeral local SSD: uses a local SSD that is also shared with Pod ephemeral storage.
- Dedicated local SSD: reserves one or more local SSDs exclusively for swap.
To protect sensitive data, GKE encrypts swap space by default using an ephemeral key.
Requirements and limitations
Node memory swap has the following requirements and limitations:
- GKE clusters must be version 1.34.1-gke.1341000 or later.
- Only Pods that have the Burstable Quality of Service (QoS) class can use node memory swap. For more information about QoS classes, see the Kubernetes documentation for Pod quality of service classes.
- If you enable node memory swap, the container resize policy must be set to RestartContainer. For an example Pod manifest that meets the QoS and resize policy requirements, see the sketch after this list.
- If you configure node memory swap to use a boot disk, the swap size cannot exceed 50% of the boot disk's total capacity.
- If you configure node memory swap to use a local SSD, the machine type must support local SSDs. You can't use raw block storage with a local SSD.
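The following manifest is a minimal sketch of a Pod that satisfies both the QoS and resize policy requirements. The Pod name, image, and resource values are illustrative and not prescribed by GKE; the requests are lower than the limits, which places the Pod in the Burstable QoS class.

apiVersion: v1
kind: Pod
metadata:
  name: swap-eligible-app  # illustrative name
spec:
  containers:
  - name: app
    image: nginx  # illustrative image
    resizePolicy:
    - resourceName: memory
      restartPolicy: RestartContainer  # required when node memory swap is enabled
    resources:
      requests:
        cpu: 250m
        memory: 512Mi  # requests lower than limits: Burstable QoS
      limits:
        cpu: 500m
        memory: 1Gi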
Best practices
Node memory swap is intended as a safety net for unpredictable memory spikes, not a replacement for sufficient physical memory. For guidance on optimizing workloads, see Right size workloads at scale.
You should also consider the following best practices when using node memory swap:
- Isolate nodes that have node memory swap enabled by applying a taint to the node pool, for example, gke-swap=enabled:NoSchedule, and adding a corresponding toleration to the workloads that are intended to use swap. For a sketch of this pattern, see the example after this list.
- Size your node memory swap space appropriately. Insufficient node memory swap space might not prevent OOM errors, and excessive swap use can degrade performance.
- Monitor node memory swap use on your workloads. Frequent use of node memory swap can be an indicator of memory pressure.
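For example, the following sketch shows the isolation pattern from the first item. The taint key and value match the gke-swap=enabled:NoSchedule example, NODEPOOL_NAME, CLUSTER_NAME, and LOCATION are placeholders, and depending on your gcloud CLI version you can also set the taint with the --node-taints flag when you create the node pool:

gcloud beta container node-pools update NODEPOOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=LOCATION \
    --node-taints=gke-swap=enabled:NoSchedule

Then add a matching toleration to the Pod specification of the workloads that should use swap:

tolerations:
- key: gke-swap
  operator: Equal
  value: enabled
  effect: NoSchedule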
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document.
Enable node memory swap
You can enable node memory swap on a cluster or node pool basis. To enable
node memory swap, create or update a system-config.yaml file that contains
the node memory swap configuration that you want. The following example
enables node memory swap with default settings:
linuxNodeConfig:
  swapConfig:
    enabled: true
You can also configure additional settings. The following example disables swap encryption and configures swap to use 30% of an ephemeral local SSD's capacity:
linuxNodeConfig:
  swapConfig:
    enabled: true
    encryptionConfig:
      disabled: true
    ephemeralLocalSsdProfile:
      swapSizePercent: 30
ephemeralStorageLocalSsdConfig:
  localSsdCount: 1
Enable node memory swap on a cluster
To enable node memory swap on a cluster, complete one of the following steps:
To create a new cluster with node memory swap enabled, run the following command:
gcloud beta container clusters create CLUSTER_NAME \
    --location=LOCATION \
    --cluster-version=1.34.1-gke.1341000 \
    --system-config-from-file=system-config.yaml

Replace the following:

- CLUSTER_NAME: the name of your new cluster.
- LOCATION: the region or zone for your cluster.
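For example, assuming a cluster named example-cluster in the us-central1 region and a system-config.yaml file in the current directory, the command looks similar to the following:

gcloud beta container clusters create example-cluster \
    --location=us-central1 \
    --cluster-version=1.34.1-gke.1341000 \
    --system-config-from-file=system-config.yaml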
To update an existing cluster to enable node memory swap, run the following command:
gcloud beta container clusters update CLUSTER_NAME \
    --location=LOCATION \
    --system-config-from-file=system-config.yaml

Replace the following:

- CLUSTER_NAME: the name of your cluster.
- LOCATION: the region or zone for your cluster.
Enable node memory swap on a node pool
To enable node memory swap on a node pool, complete one of the following steps:
To create a new node pool with node memory swap enabled, run the following command:
gcloud beta container node-pools create NODEPOOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=LOCATION \
    --node-version=1.34.1-gke.1341000 \
    --system-config-from-file=system-config.yaml

Replace the following:

- NODEPOOL_NAME: the name of your new node pool.
- CLUSTER_NAME: the name of your cluster.
- LOCATION: the region or zone for your cluster.
To update an existing node pool to enable node memory swap, run the following command:
gcloud beta container node-pools update NODEPOOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=LOCATION \
    --system-config-from-file=system-config.yaml

Replace the following:

- NODEPOOL_NAME: the name of your node pool.
- CLUSTER_NAME: the name of your cluster.
- LOCATION: the region or zone for your cluster.
Verify the configuration
To verify that node memory swap is enabled, complete the following steps:
1. Verify that the system-config.yaml file is applied with the swapConfig settings by running the following command:

   gcloud beta container node-pools describe NODEPOOL_NAME \
       --cluster=CLUSTER_NAME \
       --location=LOCATION \
       --format='yaml(config.linuxNodeConfig.swapConfig)'

2. Verify that the kubelet configuration exists on the node by running the following command. Replace NODE_NAME with the name of a node in the node pool:

   kubectl get --raw "/api/v1/nodes/NODE_NAME/proxy/configz" | jq .kubeletconfig.memorySwap
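If node memory swap is enabled, the output includes the kubelet's swap behavior setting. The following is an illustrative example; the exact value depends on the configured swap behavior:

{
  "swapBehavior": "LimitedSwap"
}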
Monitor memory swap use
You can monitor node memory swap usage with Cloud Monitoring or kubectl.
Monitoring
The following system metrics are available by default for observing swap usage:
- kubernetes.io/node/memory/swap_used_bytes
- kubernetes.io/container/memory/swap_used_bytes
GKE also provides a container-level swap usage metric through cAdvisor. To use this metric, enable cAdvisor metrics collection on the cluster:
prometheus.googleapis.com/container_memory_swap/gauge
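As a minimal sketch, assuming that your cluster doesn't already collect the cAdvisor and Kubelet metric packages, you can enable them with the --monitoring flag. The flag replaces the full set of enabled packages, so include every package that you want to keep, such as SYSTEM:

gcloud container clusters update CLUSTER_NAME \
    --location=LOCATION \
    --monitoring=SYSTEM,CADVISOR,KUBELET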
kubectl
Monitor swap usage with kubectl commands by completing the following steps:
1. Check the SwapDetected condition on the node object by running the following command:

   kubectl get node NODE_NAME -o jsonpath='{.status.conditions[?(@.type=="Swap")]}' | jq .

   The output is similar to the following:

   {
     "lastHeartbeatTime": "2025-07-11T00:14:52Z",
     "lastTransitionTime": "2025-06-25T05:20:10Z",
     "message": "Swap is active: Total=49Gi Used=0B Free=49Gi",
     "reason": "SwapDetected",
     "status": "True",
     "type": "Swap"
   }

2. Check the swap capacity by running the following command:

   kubectl get node NODE_NAME -o jsonpath='{.status.nodeInfo.swap}'

   The output is similar to the following:

   {"capacity":53687087104}
Disable node memory swap
To disable node memory swap, update your system-config.yaml file by completing the following steps:
1. Update the system-config.yaml file to set swapConfig.enabled to false:

   linuxNodeConfig:
     swapConfig:
       enabled: false

2. Update the node pool with the new configuration:

   gcloud beta container node-pools update NODEPOOL_NAME \
       --cluster=CLUSTER_NAME \
       --location=LOCATION \
       --system-config-from-file=system-config.yaml
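To confirm that the change was applied, you can re-run the describe command from the verification section and check that swapConfig no longer shows enabled: true:

gcloud beta container node-pools describe NODEPOOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=LOCATION \
    --format='yaml(config.linuxNodeConfig.swapConfig)'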
What's next
- Learn more about customizing node system configuration.
- Learn more about GKE storage options.