Use dedicated Persistent Disks as ephemeral volumes

This page shows you how to use external storage hardware, such as Compute Engine Persistent Disks, as ephemeral volumes in your Google Kubernetes Engine (GKE) workloads. You should already be familiar with Kubernetes Volumes and StorageClasses.

When to use ephemeral storage in Kubernetes

Ephemeral storage is useful in any situation where your workloads only need the data during the lifecycle of the application, such as for data-processing pipelines, machine learning jobs, batch processing, local caching, or analytics. By default, part of the GKE node boot disk is available to use as ephemeral storage in your Pods. This approach often requires careful space planning.

Kubernetes generic ephemeral volumes let you explicitly request ephemeral storage for your Pods by using PersistentVolumeClaims. GKE dynamically provisions Compute Engine Persistent Disks and attaches the disks to your nodes. This type of ephemeral storage is useful in situations like the following:

  • Your workloads have high performance requirements, so you need to control the storage hardware.
  • You need short-term, container-specific ephemeral storage.
  • You want to avoid using emptyDir to provision ephemeral storage. emptyDir volumes are still useful in situations where you want multiple containers to share the data in the ephemeral storage.
  • You want more ephemeral storage capacity than the GKE built-in defaults.
  • You want to avoid having to plan your node boot disk size and type in advance for Standard mode GKE clusters.

Ephemeral storage types in GKE

In general, you can use boot disk storage capacity or dedicated Persistent Disks as ephemeral storage in your Pods and containers. The following sections describe the differences:

Boot disk Persistent Disks

How to use: Mount a volume using emptyDir in the Pod specification and request the capacity that you need. For instructions, see Creating volumes.

The requested ephemeral storage is taken from a reserved portion of the node boot disk. This is the default in both Autopilot and Standard clusters. Use when Pods have small ephemeral storage requests or when you want to share the ephemeral data between multiple containers in the Pod.

  • In Autopilot clusters, the request must be between 10 MiB and 10 GiB, and the storage hardware type is pre-configured.
  • In Standard clusters, there is no size limit, but this approach requires careful planning of node boot disk size and storage hardware type.

For details about how GKE calculates the ephemeral storage reservation in the node boot disk, see Local ephemeral storage reservation.

Local SSD disks

How to use:

  1. Create a node pool with attached Local SSD disks and a compatible machine series.
  2. Mount a volume using emptyDir with the required capacity.
  3. Use a nodeSelector to place Pods on nodes with attached Local SSD disks.

For instructions, see Provision ephemeral storage with local SSDs.

Local SSD disks are attached in fixed 375 GB increments. They are supported in Standard mode GKE clusters and in Autopilot nodes that run A100 (80 GB) GPUs. Use when you need ephemeral storage that has high throughput. For details, see About local SSDs for GKE.

Dedicated Persistent Disks

How to use:

  1. Optionally, create a Kubernetes StorageClass for the hardware.
  2. Mount a volume using the ephemeral volume type in the Pod specification.

This document provides instructions to request this ephemeral storage type.

Google Cloud dynamically provisions the requested external hardware, attaches it to your nodes, and mounts the requested volume into your Pod. Use when Pods have large ephemeral storage requests or when you want to control the underlying Persistent Disk type. These volumes have the following properties:

  • Up to 64 TiB of capacity in both Autopilot and Standard mode.
  • Support for specialized hardware, such as SSD-backed volumes.
  • Network-attached storage.
  • Uses Kubernetes Volumes to get storage, instead of using emptyDir to share the node boot disk.

For details about this ephemeral volume type, see Generic ephemeral volumes.
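For comparison with the dedicated Persistent Disk approach that the rest of this document covers, the boot disk approach can be sketched as a Pod that mounts an emptyDir volume and requests ephemeral-storage capacity. The Pod, container, and volume names here are illustrative, not part of any GKE default:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod          # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          ephemeral-storage: 1Gi   # drawn from the node boot disk reservation
        limits:
          ephemeral-storage: 2Gi
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 2Gi       # Pod is evicted if usage exceeds this limit
```

Because emptyDir volumes share the node boot disk, all containers in the Pod can read and write the same /scratch data, but capacity and hardware type are tied to the boot disk rather than to a dedicated device.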


Storage that you provision through generic ephemeral volumes as described in this guide is billed based on Compute Engine disk pricing.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
  • Ensure that you have a GKE Autopilot or Standard cluster running version 1.23 or later.
  • Ensure that you have enough quota in your Google Cloud project for the storage hardware. To manage your quota, see View the quotas for your project.

Create a StorageClass

Creating a custom Kubernetes StorageClass lets you specify the type of storage to provision based on your price and performance requirements. This step is optional but recommended. If you want to use the GKE default StorageClass, which has the pd-balanced Persistent Disk type, skip this step.

  1. Save the following manifest as ephemeral-pd-class.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ephemeral-ssd
    provisioner: pd.csi.storage.gke.io
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    parameters:
      type: STORAGE_TYPE

    Replace STORAGE_TYPE with the name of the Persistent Disk type that you want, like pd-ssd. For a list of supported types, see Persistent Disk types in the Compute Engine documentation.

  2. Create the StorageClass:

    kubectl create -f ephemeral-pd-class.yaml

Request ephemeral storage capacity in a Pod

To provision, attach, and use external hardware as ephemeral storage, add the corresponding volume to your Pod manifest and add a volume mount to the container specification.

  1. Save the following manifest as ephemeral-ssd-deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ephemeral-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          storage: ephemeral
      template:
        metadata:
          labels:
            storage: ephemeral
        spec:
          containers:
          - name: ephemeral-container
            image: nginx
            resources:
              requests:
                cpu: 500m
                memory: 2Gi
                ephemeral-storage: 2Gi
            volumeMounts:
            - mountPath: "/short-term"
              name: ephemeral-volume
          volumes:
          - name: ephemeral-volume
            ephemeral:
              volumeClaimTemplate:
                metadata:
                  labels:
                    type: ephemeral
                spec:
                  accessModes: ["ReadWriteOnce"]
                  storageClassName: "ephemeral-ssd"
                  resources:
                    requests:
                      storage: 1Ti

    This manifest creates, for each Pod, a Kubernetes PersistentVolumeClaim that requests a new PersistentVolume backing the ephemeral-volume volume, with the following properties:

    • spec.volumes.ephemeral: The ephemeral volume type.
    • .spec.accessModes: The volume access mode, which determines read-write access from Pods and volume sharing between nodes. This example uses ReadWriteOnce, which mounts the PersistentVolume to a single node for access by one or more Pods on that node. For details, see Access modes.
    • .spec.storageClassName: Optionally, the name of the StorageClass that you created. If you omit this field, GKE uses the default StorageClass and provisions a pd-balanced Persistent Disk.
    • .spec.resources.requests.storage: The storage capacity that you want.
  2. Create the Deployment:

    kubectl create -f ephemeral-ssd-deployment.yaml

GKE provisions a Compute Engine disk that meets the requirements of the PersistentVolumeClaim and attaches the disk to the node. GKE mounts the volume into the Pod and provides the requested capacity to the container.

Verify that GKE mounted an ephemeral volume

  1. Create a shell session in the Pod:

    kubectl exec -it deploy/ephemeral-deployment -- bash
  2. Check the mounted volumes:

    df -h

    The output is similar to the following:

    Filesystem                Size      Used Available Use% Mounted on
    /dev/sdb               1006.9G     28.0K   1006.8G   0% /short-term
    /dev/sda1                94.3G      3.6G     90.6G   4% /etc/hosts
    /dev/sda1                94.3G      3.6G     90.6G   4% /dev/termination-log
    /dev/sda1                94.3G      3.6G     90.6G   4% /etc/hostname
    /dev/sda1                94.3G      3.6G     90.6G   4% /etc/resolv.conf
  3. Exit the shell session:

    exit

What's next