# Configure local storage

This page shows you how to configure local volumes for Google Distributed Cloud
clusters.
Google Distributed Cloud clusters provide two options for configuring
[local PVs](https://kubernetes.io/docs/concepts/storage/volumes/#local)
in the cluster: LVP share and LVP node mounts. LVP share uses directories in a
shared file system, while LVP node mounts uses dedicated disks.

**Caution:** Using a local PV binds the Pod to a specific disk and node. If that
disk or node becomes unavailable, the Pod also becomes unavailable. Workloads
that use local PVs need to be resilient to this kind of failure, and might
require additional orchestration to release the Pod's PVCs and find a new,
empty disk on another node.
LVP share
---------
This storage class creates a local PV backed by subdirectories in a local,
shared file system on every node in the cluster. These subdirectories are
automatically created during cluster creation. Workloads using this storage
class will share capacity and IOPS because the PVs are backed by the same shared
file system. For better isolation, we recommend configuring disks through LVP
node mounts instead.
### Configure an LVP share
1. **Optional**: Before cluster creation, mount a disk using the configured path
   as a mount point so that the created PVs will share the new disk capacity and
   be isolated from the boot disk.
2. Specify the following under `lvpShare` in the cluster CR:

    - `path`: The host machine path on each node where subdirectories are
      created. A local PV is created for each subdirectory. The default path
      is `/mnt/localpv-share`.
    - `storageClassName`: The storage class that PVs are created with during
      cluster creation. The default value is `local-shared`.
    - `numPVUnderSharedPath`: The number of subdirectories to create under
      `path`. The default value is `5`.
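    The configuration looks similar to the following (the cluster name and
    namespace shown are example values):

        apiVersion: baremetal.cluster.gke.io/v1
        kind: Cluster
        metadata:
          name: cluster1
          namespace: cluster-cluster1
        spec:
          storage:
            lvpShare:
              path: /mnt/localpv-share
              storageClassName: local-shared
              numPVUnderSharedPath: 5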
PVs are created with the storage class specified in `storageClassName`. The
total number of local PVs created in the cluster is `numPVUnderSharedPath`
multiplied by the number of nodes.
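Workloads consume these PVs through a standard PersistentVolumeClaim that names
the storage class. The claim below is a minimal sketch, not taken from this
page; the claim name and requested size are placeholders, and the storage class
assumes the default `local-shared` value:

    # Hypothetical claim; adjust the name, size, and storage class for your cluster.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: local-shared-claim
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-shared
      resources:
        requests:
          storage: 5Gi

Local PV storage classes typically use `WaitForFirstConsumer` volume binding,
so the claim usually binds only when a Pod that references it is scheduled to a
node.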
LVP node mounts
---------------
This storage class creates a local PV for each mounted disk in the configured
directory. Each PV maps to a disk with capacity equal to the underlying disk
capacity. The total number of local PVs created in the cluster is the number of
disks mounted under the path across all nodes. Additional mounts can be added
after cluster creation.
### Configure LVP node mounts
1. On nodes that have extra disks for PVs, format and mount each disk under
   `path`. This can be done either before or after cluster creation. See
   [best practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md).

    1. List the disks and find the one that you want to mount:

        sudo lsblk
    2. Format the disk, for example with a single ext4 file system:
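        # DEVICE_ID is a placeholder for the device name from the lsblk output,
        # for example sdb. The ext4 options shown are one reasonable choice.
        sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/DEVICE_ID

    3. Under the configured path, create a directory as the mount point for the
       new disk (MNT_DIR is a placeholder for the directory name):

        sudo mkdir -p /mnt/localpv-disk/MNT_DIR

    4. Mount the disk and make it writable:

        sudo mount -o discard,defaults /dev/DEVICE_ID /mnt/localpv-disk/MNT_DIR &&
        sudo chmod a+w /mnt/localpv-disk/MNT_DIR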
    5. Add the disk to the `/etc/fstab` file, so that the device automatically
       mounts again when the machine restarts:

        # Back up your current /etc/fstab file
        sudo cp /etc/fstab /etc/fstab.backup

        # Use the blkid command to find the UUID of the disk
        sudo blkid /dev/DEVICE_ID

        # Edit the /etc/fstab file: create an entry that includes the UUID
        UUID=UUID_VALUE /mnt/localpv-disk/MNT_DIR ext4 discard,defaults,NOFAIL_OPTION 0 2
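       Optionally, you can check that the new entry is valid by asking the
       system to mount everything listed in `/etc/fstab`; this verification is
       an extra step, not part of the required configuration:

        # mount -a attempts to mount all file systems listed in /etc/fstab
        sudo mount -a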
2. Specify the following under `lvpNodeMounts` in the cluster CR:

    - `path`: The host machine path for each mount where mounted disks are
      discovered and a local PV is created. The default path is
      `/mnt/localpv-disk`.
    - `storageClassName`: The storage class that PVs are created with during
      cluster creation. The default value is `local-disks`.
    The configuration looks similar to the following:
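        # Example cluster CR; the cluster name and namespace are example values.
        apiVersion: baremetal.cluster.gke.io/v1
        kind: Cluster
        metadata:
          name: cluster1
          namespace: cluster-cluster1
        spec:
          storage:
            lvpNodeMounts:
              path: /mnt/localpv-disk
              storageClassName: local-disks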
    PVs are created with the storage class specified in `storageClassName`. The
    total number of PVs created is the number of disks mounted under `path`
    across all nodes.
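After the cluster is running, one way to confirm that the expected local PVs
exist is to list them and check the storage class column; this check is not
part of the configuration steps above:

    # Lists all PVs with their capacity, status, and storage class
    kubectl get pv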
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-25 UTC."],[],[],null,["# Configure local storage\n\nThis page shows you how to configure local volumes for Google Distributed Cloud\nclusters.\n\nGoogle Distributed Cloud clusters provide two options for configuring\n[local PVs](https://kubernetes.io/docs/concepts/storage/volumes/#local))\nin the cluster: LVP share and LVP node mounts. LVP share uses directories in a\nshared file system, while LVP node mounts uses dedicated disks.\n| **Caution:** Using a local PV binds the Pod to a specific disk and node. If that disk or node becomes unavailable, then the Pod also becomes unavailable. Workloads using local PVs need to be resilient to this kind of failure, and may require additional orchestration to release the Pod's PVCs and find a new, empty disk on another node.\n\nLVP share\n---------\n\nThis storage class creates a local PV backed by subdirectories in a local,\nshared file system on every node in the cluster. These subdirectories are\nautomatically created during cluster creation. Workloads using this storage\nclass will share capacity and IOPS because the PVs are backed by the same shared\nfile system. For better isolation, we recommend configuring disks through LVP\nnode mounts instead.\n\n### Configure an LVP share\n\n1. **Optional**: Before cluster creation, mount a disk using the configured path\n as a mount point so that the created PVs will share the new disk capacity and\n be isolated from the boot disk.\n\n2. Specify the following under `lvpShare` in the cluster CR:\n\n - `path`: The host machine path on each host where subdirectories are created. A local PV is created for each subdirectory. The default path is `/mnt/localpv-share`.\n - `storageClassName`: The storage class that PVs are created with during cluster creation. The default value is `local-shared`.\n - `numPVUnderSharedPath`: Number of subdirectories to create under `path`. The default value is `5`.\n\n The configuration looks similar to the following: \n\n apiVersion: baremetal.cluster.gke.io/v1\n kind: Cluster\n metadata:\n name: cluster1\n namespace: cluster-cluster1\n spec:\n storage:\n lvpShare:\n path: /mnt/localpv-share\n storageClassName: local-shared\n numPVUnderSharedPath: 5\n\nPVs are created with the storage class specified in `storageClassName`. The\ntotal number of local PVs created in the cluster is `numPVUnderSharedPath`\nmultiplied by the number of nodes.\n\nLVP node mounts\n---------------\n\nThis storage class creates a local PV for each mounted disk in the configured\ndirectory. Each PV maps to a disk with capacity equal to the underlying disk\ncapacity. The total number of local PVs created in the cluster is the number of\ndisks mounted under the path across all nodes. Additional mounts can be added\nafter cluster creation.\n\n### Configure LVP node mounts\n\n1. On nodes that have extra disks for PVs, format and mount each disk under\n path. This can also be done before or after cluster creation. See\n [best practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md).\n\n 1. 
List disks and find the one you want to mount:\n\n sudo lsblk\n\n 2. Format the disk, for example with single ext4 file system:\n\n sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/DEVICE_ID\n\n 3. Under the configured path, create a directory as the mount point for the\n new disk:\n\n sudo mkdir -p /mnt/localpv-disk/MNT_DIR\n\n 4. Mount the disk:\n\n sudo mount -o discard,defaults /dev/DEVICE_ID /mnt/localpv-disk/MNT_DIR &&\n sudo chmod a+w /mnt/localpv-disk/MNT_DIR\n\n 5. Add the disk to the `/etc/fstab` file, so that the device automatically\n mounts again when the instance restarts:\n\n # Backup of your current /etc/fstab file\n sudo cp /etc/fstab /etc/fstab.backup\n\n # Use the blkid command to find the UUID for the zonal persistent disk\n sudo blkid /dev/DEVICE_ID\n\n # Edit /etc/fstab file: create an entry that includes the UUID\n UUID=UUID_VALUE /mnt/localpv-disk/MNT_DIR ext4 discard,defaults,NOFAIL_OPTION 0 2\n\n2. Specify the following under `lvpNodeMounts` in cluster CR:\n\n - `path`: The host machine path for each mount where mounted disks are discovered and a local PV is created. The default path is `/mnt/localpv-disk`.\n - `storageClassName`: The storage class that PVs are created with during cluster creation. The default value is `local-disks`.\n\n The configuration looks something similar to the following: \n\n apiVersion: baremetal.cluster.gke.io/v1\n kind: Cluster\n metadata:\n name: cluster1\n namespace: cluster-cluster1\n spec:\n storage:\n lvpNodeMounts:\n path: /mnt/localpv-disk\n storageClassName: local-disks\n\n PVs are created with the storage class specified in `storageClassName`. The\n total number of PVs created is the number of disks mounted under `path`\n across all nodes.\n\nWhat's next\n-----------\n\n- Learn how to [configure the default storage class](/kubernetes-engine/distributed-cloud/bare-metal/docs/installing/default-storage-class)."]]