This page explains how to enable dynamic provisioning of regional persistent disks and how to provision them manually in Google Kubernetes Engine (GKE).
To create end-to-end solutions for high-availability applications with regional persistent disks, see Increase stateful app availability with Stateful HA Operator.
Regional persistent disks
As with zonal persistent disks, regional persistent disks can be dynamically
provisioned as needed or manually provisioned in advance by the cluster
administrator, although dynamic provisioning is recommended.
To use regional persistent disks of the pd-standard type, set the PersistentVolumeClaim's spec.resources.requests.storage attribute to a minimum of 200 GiB. If your use case requires a smaller volume, use pd-balanced or pd-ssd instead.
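For example, here is a minimal sketch of a PersistentVolumeClaim that satisfies this minimum; the claim name standard-pvc and the StorageClass name pd-standard-storageclass are assumptions for illustration, not names from this page:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standard-pvc   # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi   # pd-standard regional disks require at least 200 GiB
  storageClassName: pd-standard-storageclass   # hypothetical StorageClass with type: pd-standard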
Dynamic provisioning
To enable dynamic provisioning of regional persistent disks, create a StorageClass with the replication-type parameter, and specify zone constraints in allowedTopologies.
For example, the following manifest describes a StorageClass named regionalpd-storageclass that uses balanced persistent disks and that replicates data to the europe-west1-b and europe-west1-c zones:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - europe-west1-b
    - europe-west1-c
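To create the StorageClass, apply the manifest with kubectl. The file name regionalpd-storageclass.yaml is an assumption for illustration; use whatever file you saved the manifest to:
kubectl apply -f regionalpd-storageclass.yaml
# Confirm that the StorageClass exists:
kubectl get storageclass regionalpd-storageclass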
If you are using a regional cluster, you can leave allowedTopologies unspecified. If you do this, when you create a Pod that consumes a PersistentVolumeClaim which uses this StorageClass, a regional persistent disk is provisioned in two zones. One zone is the same as the zone that the Pod is scheduled in. The other zone is randomly picked from the zones available to the cluster.
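As a minimal sketch of that variant, assuming the same StorageClass name and disk type as the example above, the manifest for a regional cluster could simply omit allowedTopologies:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
# No allowedTopologies: on a regional cluster, GKE picks the second zone
# from the zones available to the cluster.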
When using a zonal cluster, allowedTopologies
must be set.
After the StorageClass is created, create a PersistentVolumeClaim object, using the storageClassName field to refer to the StorageClass. For example, the following manifest creates a PersistentVolumeClaim named regional-pvc that references the regionalpd-storageclass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: regional-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: regionalpd-storageclass
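You can apply the claim with kubectl; the file name regional-pvc.yaml is an assumption for illustration:
kubectl apply -f regional-pvc.yaml
# The claim reports Pending at this point because of WaitForFirstConsumer:
kubectl get persistentvolumeclaim regional-pvc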
Since the StorageClass is configured with volumeBindingMode: WaitForFirstConsumer, the PersistentVolume is not provisioned until a Pod using the PersistentVolumeClaim has been created.
The following manifest is an example Pod using the previously created PersistentVolumeClaim:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: regional-pvc
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
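After you create the Pod, you can check that the claim binds and that a regional disk was provisioned; the file name task-pv-pod.yaml is an assumption for illustration:
kubectl apply -f task-pv-pod.yaml
# Once the Pod is scheduled, the claim's STATUS changes to Bound:
kubectl get persistentvolumeclaim regional-pvc
# List disks in the region to see the newly provisioned regional disk:
gcloud compute disks list --filter="region:europe-west1"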
Manual provisioning
First, create a regional persistent disk using the gcloud compute disks create command. The following example creates a disk named gce-disk-1 replicated to the europe-west1-b and europe-west1-c zones:
gcloud compute disks create gce-disk-1 \
    --size 500GB \
    --region europe-west1 \
    --replica-zones europe-west1-b,europe-west1-c
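To verify the disk and its replica zones, you can describe it; the --format flag shown here is optional and only narrows the output:
gcloud compute disks describe gce-disk-1 \
    --region europe-west1 \
    --format="value(replicaZones)"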
You can then create a PersistentVolume that references the regional persistent disk you just created. In addition to the objects described in Using preexisting Persistent Disks as PersistentVolumes, the PersistentVolume for a regional persistent disk should also specify node affinity.
If you use a StorageClass, it should specify the persistent disk CSI driver. Here's an example of a StorageClass manifest that uses balanced persistent disks and that replicates data to the europe-west1-b and europe-west1-c zones:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - europe-west1-b
    - europe-west1-c
Here's an example manifest that creates a PersistentVolume named pv-demo and references the regionalpd-storageclass:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: "regionalpd-storageclass"
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  claimRef:
    namespace: default
    name: pv-claim-demo
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: projects/PROJECT_ID/regions/europe-west1/disks/gce-disk-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.gke.io/zone
          operator: In
          values:
          - europe-west1-b
          - europe-west1-c
Note the following for the PersistentVolume example:
- The volumeHandle field contains details from the gcloud compute disks create call, including your PROJECT_ID.
- The claimRef.namespace field must be specified even when it is set to default.
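The PersistentVolumeClaim named in claimRef is not shown on this page. As a minimal sketch, assuming the 500Gi size and default namespace used above, a matching claim might look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-demo
  namespace: default
spec:
  storageClassName: "regionalpd-storageclass"
  volumeName: pv-demo   # bind directly to the pre-provisioned PersistentVolume
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi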
Naming persistent disks
Kubernetes cannot distinguish between zonal and regional persistent disks with the same name. As a workaround, ensure that persistent disks have unique names. This issue does not occur when using dynamically provisioned persistent disks.
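For example, before creating a disk you can check whether a zonal or regional disk with the same name already exists; the disk name gce-disk-1 is reused from the example above:
gcloud compute disks list --filter="name=gce-disk-1"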
What's next
- Follow the tutorial Deploying WordPress on GKE with Persistent Disks and Cloud SQL.