This guide describes how you can create a new Kubernetes volume backed by the Managed Lustre CSI driver in GKE with dynamic provisioning. The Managed Lustre CSI driver lets you create storage backed by Managed Lustre instances on-demand, and access them as volumes for your stateful workloads.
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Cloud Managed Lustre API and the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
- For limitations and requirements, see the CSI driver overview.
- Make sure to enable the Managed Lustre CSI driver. It is disabled by default in Standard and Autopilot clusters.
Set up environment variables
Set up the following environment variables:
export CLUSTER_NAME=CLUSTER_NAME
export PROJECT_ID=PROJECT_ID
export NETWORK_NAME=LUSTRE_NETWORK
export IP_RANGE_NAME=LUSTRE_IP_RANGE
export FIREWALL_RULE_NAME=LUSTRE_FIREWALL_RULE
export LOCATION=ZONE
Replace the following:
- CLUSTER_NAME: the name of the cluster.
- PROJECT_ID: your Google Cloud project ID.
- LUSTRE_NETWORK: the shared Virtual Private Cloud (VPC) network where both the GKE cluster and Managed Lustre instance reside.
- LUSTRE_IP_RANGE: the name for the IP address range created for VPC Network Peering with Managed Lustre.
- LUSTRE_FIREWALL_RULE: the name for the firewall rule to allow TCP traffic from the IP address range.
- ZONE: the geographical zone of your GKE cluster; for example, us-central1-a.
Set up a VPC network
You must specify the same VPC network when creating the Managed Lustre instance and your GKE clusters.
To enable service networking, run the following command:
gcloud services enable servicenetworking.googleapis.com \
    --project=${PROJECT_ID}
Create a VPC network. Setting the --mtu flag to 8896 results in a 10% performance gain.
gcloud compute networks create ${NETWORK_NAME} \
    --subnet-mode=auto --project=${PROJECT_ID} \
    --mtu=8896
Create an IP address range.
gcloud compute addresses create ${IP_RANGE_NAME} \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=20 \
    --description="Managed Lustre VPC Peering" \
    --network=${NETWORK_NAME} \
    --project=${PROJECT_ID}
Get the CIDR range associated with the range you created in the preceding step.
CIDR_RANGE=$(
  gcloud compute addresses describe ${IP_RANGE_NAME} \
    --global \
    --format="value[separator=/](address, prefixLength)" \
    --project=${PROJECT_ID}
)
Create a firewall rule to allow TCP traffic from the IP address range you created.
gcloud compute firewall-rules create ${FIREWALL_RULE_NAME} \
    --allow=tcp:988,tcp:6988 \
    --network=${NETWORK_NAME} \
    --source-ranges=${CIDR_RANGE} \
    --project=${PROJECT_ID}
To set up network peering for your project, verify that you have the necessary IAM permissions, specifically the compute.networkAdmin or servicenetworking.networksAdmin role.
- Go to Google Cloud console > IAM & Admin, then search for your project owner principal.
- Click the pencil icon, then click + ADD ANOTHER ROLE.
- Select Compute Network Admin or Service Networking Admin.
- Click Save.
Connect the peering.
gcloud services vpc-peerings connect \
    --network=${NETWORK_NAME} \
    --project=${PROJECT_ID} \
    --ranges=${IP_RANGE_NAME} \
    --service=servicenetworking.googleapis.com
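Optionally, you can confirm that the peering connection was established by listing the VPC peerings on the network. This check assumes the default servicenetworking.googleapis.com service:
# List service networking peerings; the reserved range should be attached.
gcloud services vpc-peerings list \
    --network=${NETWORK_NAME} \
    --project=${PROJECT_ID}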
Configure the Managed Lustre CSI driver
This section covers how you can enable and disable the Managed Lustre CSI driver, if needed.
Enable the Managed Lustre CSI driver on a new GKE cluster
To enable the Managed Lustre CSI driver when creating a new GKE cluster, follow these steps:
Autopilot
gcloud container clusters create-auto "${CLUSTER_NAME}" \
--location=${LOCATION} \
--network="${NETWORK_NAME}" \
--cluster-version=1.33.2-gke.1111000 \
--enable-lustre-csi-driver \
--enable-legacy-lustre-port
Standard
gcloud container clusters create "${CLUSTER_NAME}" \
--location=${LOCATION} \
--network="${NETWORK_NAME}" \
--cluster-version=1.33.2-gke.1111000 \
--addons=LustreCsiDriver \
--enable-legacy-lustre-port
When the --enable-legacy-lustre-port flag is specified, the CSI driver configures LNet (the virtual network layer for the Managed Lustre kernel module) to use port 6988. This flag is required to work around a port conflict with the gke-metadata-server on GKE nodes.
Enable the Managed Lustre CSI driver on an existing GKE cluster
To enable the Managed Lustre CSI driver on an existing GKE cluster, use the following command:
gcloud container clusters update ${CLUSTER_NAME} \
--location=${LOCATION} \
--enable-legacy-lustre-port
Enabling the Managed Lustre CSI driver can trigger node recreation to update the necessary kernel modules for the Managed Lustre client. For immediate availability, we recommend manually upgrading your node pools.
GKE clusters on a release channel upgrade according to their scheduled rollout, which can take several weeks depending on your maintenance window. If you're on a static GKE version, you need to manually upgrade your node pools.
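For example, to roll out the updated node image right away, you can upgrade a node pool manually. The following sketch assumes a node pool named default-pool; substitute your own node pool name:
# Upgrade the node pool's nodes to the cluster control plane version,
# which recreates the nodes with the updated image.
gcloud container clusters upgrade ${CLUSTER_NAME} \
    --node-pool=default-pool \
    --location=${LOCATION}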
After the node pool upgrade, CPU nodes might appear to be using a GPU image in the Google Cloud console or CLI output. For example:
config:
  imageType: COS_CONTAINERD
  nodeImageConfig:
    image: gke-1330-gke1552000-cos-121-18867-90-4-c-nvda
This behavior is expected. The GPU image is being reused on CPU nodes to securely install the Managed Lustre kernel modules. You won't be charged for GPU usage.
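To check the node image from the CLI, one option is to describe the node pool and inspect its config section. This sketch assumes a node pool named default-pool:
# Show the node pool configuration, including the image type.
gcloud container node-pools describe default-pool \
    --cluster=${CLUSTER_NAME} \
    --location=${LOCATION}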
Disable the Managed Lustre CSI driver
You can disable the Managed Lustre CSI driver on an existing GKE cluster by using the Google Cloud CLI.
gcloud container clusters update ${CLUSTER_NAME} \
--location=${LOCATION} \
--update-addons=LustreCsiDriver=DISABLED
After the CSI driver is disabled, GKE automatically recreates your nodes and uninstalls the Managed Lustre kernel modules.
Create a new volume using the Managed Lustre CSI driver
The following sections describe the typical process for creating a Kubernetes volume backed by a Managed Lustre instance in GKE:
- Create a StorageClass.
- Use a PersistentVolumeClaim to access the volume.
- Create a workload that consumes the volume.
Create a StorageClass
When the Managed Lustre CSI driver is enabled, GKE automatically creates StorageClasses for provisioning Managed Lustre instances. Each StorageClass corresponds to a Managed Lustre performance tier and is one of the following:
lustre-rwx-125mbps-per-tib
lustre-rwx-250mbps-per-tib
lustre-rwx-500mbps-per-tib
lustre-rwx-1000mbps-per-tib
GKE provides a default StorageClass for each supported Managed Lustre performance tier. This simplifies the dynamic provisioning of Managed Lustre instances, as you can use the built-in StorageClasses without having to define your own.
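If a built-in StorageClass meets your needs, a PersistentVolumeClaim can reference it directly. The following is a minimal sketch that assumes the lustre-rwx-250mbps-per-tib tier and an 18000Gi capacity request; the claim name is illustrative, and you should adjust the tier and size to your requirements:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lustre-pvc-builtin    # example name for this sketch
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 18000Gi        # assumed capacity; use a size supported by the tier
  storageClassName: lustre-rwx-250mbps-per-tib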
For zonal clusters, the CSI driver provisions Managed Lustre instances in the same zone as the cluster. For regional clusters, it provisions the instance in one of the zones within the region.
The following example shows you how to create a custom StorageClass with specific topology requirements:
Save the following manifest in a file named lustre-class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: lustre-class
provisioner: lustre.csi.storage.gke.io
volumeBindingMode: Immediate
reclaimPolicy: Delete
parameters:
  perUnitStorageThroughput: "1000"
  network: LUSTRE_NETWORK
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-central1-a
For the full list of fields that are supported in the StorageClass, see the Managed Lustre CSI driver reference documentation.
Create the StorageClass by running this command:
kubectl apply -f lustre-class.yaml
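Optionally, you can confirm that the StorageClass was created:
kubectl get storageclass lustre-class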
Use a PersistentVolumeClaim to access the volume
This section shows you how to create a PersistentVolumeClaim resource that references the Managed Lustre CSI driver's StorageClass.
Save the following manifest in a file named lustre-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lustre-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 18000Gi
  storageClassName: lustre-class
For the full list of fields that are supported in the PersistentVolumeClaim, see the Managed Lustre CSI driver reference documentation.
Create the PersistentVolumeClaim by running this command:
kubectl apply -f lustre-pvc.yaml
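Because the example StorageClass uses volumeBindingMode: Immediate, provisioning starts as soon as the claim is created. You can follow the progress by watching the claim status and inspecting its events, for example:
# Watch the claim until its status changes from Pending to Bound.
kubectl get pvc lustre-pvc --watch

# Inspect provisioning events for the claim.
kubectl describe pvc lustre-pvc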
Create a workload to consume the volume
This section shows an example of how to create a Pod that consumes the PersistentVolumeClaim resource you created earlier.
Multiple Pods can share the same PersistentVolumeClaim resource.
Save the following manifest in a file named my-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: lustre-volume
      mountPath: /data
  volumes:
  - name: lustre-volume
    persistentVolumeClaim:
      claimName: lustre-pvc
Apply the manifest to the cluster.
kubectl apply -f my-pod.yaml
Verify that the Pod is running. The Pod runs after the PersistentVolumeClaim is provisioned. This operation might take a few minutes to complete.
kubectl get pods
The output is similar to the following:
NAME      READY   STATUS    RESTARTS   AGE
my-pod    1/1     Running   0          11s
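Because the volume uses the ReadWriteMany access mode, multiple Pods can mount the same PersistentVolumeClaim concurrently. The following is a minimal sketch of a Deployment whose replicas all share the lustre-pvc claim; the Deployment name and replica count are illustrative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lustre-shared           # example name for this sketch
spec:
  replicas: 2
  selector:
    matchLabels:
      app: lustre-shared
  template:
    metadata:
      labels:
        app: lustre-shared
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: lustre-volume
          mountPath: /data
      volumes:
      - name: lustre-volume
        persistentVolumeClaim:
          claimName: lustre-pvc   # all replicas mount the same claim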
Use fsGroup with Managed Lustre volumes
You can change the group ownership of the root level directory of the mounted file system to match a user-requested fsGroup specified in the Pod's SecurityContext. fsGroup won't recursively change the ownership of the entire mounted Managed Lustre file system; only the root directory of the mount point is affected.
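For example, a minimal sketch of a Pod that sets fsGroup in its securityContext might look like the following; the group ID 4000 and the Pod name are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-fsgroup          # example name for this sketch
spec:
  securityContext:
    fsGroup: 4000               # group ownership applied to the root of the mount only
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: lustre-volume
      mountPath: /data
  volumes:
  - name: lustre-volume
    persistentVolumeClaim:
      claimName: lustre-pvc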
Troubleshooting
For troubleshooting guidance, refer to the Troubleshooting page in the Managed Lustre documentation.
Clean up
To avoid incurring charges to your Google Cloud account, delete the storage resources you created in this guide.
Delete the Pod and PersistentVolumeClaim.
kubectl delete pod my-pod
kubectl delete pvc lustre-pvc
Check the PersistentVolume status.
kubectl get pv
The output is similar to the following:
No resources found
It might take a few minutes for the underlying Managed Lustre instance to be fully deleted.
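If you want to confirm that the instance is gone, you can list Managed Lustre instances in the zone. This assumes that the gcloud lustre commands are available in your version of the gcloud CLI:
# List Managed Lustre instances; the deleted instance should no longer appear.
gcloud lustre instances list \
    --location=${LOCATION} \
    --project=${PROJECT_ID}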