# Connect to an existing Managed Lustre instance from Google Kubernetes Engine
This guide describes how to connect to an existing Google Cloud Managed Lustre
instance with the GKE Managed Lustre CSI driver.
This lets you access existing fully managed Managed Lustre
instances as volumes for your stateful workloads.
Limitations
-----------

Using Managed Lustre from GKE has the following limitations:

- The GKE node pool version must be 1.31.5 or greater.

  - For 1.31.5, the GKE patch number must be at least `1299000`.
  - For versions 1.31.6 and 1.31.7, any GKE patch number is supported.
  - For 1.32.1, the GKE patch number must be at least `1673000`.
  - For versions 1.32.2 or higher, any GKE patch number is supported.

  Use the following command to check the version of the node pool:

      gcloud container clusters describe CLUSTER_NAME \
          --location=LOCATION | grep currentNodeVersion

  Replace LOCATION with the cluster's zone (for a zonal cluster) or region
  (for a regional cluster), for example `us-central1-a` or `us-central1`.
  The patch number is the seven-digit number at the end of the node version
  string.

- The node image must be Container-Optimized OS with containerd
  (`cos_containerd`). Ubuntu and Windows Server node images are not supported.

- Node pools with Secure boot enabled are supported only if the machine type
  of the node pool is a GPU machine type.

  - Note that Secure boot is pre-configured for Autopilot clusters.
  - Using Autopilot requires additional allowlisting. To use it, contact your
    sales representative.

- Only manual provisioning is supported. The Managed Lustre instance must be
  created before you can install the CSI driver. Dynamic provisioning is not
  supported.

- Nodes with a machine type other than a GPU machine type will reboot during
  the CSI driver installation process. You must wait for the CSI driver
  daemon's initialization, including the `disable-loadpin` step, to complete
  on all relevant nodes before deploying your workloads (see the sketch after
  this list).
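For example, after you install the CSI driver (described later in this guide), you can wait for the
DaemonSet rollout to finish before deploying workloads. This is a minimal sketch; it assumes the
`lustre-csi-driver` namespace and `lustre-csi-node` DaemonSet names shown in the verification output
later in this guide, and the timeout value is an arbitrary choice:

    # Wait until the CSI node Pods have rolled out on all relevant nodes.
    kubectl rollout status daemonset/lustre-csi-node \
        -n lustre-csi-driver --timeout=10m

    # Optionally, confirm which node each CSI Pod is running on.
    kubectl get pods -n lustre-csi-driver -o wide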
Configure IAM permissions
-------------------------

You must have the following IAM role to create a Kubernetes Engine cluster:

- Kubernetes Engine Cluster Admin (`roles/container.clusterAdmin`)

To grant the role:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="user:EMAIL_ADDRESS" \
        --role=roles/container.clusterAdmin

Enable the API
--------------

Enable the Google Kubernetes Engine API.

### gcloud

Use `gcloud services enable` as follows:

    gcloud services enable container.googleapis.com --project=PROJECT_ID

### Google Cloud console

1. Go to the **Kubernetes Engine API** page in the Google Cloud console.

   [Go to Kubernetes Engine API](https://console.cloud.google.com/marketplace/product/google/container.googleapis.com)

2. Click **Enable API**. The API is enabled for your project.

Create a GKE cluster
--------------------

If you already have a GKE Standard cluster, you can skip this step. Otherwise,
run the following command:

    gcloud container clusters create lustre-test \
        --cluster-version 1.32 --release-channel rapid \
        --location=$ZONE

Run the following commands to ensure the kubectl context is set up correctly:

    gcloud container clusters get-credentials ${CLUSTER_NAME}
    kubectl config current-context

Install the CSI driver
----------------------

The CSI driver can be deployed using Kustomize.

1. Download the Managed Lustre CSI driver from GitHub. To clone the
   repository, use `git clone` as follows:

       git clone https://github.com/GoogleCloudPlatform/lustre-csi-driver

2. Install the `jq` utility:

       sudo apt-get update
       sudo apt-get install jq

3. Install the CSI driver:

       cd ./lustre-csi-driver
       OVERLAY=gke-release make install

4. Confirm that the CSI driver is successfully installed:

       kubectl get CSIDriver,DaemonSet,Pods -n lustre-csi-driver
   The command output indicates that the DaemonSet and Pods are running as
   expected:

       NAME                                                 ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
       csidriver.storage.k8s.io/lustre.csi.storage.gke.io   false            false            false             <unset>         false               Persistent   27s

       NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
       daemonset.apps/lustre-csi-node    1         1         1       1            1           kubernetes.io/os=linux   28s

       NAME                        READY   STATUS    RESTARTS   AGE
       pod/lustre-csi-node-gqffs   2/2     Running   0          28s
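   If the Pods are not `Running` with `2/2` containers ready, you can inspect
   the DaemonSet before proceeding. This is a minimal troubleshooting sketch
   using standard kubectl commands; it assumes the namespace and DaemonSet
   names shown in the output above:

       # Show rollout details and recent events for the CSI node DaemonSet.
       kubectl describe daemonset/lustre-csi-node -n lustre-csi-driver

       # Inspect container logs (kubectl picks one Pod from the DaemonSet).
       kubectl logs daemonset/lustre-csi-node -n lustre-csi-driver --all-containers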
Create a Persistent Volume and Persistent Volume Claim
------------------------------------------------------

Follow these instructions to create a Persistent Volume (PV) and Persistent
Volume Claim (PVC).
1. Open `~/lustre-csi-driver/examples/pre-prov/preprov-pvc-pv.yaml`. This is
   an example configuration file that you can update for your use:
       apiVersion: v1
       kind: PersistentVolume
       metadata:
         name: preprov-pv
       spec:
         storageClassName: ""
         capacity:
           storage: 18Ti # The capacity of the instance
         accessModes:
           - ReadWriteMany
         persistentVolumeReclaimPolicy: Retain
         volumeMode: Filesystem
         csi:
           driver: lustre.csi.storage.gke.io
           volumeHandle: <project-id>/<instance-location>/<instance-name> # Update these values
           volumeAttributes:
             ip: ${EXISTING_LUSTRE_IP_ADDRESS} # The IP address of the existing Lustre instance
             filesystem: ${EXISTING_LUSTRE_FSNAME} # The filesystem name of the existing Lustre instance
       ---
       kind: PersistentVolumeClaim
       apiVersion: v1
       metadata:
         name: preprov-pvc
       spec:
         accessModes:
           - ReadWriteMany
         storageClassName: ""
         volumeName: preprov-pv
         resources:
           requests:
             storage: 18Ti # The capacity of the instance
2. Update the example file with the correct values:

   - `volumeHandle`: Update with the correct project ID, zone, and
     Managed Lustre instance name.
   - `storage`: This value should match the size of the underlying
     Managed Lustre instance.
   - `volumeAttributes`:
     - `ip` must point to the Managed Lustre instance's IP address.
     - `filesystem` must be the Managed Lustre instance's file system name.
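   For example, you could fill in the placeholders with a small shell snippet
   before applying the file. This is a minimal sketch, not part of the
   driver's tooling; the exported values are placeholder assumptions that you
   must replace with your own project, location, instance name, IP address,
   and file system name:

       # Values for your existing Managed Lustre instance (assumed examples).
       export PROJECT_ID=my-project
       export LUSTRE_LOCATION=us-central1-a
       export LUSTRE_INSTANCE=my-lustre-instance
       export EXISTING_LUSTRE_IP_ADDRESS=10.0.0.2
       export EXISTING_LUSTRE_FSNAME=lustrefs

       # Substitute the placeholders into a working copy of the example file.
       sed -e "s|<project-id>/<instance-location>/<instance-name>|${PROJECT_ID}/${LUSTRE_LOCATION}/${LUSTRE_INSTANCE}|" \
           -e "s|\${EXISTING_LUSTRE_IP_ADDRESS}|${EXISTING_LUSTRE_IP_ADDRESS}|" \
           -e "s|\${EXISTING_LUSTRE_FSNAME}|${EXISTING_LUSTRE_FSNAME}|" \
           ~/lustre-csi-driver/examples/pre-prov/preprov-pvc-pv.yaml > /tmp/preprov-pvc-pv.yaml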
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-29 UTC."],[],[],null,["# Connect to an existing Managed Lustre instance from Google Kubernetes Engine\n\nThis guide describes how to connect to an existing Google Cloud Managed Lustre\ninstance with the GKE Managed Lustre CSI driver.\nThis lets you access existing fully managed Managed Lustre\ninstances as volumes for your stateful workloads.\n\nLimitations\n-----------\n\nUsing Managed Lustre from GKE has the following limitations:\n\n- The\n [GKE node pool version](https://cloud.google.com/kubernetes-engine/versioning#versioning_scheme)\n must be 1.31.5 or greater.\n\n - For 1.31.5, the GKE patch number must be at least `1299000`.\n - For versions 1.31.6 and 1.31.7, any GKE patch number is supported.\n - For 1.32.1, the GKE patch number must be at least `1673000`.\n - For versions 1.32.2 or higher, any GKE patch number is supported.\n\n Use the following command to check the version of the node pool: \n\n gcloud container clusters describe \\\n \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e --location=\u003cvar translate=\"no\"\u003eLOCATION\u003c/var\u003e | grep currentNodeVersion\n\n Replace \u003cvar translate=\"no\"\u003eLOCATION\u003c/var\u003e with the cluster's zone (for a zonal cluster) or\n region (for a regional cluster). For example, `us-central1-a` or\n `us-central1`.\n\n The patch number is the seven digit number at the end of the node version\n string.\n- The [node image](/kubernetes-engine/docs/concepts/node-images) must be\n [Container-Optimized OS with containerd](/kubernetes-engine/docs/concepts/node-images#cos-variants)\n (`cos_containerd`). Ubuntu and Windows Server node images are not supported.\n\n- Node pools with [Secure boot](/kubernetes-engine/docs/how-to/shielded-gke-nodes#secure_boot)\n enabled are supported only if the machine type of the node pool is a [GPU machine type](/compute/docs/gpus).\n\n - Note that Secure boot is pre-configured for [AutoPilot](/kubernetes-engine/docs/concepts/autopilot-overview) clusters.\n - Using AutoPilot requires additional allowlisting. To use it, contact your sales representative.\n- Only manual provisioning is supported. The Managed Lustre\n instance must be created before you can install the CSI driver.\n [Dynamic provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/)\n is not supported.\n\n- Nodes with a machine type other than a [GPU machine type](/compute/docs/gpus)\n will reboot during the CSI driver installation process. You must wait for the\n CSI driver daemon's initialization - including the `disable-loadpin` step - to\n complete on all relevant nodes *before* deploying your workloads.\n\nConfigure IAM permissions\n-------------------------\n\nYou must have the following IAM permission in order to create a\nKubernetes Engine cluster:\n\n- Kubernetes Engine Cluster Admin (roles/container.clusterAdmin)\n\nTo grant a role: \n\n gcloud projects add-iam-policy-binding PROJECT_ID \\\n --member=\"user:EMAIL_ADDRESS\" \\\n --role=roles/container.clusterAdmin\n\nEnable the API\n--------------\n\nEnable the Google Kubernetes Engine API. 
\n\n### gcloud\n\nUse `gcloud services enable` as follows: \n\n gcloud services enable container.googleapis.com --project=\u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e\n\n### Google Cloud console\n\n1. Go to the **Kubernetes Engine API** page in the Google Cloud console.\n\n [Go to Kubernetes Engine API](https://console.cloud.google.com/marketplace/product/google/container.googleapis.com)\n2. Click **Enable API**. The API is enabled for your project.\n\nCreate a GKE cluster\n--------------------\n\nIf you already have a GKE Standard cluster, you can skip this step. Otherwise,\nrun the following command: \n\n gcloud container clusters create lustre-test \\\n --cluster-version 1.32 --release-channel rapid \\\n --location=$ZONE\n\nRun the following commands to ensure the kubectl context is set up correctly: \n\n gcloud container clusters get-credentials ${CLUSTER_NAME}\n kubectl config current-context\n\nInstall the CSI driver\n----------------------\n\nThe CSI driver can be deployed using Kustomize.\n\n1. Download the Managed Lustre CSI driver from GitHub.\n To clone the repository, use `git clone` as follows:\n\n git clone https://github.com/GoogleCloudPlatform/lustre-csi-driver\n\n2. Install the jq utility:\n\n sudo apt-get update\n sudo apt-get install jq\n\n3. Install the CSI driver:\n\n cd ./lustre-csi-driver\n OVERLAY=gke-release make install\n\n4. Confirm that the CSI driver is successfully installed:\n\n kubectl get CSIDriver,DaemonSet,Pods -n lustre-csi-driver\n\n The command output indicates that the DaemonSet and Pods are running as expected: \n\n NAME ATTACHREQUIRED PODINFOONMOUNT STORAGECAPACITY TOKENREQUESTS REQUIRESREPUBLISH MODES AGE\n csidriver.storage.k8s.io/lustre.csi.storage.gke.io false false false \u003cunset\u003e false Persistent 27s\n\n NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE\n daemonset.apps/lustre-csi-node 1 1 1 1 1 kubernetes.io/os=linux 28s\n\n NAME READY STATUS RESTARTS AGE\n pod/lustre-csi-node-gqffs 2/2 Running 0 28s\n\nCreate a Persistent Volume and Persistent Volume Claim\n------------------------------------------------------\n\nFollow these instructions to create a Persistent Volume (PV) and Persistent\nVolume Claim (PVC).\n\n1. Open `~/lustre-csi-driver/examples/pre-prov/preprov-pvc-pv.yaml`. This is\n an example configuration file that you can update for your use:\n\n apiVersion: v1\n kind: PersistentVolume\n metadata:\n name: preprov-pv\n spec:\n storageClassName: \"\"\n capacity:\n storage: 18Ti # The capacity of the instance\n accessModes:\n - ReadWriteMany\n persistentVolumeReclaimPolicy: Retain\n volumeMode: Filesystem\n csi:\n driver: lustre.csi.storage.gke.io\n volumeHandle: \u003cproject-id\u003e/\u003cinstance-location\u003e/\u003cinstance-name\u003e # Update these values\n volumeAttributes:\n ip: ${EXISTING_LUSTRE_IP_ADDRESS} # The IP address of the existing Lustre instance\n filesystem: ${EXISTING_LUSTRE_FSNAME} # The filesystem name of the existing Lustre instance\n ---\n kind: PersistentVolumeClaim\n apiVersion: v1\n metadata:\n name: preprov-pvc\n spec:\n accessModes:\n - ReadWriteMany\n storageClassName: \"\"\n volumeName: preprov-pv\n resources:\n requests:\n storage: 18Ti # The capacity of the instance\n\n2. 
3. Apply the example PV and PVC configuration:

       kubectl apply -f ~/lustre-csi-driver/examples/pre-prov/preprov-pvc-pv.yaml

4. Verify that the PV and PVC are bound:

       kubectl get pvc

   The expected output looks like the following example:

       NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
       preprov-pvc   Bound    preprov-pv   16Ti       RWX                           76s

Use the Persistent Volume in a Pod
----------------------------------

The Managed Lustre CSI driver files include a sample Pod configuration YAML
file.

1. Open `~/lustre-csi-driver/examples/pre-prov/preprov-pod.yaml`. This is an
   example configuration file that you can update for your use:

       apiVersion: v1
       kind: Pod
       metadata:
         name: lustre-pod
       spec:
         containers:
         - name: nginx
           image: nginx
           volumeMounts:
             - mountPath: /lustre_volume
               name: mypvc
         volumes:
           - name: mypvc
             persistentVolumeClaim:
               claimName: preprov-pvc

2. Update the `volumeMounts` values if needed.

3. Deploy the Pod:

       kubectl apply -f ~/lustre-csi-driver/examples/pre-prov/preprov-pod.yaml

4. Verify that the Pod is running. It can take a few minutes for the Pod to
   reach the Running state.

       kubectl get pods

   The expected output looks like the following example:

       NAME         READY   STATUS    RESTARTS   AGE
       lustre-pod   1/1     Running   0          11s

Your GKE + Managed Lustre environment is ready to use.
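As an optional final check, you can confirm that the Lustre file system is
mounted inside the Pod. This is a minimal sketch that assumes the `lustre-pod`
name and `/lustre_volume` mount path from the example above:

    # Show the mounted file system and its reported size inside the Pod.
    kubectl exec lustre-pod -- df -h /lustre_volume

    # Write and read back a small test file on the Lustre volume.
    kubectl exec lustre-pod -- sh -c 'echo hello > /lustre_volume/smoke-test && cat /lustre_volume/smoke-test'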