This topic explains how to install a Container Storage Interface (CSI) storage driver on a GKE on-prem cluster.
Overview
CSI is an open standard API that enables Kubernetes to expose arbitrary storage systems to containerized workloads. When you deploy a CSI-compatible storage driver to a GKE on-prem cluster, the cluster can connect directly to a compatible storage device without having to go through vSphere storage.
Kubernetes volumes are managed by vendor-specific storage drivers, which historically have been compiled into Kubernetes binaries. Previously, you could not use a storage driver that was not included with Kubernetes. Installing a CSI driver adds support for a storage system that Kubernetes does not natively support. CSI also enables the use of modern storage features, such as snapshots and volume resizing.
To use a CSI driver, you create a Kubernetes StorageClass and set the CSI driver as the provisioner for the StorageClass. Then, you can set the StorageClass as the cluster's default, or configure your workloads to use the StorageClass (see the StatefulSet example later in this topic).
Before you begin
Review Kubernetes' in-tree volume plugins, and confirm whether Kubernetes already includes your driver.
By default, GKE on-prem uses vSphere datastores via the built-in vsphereVolume driver. Additionally, the built-in drivers for NFS and iSCSI can attach and mount existing volumes to your workloads.
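For example, here is a minimal sketch of a Pod that mounts an existing NFS export through the built-in NFS driver. The server address and export path are hypothetical placeholders; substitute the values for your environment:

nfs-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nfs-example
spec:
  containers:
  - name: app
    image: registry.k8s.io/nginx-slim:0.8
    volumeMounts:
    - name: data
      mountPath: /mnt/data # Where the NFS export appears inside the container
  volumes:
  - name: data
    nfs:
      server: nfs.example.com # Hypothetical NFS server address
      path: /exports/data     # Hypothetical export path on the server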
Installing a vendor's CSI driver
Storage vendors develop their own CSI drivers, and they are responsible for providing installation instructions. In simple cases, installation might only involve deploying manifests to your clusters. See the list of CSI drivers in the CSI documentation.
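For example, a manifest-based installation might look like the following. The manifest URL is a hypothetical placeholder; use the manifests from your vendor's instructions:

# Hypothetical vendor manifest; follow your vendor's actual instructions
kubectl apply -f https://vendor.example.com/csi-driver/deploy/manifest.yaml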
Verifying a driver installation
After you install a CSI driver, you can verify the installation by running one of the following commands, depending on your cluster's GKE on-prem version:
1.2.0-gke.5
kubectl get csinodes \
    -o jsonpath='{range .items[*]} {.metadata.name}{": "} {range .spec.drivers[*]} {.name}{"\n"} {end}{end}'
1.1.0-gke.6
kubectl get nodes \
    -o jsonpath='{.items[*].metadata.annotations.csi\.volume\.kubernetes\.io\/nodeid}'
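For either command, a successful installation shows your driver's name registered on each node. For example, with the fictional csi.example.com driver used later in this topic, the 1.2.0-gke.5 command might print output similar to the following (node names are illustrative):

 node-1:  csi.example.com
 node-2:  csi.example.com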
Using a CSI driver
To use a CSI driver:

1. Create a Kubernetes StorageClass that references the driver in its provisioner field.
2. To provision storage, you can either:
   - Reference the StorageClass in a StatefulSet's volumeClaimTemplates specification.
   - Set it as the cluster's default StorageClass (see the sketch after this list).
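For example, here is a minimal sketch of setting an existing StorageClass as the cluster's default by annotating it. It assumes a StorageClass named fast, like the one defined later in this topic:

# Assumes a StorageClass named "fast" already exists in the cluster
kubectl patch storageclass fast \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'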
Considerations for StorageClasses backed by a CSI driver
When you create a StorageClass, consider the following:
- Your CSI driver's documentation should include its provisioner name and the driver-specific parameters that you provide to your StorageClass.
- You should name the StorageClass after its properties (such as "fast" or "highly-replicated"), rather than after the name of the specific driver or appliance behind it. Property-based names let you create StorageClasses with the same name across multiple clusters and environments, and let your applications get storage with the same properties across clusters.
Example: Reference StorageClass in a StatefulSet
The following example models how to define a CSI driver in a StorageClass, and then reference the StorageClass in a StatefulSet workload. The example assumes the driver has already been installed to the cluster.
Below is a simple StorageClass named fast that uses a fictional CSI driver, csi.example.com, as its provisioner:
fast-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: csi.example.com # CSI driver
parameters: # You provide vendor-specific parameters to this specification
  type: example-parameter # Be sure to follow the vendor's instructions
  datastore: my-datastore
reclaimPolicy: Retain
allowVolumeExpansion: true
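To create the StorageClass, apply the manifest and confirm that the cluster accepted it:

kubectl apply -f fast-sc.yaml
kubectl get storageclass fast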
You reference the StorageClass in a StatefulSet's volumeClaimTemplates specification. When you do, Kubernetes provides stable storage using PersistentVolumes (PVs): Kubernetes calls the provisioner defined in the StorageClass to create a new storage volume. In this case, Kubernetes calls the fictional csi.example.com driver, which calls out to the vendor's API to create the volume. After the volume is provisioned, Kubernetes automatically creates a PV to represent the storage.
Here is a simple StatefulSet that references the StorageClass:
statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx # Headless Service that governs this StatefulSet (required)
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates: # This is the specification in which you reference the StorageClass
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      storageClassName: fast # This field references the existing StorageClass
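After you apply the StatefulSet, Kubernetes creates one PersistentVolumeClaim (PVC) per replica from the volumeClaimTemplates specification and provisions a volume for each claim. You can confirm that the claims bind to dynamically provisioned volumes; the volume names and ages below are illustrative:

kubectl apply -f statefulset.yaml
kubectl get pvc

NAME        STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    pvc-3a1c9e...   1Gi        RWO            fast           1m
www-web-1   Bound    pvc-8f2d41...   1Gi        RWO            fast           1m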