This page describes how you can set up and prepare to use the Cloud Storage FUSE CSI driver for GKE.
To use the Cloud Storage FUSE CSI driver, perform these steps:
Create the Cloud Storage bucket
If you have not already done so, create your Cloud Storage buckets. You will mount these buckets as volumes in your GKE cluster. To improve performance, set the Location type to Region, and select a region that matches your GKE cluster.
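If you use the gcloud CLI, bucket creation can be sketched as follows. This is a sketch with placeholder values: BUCKET_NAME and BUCKET_LOCATION are hypothetical names that you replace with your own bucket name and region.

```shell
# Sketch: create a regional bucket in the same region as your GKE cluster.
# BUCKET_NAME and BUCKET_LOCATION are placeholders; replace them with your values.
gcloud storage buckets create gs://BUCKET_NAME \
    --location=BUCKET_LOCATION \
    --uniform-bucket-level-access
```

Choosing the same region as your cluster keeps reads and writes within one region, which typically improves latency.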
Enable the Cloud Storage FUSE CSI driver
Follow these steps, depending on whether you are using GKE Autopilot or Standard clusters. We recommend that you use an Autopilot cluster for a fully managed Kubernetes experience. To choose the mode that's the best fit for your workloads, see Choose a GKE mode of operation.
Autopilot
The Cloud Storage FUSE CSI driver is enabled by default for Autopilot clusters. You can skip to Configure access to Cloud Storage buckets.
Standard
If your Standard cluster already has the Cloud Storage FUSE CSI driver enabled, skip to Configure access to Cloud Storage buckets.
To create a Standard cluster with the Cloud Storage FUSE CSI driver enabled, use the gcloud container clusters create command:
gcloud container clusters create CLUSTER_NAME \
--addons GcsFuseCsiDriver \
--cluster-version=VERSION \
--location=LOCATION \
--workload-pool=PROJECT_ID.svc.id.goog
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- VERSION: the GKE version number. You must select 1.24 or later.
- LOCATION: the Compute Engine region or zone for the cluster.
- PROJECT_ID: your project ID.
To enable the driver on an existing Standard cluster, use the [gcloud container clusters update](/sdk/gcloud/reference/container/clusters/update) command:
gcloud container clusters update CLUSTER_NAME \
--update-addons GcsFuseCsiDriver=ENABLED \
--location=LOCATION
To verify that the Cloud Storage FUSE CSI driver is enabled on your cluster, run the following command:
gcloud container clusters describe CLUSTER_NAME \
--location=LOCATION \
--project=PROJECT_ID \
--format="value(addonsConfig.gcsFuseCsiDriverConfig.enabled)"
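If the driver is enabled, the command prints True. As a sketch, you could wrap the check in a shell conditional; this assumes an authenticated gcloud environment, and CLUSTER_NAME and LOCATION are placeholders for your values:

```shell
# Sketch: check whether the Cloud Storage FUSE CSI driver addon is enabled.
# CLUSTER_NAME and LOCATION are placeholders; requires an authenticated gcloud CLI.
ENABLED="$(gcloud container clusters describe CLUSTER_NAME \
    --location=LOCATION \
    --format="value(addonsConfig.gcsFuseCsiDriverConfig.enabled)")"
if [ "$ENABLED" = "True" ]; then
  echo "Cloud Storage FUSE CSI driver is enabled"
else
  echo "Cloud Storage FUSE CSI driver is not enabled"
fi
```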
Configure access to Cloud Storage buckets
The Cloud Storage FUSE CSI driver uses Workload Identity Federation for GKE so that you can set fine-grained permissions that control how your GKE Pods access data stored in Cloud Storage.
To make your Cloud Storage buckets accessible by your GKE cluster, use Workload Identity Federation for GKE to authenticate to the Cloud Storage bucket that you want to mount in your Pod specification:
- If you don't have Workload Identity Federation for GKE enabled, follow these steps to enable it.
Get credentials for your cluster:
gcloud container clusters get-credentials CLUSTER_NAME \
    --location=LOCATION
Replace the following:
- CLUSTER_NAME: the name of your cluster that has Workload Identity Federation for GKE enabled.
- LOCATION: the Compute Engine region or zone for the cluster.
Create a namespace to use for the Kubernetes ServiceAccount. You can also use the default namespace or any existing namespace.

kubectl create namespace NAMESPACE
Replace NAMESPACE with the name of the Kubernetes namespace for the Kubernetes ServiceAccount.

Create a Kubernetes ServiceAccount for your application to use. You can also use any existing Kubernetes ServiceAccount in any namespace, including the default Kubernetes ServiceAccount.

kubectl create serviceaccount KSA_NAME \
    --namespace NAMESPACE

Replace KSA_NAME with the name of your Kubernetes ServiceAccount.

Grant one of the IAM roles for Cloud Storage to the Kubernetes ServiceAccount. Follow these steps, depending on whether you are granting the Kubernetes ServiceAccount access to a specific Cloud Storage bucket only, or global access to all buckets in the project.
Specific bucket access
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
    --member "principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/NAMESPACE/sa/KSA_NAME" \
    --role "ROLE_NAME"
Replace the following:
- BUCKET_NAME: your Cloud Storage bucket name.
- PROJECT_NUMBER: the numerical project number of your GKE cluster. To find your project number, see Identifying projects.
- PROJECT_ID: the project ID of your GKE cluster.
- NAMESPACE: the name of the Kubernetes namespace for the Kubernetes ServiceAccount.
- KSA_NAME: the name of your new Kubernetes ServiceAccount.
- ROLE_NAME: the IAM role to assign to your Kubernetes ServiceAccount.
  - For read-only workloads, use the Storage Object Viewer role (roles/storage.objectViewer).
  - For read-write workloads, use the Storage Object User role (roles/storage.objectUser).
Global bucket access
gcloud projects add-iam-policy-binding GCS_PROJECT \
    --member "principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/NAMESPACE/sa/KSA_NAME" \
    --role "ROLE_NAME"
Replace the following:
- GCS_PROJECT: the project ID of your Cloud Storage buckets.
- PROJECT_NUMBER: the numerical project number of your GKE cluster. To find your project number, see Identifying projects.
- PROJECT_ID: the project ID of your GKE cluster.
- NAMESPACE: the name of the Kubernetes namespace for the Kubernetes ServiceAccount.
- KSA_NAME: the name of your new Kubernetes ServiceAccount.
- ROLE_NAME: the IAM role to assign to your Kubernetes ServiceAccount.
  - For read-only workloads, use the Storage Object Viewer role (roles/storage.objectViewer).
  - For read-write workloads, use the Storage Object User role (roles/storage.objectUser).
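The --member string in the commands above follows a fixed pattern built from your project number, project ID, namespace, and ServiceAccount name. As a sketch, with hypothetical example values, you can compose and inspect the principal identifier in a shell before running the binding command:

```shell
# Hypothetical example values; substitute your own.
PROJECT_NUMBER=123456789012
PROJECT_ID=my-project
NAMESPACE=my-namespace
KSA_NAME=my-ksa

# Compose the Workload Identity Federation principal identifier used with --member.
MEMBER="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/${KSA_NAME}"
echo "${MEMBER}"
```

Note that the pool segment uses the project ID while the projects segment uses the numeric project number; mixing these up is a common cause of bindings that silently grant nothing.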
What's next
- Learn how to mount Cloud Storage buckets by specifying your buckets in-line with the Pod specification.
- Learn how to mount Cloud Storage buckets using a PersistentVolume resource.
- Learn more about configuring applications to use Workload Identity Federation for GKE.