Configure NFS volume mounts for worker pools

This page shows how to mount an NFS file share as a volume in Cloud Run. You can use any NFS server, including your own NFS server hosted on-premises or on a Compute Engine VM. If you don't already have an NFS server, we recommend Filestore, which is a fully managed NFS offering from Google Cloud.

To use NBD, 9P, CIFS/Samba, or Ceph network file systems, see Using NBD, 9P, CIFS/Samba, and Ceph network file systems.

Mounting the NFS file share as a volume in Cloud Run presents the file share as files in the container file system. After you mount the file share as a volume, you access it as if it were a directory on your local file system, using your programming language's file system operations and libraries.
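For example, here is a minimal sketch of shell commands a container process might run, assuming a hypothetical mount path of /mnt/my-volume and, for the write, a container running as root:

# List files on the NFS share (hypothetical mount path).
ls /mnt/my-volume

# Read a file from the share as if it were local (hypothetical file name).
cat /mnt/my-volume/settings.txt

# Write a file to the share; this requires the container to run as root.
echo "hello" > /mnt/my-volume/greeting.txt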

Disallowed paths

Cloud Run does not allow you to mount a volume at /dev, /proc, or /sys, or in any of their subdirectories.

Limitations

  • To write to an NFS volume, your container must run as root. If your container only reads from the file system, it can run as any user.

  • Cloud Run does not support NFS locking. NFS volumes are automatically mounted in no-lock mode.

Before you begin

To mount an NFS server as a volume in Cloud Run, make sure you have the following:

  • An NFS server running in a VPC network. If you don't already have an NFS server, create one by creating a Filestore instance.
  • Your Cloud Run worker pool attached to the VPC network where your NFS server or Filestore instance is running. For best performance, use Direct VPC egress rather than VPC connectors.
  • If you're using an existing project, make sure that your VPC firewall configuration allows Cloud Run to reach your NFS server. (If you're starting from a new project, this is true by default.) If you're using Filestore as your NFS server, follow the Filestore documentation to create a firewall egress rule that enables Cloud Run to reach Filestore; a hedged example follows this list.
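As a sketch, an egress rule like the following could allow that traffic. The rule name, network, ports, and destination range here are illustrative assumptions; confirm the exact port list and your Filestore instance's reserved IP range in the Filestore documentation:

gcloud compute firewall-rules create allow-nfs-egress \
    --network=NETWORK_NAME \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:111,tcp:2046 \
    --destination-ranges=FILESTORE_IP_RANGE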

Required roles

For a list of IAM roles and permissions that are associated with Cloud Run, see Cloud Run IAM roles and Cloud Run IAM permissions. If your Cloud Run worker pool interfaces with Google Cloud APIs, for example through the Cloud Client Libraries, see the service identity configuration guide. For more information about granting roles, see deployment permissions and manage access.

Mount an NFS volume

You can mount multiple NFS servers, Filestore instances, or other volume types at different mount paths.

If you are using multiple containers, first specify the volumes, then specify the volume mounts for each container.

Configure an NFS volume mount using the Google Cloud CLI when you create a new worker pool or deploy a new revision.

gcloud

  • To add a volume and mount it (a complete example with sample values follows this list):

    gcloud beta run worker-pools update WORKER_POOL \
    --add-volume name=VOLUME_NAME,type=nfs,location=IP_ADDRESS:NFS_PATH \
    --add-volume-mount volume=VOLUME_NAME,mount-path=MOUNT_PATH

    Replace:

    • WORKER_POOL with the name of your worker pool.
    • VOLUME_NAME with the name you want to give your volume. The volume name value is used to map the volume to the volume mount.
    • IP_ADDRESS with the IP address of your NFS server or Filestore instance.
    • NFS_PATH with the path to the NFS file share, starting with a forward slash: for example, /example-directory.
    • MOUNT_PATH with the path where you are mounting the volume, for example, /mnt/my-volume.
  • To mount your volume as a read-only volume:

    --add-volume name=VOLUME_NAME,type=nfs,location=IP_ADDRESS:NFS_PATH,readonly=true
  • If you are using multiple containers, first specify the volumes, then specify the volume mounts for each container:

    gcloud beta run worker-pools update WORKER_POOL \
    --add-volume name=VOLUME_NAME,type=nfs,location=IP_ADDRESS:NFS_PATH \
    --container=CONTAINER_1 \
    --add-volume-mount volume=VOLUME_NAME,mount-path=MOUNT_PATH \
    --container=CONTAINER_2 \
    --add-volume-mount volume=VOLUME_NAME,mount-path=MOUNT_PATH2
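
For illustration, here is the single-container form with hypothetical sample values filled in: a volume named my-nfs-volume served from an NFS server at 10.0.0.2 exporting /example-directory, mounted at /mnt/my-volume. The names and addresses are examples only:

gcloud beta run worker-pools update my-worker-pool \
    --add-volume name=my-nfs-volume,type=nfs,location=10.0.0.2:/example-directory \
    --add-volume-mount volume=my-nfs-volume,mount-path=/mnt/my-volume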

View volume mount configuration for the worker pool

  1. In the Google Cloud console, go to Cloud Run:

    Go to Cloud Run

  2. Click Worker pools to display the list of deployed worker pools.

  3. Click the worker pool you want to examine to display its details pane.

  4. Click the Containers tab to display worker pool container configuration.
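You can also inspect the configuration from the command line. Assuming the beta worker-pools surface supports describe like other Cloud Run resources (an assumption; check gcloud beta run worker-pools --help), the following prints the worker pool's configuration, including volumes and volume mounts:

# Assumption: worker-pools supports describe like other Cloud Run resources.
gcloud beta run worker-pools describe WORKER_POOL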

Troubleshooting NFS

If you experience problems, check the following:

  • Your Cloud Run worker pool is connected to the VPC network that the NFS server is on.
  • There are no firewall rules preventing Cloud Run from reaching the NFS server.
  • If your container writes to your NFS server, make sure it is running as root.

Container startup time and NFS volume mounts

Using NFS volume mounts can slightly increase your Cloud Run container cold start time, because the volume is mounted before your containers start. A container starts only if the NFS volume mounts successfully.

Note that NFS successfully mounts a volume only after establishing a connection to the server and fetching a file handle. If Cloud Run fails to establish a connection to the server, the Cloud Run worker pool will fail to start.

Networking delays can also affect container startup time, because Cloud Run enforces a total 30-second timeout for all volume mounts. If NFS takes longer than 30 seconds to mount, the Cloud Run worker pool fails to start.

NFS performance characteristics

If you create more than one NFS volume, all volumes are mounted in parallel.

Because NFS is a network file system, it is subject to network bandwidth limits, and limited bandwidth can slow access to the file system.

When you write to your NFS volume, the write is stored in Cloud Run memory until the data is flushed. Data is flushed in the following circumstances (a sketch follows this list):

  • Your application flushes file data explicitly using sync(2), msync(2), or fsync(3).
  • Your application closes a file with close(2).
  • Memory pressure forces reclamation of system memory resources.
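
As a minimal sketch, a container process can trigger an explicit flush from the shell. This assumes the hypothetical mount path /mnt/my-volume and coreutils 8.24 or later, where sync(1) accepts a file argument and calls fsync on it:

# Write data; at this point it is buffered in Cloud Run memory.
echo "checkpoint" > /mnt/my-volume/state.txt

# Flush this file's data through to the NFS server.
sync /mnt/my-volume/state.txt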

For more information, see the Linux documentation on NFS.

Clear and remove volumes and volume mounts

You can clear all volumes and mounts or you can remove individual volumes and volume mounts.

Clear all volumes and volume mounts

To clear all volumes and volume mounts from your single-container worker pool, run the following command:

gcloud beta run worker-pools update WORKER_POOL \
    --clear-volumes \
    --clear-volume-mounts

If you have multiple containers, follow the sidecars CLI conventions to clear volumes and volume mounts:

gcloud beta run worker-pools update WORKER_POOL \
    --container=container1 \
    --clear-volumes \
    --clear-volume-mounts \
    --container=container2 \
    --clear-volumes \
    --clear-volume-mounts

Remove individual volumes and volume mounts

To remove a volume, you must also remove all volume mounts that use that volume.

To remove individual volumes or volume mounts, use the remove-volume and remove-volume-mount flags:

gcloud beta run worker-pools update WORKER_POOL \
    --remove-volume VOLUME_NAME \
    --container=container1 \
    --remove-volume-mount MOUNT_PATH \
    --container=container2 \
    --remove-volume-mount MOUNT_PATH