Connect from Compute Engine

This guide shows you how to create a single Compute Engine client and connect it to your Google Cloud Managed Lustre instance. Managed Lustre supports connections from up to 4000 clients.

For the best performance, create client Compute Engine VMs in the same zone as the Managed Lustre instance.

Required permissions

You must have the following IAM roles:

  • Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) to create a Compute Engine VM.

  • Compute Admin (roles/compute.admin) or Compute Security Admin (roles/compute.securityAdmin) to create a firewall rule.

  • IAP-Secured Tunnel User (roles/iap.tunnelResourceAccessor) to SSH to a Compute Engine VM using Identity-Aware Proxy.

For a full list of the permissions granted by each role, see the IAM roles reference.

Create a Compute Engine VM

Follow the instructions to create a Compute Engine VM using one of the following Google Cloud image families:

  • Rocky Linux 8
  • Ubuntu 20.04 LTS, v20250213 or later
  • Ubuntu 22.04 LTS, v20250128 or later
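If you're not sure whether the newest image in a family meets the minimum version, you can check it with gcloud before creating the VM. This sketch uses the Ubuntu 22.04 LTS family; the version date is embedded in the image name:

```shell
# Print the name of the newest image in the Ubuntu 22.04 LTS family;
# the version suffix (for example, v20250128) must meet the minimum above.
gcloud compute images describe-from-family ubuntu-2204-lts \
  --project=ubuntu-os-cloud \
  --format="value(name)"
```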

Machine types and networking

You can choose any machine type and boot disk. We recommend at least a c2-standard-4 machine type.

Network throughput can be affected by your choice of machine type. In general, to obtain the best throughput:

  • Increase the number of vCPUs. Per-instance maximum egress bandwidth is generally 2 Gbps per vCPU, up to the machine type maximum.
  • Select a machine series that supports higher ingress and egress limits. For example, C2 instances with Tier_1 networking support up to 100 Gbps of egress bandwidth, and C3 instances with Tier_1 networking support up to 200 Gbps.
  • Enable per-VM Tier_1 networking performance, which is available on larger machine types.

For detailed information, refer to Network bandwidth.
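To check whether Tier_1 networking is enabled on an existing VM, you can query its network performance configuration. The field name below comes from the Compute Engine API; VM_NAME and LOCATION are placeholders for your values:

```shell
# Prints TIER_1 if per-VM Tier_1 networking is enabled; prints nothing otherwise.
gcloud compute instances describe VM_NAME \
  --zone=LOCATION \
  --format="value(networkPerformanceConfig.totalEgressBandwidthTier)"
```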

Create the VM

Google Cloud console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Select your project and click Continue.

  3. Click Create instance.

  4. Enter a name for your VM in Name. For more information, see Resource naming convention.

  5. Select the Region and Zone from the drop-down menus for this VM. Your VM should be in the same zone as your Managed Lustre instance.

  6. Select a Machine configuration for your VM from the list.

  7. In the navigation menu, click OS and storage.

  8. Under Operating system and storage, click Change.

  9. From the Operating system drop-down, select HPC VM image (for Rocky Linux 8) or Ubuntu.

  10. From the Version drop-down, select one of: HPC Rocky Linux 8, Ubuntu 20.04 LTS, or Ubuntu 22.04 LTS. Select either the x86/64 version or the Arm64 version to match your machine type.

  11. To confirm your boot disk options, click Select.

  12. In the navigation menu, click Networking.

  13. Under Network interfaces, select the VPC network you created in Configure a VPC network.

  14. In the navigation menu, click Security.

  15. Under Access scopes, select Allow full access to all Cloud APIs.

  16. To create and start the VM, click Create.

gcloud

Use the gcloud command line tool to create a VM:

HPC Rocky Linux 8

Create a VM using the gcloud compute instances create command. You can update the machine type and any disk specifications before running the command.

gcloud compute instances create VM_NAME \
  --project=PROJECT_ID \
  --zone=LOCATION \
  --machine-type=c2d-standard-112 \
  --scopes="https://www.googleapis.com/auth/cloud-platform" \
  --network-interface=stack-type=IPV4_ONLY,subnet=NETWORK_NAME,nic-type=GVNIC \
  --network-performance-configs=total-egress-bandwidth-tier=TIER_1 \
  --create-disk=auto-delete=yes,boot=yes,device-name=VM_NAME,\
image=projects/cloud-hpc-image-public/global/images/hpc-rocky-linux-8-v20240126,\
mode=rw,size=100,type=pd-balanced

Ubuntu 20.04 LTS

Create a VM using the gcloud compute instances create command. You can update the machine type and any disk specifications before running the command.

gcloud compute instances create VM_NAME \
  --project=PROJECT_ID \
  --zone=LOCATION \
  --machine-type=c2d-standard-112 \
  --scopes="https://www.googleapis.com/auth/cloud-platform" \
  --network-interface=stack-type=IPV4_ONLY,subnet=NETWORK_NAME,nic-type=GVNIC \
  --network-performance-configs=total-egress-bandwidth-tier=TIER_1 \
  --create-disk=auto-delete=yes,boot=yes,device-name=VM_NAME,\
image-project=ubuntu-os-cloud,image-family=ubuntu-2004-lts,mode=rw,size=100,type=pd-balanced

Ubuntu 22.04 LTS

Create a VM using the gcloud compute instances create command. You can update the machine type and any disk specifications before running the command.

gcloud compute instances create VM_NAME \
  --project=PROJECT_ID \
  --zone=LOCATION \
  --machine-type=c2d-standard-112 \
  --scopes="https://www.googleapis.com/auth/cloud-platform" \
  --network-interface=stack-type=IPV4_ONLY,subnet=NETWORK_NAME,nic-type=GVNIC \
  --network-performance-configs=total-egress-bandwidth-tier=TIER_1 \
  --create-disk=auto-delete=yes,boot=yes,device-name=VM_NAME,\
image-project=ubuntu-os-cloud,image-family=ubuntu-2204-lts,mode=rw,size=100,type=pd-balanced

For more information about available options, see the Compute Engine documentation.
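After the create command completes, you can confirm that the VM started successfully. VM_NAME, LOCATION, and PROJECT_ID are the same placeholders used above:

```shell
# Prints RUNNING once the VM has started.
gcloud compute instances describe VM_NAME \
  --zone=LOCATION \
  --project=PROJECT_ID \
  --format="value(status)"
```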

Create a firewall rule allowing SSH

To SSH to your Compute Engine VM, you must first create a firewall rule allowing access to TCP port 22 on your VM.

VMs without public IPs

When you SSH to a VM that doesn't have a public IP address, both the Google Cloud console SSH button and gcloud compute ssh use Identity-Aware Proxy (IAP) to connect.

For these connections, follow the instructions in Create the firewall rule to create a firewall rule allowing ingress from the IAP source range only, which is always 35.235.240.0/20. This enhances security by not exposing port 22 to the broader internet.

VMs with public IPs

If you've assigned a public IP to your Compute Engine VM, the Google Cloud console SSH button might attempt a direct connection, bypassing IAP.

The source IP for this direct connection is not the IAP range, but one of a large pool of Google IP addresses. Allowing this connection requires a broader source address range, for example 0.0.0.0/0 (any source).

If your VM has a public IP, we recommend that you configure your firewall rule to allow SSH from the IAP range (35.235.240.0/20) only. Then use gcloud compute ssh to connect, specifying the --tunnel-through-iap flag.

If you must assign a public IP and connect from the Google Cloud console, specify 0.0.0.0/0 as the value of the source IPv4 range.

Create the firewall rule

Google Cloud console

Create a firewall rule allowing SSH.

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. Click Create firewall rule.

  3. Enter a Name for the rule.

  4. For Network, select the VPC network you created earlier.

  5. Select Ingress as the Direction of traffic, and Allow as the Action on match.

  6. From the Targets drop-down, select All instances in the network.

  7. In the Source IPv4 ranges field, enter 35.235.240.0/20.

  8. From Protocols and ports, select Specified protocols and ports.

  9. Select TCP and enter 22 in the Ports field.

  10. Click Create.

gcloud

Create a firewall rule allowing SSH.

gcloud compute firewall-rules create FIREWALL_RULE_NAME \
  --allow=tcp:22 \
  --network=NETWORK_NAME \
  --source-ranges=35.235.240.0/20 \
  --project=PROJECT_ID
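You can verify the rule's configuration after creating it:

```shell
# Show the rule's network, direction, source ranges, and allowed ports.
gcloud compute firewall-rules describe FIREWALL_RULE_NAME \
  --project=PROJECT_ID
```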

SSH to your Compute Engine VM

Once the firewall rule is created, you can SSH to your VM:

Google Cloud console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. In the instances table, find your instance's row, and click SSH in the column titled Connect.

  3. If prompted to do so, click Authorize to allow the connection.

gcloud

gcloud compute ssh VM_NAME \
  --zone=LOCATION \
  --project=PROJECT_ID \
  --tunnel-through-iap

Install the Lustre client packages

The Lustre client packages are hosted in the lustre-client-binaries project in Artifact Registry.

Configure access to the repository

To configure your VM to install from Artifact Registry, run gcloud beta artifacts print-settings to show the installation commands.

Rocky 8

gcloud beta artifacts print-settings yum \
  --repository=lustre-client-rocky-8 \
  --location=us --project=lustre-client-binaries

Ubuntu 20.04 LTS

gcloud beta artifacts print-settings apt \
  --repository=lustre-client-ubuntu-focal \
  --location=us --project=lustre-client-binaries

Ubuntu 22.04 LTS

gcloud beta artifacts print-settings apt \
  --repository=lustre-client-ubuntu-jammy \
  --location=us --project=lustre-client-binaries

Copy the installation commands that are displayed, then run them on your VM.

Install the Lustre client packages

Follow the instructions to install the Lustre client packages.

Rocky 8

sudo yum -y --enablerepo=lustre-client-rocky-8 install kmod-lustre-client
sudo yum -y --enablerepo=lustre-client-rocky-8 install lustre-client

Ubuntu 20.04 LTS

Run the following commands.

sudo apt update
sudo apt install \
  lustre-client-modules-$(uname -r)/lustre-client-ubuntu-focal
sudo apt install \
  lustre-client-utils/lustre-client-ubuntu-focal

The Lustre client packages are kernel version-specific. If your Ubuntu kernel version changes due to an automatic kernel update, you must re-run these commands to download the appropriate Lustre client packages.

Ubuntu 22.04 LTS

Run the following commands.

sudo apt update
sudo apt install \
  lustre-client-modules-$(uname -r)/lustre-client-ubuntu-jammy
sudo apt install \
  lustre-client-utils/lustre-client-ubuntu-jammy

The Lustre client packages are kernel version-specific. If your Ubuntu kernel version changes due to an automatic kernel update, you must re-run these commands to download the appropriate Lustre client packages.
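To see which kernel you're running and whether matching Lustre client packages are installed, you can run the following on the VM:

```shell
# Print the running kernel release; the installed lustre-client-modules
# package version must match it exactly.
uname -r

# List any installed Lustre client packages (empty if none are installed).
dpkg -l 'lustre-client*' 2>/dev/null || true
```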

Load the Lustre kernel module

After the client packages are installed, run the following command to load the Lustre kernel module:

sudo modprobe lustre
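You can confirm that the module loaded successfully:

```shell
# List loaded Lustre-related kernel modules; an empty result means the
# module failed to load (check dmesg for details).
lsmod | grep -i lustre || echo "Lustre module not loaded"
```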

Configure LNet for gke-support-enabled instances

This section applies only to Compute Engine clients connecting to Managed Lustre instances with --gke-support-enabled specified.

To configure LNet to use accept_port 6988:

  1. Create or edit /etc/modprobe.d/lnet.conf.
  2. Add the following line:

    options lnet accept_port=6988
    
  3. Reboot the VM:

    sudo reboot
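After the VM reboots, you can confirm that LNet picked up the new port. This assumes the lnet module exposes its parameters under /sys/module, which is standard for kernel module parameters:

```shell
# Prints 6988 once the lnet module has loaded with the new setting.
if [ -f /sys/module/lnet/parameters/accept_port ]; then
  cat /sys/module/lnet/parameters/accept_port
else
  echo "lnet module not loaded"
fi
```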
    

Mount a Managed Lustre instance

Mount the Managed Lustre instance using the mount command. The directory to use as the mount point must exist before you run the mount command.

export IP_ADDRESS=IP_ADDRESS
export FS_NAME=FILESYSTEM_NAME
sudo mkdir -p ~/MOUNT_DIR
sudo mount -t lustre ${IP_ADDRESS}:/${FS_NAME} ~/MOUNT_DIR

To retrieve the IP address and file system name for your instance, use the gcloud lustre instances describe command:

gcloud lustre instances describe INSTANCE_NAME \
  --location=ZONE
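To remount the instance automatically after a reboot, you can add an entry to /etc/fstab. This is a sketch; substitute your instance's IP address, file system name, and mount point. The _netdev option delays the mount until the network is up.

```
IP_ADDRESS:/FILESYSTEM_NAME  /mnt/lustre  lustre  defaults,_netdev  0  0
```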

Access your Managed Lustre instance

Your Managed Lustre instance is now mounted on your Compute Engine VM, and you can read from and write to it using standard POSIX syntax, with some exceptions.

Run the following command to check your mounted directory:

sudo lfs df -h ~/MOUNT_DIR

You can test file writes with the following commands:

# Write an 8000 MiB file of zeros, then a 1000 MiB file of random data
sudo dd if=/dev/zero of=~/MOUNT_DIR/bigfile1 bs=1M count=8000
sudo dd if=/dev/urandom of=~/MOUNT_DIR/bigfile2 bs=1M count=1000
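The same write-and-verify flow can be sketched at small scale using a local temporary directory standing in for MOUNT_DIR (the path /tmp/lustre_demo is hypothetical):

```shell
# Write an 8 MiB test file, then read its size back through the file system.
mkdir -p /tmp/lustre_demo
dd if=/dev/zero of=/tmp/lustre_demo/testfile bs=1M count=8 status=none
stat -c %s /tmp/lustre_demo/testfile   # 8 MiB = 8388608 bytes
```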

To confirm disk space usage:

sudo lfs df -h ~/MOUNT_DIR

Unmount the instance

To unmount the Managed Lustre instance, run the following command:

sudo umount ~/MOUNT_DIR

What's next