Optimize performance

This page provides details on how you can optimize Google Cloud NetApp Volumes performance.

Before you begin

Before you make changes to your volumes to optimize performance, review performance considerations.

Adjust volume settings

You can optimize performance by adjusting the following volume settings:

  • Increase volume capacity: You can increase the capacity of your Premium, Extreme, or Standard service level volume to improve its maximum achievable throughput (see the example after this list). Depending on the region or location selected for the Standard service level, increase the storage pool's capacity instead.

  • Upgrade your service level: You can upgrade your Premium service level volumes to the Extreme service level to improve throughput. To do so, assign the volume to a different storage pool that uses the Extreme service level.

Increasing volume capacity and upgrading service levels are both non-disruptive to I/O workloads in process on the volume and don't affect access to the volume in any way.
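
For example, you can increase a volume's capacity with the gcloud CLI, as in the following sketch. The volume name, location, and capacity value are placeholders, and flag names can vary by release; verify them with gcloud netapp volumes update --help before running the command.

  # my-volume, us-central1, and the 4096 GiB target capacity are placeholders.
  $ gcloud netapp volumes update my-volume \
      --location=us-central1 \
      --capacity=4096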

Adjust the client

You can improve performance by adjusting the following settings on the client:

  • Co-locate clients: latency results are directly impacted by the capabilities and location of the client. For best results, place the client in the same region as the volume, or as close to it as possible. Measure latency from a client in each zone and use the zone with the lowest latency.
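
    For a quick, approximate comparison, you can measure round-trip latency from a client in each zone with ping against the volume's mount IP address. The address below is a placeholder:

      # 10.0.0.2 is a placeholder for the volume's mount IP address.
      $ ping -c 20 10.0.0.2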

  • Configure Compute Engine network bandwidth: the network capabilities of Compute Engine virtual machines depend on the instance type used. Typically, larger instances can drive more network throughput. We recommend that you select a client virtual machine with an appropriate network bandwidth capability, select the Google Virtual NIC (gVNIC) network interface and enable Tier_1 performance. For more information, see Compute Engine documentation on network bandwidth.
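
    For example, the following sketch creates a client virtual machine with gVNIC and Tier_1 networking using the gcloud CLI. The instance name, zone, machine type, and image are placeholders; verify the flags with gcloud compute instances create --help.

      # nfs-client-1, the zone, and the image are placeholders. Tier_1
      # networking requires a sufficiently large machine type.
      $ gcloud compute instances create nfs-client-1 \
          --zone=us-central1-a \
          --machine-type=n2-standard-32 \
          --image-family=rhel-9 \
          --image-project=rhel-cloud \
          --network-interface=nic-type=GVNIC \
          --network-performance-configs=total-egress-bandwidth-tier=TIER_1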

  • Open multiple TCP sessions: if your application requires high throughput, you can eventually saturate the single Transmission Control Protocol (TCP) session that underlies a normal NFS or SMB session. In such cases, increase the number of TCP sessions your NFS or SMB connection uses.

    Use one of the following tabs to adjust your client based on the type of client:

    Linux

    Traditionally, an NFS client uses a single TCP session for all NFS-mounted file systems that share a storage endpoint. Using the nconnect mount option lets you increase the number of supported TCP sessions up to a maximum of 16.

    We recommend the following best practices for adjusting your Linux client type to fully take advantage of nconnect:

    • Increase the number of TCP sessions with nconnect: Each additional TCP session adds a queue for 128 outstanding requests, improving potential concurrency.
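
      For example, the following mount command opens 16 TCP sessions for a single NFS mount on a kernel that supports nconnect. The server address, export path, and mount point are placeholders:

      # 10.0.0.2:/vol1 and /mnt/vol1 are placeholders for the volume's export
      # path and the local mount point.
      $ sudo mount -t nfs -o rw,hard,vers=3,nconnect=16 10.0.0.2:/vol1 /mnt/vol1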

    • Set the sunrpc.max_tcp_slot_table_entries parameter: sunrpc.max_tcp_slot_table_entries is a connection-level adjustment parameter that you can modify to control performance. We recommend setting sunrpc.max_tcp_slot_table_entries to 128 requests per connection, and not surpassing 10,000 slots across all NFS clients within a single project connecting to NetApp Volumes. To set the sunrpc.max_tcp_slot_table_entries parameter, add the parameter to your /etc/sysctl.conf file and reload the parameter file using the sysctl -p command.
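
      A minimal sketch of those two steps, assuming the parameter name shown above; verify the exact sysctl name on your distribution (for example, with sysctl -a | grep sunrpc) before applying it:

      # Append the setting to /etc/sysctl.conf, then reload the file.
      $ echo "sunrpc.max_tcp_slot_table_entries=128" >> /etc/sysctl.conf
      $ sysctl -p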

    • Tune maximum supported value per session to 180: Unlike NFSv3, NFSv4.1 clients define the relationship between the client and server in sessions. While NetApp Volumes supports up to 128 outstanding requests per connection using NFSv3, NFSv4.1 is limited to 180 outstanding requests per session. Linux NFSv4.1 clients default to 64 max_session_slots per session but you can tune this value as needed. We recommend changing the maximum supported value per session to 180.

      To tune max_session_slots, create a configuration file under /etc/modprobe.d. Make sure that no quotation marks (" ") appear inside the file; otherwise, the option doesn't take effect. (The quotation marks in the following command are consumed by the shell and aren't written to the file.)

      $ echo "options nfs max_session_slots=180" > /etc/modprobe/d/nfsclient/conf
      $ reboot
      
      Use the systool -v -m nfs command to see the current maximum in use
      by the client. For the command to work, at least one NFSv4.1 mount
      must be in place.
      
      $ systool -v -m nfs
      {
      Module = "nfs"
      …
      Parameters:
      …
      max_session_slots = "64" <-
      …
      }
      

    The following NFS nconnect comparison graph demonstrates the impact that using the nconnect configuration can have on an NFS workload. This information was captured using Fio with the following settings:

    • 100% read workload

    • 8 KiB block size against a single volume

    • n2-standard-32 virtual machine using Red Hat 9 OS

    • 6 TiB working set

    Using an nconnect value of 16 resulted in five times more performance than when it wasn't enabled.

    NFS nconnect comparison using single Red Hat 9 Virtual Machine with an 8 KiB block size.
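
    The following Fio invocation is a rough sketch of a comparable 100% read, 8 KiB test against an NFS mount. The mount path, file size, job count, and queue depth are placeholders, not the exact parameters used to produce the graph:

      # /mnt/vol1 is a placeholder for the NFS mount point; adjust size,
      # numjobs, and iodepth to match your working set and concurrency.
      $ fio --name=read8k --directory=/mnt/vol1 --rw=read --bs=8k \
            --size=100g --numjobs=8 --iodepth=32 --ioengine=libaio \
            --direct=1 --time_based --runtime=300 --group_reporting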

    Windows

    For Windows-based clients, the client can use SMB Multichannel with Receive Side Scaling (RSS) to open multiple TCP connections. To achieve this configuration, your virtual machine must have an allocated network adapter that supports RSS. We recommend setting RSS to a value of four or eight; however, any value over one should increase throughput.
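
    As a quick verification sketch, the following PowerShell commands show whether the client's network adapters are RSS-capable and how many TCP connections SMB Multichannel has opened; they check the configuration rather than change it:

      # Confirm that the network adapter reports RSS support.
      PS> Get-NetAdapterRss
      # Confirm that the SMB client sees the interface as RSS-capable.
      PS> Get-SmbClientNetworkInterface
      # After connecting to the share, list the TCP connections that SMB
      # Multichannel has opened.
      PS> Get-SmbMultichannelConnection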

    The following graph displays the difference that using the RSS configuration can have on an SMB workload. This information was captured using Fio with the following settings:

    • 100% read workload

    • 8 KiB block size against a single volume

    • Single n2-standard-32 virtual machine running a Windows 2022 OS

    • 6 TiB working set

    Eight jobs were run with only the SMB client's RSS option changing between test executions. Using RSS values of 4, 8, and 16 increased performance two-fold compared to using a value of 1. Each RSS setting was tested nine times with a numjobs parameter of 8, and the iodepth parameter was increased by five with each execution until maximum throughput was reached.

    SMB RSS comparison of a single Windows 2022 VM with an 8 KiB block size.

What's next

Read about storage pools.