You can optimize performance by adjusting the following volume settings:
Increase volume capacity: You can increase the capacity of your Premium,
Extreme, or Standard service level volume to improve the maximum achievable
volume throughput. For Flex service level volumes, increase the capacity of
the storage pool instead.
Upgrade your service level: You can upgrade your Premium service level
volumes to the Extreme service level to improve throughput. To do this, we
recommend that you assign the volume to a different storage pool that uses
the Extreme service level.
Increasing volume capacity and upgrading service levels are both
non-disruptive to I/O workloads in progress on the volume and don't affect
access to the volume in any way.
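As a rough illustration, you can grow a volume from the Google Cloud CLI. The
following is a minimal sketch, assuming a volume named vol1 in us-central1 and
that your gcloud version supports a --capacity flag (taken here to be the new
size in GiB) on the gcloud netapp volumes update command; check the gcloud
reference for the exact flags available to you:

    $ gcloud netapp volumes update vol1 \
          --location=us-central1 \
          --capacity=4096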
Adjust the client
You can improve performance by adjusting the following settings on the client:
Co-locate clients: Latency results are directly impacted by the
capabilities and location of the client. For best results, place the client
in the same region as the volume, or as close to it as possible. Evaluate the
zonal impact by measuring latency from a client in each zone, and use the
zone with the lowest latency.
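For a quick zonal comparison, you can measure round-trip latency to the
volume's mount IP address from a test client in each zone, assuming ICMP
traffic is allowed on your VPC network (the address 10.0.0.4 is a
placeholder):

    $ ping -c 20 10.0.0.4

Compare the average round-trip time reported from each zone and run your
workload from the zone with the lowest value. If ICMP is blocked, mount the
volume and run a small read test from each zone instead.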
Configure Compute Engine network bandwidth: The network capabilities of
Compute Engine virtual machines depend on the machine type used. Typically,
larger instances can drive more network throughput. We recommend that you
select a client virtual machine with an appropriate network bandwidth
capability, select the Google Virtual NIC (gVNIC) network interface, and
enable Tier_1 networking performance. For more information, see the
Compute Engine documentation on network bandwidth.
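As an illustration, the following sketch creates a client VM with gVNIC and
Tier_1 networking enabled. The zone, machine type, and image are placeholder
assumptions, and Tier_1 networking is only available for supported machine
series and sizes:

    $ gcloud compute instances create nfs-client-1 \
          --zone=us-central1-a \
          --machine-type=n2-standard-32 \
          --image-family=rhel-9 \
          --image-project=rhel-cloud \
          --network-interface=nic-type=GVNIC \
          --network-performance-configs=total-egress-bandwidth-tier=TIER_1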
Open multiple TCP sessions: If your application requires high throughput,
you can eventually saturate the single Transmission Control Protocol (TCP)
session that underlies a normal NFS or SMB session. For such cases, increase
the number of TCP sessions that your NFS or SMB connection uses.
Use one of the following tabs to adjust your client based on the type of
client:
Linux
Traditionally, an NFS client uses a single TCP session for all
NFS-mounted file systems that share a storage endpoint. Using the
nconnect mount option
lets you increase the number of supported TCP sessions up to a maximum
of 16.
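For example, a mount that requests 16 TCP sessions might look like the
following; the server IP address, export path, and mount point are
placeholder assumptions:

    $ sudo mount -t nfs -o rw,hard,vers=3,nconnect=16 10.0.0.4:/vol1 /mnt/vol1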
We recommend the following best practices for adjusting your Linux client
to take full advantage of nconnect:
Increase the number of TCP sessions with nconnect: Each
additional TCP session adds a queue for 128 outstanding requests,
improving potential concurrency.
Set the sunrpc.max_tcp_slot_table_entries parameter:
sunrpc.max_tcp_slot_table_entries is a connection-level tuning parameter
that you can modify to control performance. We recommend setting
sunrpc.max_tcp_slot_table_entries to 128 requests per connection and not
surpassing 10,000 slots for all NFS clients within a single project
connecting to NetApp Volumes. To set the sunrpc.max_tcp_slot_table_entries
parameter, add the parameter to your /etc/sysctl.conf file and reload the
parameter file using the sysctl -p command.
Tune the maximum supported value per session to 180: Unlike NFSv3,
NFSv4.1 clients define the relationship between the client and server
in sessions. While NetApp Volumes supports up to 128 outstanding
requests per connection using NFSv3, NFSv4.1 is limited to 180
outstanding requests per session. Linux NFSv4.1 clients default to 64
max_session_slots per session, but you can tune this value as needed.
We recommend changing the maximum supported value per session to 180.
To tune max_session_slots, create a configuration file under
/etc/modprobe.d, as shown in the following example, and then reboot.
Make sure that no quotation marks (" ") appear inline in the file.
Otherwise, the option doesn't take effect.
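The file name nfsclient.conf is an example; any file under /etc/modprobe.d
with a .conf extension works:

    $ echo "options nfs max_session_slots=180" > /etc/modprobe.d/nfsclient.conf
    $ reboot

Use the systool -v -m nfs command to see the current maximum in use by the
client. For the command to work, at least one NFSv4.1 mount must be in
place.

    $ systool -v -m nfs
    {
      Module = "nfs"
      ...
      Parameters:
      ...
        max_session_slots = "63" <-
      ...
    }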
The following NFS nconnect comparison graph demonstrates the impact that
using the nconnect configuration can have on an NFS workload. This
information was captured using Fio with the following settings:
100% read workload
8 KiB block size against a single volume
n2-standard-32 virtual machine using Red Hat 9 OS
6 TiB working set
Using an nconnect value of 16 resulted in five times more performance
than when it wasn't enabled.
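As a rough illustration of the test shape, a Fio invocation consistent with
the settings listed above might look like the following. The mount point,
job count, and per-job file size (eight jobs of 768 GiB, totaling 6 TiB) are
assumptions, not the exact job definition used for the graph:

    $ fio --name=read-test \
          --directory=/mnt/vol1 \
          --rw=read \
          --bs=8k \
          --direct=1 \
          --ioengine=libaio \
          --iodepth=64 \
          --numjobs=8 \
          --size=768G \
          --group_reporting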
Windows
For Windows-based clients, the client can use SMB Multichannel with
Receive Side Scaling (RSS) to open multiple TCP connections. To achieve
this configuration, your virtual machine must have an allocated network
adapter that supports RSS. We recommend setting RSS to a value of four or
eight; however, any value over one should increase throughput.
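On the Windows client, you can check that the network interface reports RSS
capability and adjust how many TCP connections SMB opens per RSS-capable
interface. The following is a minimal sketch using built-in PowerShell
cmdlets; the value 8 is an example:

    PS C:\> Get-SmbClientNetworkInterface
    PS C:\> Set-SmbClientConfiguration -ConnectionCountPerRssNetworkInterface 8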
The following graph displays the difference that using the RSS
configuration can make for an SMB workload. This information was captured
using Fio with the following settings:
100% read workload
8 KiB block size against a single volume
Single n2-standard-32 virtual machine running a Windows 2022 OS
6 TiB working set
Eight jobs were run with only the SMB client RSS option changing between
test executions. Using RSS values of 4, 8, and 16 increased performance
two-fold compared to using a value of 1. Each RSS configuration was run
nine times with a numjobs parameter of 8, and the iodepth parameter was
increased by five with each execution until maximum throughput was reached.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-29 UTC."],[],[],null,["# Optimize performance\n\nThis page provides details on how you can optimize Google Cloud NetApp Volumes\nperformance.\n\nBefore you begin\n----------------\n\nBefore you make changes to your volumes to optimize performance, review\n[performance considerations](/netapp/volumes/docs/performance/performance-considerations).\n\nAdjust volume settings\n----------------------\n\nYou can optimize performance by adjusting the following volume settings:\n\n- **Increase volume capacity**: You can increase the capacity of your Premium,\n Extreme or Standard service level volume to improve maximum achievable volume\n throughput. For volumes of Flex service level, increase storage pools capacity\n instead.\n\n- **Upgrade your service level**: You can upgrade your Premium service level\n volumes to the Extreme service level to improve throughput. We recommend that\n you assign the volume to a different storage pool with a different service\n level.\n\nIncreasing volume capacity and upgrading service levels are both non-disruptive\nto I/O workloads in process on the volume and don't affect access to the volume\nin any way.\n\nAdjust the client\n-----------------\n\nYou can improve performance by adjusting the following settings on the client:\n\n- **Co-locate clients**: latency results are directly impacted by the\n capabilities and location of the client. For best results, place the client\n in the same region as the volume or as close as possible. Test the zonal\n impact by testing latency from a client in each zone and use the zone with\n the lowest latency.\n\n- **Configure Compute Engine network bandwidth** : the network capabilities of\n Compute Engine virtual machines depend on the instance type used. Typically,\n larger instances can drive more network throughput. We recommend that you\n select a client virtual machine with an appropriate network bandwidth\n capability, select the Google Virtual NIC (gVNIC) network interface and\n enable `Tier_1` performance. For more information, see Compute Engine\n documentation on [network bandwidth](/compute/docs/network-bandwidth).\n\n- **Open multiple TCP sessions**: if your application requires high throughput,\n you can eventually saturate the single transmission control protocol (TCP)\n session that underlies a normal NFS and SMB session. For such cases, increase\n the number of TCP sessions your NFS and SMB connection uses.\n\n Use one of the following tabs to adjust your client based on the type of\n client: \n\n ### Linux\n\n Traditionally, an NFS client uses a single TCP session for all\n NFS-mounted file systems that share a storage endpoint. 
Using the\n [`nconnect` mount option](https://man7.org/linux/man-pages/man5/nfs.5.html)\n lets you increase the number of supported TCP sessions up to a maximum\n of 16.\n\n We recommend the following best practices for adjusting your Linux client\n type to fully take advantage of `nconnect`:\n - **Increase the number of TCP sessions with `nconnect`**: Each\n additional TCP session adds a queue for 128 outstanding requests,\n improving potential concurrency.\n\n - **Set `sunrpc.max_tcp_slot_table_entries` parameter** :\n `sunrpc.max_tcp_slot_table_entries` is a connection-level adjustment\n parameter which you can modify to control performance. We recommend\n setting `sunrpc.max_tpc_slot_table_enteries` to 128 requests or per\n connection and not surpassing 10,000 slots for all NFS clients within\n a single project connecting to NetApp Volumes. To set the\n `sunrpc.max_tcp_slot_table_entries` parameter, add the parameter to\n your `/etc/sysctl.conf` file and reload the parameter file using the\n `sysctl -p` command.\n\n - **Tune maximum supported value per session to 180** : Unlike NFSv3,\n NFSv4.1 clients define the relationship between the client and server\n in sessions. While NetApp Volumes supports up to 128\n outstanding requests per connection using NFSv3, NFSv4.1 is limited to\n 180 outstanding requests per session. Linux NFSv4.1 clients default to\n `64 max_session_slots` per session but you can tune this value as\n needed. We recommend changing the maximum supported value per session\n to 180.\n\n To tune `max_session_slots`, create a configuration file under\n `/etc/modprobe.d`. Make sure that no quotation marks (\" \") appear\n inline. Otherwise, the option doesn't take effect. \n\n $ echo \"options nfs max_session_slots=180\" \u003e /etc/modprobe/d/nfsclient/conf\n $ reboot\n\n Use the systool -v -m nfs command to see the current maximum in use\n by the client. For the command to work, at least one NFSv4.1 mount\n must be in place.\n\n $ systool -v -v nfs\n {\n Module = \"nfs\"\n ...\n Parameters:\n ...\n Max_session_slots = \"63\" \u003c-\n ...\n }\n\n The following NFS `nconnect` comparison graph demonstrates the impact\n using the nconnect configuration can have on an NFS workload. This\n information was captured using Fio with the following settings:\n - 100% read workload\n\n - 8 KiB block size against a single volume\n\n - `n2-standard-32` virtual machine using Red Hat 9 OS\n\n - 6 TiB working set\n\n Using an `nconnect` value of 16 resulted in five times more performance\n than when it wasn't enabled.\n\n ### Windows\n\n For Windows-based clients, the client can use [SMB Multichannel](https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn610980(v=ws.11))\n with Receive Side Scaling (RSS) to open multiple TCP connections. To\n achieve this configuration, your virtual machine must have an allocated\n network adapter that supports RSS. We recommend setting RSS to four or\n eight values, however, any value over one should increase throughput.\n\n The following graph displays the difference using the RSS configuration\n can have on an SMB workload. This information was captured using Fio with\n the following settings:\n - 100% read workload\n\n - 8 KiB block size against a single volume\n\n - Single `n2-standard-32` virtual machine running a Windows 2022 OS\n\n - 6 TiB working set\n\n Eight jobs were run with only the SMB client RSS option changing between\n test executions. 
Using RSS values of 4, 8, and 16 increased performance\n two-fold when compared to using a value of 1. Each RSS instance was run\n nine times with a `numjobs` parameter of 8. The `iodepth` parameter was\n increased by five each execution until maximum throughput was reached.\n\nWhat's next\n-----------\n\nRead about [storage pools](/netapp/volumes/docs/configure-and-use/storage-pools/overview)."]]