Performance benchmarks

This page presents the performance limits of a single Google Cloud NetApp Volumes volume, measured from multiple client virtual machines. Use the information on this page to size your workloads.

Random I/O versus sequential I/O

Workloads that are primarily random in nature can't drive the same throughput as sequential I/O workloads, because each small random operation carries fixed per-request overhead. Random workloads are therefore bound by IOPS rather than bandwidth.

Performance testing

The following test results show performance limits. In these tests, the volume was provisioned with enough capacity that its capacity-based throughput limit didn't constrain the benchmark. Allocating additional capacity to a single volume beyond the point at which these throughput numbers are reached doesn't yield further performance gains.

Performance testing was completed using Fio, an open-source I/O workload generator.

For the performance testing results, be aware of the following considerations:

  • For the Standard, Premium, and Extreme service levels, throughput scales with volume capacity until limits are reached.

  • In some regions or locations, the Standard service level scales throughput with pool capacity rather than volume capacity until limits are reached.

  • IOPS results are purely informational.

  • The test parameters used to produce the following results were chosen to show maximum performance. The results should be treated as an estimate of the maximum throughput achievable for the assigned capacity.

  • Using multiple fast volumes in the same project may be subject to per-project limits.

  • The following performance testing results cover only the NFSv3 and SMB protocols. Other protocols, such as NFSv4.1, weren't used to test NetApp Volumes performance.

Volume throughput limits for NFSv3 access

The following sections provide details on volume throughput limits for NFSv3 access.

64 KiB block size (Sequential I/O)

These results were captured using Fio with the following settings:

  • 64 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Red Hat 9 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • nconnect mount option set to a value of 16 on each host (see the example mount command after this list)

  • 75 TiB volume at the Extreme service level
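
For reference, the following minimal sketch shows what such a mount might look like on a Red Hat 9 client; the server address, export path, and mount point are illustrative placeholders, not values from the test environment:

    # Mount an NFSv3 volume with 16 TCP connections per mount (nconnect=16).
    # 10.0.0.4:/benchvol and /mnt/benchvol are placeholder values.
    sudo mount -t nfs -o vers=3,nconnect=16,rsize=65536,wsize=65536 \
        10.0.0.4:/benchvol /mnt/benchvol

The nconnect option requires Linux kernel 5.3 or later, which Red Hat 9 satisfies.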

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 4,710 MiBps of pure sequential reads and approximately 1,950 MiBps of pure sequential writes with a 64 KiB block size over NFSv3, with mixed read/write workloads falling between these extremes.
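
As a sketch, the pure sequential read data point could be reproduced on each virtual machine with a Fio invocation of roughly this shape; the target directory and runtime are assumptions, not published test parameters:

    # 8 sequential-read jobs, 64 KiB blocks, 128 GiB per job
    # (8 x 128 GiB = 1 TiB working set per VM).
    # Directory and runtime are illustrative assumptions.
    fio --name=seqread-64k --rw=read --bs=64k --direct=1 \
        --ioengine=libaio --iodepth=32 --numjobs=8 --size=128g \
        --directory=/mnt/benchvol --time_based --runtime=300 \
        --group_reporting

Replacing --rw=read with --rw=write yields the pure-write case; the mixed columns use --rw=rw with --rwmixread set to the read percentage.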

Benchmark results for NFS 64 KiB Sequential 6 n2-standard-32 Red Hat 9 VMs

Read/Write mix   100%/0%   75%/25%   50%/50%   25%/75%   0%/100%
Read MiBps          4710      2050      1270       550         0
Write MiBps            0       690      1270      1650      1950

256 KiB block size (Sequential I/O)

These results were captured using Fio with the following settings:

  • 256 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Red Hat 9 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • nconnect mount option set to a value of 16 on each host

  • 75 TiB volume at the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 4,430 MiBps of pure sequential reads and approximately 2,200 MiBps of pure sequential writes with a 256 KiB block size over NFSv3, with mixed read/write workloads falling between these extremes.

Benchmark results for NFS 256 KiB Sequential 6 n2-standard-32 Red Hat 9 VMs

Read/Write mix   100%/0%   75%/25%   50%/50%   25%/75%   0%/100%
Read MiBps          4430      2270      1470       610         0
Write MiBps            0       750      1480      1830      2200

4 KiB block size (Random I/O)

These results were captured using Fio with the following settings:

  • 4 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Red Hat 9 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • nconnect mount option set to a value of 16 on each host

  • 75 TiB volume at the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 340,000 pure random read IOPS and approximately 106,000 pure random write IOPS with a 4 KiB block size over NFSv3, with mixed workloads falling between these extremes.
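
As a sketch, a mixed random data point such as the 75% read and 25% write column can be generated with Fio's random mixed mode; the directory and runtime are again illustrative assumptions:

    # 4 KiB random I/O, 75% reads / 25% writes, 8 jobs per VM.
    # Directory and runtime are illustrative assumptions.
    fio --name=randrw-4k --rw=randrw --rwmixread=75 --bs=4k --direct=1 \
        --ioengine=libaio --iodepth=64 --numjobs=8 --size=128g \
        --directory=/mnt/benchvol --time_based --runtime=300 \
        --group_reporting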

Benchmark results for NFS 4 KiB Random 6 n2-standard-32 Red Hat 9 VMs

Read/Write mix   100%/0%   75%/25%   50%/50%   25%/75%   0%/100%
Read IOPS        340,000   154,800    71,820    28,800         0
Write IOPS             0    51,570    71,820    86,580   106,200

8 KiB block size (Random I/O)

These results were captured using Fio with the following settings:

  • 8 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Red Hat 9 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • nconnect mount option set to a value of 16 on each host

  • 75 TiB volume at the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 238,500 pure random read IOPS and approximately 93,600 pure random write IOPS with an 8 KiB block size over NFSv3, with mixed workloads falling between these extremes.

Benchmark results for NFS 8 KiB Random 6 n2-standard-32 Red Hat 9 VMs

Read/Write mix   100%/0%   75%/25%   50%/50%   25%/75%   0%/100%
Read IOPS        238,500   118,800    60,210    27,180         0
Write IOPS             0    39,690    60,210    81,450    93,600

Volume throughput limits for SMB access

The following sections provide details on volume throughput limits for SMB access.

64 KiB block size (Sequential I/O)

These results were captured using Fio with the following settings:

  • 64 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Windows 2022 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • SMB connection count per RSS network interface client-side option set to a value of 16 on each virtual machine (see the example PowerShell command after this list)

  • 75 TiB volume at the Extreme service level
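
On the Windows clients, this option can be set with the SMB client configuration cmdlet, as sketched below. Assuming your Windows build exposes this parameter, run it in an elevated PowerShell session and verify the result with Get-SmbClientConfiguration:

    # Raise the number of SMB connections per RSS-capable network
    # interface to 16. -Force suppresses the confirmation prompt.
    Set-SmbClientConfiguration -ConnectionCountPerRssNetworkInterface 16 -Force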

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 4,610 MiBps of pure sequential reads and approximately 1,600 MiBps of pure sequential writes with a 64 KiB block size over SMB, with mixed read/write workloads falling between these extremes.
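
On Windows, an equivalent Fio sketch uses the windowsaio engine and a path on the mapped SMB share; the drive letter and runtime are assumptions, and note that Fio requires colons in Windows paths to be escaped:

    # 64 KiB sequential reads over SMB, 8 jobs per VM.
    # Z: is a placeholder drive letter for the mapped share.
    fio --name=seqread-64k --rw=read --bs=64k --direct=1 \
        --ioengine=windowsaio --iodepth=32 --numjobs=8 --size=128g \
        --directory=Z\:\bench --time_based --runtime=300 \
        --group_reporting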

Benchmark results for SMB 64 KiB Sequential 6 n2-standard-32 Windows 2022 VMs

Read/Write mix   100%/0%   75%/25%   50%/50%   25%/75%   0%/100%
Read MiBps          4610      2410      1310       500         0
Write MiBps            0       800      1310      1510      1600

256 KiB block size (Sequential I/O)

These results were captured using Fio with the following settings:

  • 256 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Windows 2022 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • SMB connection count per RSS network interface client-side option set to a value of 16 on each host

  • 75 TiB volume at the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 4,150 MiBps of pure sequential reads and approximately 1,640 MiBps of pure sequential writes with a 256 KiB block size over SMB, with mixed read/write workloads falling between these extremes.

Benchmark results for SMB 256 KiB Sequential 6 n2-standard-32 Windows 2022 VMs

Read/Write mix   100%/0%   75%/25%   50%/50%   25%/75%   0%/100%
Read MiBps          4150      2440      1380       530         0
Write MiBps            0       810      1380      1569      1643

4 KiB block size (Random I/O)

These results were captured using Fio with the following settings:

  • 4 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Windows 2022 OS

  • 1 TiB working set for each virtual machine for a combined total of 6 TiB

  • SMB connection count per RSS network interface client-side option set to a value of 16 on each host

  • 75 TiB volume at the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 351,800 pure random read IOPS and approximately 98,400 pure random write IOPS with a 4 KiB block size over SMB, with mixed workloads falling between these extremes.

Benchmark results for SMB 4 KiB Random 6 n2-standard-32 Windows 2022 VMs

Read/Write mix   100%/0%   75%/25%   50%/50%   25%/75%   0%/100%
Read IOPS        351,810   148,230    75,780    29,540         0
Write IOPS             0    49,360    75,780    88,650    98,370

8 KiB block size (Random I/O)

These results were captured using Fio with the following settings:

  • 8 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Windows 2022 OS

  • 1 TiB working set for each virtual machine for a combined total of 6 TiB

  • SMB connection count per RSS network interface client-side option set to a value of 16 on each host

  • 75 TiB volume at the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 244,600 pure random read IOPS and approximately 77,000 pure random write IOPS with an 8 KiB block size over SMB, with mixed workloads falling between these extremes.

Benchmark results for SMB 8 KiB Random 6 n2-standard-32 Windows 2022 VMs

Read/Write mix   100%/0%   75%/25%   50%/50%   25%/75%   0%/100%
Read IOPS        244,620   122,310    59,130    25,280         0
Write IOPS             0    40,763    59,310    75,960    76,950

What's next

Monitor performance.