Optimize Hyperdisk performance
After you provision your Google Cloud Hyperdisk volumes, you might need to tune
your application and operating system to meet your performance needs.
The following sections describe key elements that you can tune for better
performance and how to apply some of them to specific types of workloads.

For an overview of how Google Cloud Hyperdisk performance works, see
About Hyperdisk performance.

Use a high I/O queue depth
Hyperdisk volumes have higher latency than locally attached disks such as local
SSDs because they are network-attached devices. They can provide very high IOPS
and throughput, but you need to make sure that enough I/O requests are done in
parallel. The number of I/O requests done in parallel is referred to as the I/O
queue depth.
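One way to keep many I/O requests in flight is to issue them from a pool of workers. The sketch below (an illustration only; the `parallel_reads` helper and its parameters are hypothetical, not part of any official tooling) uses a Python thread pool so that up to `queue_depth` reads are outstanding at once. In practice, benchmarking tools such as fio control queue depth directly.

```python
import os
import concurrent.futures

def parallel_reads(path, io_size=16 * 1024, queue_depth=16, total_ios=256):
    """Issue total_ios reads, keeping up to queue_depth requests in flight."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        # Spread reads across the file; wrap around for small files.
        offsets = [(i * io_size) % max(size - io_size, 1) for i in range(total_ios)]
        with concurrent.futures.ThreadPoolExecutor(max_workers=queue_depth) as pool:
            # Each worker thread blocks in os.pread, so up to queue_depth
            # requests are outstanding at any moment.
            bytes_read = sum(pool.map(lambda off: len(os.pread(fd, io_size, off)), offsets))
        return bytes_read
    finally:
        os.close(fd)
```

A thread pool is the simplest way to show the idea; asynchronous interfaces such as libaio or io_uring achieve the same in-flight request count with less overhead.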
The following tables show the recommended I/O queue depth for a given
performance level. The recommendations use a slight overestimate of typical
latency to stay conservative, and assume an I/O size of 16 KB.
| Desired IOPS | Queue depth |
|---|---|
| 500 | 1 |
| 1,000 | 2 |
| 2,000 | 4 |
| 4,000 | 8 |
| 8,000 | 16 |
| 16,000 | 32 |
| 32,000 | 64 |
| 64,000 | 128 |
| 100,000 | 200 |
| 200,000 | 400 |
| 320,000 | 640 |
| Desired throughput (MB/s) | Queue depth |
|---|---|
| 8 | 1 |
| 16 | 2 |
| 32 | 4 |
| 64 | 8 |
| 128 | 16 |
| 256 | 32 |
| 512 | 64 |
| 1,000 | 128 |
| 1,200 | 153 |
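The table values follow from Little's law: queue depth ≈ IOPS × per-request latency. A minimal sketch, assuming a 2 ms per-I/O latency, which appears to be the conservative overestimate underlying the tables (the 2 ms figure and the function names are assumptions, not part of the official guidance):

```python
import math

ASSUMED_LATENCY_MS = 2  # conservative overestimate of per-I/O latency (assumption)

def queue_depth_for_iops(target_iops):
    """Queue depth needed to sustain target_iops (Little's law: L = λ × W)."""
    return math.ceil(target_iops * ASSUMED_LATENCY_MS / 1000)

def queue_depth_for_throughput(target_mb_per_s, io_size_kb=16):
    """Queue depth for a throughput target at a fixed I/O size."""
    iops = target_mb_per_s * 1000 / io_size_kb  # MB/s → KB/s → IOPS
    return queue_depth_for_iops(iops)
```

This reproduces the table values (for example, 8,000 IOPS needs a depth of 16, and 512 MB/s at 16 KB I/Os needs 64); only the final throughput row rounds slightly differently in the published table.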
Ensure you have free CPUs
Reading and writing to Hyperdisk volumes requires CPU cycles from your VM. If
your VM instance is starved for CPU, your application won't be able to manage
the IOPS described earlier. To achieve very high, consistent IOPS levels, you
must have CPUs free to process I/O.
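On Linux, you can check whether CPUs are actually free by sampling `/proc/stat` before and after an I/O test. The helper below is a sketch with a hypothetical name; it computes the idle fraction between two samples of the aggregate `cpu` line:

```python
def cpu_idle_fraction(sample_before, sample_after):
    """Fraction of CPU time spent idle between two /proc/stat 'cpu' lines.

    Fields after the 'cpu' label are, in order:
    user nice system idle iowait irq softirq steal guest guest_nice.
    """
    def totals(line):
        fields = [int(v) for v in line.split()[1:]]
        idle = fields[3] + fields[4]      # idle + iowait
        busy_and_idle = sum(fields[:8])   # exclude guest time, already in user
        return idle, busy_and_idle

    idle0, total0 = totals(sample_before)
    idle1, total1 = totals(sample_after)
    return (idle1 - idle0) / (total1 - total0)
```

Read the first line of `/proc/stat` twice, about a second apart, and pass the two samples in. A low idle fraction while your workload runs suggests the VM is CPU-bound rather than disk-bound.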
Review Hyperdisk performance metrics
You can review disk performance metrics in
Cloud Monitoring,
Google Cloud's integrated monitoring solution. You can use these metrics
to observe the performance of your disks and other VM resources under different
application workloads.
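For example, Compute Engine exposes per-disk I/O counts under the `compute.googleapis.com/instance/disk/` metric prefix. A Monitoring filter along these lines (shown as an illustration) selects read operation counts for VM instances:

```
metric.type = "compute.googleapis.com/instance/disk/read_ops_count"
AND resource.type = "gce_instance"
```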
To learn more, see Reviewing disk performance metrics.

You can also use the Observability page in the console to view the disk
performance metrics.

What's next

Learn about Hyperdisk pricing.
Analyze the provisioned IOPS for Hyperdisk volumes.

Last updated 2025-08-26 UTC.