# Verify performance

This page provides details on how to verify volume performance.
Measure volume performance using Fio
------------------------------------

Use the I/O generator tool, [Fio](https://fio.readthedocs.io/en/latest/fio_doc.html), to measure baseline
performance.
Using Fio
---------

Fio applies a workload that you can specify through a command-line interface or
a configuration file. While it runs, Fio shows a progress indicator with current
throughput and input and output per second (IOPS) numbers. After it ends, a
detailed summary displays.
Fio results example
-------------------

The following example shows a single-threaded, 4k random write job running for
60 seconds, which is a useful way to measure baseline latency. In the following
commands, the `--directory` parameter points to a folder with a mounted
NetApp Volumes share.
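The job below collects the common arguments in a shell variable; the mount point `/netapp` is a placeholder for your own mount, and the long percentile table is elided from the output:

```shell
$ FIO_COMMON_ARGS="--size=10g --fallocate=none --direct=1 --runtime=60 --time_based --ramp_time=5"
$ fio $FIO_COMMON_ARGS --directory=/netapp --ioengine=libaio --rw=randwrite --bs=4k --iodepth=1 --name=nv
nv: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.28
Starting 1 process
nv: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [w(1)][100.0%][w=7856KiB/s][w=1964 IOPS][eta 00m:00s]
nv: (groupid=0, jobs=1): err= 0: pid=1891: Wed Dec 21 14:56:37 2022
  write: IOPS=1999, BW=7999KiB/s (8191kB/s)(469MiB/60001msec); 0 zone resets
    slat (usec): min=4, max=417, avg=12.06, stdev= 5.71
    clat (usec): min=366, max=27978, avg=483.59, stdev=91.34
     lat (usec): min=382, max=28001, avg=495.96, stdev=91.89
    bw ( KiB/s): min= 7408, max= 8336, per=100.00%, avg=8002.05, stdev=140.09, samples=120
    iops        : min= 1852, max= 2084, avg=2000.45, stdev=35.06, samples=120
  ...
Run status group 0 (all jobs):
  WRITE: bw=7999KiB/s (8191kB/s), 7999KiB/s-7999KiB/s (8191kB/s-8191kB/s), io=469MiB (491MB), run=60001-60001msec
```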
Read the following lines for details about the performance results:

- **Latency**: `lat (usec): min=382, max=28001, avg=495.96, stdev=91.89`

  The average latency is 495.96 microseconds (usec), roughly 0.5 ms, which is
  an ideal latency.
- **IOPS**: `iops : min= 1852, max= 2084, avg=2000.45, stdev=35.06, samples=120`

  The preceding example shows an average of 2,000 IOPS. That value is
  expected for a single-threaded job with 0.5 ms latency (`IOPS = 1000 ms / 0.5 ms = 2000`).
- **Throughput**: `bw ( KiB/s): min= 7408, max= 8336, per=100.00%, avg=8002.05, stdev=140.09`

  The throughput average is 8002 KiBps, which is the expected result for 2,000
  IOPS with a block size of 4 KiB (`2000 1/s * 4 KiB = 8,000 KiB/s`).
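As a quick sanity check, the arithmetic these results rely on can be expressed in a few lines of Python (a minimal sketch; the numbers are taken from the example above):

```python
def expected_iops(latency_ms: float, concurrency: int = 1) -> float:
    """Each in-flight I/O completes every latency_ms, so one stream
    delivers 1000 / latency_ms operations per second."""
    return 1000.0 / latency_ms * concurrency

def expected_throughput_kib(iops: float, block_size_kib: float) -> float:
    """Throughput is IOPS multiplied by the block size."""
    return iops * block_size_kib

iops = expected_iops(0.5)                 # 0.5 ms latency, single stream
tput = expected_throughput_kib(iops, 4)   # 4 KiB blocks
print(iops, tput)  # 2000.0 8000.0
```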
### Measure latency

Latency is a fundamental metric for volume performance. It's a result of client
and server capabilities, the distance between the client and the server (your
volume), and the equipment in between. The main component of the metric is
distance-induced latency.

You can ping the IP of your volume to get the round-trip time, which is a rough
estimate of your latency.

Latency is affected by the block size and by whether you are doing read or write
operations. We recommend that you use the following parameters to measure the
baseline latency between your client and a volume:
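Linux clients use the `libaio` engine and Windows clients use `windowsaio`; the mount point `/netapp` and drive `Z:` are placeholders for your own mount:

### Linux

```shell
fio --directory=/netapp \
    --ioengine=libaio \
    --rw=randwrite \
    --bs=4k \
    --iodepth=1 \
    --size=10g \
    --fallocate=none \
    --direct=1 \
    --runtime=60 \
    --time_based \
    --ramp_time=5 \
    --name=latency
```

### Windows

```shell
fio --directory=Z\:\ --ioengine=windowsaio --thread --rw=randwrite --bs=4k --iodepth=1 --size=10g --fallocate=none --direct=1 --runtime=60 --time_based --ramp_time=5 --name=latency
```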
Replace the parameters `rw` (read/write/randread/randwrite) and `bs` (block size)
to fit your workload. Larger block sizes result in higher latency, and reads are
faster than writes. The results can be found in the `lat` row.
### Measure IOPS

IOPS are a direct result of latency and concurrency. Use one of the following
commands based on your client type to measure IOPS:
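For example, a 4k random read job with a queue depth of 32 (the mount point `/netapp` and drive `Z:` are placeholders for your own mount):

### Linux

```shell
fio --directory=/netapp \
    --ioengine=libaio \
    --rw=randread \
    --bs=4k \
    --iodepth=32 \
    --size=10g \
    --fallocate=none \
    --direct=1 \
    --runtime=60 \
    --time_based \
    --ramp_time=5 \
    --name=iops
```

### Windows

```shell
fio --directory=Z\:\ --ioengine=windowsaio --thread --rw=randread --bs=4k --iodepth=32 --size=10g --fallocate=none --direct=1 --runtime=60 --time_based --ramp_time=5 --numjobs=16 --name=iops
```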
Replace the parameters `rw` (read/write/randread/randwrite), `bs` (block size),
and `iodepth` (concurrency) to fit your workload. The results can be found in
the `iops` row.
### Measure throughput

Throughput is IOPS multiplied by the block size. Use one of the following
commands based on your client type to measure throughput:
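For example, a 64k sequential read job with a queue depth of 32 across 16 jobs (the mount point `/netapp` and drive `Z:` are placeholders for your own mount):

### Linux

```shell
fio --directory=/netapp \
    --ioengine=libaio \
    --rw=read \
    --bs=64k \
    --iodepth=32 \
    --size=10g \
    --fallocate=none \
    --direct=1 \
    --runtime=60 \
    --time_based \
    --ramp_time=5 \
    --numjobs=16 \
    --name=throughput
```

### Windows

```shell
fio --directory=Z\:\ --ioengine=windowsaio --thread --rw=read --bs=64k --iodepth=32 --size=10g --fallocate=none --direct=1 --runtime=60 --time_based --ramp_time=5 --numjobs=16 --name=throughput
```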
Replace the parameters `rw` (read/write/randread/randwrite), `bs` (block size),
and `iodepth` (concurrency) to fit your workload. You can only achieve high
throughput using block sizes of 64k or larger and high concurrency.
What's next
-----------

Review [performance benchmarks](/netapp/volumes/docs/performance/performance-benchmarks).

Last updated 2025-08-17 UTC.