# Test instance performance

Last updated (UTC): 2025-08-18.

This page discusses performance testing for Filestore instances.

Overview
--------

If you are using Linux, you can use the [Flexible IO Tester (fio)](https://linux.die.net/man/1/fio) tool to benchmark read throughput, write throughput, read IOPS, and write IOPS for basic, regional, zonal, and enterprise [tier](/filestore/docs/service-tiers) instances.

You can test the performance of basic instances using a single client VM.
We don't recommend using a single client VM to test regional, zonal, or enterprise instances: these scale-out service tiers are optimized for multiple client VMs, and a single client usually can't achieve the maximum cluster IOPS or throughput.

For more information, see [Single and multiple client VM performance](/filestore/docs/performance#single-multiple-performance).

Before you start
----------------

Mount the Filestore file share you want to test on all client VMs. Depending on the service tier, this can be one or several client VMs. For detailed instructions and mounting options, see [Mounting file shares on Compute Engine clients](/filestore/docs/mounting-fileshares).

Make sure to specify the [`nconnect`](https://man7.org/linux/man-pages/man5/nfs.5.html) mount option for increased NFS performance. The recommended number of connections between the client and server depends on the service tier.

You can optimize the NFS read throughput by adjusting the `read_ahead_kb` parameter value.
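The two tunings above can be sketched together. Everything in this sketch is illustrative rather than an official recommendation: the server address `10.0.0.2:/share`, the mount point, the `nconnect=7` count, and the `1024` KiB read-ahead value are placeholders, and the privileged commands are shown commented out:

```shell
# Illustrative sketch: mount a Filestore share with nconnect, then raise
# the kernel read-ahead for that mount. The server address, share name,
# nconnect count, and 1024 KiB value are placeholders, not official
# recommendations. Privileged commands are commented out.
MOUNT_POINT="${MOUNT_POINT:-/mnt/nfs}"

# sudo mount -o rw,nconnect=7 10.0.0.2:/share "$MOUNT_POINT"

# The read_ahead_kb knob lives under the BDI (backing device info) sysfs
# entry keyed by the mount point's device number:
read_ahead_path() {
  echo "/sys/class/bdi/0:$(stat -c '%d' "$1")/read_ahead_kb"
}

# sudo bash -c "echo 1024 > $(read_ahead_path "$MOUNT_POINT")"
```

The new value takes effect immediately for subsequent reads but does not persist across reboots or remounts.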
For more information, see [Optimize the NFS read throughput with `read_ahead_kb` parameter](/filestore/docs/performance#read-ahead-kb).

Test performance with a single client VM
----------------------------------------

Use the following scenarios to perform tests on basic instances. You can run the commands directly from your command line.

- Maximum write throughput for basic instances smaller than 1 TiB:

      fio --ioengine=libaio --filesize=4G --ramp_time=2s \
          --runtime=5m --numjobs=16 --direct=1 --verify=0 --randrepeat=0 \
          --group_reporting --directory=/mnt/nfs \
          --name=write --blocksize=1m --iodepth=64 --readwrite=write

- Maximum read throughput:

      fio --ioengine=libaio --filesize=32G --ramp_time=2s \
          --runtime=5m --numjobs=16 --direct=1 --verify=0 --randrepeat=0 \
          --group_reporting --directory=/mnt/nfs --buffer_compress_percentage=50 \
          --name=read --blocksize=1m --iodepth=64 --readwrite=read

- Maximum write throughput:

      fio --ioengine=libaio --filesize=32G --ramp_time=2s \
          --runtime=5m --numjobs=16 --direct=1 --verify=0 --randrepeat=0 \
          --group_reporting --directory=/mnt/nfs --buffer_compress_percentage=50 \
          --name=write --blocksize=1m --iodepth=64 --readwrite=write

- Maximum read IOPS:

      fio --ioengine=libaio --filesize=32G --ramp_time=2s \
          --runtime=5m --numjobs=16 --direct=1 --verify=0 --randrepeat=0 \
          --group_reporting --directory=/mnt/nfs --buffer_compress_percentage=50 \
          --name=randread --blocksize=4k --iodepth=256 --readwrite=randread

- Maximum write IOPS:

      fio --ioengine=libaio --filesize=32G --ramp_time=2s \
          --runtime=5m --numjobs=16 --direct=1 --verify=0 --randrepeat=0 \
          --group_reporting --directory=/mnt/nfs --buffer_compress_percentage=50 \
          --name=randwrite --blocksize=4k --iodepth=256 --readwrite=randwrite

Test performance with multiple client VMs
-----------------------------------------

To achieve the maximum performance for zonal, regional, and enterprise instances, use
multiple client VMs.

We recommend using eight client VMs per 1 TiB for instances between 1 and 9.75 TiB. For instances between 10 and 100 TiB, use eight client VMs per 10 TiB.

1. Start the fio server on all client VMs. Fio uses port `8765` to communicate, so this port must be open in your firewall policy.

       fio --server

2. Select one client VM to orchestrate the fio run. Create a [fio job file](https://fio.readthedocs.io/en/latest/fio_doc.html#job-file-format) on that client VM:

       cat << EOL > /tmp/fio_job.conf
       [global]
       ioengine=libaio
       ramp_time=2s
       runtime=3m
       time_based
       direct=1
       verify=0
       randrepeat=0
       group_reporting
       buffer_compress_percentage=50
       directory=\${TMP_DIR}
       create_only=\${CREATE_ONLY}

       [read-throughput]
       blocksize=1048576
       numjobs=8
       readwrite=read
       filesize=100M

       [write-throughput]
       blocksize=1048576
       numjobs=8
       readwrite=write
       filesize=100M

       [read-iops]
       blocksize=4k
       iodepth=64
       readwrite=randread
       filesize=1GB

       [write-iops]
       blocksize=4k
       iodepth=64
       readwrite=randwrite
       filesize=1GB

       EOL

3. Create a `hosts.list` file that contains the IP addresses or DNS names of the fio client VMs:

       cat << EOL > /tmp/hosts.list
       <Client 1 IP/DNS>
       <Client 2 IP/DNS>
       ...
       <Client N IP/DNS>
       EOL

4. Create the test dataset in a temporary directory on the client VM where you created the job file. Replace `MOUNT_POINT_DIRECTORY` with the directory where the file share is mounted:

       export TMP_DIR=$(mktemp -d MOUNT_POINT_DIRECTORY/XXXXX)
       chmod 777 ${TMP_DIR}
       export CREATE_ONLY=1
       fio --client=/tmp/hosts.list \
           --section=read-throughput --section=read-iops /tmp/fio_job.conf

5. Run benchmarks from the client VM where you created the job file:

   - Maximum read throughput:

         export CREATE_ONLY=0
         fio --client=/tmp/hosts.list --section=read-throughput /tmp/fio_job.conf

   - Maximum write throughput:

         export CREATE_ONLY=0
         fio --client=/tmp/hosts.list --section=write-throughput /tmp/fio_job.conf

   - Maximum read IOPS:

         export CREATE_ONLY=0
         fio --client=/tmp/hosts.list --section=read-iops /tmp/fio_job.conf

   - Maximum write IOPS:

         export CREATE_ONLY=0
         fio --client=/tmp/hosts.list --section=write-iops /tmp/fio_job.conf

6. After you finish testing, stop the fio servers on all client VMs and delete the temporary directory:

       rm -rf ${TMP_DIR}
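Step 1 notes that port `8765` must be open in your firewall policy. If the client VMs run on Compute Engine, a VPC firewall rule along these lines can open it; the rule name, network, and source range below are placeholders, not values prescribed by this guide:

```shell
# Placeholder values: adjust the rule name, network, and source range
# to match your environment before running.
gcloud compute firewall-rules create allow-fio-server \
    --network=default \
    --allow=tcp:8765 \
    --source-ranges=10.0.0.0/8
```

Scope the source range as narrowly as your client subnet allows.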
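fio has no remote-shutdown command, so the server cleanup in step 6 has to terminate the `fio --server` process on each client. One way to script that, assuming passwordless SSH to every host in `hosts.list` (the helper name and the `pkill` pattern are this sketch's own choices, not part of fio):

```shell
# stop_fio_servers HOSTS_FILE [RUNNER]
# Runs pkill against every host listed in HOSTS_FILE. RUNNER defaults to
# ssh; pass "echo" for a dry run that only prints the commands.
stop_fio_servers() {
  local hosts_file="${1:-/tmp/hosts.list}"
  local runner="${2:-ssh}"
  local host
  while read -r host; do
    [ -n "$host" ] || continue            # skip blank lines
    "$runner" "$host" "pkill -f 'fio --server' || true"
  done < "$hosts_file"
}
```

After the servers are stopped, `rm -rf ${TMP_DIR}` needs to run only once, from any client that has the share mounted, because the temporary directory lives on the file share itself.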