# Instance performance

*Last updated (UTC): 2025-08-18.*

Summary:

- This page describes Filestore's expected performance across service tiers and capacities, along with the factors that can influence performance.
- Performance scales linearly with capacity: for example, doubling an instance's capacity from 1 TiB to 2 TiB also doubles its expected performance.
- To maximize NFS performance, especially in single- or few-client scenarios, increase the number of TCP connections with the `nconnect` mount option; the recommended number of connections varies by service tier.
- For zonal, regional, and enterprise instances, use at least four client VMs to fully utilize the underlying Filestore cluster.
- The `fio` tool can be used to benchmark read and write throughput and IOPS for basic tier instances on Linux; it is not recommended for the zonal, regional, and enterprise tiers.

This page describes the performance limits for Filestore instances, along with recommended performance settings and testing options.

Each Filestore service tier provides a different level of performance that can vary due to factors such as the use of caching, the number of client VMs, the [machine type](/compute/docs/machine-types) of the client VMs, and the workload tested.

The following table lists the maximum performance you can
achieve when you set the minimum and maximum capacity for each service tier.

All table values are estimated limits and are not guaranteed. For information about custom performance settings and limits, see [custom performance limits](/filestore/docs/custom-performance#custom-performance-limits).

Performance scaling
-------------------

Performance scales linearly with capacity, within the performance limits listed in the preceding table. For example, if you double your enterprise instance's capacity from 1 TiB to 2 TiB, the performance limit of the instance doubles from 12,000/4,000 read/write IOPS to 24,000/8,000 read/write IOPS.

In single- and few-client scenarios, you must increase the number of TCP connections with the [`nconnect`](https://man7.org/linux/man-pages/man5/nfs.5.html) mount option to achieve maximum NFS performance.

For specific service tiers, we recommend specifying the following number of connections between the client and server:

In general, the larger the file share capacity and the fewer the connecting client VMs, the more performance you gain by specifying additional connections with `nconnect`.

Custom performance
------------------

Set custom performance to configure performance according to your workload needs, independent of the specified capacity. You can either specify an IOPS per TiB ratio or set a fixed number of IOPS. For details, see [Custom performance](/filestore/docs/custom-performance).

Recommended client machine type
-------------------------------

We recommend using a Compute Engine [machine type](/compute/docs/machine-types), such as `n2-standard-8`, that provides egress bandwidth of at least 16 Gbps. This egress bandwidth allows the client to achieve approximately 16 Gbps of read bandwidth for cache-friendly workloads.
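As a sketch of this client setup, the following commands create a client VM with the recommended machine type and mount a share with `nconnect`. This is illustrative only: the instance name, zone, share path, and IP address (`10.0.0.2`) are placeholders, and the `nconnect`, `rsize`, and `wsize` values are example settings to adapt to the recommendations for your service tier.

```shell
# Hedged sketch -- names, zone, IP address, and option values are placeholders.
# Create a client VM with a machine type that provides >= 16 Gbps egress.
gcloud compute instances create nfs-client \
    --zone=us-central1-a \
    --machine-type=n2-standard-8

# On the client VM: mount the share with multiple TCP connections (nconnect)
# and large read/write transfer sizes.
sudo mkdir -p /mnt/filestore
sudo mount -t nfs -o hard,async,nconnect=7,rsize=1048576,wsize=1048576 \
    10.0.0.2:/vol1 /mnt/filestore
```

These commands require a Google Cloud project and root access on the client, so treat them as a template rather than something to run verbatim.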
For additional context, see [Network bandwidth](/compute/docs/network-bandwidth).

Linux client mount options
--------------------------

We recommend using the following NFS mount options, especially the `hard` mount option, `async`, and the `rsize` and `wsize` options, to achieve the best performance on Linux client VM instances. For more information about NFS mount options, see [nfs](https://linux.die.net/man/5/nfs).

Optimize NFS read throughput with the `read_ahead_kb` parameter
---------------------------------------------------------------

The NFS `read_ahead_kb` parameter specifies the amount of data, in kilobytes, that the Linux kernel prefetches during a sequential read operation. Subsequent read requests can then be served directly from memory, which reduces latency and improves overall performance.

For Linux kernel versions `5.4` and later, the Linux NFS client uses a default `read_ahead_kb` value of 128 KB. We recommend increasing this value to 20 MB (20,480 KB) to improve sequential read throughput.

After you successfully mount the file share on the Linux client VM, you can use the following script to manually adjust the `read_ahead_kb` value:

    mount_point=MOUNT_POINT_DIRECTORY
    device_number=$(stat -c '%d' "$mount_point")
    ((major = ($device_number & 0xFFF00) >> 8))
    ((minor = ($device_number & 0xFF) | (($device_number >> 12) & 0xFFF00)))
    sudo bash -c "echo 20480 > /sys/class/bdi/$major:$minor/read_ahead_kb"

Where:

*`MOUNT_POINT_DIRECTORY`* is the path to the directory where the file share is mounted.

**Note:** You can't use this script directly with GKE if the pod is not running as the root user.
To set the `read_ahead_kb` parameter, use an [init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) with root permissions or a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/).

Single and multiple client VM performance
-----------------------------------------

Filestore's scalable service tiers are performance-optimized for multiple client VMs, not a single client VM.

For zonal, regional, and enterprise instances, at least four client VMs are needed to take advantage of full performance. This ensures that all of the VMs in the underlying Filestore cluster are fully utilized.

For added context, the smallest scalable Filestore cluster has four VMs. Each client VM communicates with just one Filestore cluster VM, regardless of the number of NFS connections per client specified with the [`nconnect`](https://man7.org/linux/man-pages/man5/nfs.5.html) mount option. If you use a single client VM, read and write operations are performed against only a single Filestore cluster VM.

Improve performance across Google Cloud resources
-------------------------------------------------

Operations that span multiple Google Cloud resources, such as copying data from Cloud Storage to a Filestore instance with the gcloud CLI, can be slow. To help mitigate performance issues, try the following:

- Ensure that the Cloud Storage bucket, client VM, and Filestore instance all reside in the same [region](/filestore/docs/regions).

  [Dual-regions](/storage/docs/locations#location-dr) provide the most performant option for data stored in Cloud Storage. If you use this option, ensure that the other resources reside in one of the single regions contained in the dual-region.
  For example, if your Cloud Storage data resides in `us-central1,us-west1`, ensure that your client VM and Filestore instance reside in `us-central1`.

- For a point of reference, verify the performance of a VM with a Persistent Disk (PD) attached, and compare it to the performance of a Filestore instance.

  - If the PD-attached VM performs similarly to or slower than the Filestore instance, this might indicate a performance bottleneck unrelated to Filestore. To improve the baseline performance of your non-Filestore resources, you can adjust the gcloud CLI properties associated with parallel composite uploads. For more information, see [How tools and APIs use parallel composite uploads](/storage/docs/parallel-composite-uploads#behavior).

    **Note:** The gcloud CLI has no built-in support for throttling requests. Experiment with the requested values, because optimal values vary based on a number of factors, including network speed, number of CPUs, and available memory.

  - If the performance of the Filestore instance is notably slower than the PD-attached VM, try spreading the operation over multiple VMs.

    - This helps to improve the performance of read operations from Cloud Storage.

    - For zonal, regional, and enterprise instances, at least four client VMs are needed to take advantage of full performance. This ensures that all of the VMs in the underlying Filestore cluster are fully utilized. For more information, see [Single and multiple client VM performance](/filestore/docs/performance#single-multiple-performance).

What's next
-----------

- [Test performance](/filestore/docs/testing-performance)
- [Troubleshoot performance-related issues](/filestore/docs/troubleshooting)
- [Scale capacity](/filestore/docs/scale)
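As a closing aside on the `read_ahead_kb` script shown earlier: its bit arithmetic follows the Linux convention of packing the major device number into bits 8-19 of `st_dev` and the minor number into bits 0-7 and 20-31. A small, self-contained sanity check, assuming the example value `0x803` (which decodes to major 8, minor 3, as for `/dev/sda3`):

```shell
# Hedged sanity check of the major/minor decoding used by the read_ahead_kb
# script. Linux packs the major number into bits 8-19 of st_dev and the
# minor number into bits 0-7 and 20-31; 0x803 is an assumed example value.
device_number=$((0x803))   # /dev/sda3: major 8, minor 3
((major = ($device_number & 0xFFF00) >> 8))
((minor = ($device_number & 0xFF) | (($device_number >> 12) & 0xFFF00)))
echo "$major:$minor"   # prints 8:3
```

If the decoded pair doesn't match the device shown by `lsblk` for your mount, the write to `/sys/class/bdi/$major:$minor/read_ahead_kb` would likely target the wrong backing-device entry.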