Using NFS volume as vSphere datastore hosted by Filestore
=========================================================

Last updated (UTC): 2025-08-11.

You can use Filestore zonal, regional, and enterprise
tier instances as external datastores for [VMware ESXi](https://en.wikipedia.org/wiki/VMware_ESXi)
hosts in Google Cloud VMware Engine.

To do so, create your Filestore instances in regions where both
VMware Engine and Filestore [are available](/about/locations#regions),
and then mount them as external datastores to your existing VMware ESXi hosts
in VMware Engine.

VMware Engine offers the following vSphere storage options:

- [VMware vSAN](https://www.vmware.com/products/vsan.html). This includes the storage that comes with each VMware Engine node.
- External NFS storage. This includes the following options:
  - [**Filestore instances**](/filestore/docs/service-tiers) used as a vSphere datastore.
  - [**Google Cloud NetApp Volumes**](/netapp/volumes/docs/discover/overview) service instances used as a vSphere datastore.

Why external datastores for VMware Engine?
------------------------------------------

VMware Engine vSAN provides high-performance virtual storage for
VMs running in VMware Engine. The VMware Engine
service uses hardware nodes with local [NVMe](https://en.wikipedia.org/wiki/NVM_Express)
solid-state drives (SSDs), managed by vSAN, to offer a virtual
infrastructure for VMware VMs.
If you want to scale only the storage resources
in your cluster, you must purchase an entire node, including compute and
networking capabilities that you might not need. This limitation of
vSAN-based [hyper-converged infrastructure (HCI)](https://en.wikipedia.org/wiki/Hyper-converged_infrastructure)
creates demand to scale storage independently of other resources.

With external NFS datastores, you can scale storage independently of compute
resources while relying on VMware Engine for all of your VMware
workloads.

High Scale and Enterprise tier instances are VMware certified for use as
VMware Engine datastores and are available in all
[VMware Engine regions](/about/locations#regions).

Feature limitations
-------------------

The following limitations apply:

- Available only for Filestore High Scale and Enterprise tier instances. Basic SSD and Basic HDD tier instances are not supported.
- Crash-consistent [snapshot](/filestore/docs/snapshots) support is available in Filestore Enterprise tier instances only.
- [Backup](/filestore/docs/backups) support is available for both High Scale and Enterprise tiers.
- Copy offload ([VAAI](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-vsphere-storage-api-array-integration-white-paper.pdf)) is not available.
- You can't mount Filestore instances that are connected with direct peering to VMware Engine.
  For more information, see [Network configuration and IP resource requirements](/filestore/docs/network-ip-requirements).

Protocol support
----------------

The [NFSv3 protocol](/filestore/docs/overview) is supported.

Networking
----------

Filestore and VMware Engine services are connected
through [private service access (PSA)](/vpc/docs/private-services-access).
Network charges resulting from storage access within a region don't apply.

Before you begin
----------------

The steps in this document assume that you have done the following:

- Earmarked a `/26` CIDR range for the Google Cloud VMware Engine service network to be used for external NFS storage.

Service subnets
---------------

When you create a private cloud, VMware Engine creates additional service subnets
(for example, `service-1`, `service-2`, `service-3`). Service subnets are
intended for appliance or service deployment scenarios, such as storage, backup
and disaster recovery, or media streaming, and provide high-scale, linear
throughput and packet processing for even the largest private clouds. VM
communication across a service subnet travels from the VMware ESXi host directly
into the Google Cloud networking infrastructure, enabling high-speed
communication.

NSX-T gateway and distributed firewall rules don't apply to any service subnet.

Configuring service subnets
---------------------------

Service subnets don't have a CIDR allocation on initial creation. Instead, you
must specify a non-overlapping CIDR range and prefix for each service subnet
using the VMware Engine console or API.

The first usable address in the range becomes the gateway address. To allocate
a CIDR range and prefix, edit one of the service subnets.

Service subnets can be updated if CIDR requirements change.
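As a quick sketch of that addressing rule, the following uses Python's standard `ipaddress` module against a hypothetical `10.200.0.0/26` service subnet (a placeholder range, not one assigned by the service) to derive the gateway and the addresses left for workloads:

```python
import ipaddress

def service_subnet_plan(cidr: str):
    """Derive the gateway and usable workload address count for a service subnet.

    Per the rule above, the first usable address in the CIDR range becomes
    the gateway; the remaining usable addresses are left for appliances/VMs.
    """
    net = ipaddress.ip_network(cidr, strict=True)
    hosts = list(net.hosts())               # excludes network and broadcast addresses
    gateway = hosts[0]                      # first usable address -> gateway
    workload_addresses = hosts[1:]          # everything else is available for workloads
    return str(gateway), len(workload_addresses)

# Hypothetical /26 range earmarked for external NFS storage.
print(service_subnet_plan("10.200.0.0/26"))  # -> ('10.200.0.1', 61)
```

This is planning arithmetic only; the VMware Engine console performs the actual allocation and gateway assignment.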
However, modifying
an existing service subnet's CIDR can disrupt network availability for
VMs attached to that service subnet.

Add the reserved CIDR allocations for the service subnets you defined
in the [VMware Engine portal](/vmware-engine/docs/howto-access-portal) to the list of
imported clients in your network's [VPC peering connection](/vpc/docs/using-vpc-peering#update-peer-connection).

If you don't, NFS mount attempts fail with an error similar to the following in
`vmkernel.log`:

    2022-09-23T04:58:14.266Z cpu23:2103354 opID=be2a0887)NFS: 161: Command: (mount)
    Server: (10.245.17.21) IP: (10.245.17.21) Path: (/vol-g-shared-vmware-002) Label:
    (NFS) Options: (None)
    ...
    2022-09-23T04:58:14.270Z cpu23:2103354 opID=be2a0887)NFS: 194: NFS mount
    10.245.17.21:/vol-g-shared-vmware-002 failed: The mount request was denied by the
    NFS server. Check that the export exists and that the client is permitted to
    mount it.

Create and manage Filestore instances
-------------------------------------

To learn how to use the Google Cloud console to create and manage a Filestore
instance, see [Create an instance](/filestore/docs/creating-instances).

To learn how to import the reserved CIDR allocations you created for your service
subnets, see [Update a peering connection](/vpc/docs/using-vpc-peering#update-peer-connection).

To mount your Filestore NFS datastores, reach out to VMware Engine (GCVE) support.
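As a rough pre-flight check for the "mount request was denied" failure, you can confirm that the Filestore instance's IP address falls inside at least one of the CIDR ranges imported into the peering connection. This is a sketch, not an official diagnostic; the IP and ranges below are placeholders to replace with your own values:

```python
import ipaddress

# Hypothetical values: substitute your Filestore instance IP and the CIDR
# ranges actually imported into the VPC peering connection.
FILESTORE_IP = "10.245.17.21"
IMPORTED_RANGES = ["10.245.17.0/24", "10.200.0.0/26"]

def is_importable(filestore_ip: str, imported_ranges: list[str]) -> bool:
    """Return True if the Filestore IP is covered by an imported range.

    If this returns False, ESXi hosts can't reach the export and mounts
    are denied by the NFS server.
    """
    addr = ipaddress.ip_address(filestore_ip)
    return any(addr in ipaddress.ip_network(r) for r in imported_ranges)

print(is_importable(FILESTORE_IP, IMPORTED_RANGES))  # -> True
```

A `True` result only rules out this one misconfiguration; export policies and firewall rules can still block the mount.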
After the NFS datastore is mounted to all hosts in a given cluster and becomes
available, you can use the vCenter console to provision VMs on the external
datastore and to view metrics and logs for the I/O operations performed
against it.

If you're interested in this feature, contact your account team or Google Cloud support.

What's next
-----------

- Learn more about [Filestore](/filestore/docs/overview).
- [Compare the relative advantages of block, file, and object storage](/architecture/storage-advisor#review_the_storage_options).
- [Review the storage options for High Performance Computing (HPC) workloads in Google Cloud](/architecture/parallel-file-systems-for-hpc#storage-options-for-hpc-workloads-in-google-cloud).