[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-07-21。"],[],[],null,["# RDMA RoCE network profile\n=========================\n\nThis page provides an overview of the\n[Remote Direct Memory Access (RDMA)](https://en.wikipedia.org/wiki/Remote_direct_memory_access)\nover Converged Ethernet (RoCE) network profile in Google Cloud.\n\nOverview\n--------\n\nThe RDMA RoCE network profile lets you create a Virtual Private Cloud (VPC)\nnetwork that provides low-latency, high-bandwidth RDMA communication between\nthe GPUs of VMs that are created in the network by using the\n[RoCE v2 protocol](https://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet).\nA VPC network that uses the RoCE network profile is called\nan *RoCE VPC network*.\n\nRoCE VPC networks are useful for running AI workloads. For more\ninformation about running AI workloads in Google Cloud, see\n[AI Hypercomputer overview](/ai-hypercomputer/docs/overview).\n\nThe resource name of an RoCE network profile has the following format\n\u003cvar translate=\"no\"\u003eZONE\u003c/var\u003e`-vpc-roce`---for example `europe-west1-b-vpc-roce`.\nTo view specific network profile names, see\n[List network profiles](/vpc/docs/view-network-profiles#list_network_profiles).\n\nSupported zones\n---------------\n\nThe RoCE network profile is available in a limited set of zones. You can only\ncreate an RoCE VPC network in a zone where the RoCE network\nprofile is available.\n\nTo view the supported zones, see\n[list network profiles](/vpc/docs/view-network-profiles#list_network_profiles).\n\nAlternatively, you can view the supported zones for the GPU machine type\nthat you intend to create in the RoCE VPC network. The RoCE\nnetwork profile is available in the same zones as the supported machine\ntypes described in [Specifications](#roce-vpc-specs). For more information, see\n[GPU availability regions and zones](/compute/docs/gpus/gpu-regions-zones#view-using-table).\n\nSpecifications\n--------------\n\nRoCE VPC networks have the following specifications:\n\n- **NVIDIA ConnectX NICs** . NVIDIA ConnectX NICs appear as `MRDMA` network\n interfaces in Google Cloud.\n\n- **Zonal constraint**. Resources using an RoCE VPC network are\n limited to the same zone as the RoCE network profile associated with the RoCE\n VPC network during the RoCE network creation. This zonal limit\n has the following effects:\n\n - All instances that have network interfaces in an RoCE VPC\n network must be created in the zone that matches the zone of the RoCE\n network profile used by the RoCE VPC network.\n\n - All subnets created in an RoCE VPC network must be located\n in the region that contains the zone of the RoCE network profile used by\n the RoCE VPC network.\n\n- **MRDMA network interfaces only** . RoCE VPC networks only\n support `MRDMA` network interfaces (NICs), which are only available on\n the [A3 Ultra](/compute/docs/accelerator-optimized-machines#a3-ultra-vms),\n [A4](/compute/docs/accelerator-optimized-machines#a4-vms), and\n [A4X](/compute/docs/accelerator-optimized-machines#a4x-vms) machine series.\n\n All non-MRDMA NICs of a virtual machine (VM) instance must be attached to\n a regular VPC network.\n- **8896 byte MTU** . 
Supported and unsupported features
----------------------------------

The following table lists which VPC features are supported
by RoCE VPC networks.

RoCE VPC network multi-NIC considerations
-----------------------------------------

To support workloads that benefit from cross-rail GPU-to-GPU communication, RoCE
VPC networks support VMs that have multiple `MRDMA` NICs in the
network. Each `MRDMA` NIC must be in a unique subnet. Placing two or more
`MRDMA` NICs in the same RoCE VPC network might affect network
performance, including increased latency. Workloads that use `MRDMA` NICs
typically rely on [NCCL](https://developer.nvidia.com/nccl). NCCL attempts to
keep network transfers rail-aligned, even for cross-rail communication. For
example, it uses PXN to copy data through NVLink to a rail-aligned GPU before
transferring it over the network. For a sketch of creating a VM with multiple
`MRDMA` NICs, see the example after the following section.

What's next
-----------

- [Network profiles for specific use cases](/vpc/docs/network-profiles)
- [Create a VPC network for RDMA NICs](/vpc/docs/create-vpc-network-rdma)
- [Cloud NGFW for RoCE VPC networks](/firewall/docs/firewall-for-roce)
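As a complement to the multi-NIC considerations above, the following command is
a hedged sketch of creating an A3 Ultra VM with two `MRDMA` NICs in the RoCE
VPC network created in the earlier example. The instance name, machine type,
`nic-type` values, and the omission of capacity or reservation flags are all
assumptions based on the general `--network-interface` syntax; the
instance-creation documentation for your machine series is authoritative.

```sh
# Sketch: create an A3 Ultra VM with one gVNIC NIC in a regular VPC
# network and two MRDMA NICs in an RoCE VPC network. Each MRDMA NIC
# must be in its own subnet; this assumes example-roce-subnet-1 was
# created the same way as example-roce-subnet-0.
gcloud compute instances create example-a3-ultra-vm \
    --zone=europe-west1-b \
    --machine-type=a3-ultragpu-8g \
    --network-interface=nic-type=GVNIC,network=default,subnet=default \
    --network-interface=nic-type=MRDMA,network=example-roce-net,subnet=example-roce-subnet-0,no-address \
    --network-interface=nic-type=MRDMA,network=example-roce-net,subnet=example-roce-subnet-1,no-address
```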