# Hub-and-spoke network architecture

Last updated (UTC): 2024-07-29.

This document presents three architectural options for setting up a hub-and-spoke network topology in Google Cloud.
The first option uses Network Connectivity Center, the second uses VPC Network Peering, and the third uses Cloud VPN.

An enterprise can separate workloads into individual VPC networks for the purposes of billing, environment isolation, and other considerations. However, the enterprise might also need to share specific resources across these networks, such as a shared service or a connection to an on-premises network. In such cases, it can be useful to place the shared resource in a hub network (referred to as the *routing* network in the rest of this document) and to attach the other VPC networks as spoke networks (referred to as *workload* networks in the rest of this document). The following diagram shows a hub-and-spoke network with two workload VPC networks; more workload VPC networks can be added.

In this example, separate workload VPC networks are used for the workloads of individual business units within a large enterprise. Each workload VPC network is connected to a central routing VPC network that contains shared services and can serve as the sole entry point to the cloud from the enterprise's on-premises network.

Summary of options
------------------

When you choose one of the architectures discussed in this document, consider the relative merits of Network Connectivity Center, VPC Network Peering, and Cloud VPN:

- Network Connectivity Center provides full bandwidth between workload VPC networks and provides transitivity between workload VPC networks.
- VPC Network Peering provides full bandwidth between workload VPC networks and the routing VPC network, but it does not provide transitivity among workload VPC networks. VPC Network Peering supports routing to network virtual appliances (NVAs) in other VPC networks.
- Cloud VPN allows transitive routing, but the total bandwidth (ingress plus egress) between networks is limited to the bandwidths of the tunnels.
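As a rough sketch of that bandwidth constraint, the following shell arithmetic estimates how many tunnels a target bandwidth requires, assuming the documented limit of about 3 Gbps per Cloud VPN tunnel for ingress and egress combined (the target number here is illustrative, not a sizing recommendation):

```shell
# Estimate the number of Cloud VPN tunnels needed for a target bandwidth.
# Assumes ~3 Gbps per tunnel (ingress plus egress combined); actual
# throughput also depends on packet size and other factors.
required_gbps=10
per_tunnel_gbps=3

# Ceiling division: round up so the aggregate meets the requirement.
tunnels=$(( (required_gbps + per_tunnel_gbps - 1) / per_tunnel_gbps ))

echo "Tunnels needed for ${required_gbps} Gbps: ${tunnels}"
```

This prints `Tunnels needed for 10 Gbps: 4`.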
  Additional tunnels can be added to increase bandwidth.

Architecture using Network Connectivity Center
----------------------------------------------

The following diagram shows a hub-and-spoke network that uses [Network Connectivity Center](/network-connectivity/docs/network-connectivity-center/concepts/overview).

Network Connectivity Center has a hub resource that provides control plane management, but the hub is not a data-plane network.

- Network Connectivity Center can connect the networks by using either a star (hub-and-spoke) or a mesh topology. A star topology prevents communication between VPC spokes (workload VPC networks); a mesh topology allows it.
- The routing (hub) VPC network connects to the on-premises network by using Cloud VPN or Cloud Interconnect.
- Dynamic routes can be propagated across VPC networks.
- Private Service Connect routes are transitive between workload VPC networks.
- Private services access routes are transitive between workload VPC networks through the use of [producer spokes](/network-connectivity/docs/network-connectivity-center/concepts/producer-vpc-spokes-overview) for [many Google-provided services](/network-connectivity/docs/network-connectivity-center/concepts/producer-vpc-spokes-supported-services).
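The control-plane setup described above can be sketched with `gcloud`. This is a minimal, hypothetical example: `routing-hub`, `workload-spoke-1`, `workload-vpc-1`, and `PROJECT_ID` are placeholder names, and the exact flags depend on your `gcloud` version.

```shell
# Create the Network Connectivity Center hub (a control-plane resource,
# not a data-plane network). Add --preset-topology=STAR to block
# spoke-to-spoke traffic; the default is a mesh topology.
gcloud network-connectivity hubs create routing-hub

# Attach a workload VPC network as a spoke of the hub.
gcloud network-connectivity spokes linked-vpc-network create workload-spoke-1 \
    --hub=routing-hub \
    --vpc-network=projects/PROJECT_ID/global/networks/workload-vpc-1 \
    --global
```

Repeat the spoke command for each additional workload VPC network.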
  For services whose routes are not transitive, a workaround is to connect the consumer VPC network to the routing VPC network by using Cloud VPN instead of Network Connectivity Center.
- All of the VMs in the connected networks can communicate at the [full bandwidth](/compute/docs/network-bandwidth) of the VMs.
- Each workload VPC network and the routing VPC network has a Cloud NAT gateway for outbound communication with the internet.
- DNS peering and forwarding are set up so that workloads in workload VPC networks can be reached from on-premises.

Architecture using VPC Network Peering
--------------------------------------

The following diagram shows a hub-and-spoke network that uses [VPC Network Peering](/vpc/docs/vpc-peering). VPC Network Peering enables communication using internal IP addresses between resources in separate VPC networks. Traffic stays on Google's internal network and does not traverse the public internet.

- Each workload (spoke) VPC network in this architecture has a peering relationship with a central routing (hub) VPC network.
- The routing VPC network connects to the on-premises network by using Cloud VPN or Cloud Interconnect.
- All of the VMs in the peered networks can communicate at the [full bandwidth](/compute/docs/network-bandwidth) of the VMs.
- VPC Network Peering connections are not transitive. In this architecture, the on-premises and workload VPC networks can exchange traffic with the routing network, but not with each other.
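A peering relationship is configured from both sides. A minimal sketch for one spoke, with placeholder network names (`routing-vpc`, `workload-vpc-1`), might look like this:

```shell
# Peer the routing (hub) VPC with a workload (spoke) VPC. The peering
# becomes active only after both sides are created. Exporting custom
# routes from the hub lets the spoke learn the dynamic routes to
# on-premises; the spoke imports them.
gcloud compute networks peerings create routing-to-workload-1 \
    --network=routing-vpc \
    --peer-network=workload-vpc-1 \
    --export-custom-routes

gcloud compute networks peerings create workload-1-to-routing \
    --network=workload-vpc-1 \
    --peer-network=routing-vpc \
    --import-custom-routes
```

Because peering is not transitive, creating such a pair for every spoke still does not let the spokes reach each other.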
  To provide shared services, either put them in the routing network or connect them to the routing network by using Cloud VPN.
- Each workload VPC network and the routing VPC network has a Cloud NAT gateway for outbound communication with the internet.
- DNS peering and forwarding are set up so that workloads in workload VPC networks can be reached from on-premises.

Architecture using Cloud VPN
----------------------------

The scalability of a hub-and-spoke topology that uses VPC Network Peering is subject to [VPC Network Peering limits](/vpc/docs/quota#vpc-peering). And as noted earlier, VPC Network Peering connections don't allow transitive traffic beyond the two VPC networks that are in a peering relationship. The following diagram shows an alternative hub-and-spoke network architecture that uses [Cloud VPN](/network-connectivity/docs/vpn/concepts/overview) to overcome these limitations of VPC Network Peering.

- IPsec VPN tunnels connect each workload (spoke) VPC network to a routing (hub) VPC network.
- A DNS private zone exists in the routing network, and a DNS peering zone and a private zone exist in each workload network.
- Connections are transitive.
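One leg of this VPN-based design can be sketched with HA VPN, as follows. All names here are hypothetical placeholders; the workload side needs mirror-image resources, and both Cloud Routers also need BGP sessions, which are omitted for brevity.

```shell
# HA VPN gateway and Cloud Router in the routing (hub) VPC.
gcloud compute vpn-gateways create routing-gw \
    --network=routing-vpc \
    --region=us-central1

gcloud compute routers create routing-router \
    --network=routing-vpc \
    --region=us-central1 \
    --asn=64512

# One tunnel toward the HA VPN gateway in a workload VPC
# (workload-1-gw must already exist on the other side).
gcloud compute vpn-tunnels create routing-to-workload-1-tunnel0 \
    --region=us-central1 \
    --vpn-gateway=routing-gw \
    --peer-gcp-gateway=workload-1-gw \
    --router=routing-router \
    --interface=0 \
    --ike-version=2 \
    --shared-secret=SHARED_SECRET
```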
  The on-premises and spoke VPC networks can reach each other through the routing network, although this reachability can be restricted.
- Bandwidth between networks is limited by the total [bandwidth of the tunnels](/network-connectivity/docs/vpn/concepts/overview#network-bandwidth).
- Each workload (spoke) VPC network and the routing VPC network has a Cloud NAT gateway for outbound communication with the internet.
- Because the networks are connected with Cloud VPN, this design does not depend on VPC Network Peering, which does not provide transitive route announcements.
- DNS peering and forwarding are set up so that workloads in workload VPC networks can be reached from on-premises.

Design alternatives
-------------------

Consider the following architectural alternatives for interconnecting resources that are deployed in separate VPC networks in Google Cloud:

**Inter-spoke connectivity using a gateway in the routing VPC network**

To enable inter-spoke communication, you can deploy a network virtual appliance (NVA) or a next-generation firewall (NGFW) in the routing VPC network to serve as a gateway for spoke-to-spoke traffic.

**Multiple Shared VPC networks**

Create a Shared VPC network for each group of resources that you want to isolate at the network level. For example, to separate the resources used for production and development environments, create a Shared VPC network for production and another Shared VPC network for development. Then, peer the two VPC networks to enable inter-VPC network communication.
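The peering step between the two Shared VPC host projects can be sketched as follows (hypothetical project and network names):

```shell
# Run in the production host project.
gcloud compute networks peerings create prod-to-dev \
    --network=prod-shared-vpc \
    --peer-project=dev-host-project \
    --peer-network=dev-shared-vpc

# Run in the development host project.
gcloud compute networks peerings create dev-to-prod \
    --network=dev-shared-vpc \
    --peer-project=prod-host-project \
    --peer-network=prod-shared-vpc
```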
Resources in individual projects for each application or department can use services from the appropriate Shared VPC network.

For connectivity between the VPC networks and your on-premises network, you can use either separate VPN tunnels for each VPC network or separate VLAN attachments on the same Dedicated Interconnect connection.

What's next
-----------

- Deploy a hub-and-spoke network by using the [Terraform example](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/tree/master/fast/stages/2-networking-a-simple).
- Learn about connecting your hub-and-spoke topology to on-premises networks and other clouds in the [Cross-Cloud Network design guide](/architecture/ccn-distributed-apps-design).
- Learn about more design options for [connecting multiple VPC networks](/architecture/best-practices-vpc-design#connecting_multiple_networks).
- Learn about [best practices](/architecture/framework) for building a secure and resilient cloud topology that's optimized for cost and performance.