Bare Metal 解决方案通过区域扩展提供。自 2024 年 2 月起,所有 Bare Metal 解决方案区域都将在非 Google 设施中进行物理托管。由于采用区域扩展模型,裸金属解决方案不遵循其他服务(例如 Compute Engine)使用的传统区域分离模型。 Google Cloud 区域扩展中的每个裸金属解决方案部署都称为 pod。在某些地区,Bare Metal 解决方案资源由多个 pod 提供,但没有要求或预期 pod 必须地理分隔。
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[[["\u003cp\u003eBare Metal Solution deployments, known as pods, do not follow the zonal separation model and lack direct connectivity between them, impacting traditional storage-based disaster recovery options.\u003c/p\u003e\n"],["\u003cp\u003ePlanning for disaster recovery is highly recommended for mission-critical workloads running in the Bare Metal Solution environment, leveraging resources such as the Google Cloud disaster recovery planning guide and Oracle-specific options if applicable.\u003c/p\u003e\n"],["\u003cp\u003eDisaster recovery region selection should consider network latency between regions and any industry-specific data locality regulations.\u003c/p\u003e\n"],["\u003cp\u003eReplication traffic can be isolated from application sessions by provisioning separate Partner Interconnects in each region, terminating into a transit VPC dedicated to replication, ensuring traffic flows over the designated network.\u003c/p\u003e\n"],["\u003cp\u003eInterconnects that terminate at the transit VPC can also be used to access Google Cloud services, such as Cloud Storage, Filestore, or Backup and DR.\u003c/p\u003e\n"]]],[],null,["# Plan for disaster recovery\n==========================\n\nThis page provides information that you can use to plan for disaster recovery\nfor your workloads running in the Bare Metal Solution environment.\n\nBare Metal Solution is delivered from a region extension. As of February 2024,\nall Bare Metal Solution regions are physically hosted in non-Google facilities.\nDue to the region extension model, Bare Metal Solution doesn't follow the\nconventional zonal separation model used by other Google Cloud services,\nsuch as Compute Engine. Each Bare Metal Solution deployment inside of a region\nextension is known as a pod. In some regions, Bare Metal Solution resources are\nserved from multiple pods, but there is no requirement or expectation that pods\nare geographically separated.\n\nIf you're running mission-critical workloads, we recommend that you\nplan for disaster recovery.\n\n**Recommended resources for disaster recovery planning**\n\nWe recommend that you go through the following resources to plan for\ndisaster recovery:\n\n- Plan for disaster recovery (this document)\n- [Google Cloud disaster recovery planning guide](https://cloud.google.com/architecture/dr-scenarios-planning-guide) (provides more guidance that you can use to implement your disaster recovery plan)\n- [Disaster recovery options for Oracle databases workloads](/bare-metal/docs/oracle-db-dr-options) (applicable if you're running Oracle databases workloads)\n\nCross-pod connectivity\n----------------------\n\nPods and region extensions don't have direct connectivity. All the traffic\n(in and out) of your Bare Metal Solution deployment transits over an interconnect\nand through the Google Cloud backbone. There is no supported data path for\nstorage-level replication. This eliminates disaster recovery options based on\nthe storage technologies, such as block-level storage replication or remote\nsnapshot replication.\n\nDisaster recovery region planning\n---------------------------------\n\nYou might typically select a Bare Metal Solution region based on other\nGoogle Cloud services that you are using. However, the disaster recovery\nfor databases typically falls in line with regions used for corresponding\napplications and their integrations. Therefore, consider network latency between\nregions when planning which regions you want to use for disaster recovery.\n\nDepending on your industry, there may be regulatory requirements about data\nlocality that dictate where you can replicate data. Each application has its own\nrequirements, so specific disaster recovery region selection is left to you.\n\nNetworking considerations\n-------------------------\n\n### Isolating traffic for interconnect\n\nIn many cases, you might want to isolate replication traffic from application\nsessions.\n\nTraffic isolation can be achieved by provisioning separate\nPartner Interconnects in each region that terminate into a\ntransit VPC used for replication. The following diagram depicts this type of\nconfiguration.\n\nIn the diagram, the Bare Metal Solution servers in the `us-west2` region use the\n`10.10.10.0/24` network and the Bare Metal Solution servers in the `us-east4`\nregion use the `10.20.20.0/24` network. The user project contains separate VPCs\nfor application and replication traffic, named `Application VPC` and\n`Replication VPC`, respectively. The [BGP advertisements](/bare-metal/docs/bms-setup#set_up_routing_between_and) are configured so that\neach Cloud Router in the `Replication VPC` advertises a route to the\ncross-region Bare Metal Solution network, forcing cross-region traffic to flow\nover the `Replication VPC`. The Cloud Routers in the `Application VPC`\nadvertise a generic `0.0.0.0/0` route, or routes to specific CIDR blocks that\nthe Bare Metal Solution servers must communicate with. In this example,\n`0.0.0.0/0` is used to signify a route that sends traffic to any other\ndestination.\n\nThe application servers and other services that connect from on-premises\ndata centers connect through the `Application VPC`. The instances within the\n`Application VPC` can still communicate with databases running in either\nBare Metal Solution region extension.\n\nThe interconnects that terminate at the transit VPC can also be used to access\nGoogle Cloud services, such as Cloud Storage, Filestore,\nor Backup and DR. This can be achieved by creating the\nFilestore instance in the transit VPC or through the use of\n[Private Service Connect](/vpc/docs/private-service-connect#endpoints)\nendpoints that reside within the transit VPC."]]