| **Note:** You are currently viewing version 1.6 of the Apigee hybrid documentation. This version is end of life. You should upgrade to a newer version. For more information, see [Supported versions](/apigee/docs/hybrid/supported-platforms#supported-versions).

Restoring in multiple regions
=============================

*Last updated: 2025-08-28 (UTC).*

This page describes how to restore Cassandra in multiple regions.
In a multi-region deployment, Apigee hybrid is deployed in multiple geographic locations across different datacenters. It is important to note that, if you have multiple Apigee organizations in your deployment, the restore process restores data for **all** the organizations. In a multi-organization setup, restoring only a specific organization is **not** supported.
Restoring Cassandra
-------------------
In a multi-region deployment, there are two possible ways to salvage a failed region. This topic describes the following approaches:

- [Recover failed region(s)](#recover-failed-region): Describes the steps to recover failed region(s) based on a healthy region.
- [Restore failed region(s)](#restore-nongcs): Describes the steps to restore failed region(s) from a backup. This approach is only required if *all* hybrid regions are impacted.
### Recover failed region(s)
To recover failed region(s) from a healthy region, perform the following steps:
1. Redirect the API traffic from the impacted region(s) to the good working region. Plan the capacity accordingly to support the diverted traffic from the failed region(s).
2. Decommission the impacted region. For each impacted region, follow the steps outlined in [Decommission a hybrid region](/apigee/docs/hybrid/v1.6/decomission-region). Wait for decommissioning to complete before moving on to the next step.

3. Restore the impacted region. To restore, create a new region, as described in [Multi-region deployment on GKE, GKE on-prem, and AKS](/apigee/docs/hybrid/v1.6/multi-region).
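After decommissioning an impacted region, you can confirm that its datacenter no longer appears in the Cassandra ring. The following sketch parses `nodetool status` output for datacenter names; the pod name matches the default used elsewhere on this page, and the saved sample output is purely illustrative.

```shell
# On a live cluster, you would capture the ring status like this
# (apigee-cassandra-default-0 is the default Cassandra pod name):
# kubectl exec -n apigee apigee-cassandra-default-0 -- \
#   nodetool -u "${APIGEE_JMX_USER}" -pw "${APIGEE_JMX_PASSWORD}" status > status.txt

# Illustrative sample of `nodetool status` output for a single healthy region:
cat > status.txt <<'EOF'
Datacenter: dc-1
================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load     Tokens  Owns  Host ID                               Rack
UN  10.0.0.11  1.2 GiB  256     ?     11111111-1111-1111-1111-111111111111  ra-1
EOF

# Extract the datacenter names; a decommissioned region must not be listed.
awk '/^Datacenter:/ {print $2}' status.txt
# prints: dc-1
```

If the decommissioned datacenter still appears, wait for the decommission to finish before recreating the region.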
### Restoring from a backup

| **Note:** If you want to preserve an existing setup for troubleshooting and root cause analysis (RCA), you should delete all the `org` and `env` components from the Kubernetes cluster *except* the Apigee controller, and retain the cluster. The cluster will contain the existing Apigee datastore (Cassandra), which you can use for troubleshooting. Create a new Kubernetes cluster and then restore Cassandra in the new cluster.

The Cassandra backup can reside either on Cloud Storage or on a remote server, depending on your configuration.
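As a rough sketch, a Cloud Storage-based restore is typically driven by a `restore` block in the overrides file passed to `apigeectl`. The exact property names depend on your hybrid version and backup configuration, and all values below are placeholders, not working settings:

```yaml
cassandra:
  restore:
    enabled: true
    # Timestamp of the backup snapshot to restore (placeholder value).
    snapshotTimestamp: "20250828180000"
    # Service account with access to the backup bucket (placeholder path).
    serviceAccountPath: "/path/to/backup-sa.json"
```

Consult the backup and restore configuration reference for your hybrid version before using such a block.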
To restore Cassandra from a backup, perform the following steps:
1. Delete the Apigee hybrid deployment from all the regions:
   ```
   apigeectl delete -f overrides.yaml
   ```

2. Restore the desired region from a backup. For more information, see [Restoring a region from a backup](/apigee/docs/hybrid/v1.6/restore-cassandra-single-region#restore-gcs).

3. Remove the references to the deleted region(s) and add references to the restored region(s) in the `KeySpaces` metadata.

4. Get the region name by using the `nodetool status` option:

   ```
   kubectl exec -n apigee -it apigee-cassandra-default-0 -- bash
   nodetool -u ${APIGEE_JMX_USER} -pw ${APIGEE_JMX_PASSWORD} status | grep -i Datacenter
   ```

5. Update the `KeySpaces` replication.

   1. [Create a client container](/apigee/docs/hybrid/v1.6/ts-cassandra#create-a-client-container-for-debugging) and connect to the Cassandra cluster through the CQL interface.

   2. Get the list of user keyspaces from the CQL interface:

      ```
      cqlsh ${CASSANDRA_SEEDS} -u ${CASS_USERNAME} -p ${CASS_PASSWORD} --ssl -e "select keyspace_name from system_schema.keyspaces;" | grep -v system
      ```

   3. For each keyspace, run the following command from the CQL interface to update the replication settings:

      ```
      ALTER KEYSPACE KEYSPACE_NAME WITH replication = {'class': 'NetworkTopologyStrategy', 'REGION_NAME':3};
      ```

      where:

      - `KEYSPACE_NAME` is the name of the keyspace listed in the previous step's output.
      - `REGION_NAME` is the region name obtained in step 4.
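The per-keyspace replication updates can be scripted rather than typed one by one. The sketch below only *generates* the `ALTER KEYSPACE` statements from a keyspace list (such as the output of the cqlsh query above) so you can review them before running them; the keyspace names and region name are illustrative, not taken from a real cluster.

```shell
REGION_NAME="dc-1"  # region name as reported by `nodetool status` (illustrative)

# Illustrative user keyspace list, as returned by the cqlsh query above:
KEYSPACES="kms_hybrid_org cache_hybrid_org quota_hybrid_org"

# Emit one ALTER KEYSPACE statement per keyspace into a reviewable file.
for ks in $KEYSPACES; do
  printf "ALTER KEYSPACE %s WITH replication = {'class': 'NetworkTopologyStrategy', '%s':3};\n" \
    "$ks" "$REGION_NAME"
done > alter.cql

cat alter.cql
```

After reviewing `alter.cql`, you could execute it from the client container with, for example, `cqlsh ${CASSANDRA_SEEDS} -u ${CASS_USERNAME} -p ${CASS_PASSWORD} --ssl -f alter.cql`.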