# Restoring in multiple regions

*Last updated (UTC): 2025-08-25.*

| You are currently viewing version 1.12 of the Apigee hybrid documentation. **This version is end of life.** You should upgrade to a newer version.
For more information, see [Supported versions](/apigee/docs/hybrid/supported-platforms#supported-versions).

This page describes how to *recover* or *restore* Cassandra in multiple regions.

In a multi-region deployment, Apigee hybrid is deployed in multiple geographic locations across different datacenters. If one or more regions fail but healthy regions remain, you can use a healthy region to *recover* the failed Cassandra regions with the latest data.

In the event of a catastrophic failure of all hybrid regions, Cassandra can be *restored* from a backup. Note that if you have multiple Apigee organizations in your deployment, the restore process restores data for **all** of the organizations. In a multi-organization setup, restoring only a specific organization is **not** supported.

This topic describes both approaches to salvaging failed region(s):

- [Recover failed region(s)](#recover-failed-region) - Describes the steps to recover failed region(s) based on a healthy region.
- [Restore failed region(s)](#restore-nongcs) - Describes the steps to restore failed region(s) from a backup. This approach is required only if *all* hybrid regions are impacted.

Recover failed region(s)
------------------------

To recover failed region(s) from a healthy region, perform the following steps:

1. Redirect the API traffic from the impacted region(s) to a healthy region. Plan capacity accordingly to support the traffic diverted from the failed region(s).
2. Decommission the impacted region. For each impacted region, follow the steps outlined in [Decommission a hybrid region](/apigee/docs/hybrid/v1.12/decommission-region). Wait for decommissioning to complete before moving on to the next step.
3. Restore the impacted region.
To restore, create a new region, as described in [Multi-region deployment on GKE, GKE on-prem, and AKS](/apigee/docs/hybrid/v1.12/multi-region).

Restoring from a backup
-----------------------

| **Note:** If you want to preserve an existing setup for troubleshooting and root cause analysis (RCA), delete all of the `org` and `env` components from the Kubernetes cluster *except* the Apigee controller, and retain the cluster. The cluster will contain the existing Apigee datastore (Cassandra), which you can use for troubleshooting. Create a new Kubernetes cluster and then restore Cassandra in the new cluster.

The Cassandra backup can reside either in Cloud Storage or on a remote server, depending on your configuration. To restore Cassandra from a backup, perform the following steps:

| **Important:** If you installed Apigee hybrid into multiple regions, you must check the overrides file for the region you are restoring to make sure `cassandra:hostNetwork` is set to `false` before you perform the restoration. To check the `hostNetwork` setting in the region you are trying to restore, use this command:
|
| ```
| kubectl -n apigee get apigeeds -o=jsonpath='{.items[].spec.components.cassandra.hostNetwork}'
| ```

1. Open the overrides file for the region you wish to restore.
2. Set `cassandra:hostNetwork` to `false`.
3. Apply the overrides file:

   ```
   helm upgrade datastore apigee-datastore/ \
     --install \
     --namespace apigee \
     -f OVERRIDES_FILE
   ```
4. Before continuing, check to make sure `hostNetwork` is set to `false`:

   ```
   kubectl -n apigee get apigeeds -o=jsonpath='{.items[].spec.components.cassandra.hostNetwork}'
   ```
5. 
Delete hybrid from the region you are restoring:

   ```
   helm delete DATASTORE_RELEASE_NAME \
     --namespace apigee
   ```

   Where `DATASTORE_RELEASE_NAME` is the release name of the datastore chart you used to install Cassandra in the region, for example `datastore-region1`.

   | **Note:** Be careful which release of the datastore component you delete. You can see your releases with the command `helm -n apigee ls`.
6. Restore the desired region from a backup. For more information, see [Restoring a region from a backup](/apigee/docs/hybrid/v1.12/restore-cassandra-single-region#restore-gcs).
7. Remove references to the deleted region(s) and add references to the restored region(s) in the `KeySpaces` metadata.
8. Get the Cassandra datacenter name by using the `nodetool status` command:

   ```
   kubectl exec -n apigee -it apigee-cassandra-default-0 -- bash
   nodetool -u APIGEE_JMX_USER -pw APIGEE_JMX_PASSWORD status | grep -i Datacenter
   ```

   where:
   - `APIGEE_JMX_USER` is the username for the Cassandra JMX operations user, used to authenticate and communicate with the Cassandra JMX interface. See [`cassandra:auth:jmx:username`](/apigee/docs/hybrid/v1.12/config-prop-ref#cassandra-auth-jmx-username).
   - `APIGEE_JMX_PASSWORD` is the password for the Cassandra JMX operations user. See [`cassandra:auth:jmx:password`](/apigee/docs/hybrid/v1.12/config-prop-ref#cassandra-auth-jmx-password).
9. Update the `KeySpaces` replication:
   1. [Create a client container](/apigee/docs/api-platform/troubleshoot/playbooks/cassandra/ts-cassandra#create-a-client-container-for-debugging) and connect to the Cassandra cluster through the CQL interface.
   2. 
Get the list of user keyspaces from the CQL interface:

      ```
      cqlsh CASSANDRA_SEED_HOST -u APIGEE_DDL_USER -p APIGEE_DDL_PASSWORD \
        --ssl -e "select keyspace_name from system_schema.keyspaces;" | grep -v system
      ```

      where:
      - `CASSANDRA_SEED_HOST` is the Cassandra multi-region seed host. For most multi-region installations, use the IP address of a host in your first region. See [Configure Apigee hybrid for multi-region](/apigee/docs/hybrid/v1.12/multi-region#configure-apigee-hybrid-for-multi-region) and [`cassandra:externalSeedHost`](/apigee/docs/hybrid/v1.12/config-prop-ref#cassandra-externalseedhost).
      - `APIGEE_DDL_USER` and `APIGEE_DDL_PASSWORD` are the admin username and password for the Cassandra Data Definition Language (DDL) user. The default values are `ddl_user` and `iloveapis123`. You can check the DDL username and password by checking the values stored in the `apigee-datastore-default-creds` secret. You must have admin privileges to check this secret. The following command returns base64-encoded values for the DDL username and password:

        ```
        kubectl get secret apigee-datastore-default-creds -n apigee -o yaml
        ```

      See [`cassandra.auth.ddl.password`](/apigee/docs/hybrid/v1.12/config-prop-ref#cassandra-auth.ddl.password) in the Configuration properties reference and [Command Line Options](https://cassandra.apache.org/doc/4.1/cassandra/tools/cqlsh.html#command-line-options) in the Apache Cassandra cqlsh documentation.
   3. 
For each keyspace, run the following command from the CQL interface to update the replication settings:

      ```
      ALTER KEYSPACE KEYSPACE_NAME WITH replication = {'class': 'NetworkTopologyStrategy', 'DATACENTER_NAME':3};
      ```

      where:
      - `KEYSPACE_NAME` is the name of a keyspace listed in the previous step's output.
      - `DATACENTER_NAME` is the name of the Cassandra datacenter you obtained with the `nodetool status` command in step 8.
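If a restored region has many keyspaces, updating the replication settings one by one is error-prone. The sketch below generates one `ALTER KEYSPACE` statement per keyspace; the datacenter name (`dc-1`) and keyspace names are hypothetical placeholders, so substitute the values returned by `nodetool status` and the cqlsh keyspace query above.

```shell
#!/usr/bin/env bash
# Hypothetical helper: prints one ALTER KEYSPACE statement per keyspace for
# the given datacenter, using replication factor 3 as shown in the step above.
emit_alter_statements() {
  local dc="$1"; shift
  local ks
  for ks in "$@"; do
    printf "ALTER KEYSPACE %s WITH replication = {'class': 'NetworkTopologyStrategy', '%s':3};\n" "$ks" "$dc"
  done
}

# Example with placeholder names; replace with your datacenter and the
# keyspaces returned by the cqlsh query in the previous step.
emit_alter_statements dc-1 kms_org1 kvm_org1 cache_org1
```

You can pipe the generated statements to `cqlsh` using the same connection flags as in the keyspace-listing step, which applies all of the replication updates in one pass.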