This page provides an overview of restoring Cassandra in Apigee hybrid.
Why use restore?
You can use backups to restore Apigee infrastructure from the ground up in the event of
catastrophic failures, such as irrecoverable data loss in your Apigee hybrid instance caused by a disaster.
Restoration takes your data from the backup location and loads it into a new Cassandra
cluster with the same number of nodes. No data is taken from the old Cassandra cluster.
The goal of the restoration process is to bring an Apigee hybrid installation back to a
previously operational state using backup data from a snapshot.
The use of backups to restore is not recommended for the following scenarios:
Cassandra node failures.
Accidental deletion of data like apps, developers, and api_credentials.
One or more regions going down in a multi-region hybrid deployment.
The Apigee Cassandra deployment and operational architecture provide redundancy and fault tolerance within a single region.
In most cases, with the recommended multi-region production implementation of hybrid, you can recover from a region failure
by rebuilding the failed region from another live region using the region decommissioning and expansion procedures,
instead of restoring from a backup.
Before you begin implementing a restore from a Cassandra backup, be aware of the following:
Downtime: There will be downtime for the duration of the restoration.
Data loss: There will be data loss between the last valid backup and the time the
restoration is complete.
Restoration time: Restoration time depends on the size of the data and cluster.
Cherry-picking data: You cannot select only specific data to restore. The restore process
applies the entire backup you select.
Note: When restoring an Apigee hybrid installation from a backup, make sure the backup file was created
from the same hybrid version as the installation. For example, use a backup file created from Apigee hybrid 1.5.x
only to restore an Apigee hybrid 1.5.x installation. If there is a version mismatch between the backup file and the
hybrid installation, the restoration might not work, and Apigee does not support issues that arise from such a restoration.
Multi-region restores
If you installed Apigee hybrid into multiple regions, you must check the overrides file
for the region you are restoring to make sure that cassandra:hostNetwork is set
to false before you perform the restoration. For more information, see
Restoring in multiple regions.
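You can script this check if you prefer. The following is a minimal sketch, assuming your per-region overrides file is named overrides.yaml (a placeholder name); it only reports the current value, it does not change it.
# Confirm the hostNetwork setting in the overrides file for the region you
# are restoring. In the overrides file, the setting is nested under the
# cassandra block and should read:
#
#   cassandra:
#     hostNetwork: false
#
grep -n "hostNetwork" overrides.yaml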
Prerequisites
Check that all of the following prerequisites pass. Investigate any prerequisite failures before
proceeding with restoration. A sketch that combines these checks into a single script follows the list of checks below.
Verify all Cassandra pods are up and running with the following command.
kubectl get pods -n apigee -l app=apigee-cassandra
Your output should look something like the following example:
NAME READY STATUS RESTARTS AGE
apigee-cassandra-default-0 1/1 Running 0 14m
apigee-cassandra-default-1 1/1 Running 0 13m
apigee-cassandra-default-2 1/1 Running 0 11m
Verify the Cassandra statefulset shows all pods are running with the following command.
kubectl get sts -n apigee -l app=apigee-cassandra
Your output should look something like the following example:
NAME READY AGE
apigee-cassandra-default 3/3 15m
Verify the ApigeeDatastore resource is in a running state with the following command.
kubectl get apigeeds -n apigee
Your output should look something like the following example:
NAME STATE AGE
default running 16m
Verify all Cassandra PVCs are in Bound status with the following command.
kubectl get pvc -n apigee -l app=apigee-cassandra
Your output should look something like the following example:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cassandra-data-apigee-cassandra-default-0 Bound pvc-a14184e7-8745-4b30-8069-9d50642efe04 10Gi RWO standard-rwo 17m
cassandra-data-apigee-cassandra-default-1 Bound pvc-ed129dcb-4706-4bad-a692-ac7c78bad64d 10Gi RWO standard-rwo 15m
cassandra-data-apigee-cassandra-default-2 Bound pvc-faed0ad1-9019-4def-adcd-05e7e8bb8279 10Gi RWO standard-rwo 13m
Verify all Cassandra PVs are in Bound status with the following command.
kubectl get pv -n apigee
Your output should look something like the following example:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-a14184e7-8745-4b30-8069-9d50642efe04 10Gi RWO Delete Bound apigee/cassandra-data-apigee-cassandra-default-0 standard-rwo 17m
pvc-ed129dcb-4706-4bad-a692-ac7c78bad64d 10Gi RWO Delete Bound apigee/cassandra-data-apigee-cassandra-default-1 standard-rwo 16m
pvc-faed0ad1-9019-4def-adcd-05e7e8bb8279 10Gi RWO Delete Bound apigee/cassandra-data-apigee-cassandra-default-2 standard-rwo 14m
Verify the Apigee Controller resource is in Running status with the following command.
kubectl get pods -n apigee-system -l app=apigee-controller
Your output should look something like the following example:
NAME READY STATUS RESTARTS AGE
apigee-controller-manager-856d9bb7cb-cfvd7 2/2 Running 0 20m
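If you prefer to run the prerequisite checks in one pass, the following minimal sketch wraps the commands above in a single script. It assumes the default apigee and apigee-system namespaces and the label selectors shown above; adjust them if your installation differs, and review the output manually against the expected states described in each step.
#!/usr/bin/env bash
# Minimal sketch: run the restore prerequisite checks from this page in one pass.
set -euo pipefail

echo "--- Cassandra pods (expect STATUS Running) ---"
kubectl get pods -n apigee -l app=apigee-cassandra

echo "--- Cassandra statefulset (expect all replicas READY) ---"
kubectl get sts -n apigee -l app=apigee-cassandra

echo "--- ApigeeDatastore (expect STATE running) ---"
kubectl get apigeeds -n apigee

echo "--- Cassandra PVCs (expect STATUS Bound) ---"
kubectl get pvc -n apigee -l app=apigee-cassandra

echo "--- Cassandra PVs (expect STATUS Bound) ---"
kubectl get pv -n apigee

echo "--- Apigee controller pod (expect STATUS Running) ---"
kubectl get pods -n apigee-system -l app=apigee-controller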
How to restore?
Cassandra's restoration steps differ slightly depending on whether your Apigee hybrid is
deployed in a single region or multiple regions. For the detailed restoration steps,
see the following documentation:
Restoring in a single region
Restoring in multiple regions
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-26 UTC."],[[["\u003cp\u003eThis documentation outlines how to restore Cassandra in Apigee hybrid using backups in the event of catastrophic failures like data loss, not for node failures or accidental data deletions.\u003c/p\u003e\n"],["\u003cp\u003eRestoring from a backup involves replacing an entire Cassandra cluster with backup data, and it is intended to return an Apigee hybrid installation to a previous state, but it will incur downtime and data loss.\u003c/p\u003e\n"],["\u003cp\u003eBefore starting a restore, confirm that all Cassandra pods, statefulsets, PVCs, PVs, and the ApigeeDatastore and Apigee Controller resources are running and in their correct status.\u003c/p\u003e\n"],["\u003cp\u003eRestoration steps differ based on whether the Apigee hybrid deployment is in a single region or multiple regions, with specific guides provided for each scenario.\u003c/p\u003e\n"],["\u003cp\u003eIt is crucial that the version of the Apigee hybrid installation matches the version of the backup file being used for the restoration, as version mismatches are not supported and can cause the restoration to fail.\u003c/p\u003e\n"]]],[],null,["# Cassandra restore overview\n\n| You are currently viewing version 1.11 of the Apigee hybrid documentation. **This version is end of life.** You should upgrade to a newer version. For more information, see [Supported versions](/apigee/docs/hybrid/supported-platforms#supported-versions).\n\nThis page provides an overview of restoring Cassandra in Apigee hybrid.\n\nWhy use restore?\n----------------\n\nYou can use [backups](/apigee/docs/hybrid/v1.11/cassandra-backup-overview) to restore Apigee infrastructure from the ground up in the event of\ncatastrophic failures, such as irrecoverable data loss in your Apigee hybrid instance from a disaster.\nRestoration takes your data from the backup location and restores the data into a new Cassandra\ncluster with the same number of nodes. 
No cluster data is taken from the old Cassandra cluster.\nThe goal of the restoration process is to bring an Apigee hybrid installation back to a\npreviously operational state using backup data from a snapshot.\n\nThe use of backups to restore is not recommended for the following scenarios:\n\n- Cassandra node failures.\n- Accidental deletion of data like `apps`, `developers`, and `api_credentials`.\n- One or more regions going down in a multi-region hybrid deployment.\n\nApigee Cassandra deployments and operational architecture take care of redundancy and fault tolerance for a single region.\nIn most cases, the recommended multi-region production implementation of hybrid means that a region failure can be recovered from\nanother live region using [region decommissioning and expansion procedures]()\ninstead of restoring from a backup.\n\nBefore you begin implementing a restore from a Cassandra backup, be aware of the following:\n\n- **Downtime:** There will be downtime for the duration of the restoration.\n- **Data loss:** There will be data loss between the last valid backup and the time the restoration is complete.\n- **Restoration time:** Restoration time depends on the size of the data and cluster.\n- **Cherry-picking data:** You cannot select only specific data to restore. Restoration restores the entire backup you select.\n\n| **Note:** When restoring an Apigee hybrid installation from backup, ensure that the backup file is created from the same hybrid version as the installation. For example, if you have a backup file created from Apigee hybrid 1.5.x, use it to restore only an Apigee hybrid 1.5.x installation. If there is a version mismatch between the backup file and the hybrid installation, the restoration might not work and Apigee does not support any issues that arise because of such restoration.\n\nMulti-region restores\n---------------------\n\nIf you installed Apigee hybrid into multiple regions, you must check the overrides file\nfor the region you are restoring to make sure the `cassandra:hostNetwork` is set\nto `false` before you perform the restoration. For more information, see\n[Restoring in multiple regions](./restore-cassandra-multi-region).\n\nPrerequisites\n-------------\n\n\nCheck all the following prerequisites are successful. Investigate any prerequisite failures before\nproceeding with restoration.\n\n1. Verify all Cassandra pods are up and running with the following command. \n\n ```\n kubectl get pods -n apigee -l app=apigee-cassandra\n ```\n\n\n Your output should look something like the following example: \n\n ```\n NAME READY STATUS RESTARTS AGE\n apigee-cassandra-default-0 1/1 Running 0 14m\n apigee-cassandra-default-1 1/1 Running 0 13m\n apigee-cassandra-default-2 1/1 Running 0 11m\n exampleuser@example hybrid-files %\n \n ```\n2. Verify the Cassandra statefulset shows all pods are running with the following command. \n\n ```\n kubectl get sts -n apigee -l app=apigee-cassandra\n ```\n\n\n Your output should look something like the following example: \n\n ```\n NAME READY AGE\n apigee-cassandra-default 3/3 15m\n \n ```\n3. Verify the ApigeeDatastore resource is in a *running* state with the following command. \n\n ```\n kubectl get apigeeds -n apigee\n ```\n\n\n Your output should look something like the following example: \n\n ```\n NAME STATE AGE\n default running 16m\n \n ```\n4. Verify all Cassandra PVCs are in *Bound* status with the following command. 
\n\n ```\n kubectl get pvc -n apigee -l app=apigee-cassandra\n ```\n\n\n Your output should look something like the following example: \n\n ```\n NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE\n cassandra-data-apigee-cassandra-default-0 Bound pvc-a14184e7-8745-4b30-8069-9d50642efe04 10Gi RWO standard-rwo 17m\n cassandra-data-apigee-cassandra-default-1 Bound pvc-ed129dcb-4706-4bad-a692-ac7c78bad64d 10Gi RWO standard-rwo 15m\n cassandra-data-apigee-cassandra-default-2 Bound pvc-faed0ad1-9019-4def-adcd-05e7e8bb8279 10Gi RWO standard-rwo 13m\n \n ```\n5. Verify all Cassandra PVs are in Bound status with the following command. \n\n ```\n kubectl get pv -n apigee\n ```\n\n\n Your output should look something like the following example: \n\n ```\n NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE\n pvc-a14184e7-8745-4b30-8069-9d50642efe04 10Gi RWO Delete Bound apigee/cassandra-data-apigee-cassandra-default-0 standard-rwo 17m\n pvc-ed129dcb-4706-4bad-a692-ac7c78bad64d 10Gi RWO Delete Bound apigee/cassandra-data-apigee-cassandra-default-1 standard-rwo 16m\n pvc-faed0ad1-9019-4def-adcd-05e7e8bb8279 10Gi RWO Delete Bound apigee/cassandra-data-apigee-cassandra-default-2 standard-rwo 14m\n \n ```\n6. Verify the Apigee Controller resource is in Running status with the following command. \n\n ```\n kubectl get pods -n apigee-system -l app=apigee-controller\n ```\n\n\n Your output should look something like the following example: \n\n ```\n NAME READY STATUS RESTARTS AGE\n apigee-controller-manager-856d9bb7cb-cfvd7 2/2 Running 0 20m\n \n ```\n\nHow to restore?\n---------------\n\nCassandra's restoration steps differ slightly depending on whether your Apigee hybrid is\ndeployed in a single region or multiple regions. For the detailed restoration steps,\nsee the following documentation:\n\n- [Restoring in a single region](/apigee/docs/hybrid/v1.11/restore-cassandra-single-region)\n- [Restoring in a multi-region](/apigee/docs/hybrid/v1.11/restore-cassandra-multi-region)"]]