This page describes how to restore Cassandra in a single region.
In a single-region deployment, Apigee hybrid is deployed in a single data center or region. If you have multiple Apigee organizations in your deployment, the restore process restores data for all of the organizations; in a multi-organization setup, you cannot restore a specific organization.
Restoring a region from a backup
1. Update the Cassandra restore details in the overrides.yaml file (a minimal example stanza is sketched after the property list below):

   namespace
   The namespace for the restore. Use the same namespace as in your original cluster.

   cassandra:hostNetwork
   hostNetwork is required and must always be set to false.

   restore:enabled
   Restore is disabled by default. You must set this property to true.

   restore:serviceAccountPath (SA_JSON_FILE_PATH)
   The path on your file system to the service account you created for backup.

   restore:dbStorageBucket (CLOUD_STORAGE_BUCKET_PATH)
   The path of the Cloud Storage bucket where your backup data is stored, in the format gs://BUCKET_NAME. The gs:// prefix is required.

   restore:cloudProvider (GCP)
   The cloudProvider: "GCP" property is required.

   restore:snapshotTimestamp (TIMESTAMP)
   The timestamp of the backup snapshot to restore. To check which timestamps can be used, go to the dbStorageBucket and look at the files in the bucket. Each file name contains a timestamp value, for example backup_20210203213003_apigee-cassandra-default-0.tgz, where 20210203213003 is the snapshotTimestamp value you would use to restore the backup created at that time.

   backup:enabled
   You must set this property to false if it was previously set to true.
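   A minimal sketch of how these properties fit together in overrides.yaml, assuming the default apigee namespace and standard Cloud Storage backups (the placeholder values and comments are illustrative, not taken from this guide):

   namespace: apigee                                  # use the same namespace as your original cluster
   cassandra:
     hostNetwork: false
     restore:
       enabled: true
       serviceAccountPath: "SA_JSON_FILE_PATH"
       dbStorageBucket: "CLOUD_STORAGE_BUCKET_PATH"   # in the form gs://BUCKET_NAME
       cloudProvider: "GCP"                           # required verbatim, all caps
       snapshotTimestamp: "TIMESTAMP"                 # for example "20210203213003"
     backup:
       enabled: false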
2. If you do not have a clean cluster to start with, follow the Decommission a hybrid region for Helm documentation to bring your existing Hybrid installation into a clean state (you can leave Cert Manager installed). This brings you to the same state as if you had followed the Helm runtime setup guide up to the beginning of Step 11.
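   As a rough sketch only (the decommission guide is authoritative, and the release names shown here are hypothetical; list your actual releases with helm list -n apigee), the cleanup amounts to uninstalling the hybrid Helm releases in roughly the reverse order of installation:

   helm list -n apigee
   helm uninstall ENV_GROUP_RELEASE -n apigee   # apigee-virtualhost release(s), hypothetical name
   helm uninstall ENV_RELEASE -n apigee         # apigee-env release(s), hypothetical name
   helm uninstall ORG_RELEASE -n apigee         # apigee-org release, hypothetical name
   helm uninstall telemetry -n apigee
   helm uninstall redis -n apigee
   helm uninstall ingress-manager -n apigee
   helm uninstall datastore -n apigee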
3. Verify that there are no pods remaining in the Apigee namespaces:

   kubectl get pods -n apigee
   kubectl get pods -n apigee-system
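   If the namespaces are clean, each command returns kubectl's standard empty-result message rather than a pod listing, for example:

   No resources found in apigee namespace.
   No resources found in apigee-system namespace.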
4. If you are using CSI backup, make sure you can see the volumesnapshots that you want to use for the restore process by running:

   kubectl get volumesnapshot -n apigee
5. Install all Hybrid components one by one as described in Step 11 of the installation guide. Note that the apigee-cassandra-restore pod is created as soon as you run the command to install the datastore, but it only enters the running state after you install the apigee-org component. Important: omit the --atomic flag for all upgrade commands on migrated clusters.
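   For example, the datastore step of that install points Helm at your restore overrides; a sketch, assuming the charts sit in the current directory and your settings live in a file named overrides-restore.yaml (both assumptions, adjust to your layout):

   helm upgrade datastore apigee-datastore/ \
     --namespace apigee \
     --atomic \
     -f overrides-restore.yaml

   You can then follow progress with kubectl get pods -n apigee --watch until apigee-cassandra-restore appears and, once apigee-org is installed, reaches the running state.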
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-03 UTC."],[[["\u003cp\u003eThis document outlines the process for restoring Cassandra in a single-region Apigee hybrid deployment, which impacts all organizations within that deployment.\u003c/p\u003e\n"],["\u003cp\u003eBefore initiating the restore, you may need to delete \u003ccode\u003eorg\u003c/code\u003e and \u003ccode\u003eenv\u003c/code\u003e components in your cluster for troubleshooting, or to obtain the overrides files for each organization if multiple organizations exist in the hybrid installation.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eoverrides.yaml\u003c/code\u003e file requires specific updates to enable the Cassandra restore, including setting \u003ccode\u003erestore:enabled\u003c/code\u003e to \u003ccode\u003etrue\u003c/code\u003e, specifying the service account path, cloud storage bucket path, the \u003ccode\u003eGCP\u003c/code\u003e cloud provider, and the desired snapshot timestamp.\u003c/p\u003e\n"],["\u003cp\u003eThe process involves decommissioning the existing hybrid installation to a clean state if needed, installing Hybrid components, and then confirming the \u003ccode\u003eapigeeds\u003c/code\u003e and other pods status.\u003c/p\u003e\n"],["\u003cp\u003eAfter the restoration, you should remove the \u003ccode\u003erestore\u003c/code\u003e configuration and add the \u003ccode\u003ebackup\u003c/code\u003e configuration to the \u003ccode\u003eoverrides-restore.yaml\u003c/code\u003e file to set up backups on the cluster.\u003c/p\u003e\n"]]],[],null,["# Restoring in a single region\n\n| You are currently viewing version 1.12 of the Apigee hybrid documentation. **This version is end of life.** You should upgrade to a newer version. For more information, see [Supported versions](/apigee/docs/hybrid/supported-platforms#supported-versions).\n\nThis page describes how to restore Cassandra in a single region.\n\nIn a single region deployment, Apigee hybrid is deployed in a single data center or a region. If you\nhave multiple Apigee organizations in your deployment, the restore process restores data for all the organizations.\nIn a multi-organization setup, you cannot restore a specific organization.\n| **Note** : Before you begin restoring a single region, consider whether the following prerequisite steps are applicable:\n|\n| - If you want to preserve an existing setup for troubleshooting and root cause analysis (RCA), you should delete all the `org` and `env` components from the Kubernetes cluster *except* the Apigee controller, and then retain the cluster. The cluster will contain the existing Apigee datastore (Cassandra) which you can use for troubleshooting. Create a new Kubernetes cluster and then restore Cassandra in the new cluster.\n| - If your hybrid installation was set up with multiple organizations, get the overrides files for each organization before proceeding with restore in a single region. You can add the restore configuration as described in [Step 3](#step3) to any **one** of the overrides files. 
Do not add the restore configuration to any other overrides file.\n\nRestoring a region from a backup\n--------------------------------\n\n1. Update the Cassandra restore details in the `overrides.yaml` file:\n\n ```actionscript-3\n namespace: YOUR_RESTORE_NAMESPACE # Use the same namespace as in your original cluster.\n cassandra:\n hostNetwork: false\n ...\n restore:\n enabled: true\n serviceAccountPath: \"\u003cvar translate=\"no\"\u003eSA_JSON_FILE_PATH\u003c/var\u003e\"\n dbStorageBucket: \"\u003cvar translate=\"no\"\u003eCLOUD_STORAGE_BUCKET_PATH\u003c/var\u003e\"\n cloudProvider: \"GCP\" # required verbatim \"GCP\" (all caps)\n snapshotTimestamp: \"\u003cvar translate=\"no\"\u003eTIMESTAMP\u003c/var\u003e\"\n ...\n backup:\n enabled: false\n ...\n ```\n\n\n Where:\n\n | **Note:** In case you are using **CSI backup** , please follow the **example restore config** in the [CSI backup and restore](/apigee/docs/hybrid/v1.12/cassandra-csi-backup-restore#example-restore-config) documentation.\n2. In case you do not have a clean cluster to start out with, follow the\n [Decommission a hybrid region for helm](/apigee/docs/hybrid/v1.12/decommission-region#helm)\n documentation to bring your existing Hybrid installation into a clean state\n (you can leave the **Cert Manager** installed). This would bring you to an equal state\n as if you would have followed [Helm runtime setup manual](/apigee/docs/hybrid/v1.12/install-create-cluster)\n until the beginning of Step 11.\n\n3. Verify there are no pods remaining in the Apigee namespaces:\n\n kubectl get pods -n apigee\n kubectl get pods -n apigee-system\n\n4. If you are using CSI backup, make sure that you can see\n the volumesnapshots you want to use for the restoration process by running:\n\n ```\n kubectl get volumesnapshot -n apigee\n \n ```\n5. Install all Hybrid components one by one as described in\n [Step 11](/apigee/docs/hybrid/v1.12/install-helm-charts) on the\n installation manual. Note that the `apigee-cassandra-restore` pod will get\n created once you run the command to install the `datastore`, but it will only\n go into `running` state after you install the `apigee-org` component.\n\n | **Important:** Omit the `‑‑atomic` flag for all `upgrade` commands for [migrated clusters](/apigee/docs/hybrid/v1.12/helm-migration).\n\nSee [Cassandra backup overview]() for more details on Cassandra backup and restore.\n\nVerify the restoration job progress and confirm that `apigeeds` and all the other pods are up:\n\n1. Check `apigeeds`: \n\n ```\n kubectl get apigeeds -n apigee\n ```\n2. Check all other pods: \n\n ```\n kubectl get pods -n apigee\n ```\n\nUpon successful completion of the restore and confirmation that the runtime components are healthy,\nwe recommend configuring a backup on the cluster:\n\n1. Remove the `restore` configuration from the `overrides-restore.yaml` file.\n2. Add the `backup` configuration to the `overrides-restore.yaml` file.\n3. Apply the `backup` configuration with the following command: \n\n ```\n helm upgrade datastore apigee-datastore/ \\\n --namespace apigee \\\n --atomic \\\n -f overrides-restore.yaml\n ```"]]
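A sketch of what the re-enabled backup stanza might look like in overrides-restore.yaml; the field names mirror the restore stanza above, and the schedule value is only an assumed example cron expression:

cassandra:
  backup:
    enabled: true
    serviceAccountPath: "SA_JSON_FILE_PATH"
    dbStorageBucket: "CLOUD_STORAGE_BUCKET_PATH"
    cloudProvider: "GCP"
    schedule: "0 2 * * *"    # assumed example: daily at 02:00 UTC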