# Scheduling backups in a remote server

This page describes how to schedule backups for Cassandra without Cloud Storage. With this method, backups are stored on a remote server that you specify rather than in a Cloud Storage bucket. Apigee uses SSH to communicate with the remote server.

You must schedule the backups as `cron` jobs. Once a backup schedule has been applied to your hybrid cluster, a Kubernetes backup job runs periodically in the runtime plane according to that schedule. The job triggers a backup script on each Cassandra node in your hybrid cluster that collects all the data on the node, creates a compressed archive file of the data, and sends the archive to the server specified in your `overrides.yaml` file.
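**Note:** Make sure there is enough space on the backup server's file system for the backups, and adjust the backup frequency to avoid unnecessarily filling the allotted storage space. Apigee does not dictate a retention policy for the backup files; you may want to create a retention policy appropriate to your installation. Also note that applying the backup configuration to an existing cluster triggers a rolling restart of the Cassandra pods.

**Warning:** This Apigee hybrid version has a known issue ([388608440](/apigee/docs/release/known-issues#388608440)) affecting backups that use the `HYBRID` or `GCP` cloud providers. See the [Cassandra troubleshooting guide](/apigee/docs/api-platform/troubleshoot/playbooks/cassandra/ts-cassandra#ki-388608440-hybrid-ssh-backup-111) to check whether your setup is affected and for the steps needed for resolution.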
The following steps include common examples of how to complete specific tasks, such as creating an SSH key pair. Use the methods that are appropriate for your installation.
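The procedure has the following parts:

- Set up the server and SSH
- Set the schedule and destination for backup

### Set up the server and SSH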
Designate a Linux or Unix server for your backups. This server must be reachable over SSH from your Apigee hybrid runtime plane, and it must have enough storage for your backups.
Set up an SSH server on that machine, or verify that it already has a secure SSH server configured. For security purposes, make sure your SSH server is up to date.
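For example, on many systemd-based Linux distributions you can confirm that an SSH daemon is running with a command along these lines (the exact service name varies by distribution):

```
# Illustrative check on a systemd-based host; the service may be named ssh or sshd.
sudo systemctl status sshd
```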
Create an SSH key pair and store the private key file in a path that is accessible from your hybrid runtime plane. You must use an empty passphrase for your key pair; otherwise, the backup will fail. For example:
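```
ssh-keygen -t rsa -b 4096 -C exampleuser@example.com
Enter file in which to save the key (/Users/exampleuser/.ssh/id_rsa): $APIGEE_HOME/hybrid-files/certs/ssh_key
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ssh_key
Your public key has been saved in ssh_key.pub
```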
Where exampleuser@example.com is a string. Any string that follows `-C` in the `ssh-keygen` command becomes a comment included in the newly created SSH key. The input can be any string, but if you use an account name in the form exampleuser@example.com, you can quickly identify which account goes with the key.
On the backup server, create a user account named `apigee`. Make sure the new `apigee` user has a home directory under `/home`.
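One common way to do this on a Linux backup server (a sketch; adjust to your distribution's user-management tooling):

```
# Create the apigee user with a home directory at /home/apigee.
sudo useradd -m -d /home/apigee -s /bin/bash apigee
```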
On the backup server, create an `.ssh` directory in the new `/home/apigee` directory.
Copy the public key (`ssh_key.pub` in the previous example) into a file named `authorized_keys` in the new `/home/apigee/.ssh` directory. For example:
```
cd /home/apigee
mkdir .ssh
cd .ssh
vi authorized_keys
```
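Alternatively, if the public key file has already been copied to the backup server, you can append it to `authorized_keys` and tighten the permissions that the SSH daemon expects. This is a minimal sketch that assumes `ssh_key.pub` was copied to `/home/apigee`:

```
# Append the public key to the apigee user's authorized_keys file.
cat /home/apigee/ssh_key.pub >> /home/apigee/.ssh/authorized_keys

# Restrict permissions so the SSH daemon accepts the key files.
chmod 700 /home/apigee/.ssh
chmod 600 /home/apigee/.ssh/authorized_keys
chown -R apigee:apigee /home/apigee/.ssh
```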
On your backup server, create a backup directory within the `/home/apigee/` directory. The backup directory can be any directory, as long as the `apigee` user has access to it. For example:
```
cd /home/apigee
mkdir cassandra-backup
```
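Because backup archives can be large, it can be useful to confirm that the file system holding the backup directory has enough free space, for example:

```
# Show available space on the file system that holds the backup directory.
df -h /home/apigee/cassandra-backup
```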
Test the connection. You need to make sure that your Cassandra pods can connect to your backup server using SSH:
Log in to the shell of your Cassandra pod. For example:
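```
kubectl exec -it -n APIGEE_NAMESPACE APIGEE_CASSANDRA_POD -- /bin/bash
```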
Where APIGEE_CASSANDRA_POD is the name of a Cassandra pod. Change this to the name of the pod you want to connect from.
Connect to your backup server by SSH, using the private SSH key mounted in the Cassandra pod and the IP address of the server:
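```
ssh -i /var/secrets/keys/key apigee@BACKUP_SERVER_IP
```

**Note:** You may see a warning at this point saying that the server's fingerprint is unrecognized and asking whether you want to continue. You can continue at that prompt and verify the SSH configuration.

### Set the schedule and destination for backup

You set the schedule and destination for backups in your `overrides.yaml` file. Add the following parameters to your `overrides.yaml` file:

```yaml
cassandra:
  backup:
    enabled: true
    keyFile: "PATH_TO_PRIVATE_KEY_FILE"
    server: "BACKUP_SERVER_IP"
    storageDirectory: "/home/apigee/BACKUP_DIRECTORY"
    cloudProvider: "HYBRID" # required verbatim "HYBRID" (all caps)
    schedule: "SCHEDULE"
```

For example, with Helm:

```yaml
cassandra:
  backup:
    enabled: true
    keyFile: "private.key"  # path relative to the apigee-datastore directory
    server: "34.56.78.90"
    storageDirectory: "/home/apigee/cassbackup"
    cloudProvider: "HYBRID"
    schedule: "0 2 * * *"
```

With `apigeectl`:

```yaml
cassandra:
  backup:
    enabled: true
    keyFile: "/home/exampleuser/apigee-hybrid/hybrid-files/service-accounts/private.key"
    server: "34.56.78.90"
    storageDirectory: "/home/apigee/cassbackup"
    cloudProvider: "HYBRID"
    schedule: "0 2 * * *"
```

Where: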
- `cloudProvider`: For a backup to Cloud Storage, set this property to `GCP`, for example `cloudProvider: "GCP"`. For a backup to a remote server, set it to `HYBRID`, for example `cloudProvider: "HYBRID"`.
- `backup:schedule` (SCHEDULE): The time at which the backup starts, specified in standard crontab syntax. Times are interpreted in the local time zone of the Kubernetes cluster. Default: `0 2 * * *`
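The `schedule` value uses the five standard crontab fields (minute, hour, day of month, month, day of week). A couple of illustrative values:

```yaml
# minute hour day-of-month month day-of-week
schedule: "0 2 * * *"    # every day at 02:00, cluster-local time (the default)
schedule: "0 */6 * * *"  # every six hours, on the hour
```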
Apply the backup configuration to the storage scope of your cluster:
With Helm:

```
helm upgrade datastore apigee-datastore/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  --atomic \
  -f OVERRIDES_FILE.yaml
```

With `apigeectl`:

```
$APIGEECTL_HOME/apigeectl apply -f OVERRIDES_FILE.yaml --datastore
```

Where OVERRIDES_FILE is the path to the overrides file you just edited.

Verify the backup job. For example:

```
kubectl get cronjob -n apigee
```

```
NAME                      SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
apigee-cassandra-backup   33 * * * *   False     0        <none>          94s
```

### Launch a manual backup

Backup jobs are triggered automatically according to the cron schedule set in [cassandra.backup.schedule](/apigee/docs/hybrid/v1.11/config-prop-ref#cassandra-backup-schedule) in your `overrides.yaml` file. However, you can also initiate a backup job manually if needed, using the following command:

```
kubectl create job -n APIGEE_NAMESPACE --from=cronjob/apigee-cassandra-backup MANUAL_BACKUP_JOB_NAME
```

Where MANUAL_BACKUP_JOB_NAME is the name of the manual backup job to be created.

### Troubleshooting

Test the connection from a Cassandra pod. You need to make sure that your Cassandra pods can connect to your backup server using SSH. Log in to the shell of your Cassandra pod, for example:

```
kubectl exec -it -n APIGEE_NAMESPACE APIGEE_CASSANDRA_POD -- /bin/bash
```

Where APIGEE_CASSANDRA_POD is the name of a Cassandra pod. Change this to the name of the pod you want to connect from. Then connect to your backup server by SSH, using the private SSH key mounted in the Cassandra pod and the IP address of the server:

```
ssh -i /var/secrets/keys/key apigee@BACKUP_SERVER_IP
```

**Note:** You may see a warning at this point saying that the server's fingerprint is unrecognized and asking whether you want to continue. You can continue at that prompt and verify the SSH configuration.

If you have problems accessing your remote server from the Cassandra pod, check your SSH configuration on the remote server again, and also make sure that upgrading the datastore was successful.

You can check whether Cassandra uses the correct private key by running the following command while you are logged in to your Cassandra pod, and comparing the output with the private key you created:

```
cat /var/secrets/keys/key
```
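As an additional check after a scheduled run, you can list the backup directory on the remote server to confirm that new archive files are arriving (this example assumes the `/home/apigee/cassandra-backup` directory created earlier):

```
# List the contents of the backup directory on the remote backup server.
ls -lh /home/apigee/cassandra-backup
```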