[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-22。"],[[["\u003cp\u003eThis guide outlines how to enable high availability (HA) for Kubernetes-based AlloyDB Omni database clusters by creating standby replicas that mirror the primary database instance.\u003c/p\u003e\n"],["\u003cp\u003eEnabling HA involves modifying the database cluster's manifest to specify the desired number of standby replicas and ensuring sufficient storage and compute resources.\u003c/p\u003e\n"],["\u003cp\u003eThe AlloyDB Omni Operator automatically manages failovers to a standby instance if the primary instance becomes unavailable, which can be turned off, or performed manually.\u003c/p\u003e\n"],["\u003cp\u003eManual switchovers, which switch the roles of the primary and standby database instances with zero data loss, are also supported for testing disaster recovery setups.\u003c/p\u003e\n"],["\u003cp\u003eStandby replicas can be configured as read-only instances by setting \u003ccode\u003eenableStandbyAsReadReplica\u003c/code\u003e to \u003ccode\u003etrue\u003c/code\u003e in the cluster's manifest.\u003c/p\u003e\n"]]],[],null,["Select a documentation version: 15.5.2keyboard_arrow_down\n\n- [Current (16.8.0)](/alloydb/omni/current/docs/kubernetes-ha)\n- [16.8.0](/alloydb/omni/16.8.0/docs/kubernetes-ha)\n- [16.3.0](/alloydb/omni/16.3.0/docs/kubernetes-ha)\n- [15.12.0](/alloydb/omni/15.12.0/docs/kubernetes-ha)\n- [15.7.1](/alloydb/omni/15.7.1/docs/kubernetes-ha)\n- [15.7.0](/alloydb/omni/15.7.0/docs/kubernetes-ha)\n- [15.5.5](/alloydb/omni/15.5.5/docs/kubernetes-ha)\n- [15.5.4](/alloydb/omni/15.5.4/docs/kubernetes-ha)\n- [15.5.2](/alloydb/omni/15.5.2/docs/kubernetes-ha)\n\n\u003cbr /\u003e\n\nThis page shows you how to enable and test high availability (HA) on your Kubernetes-based AlloyDB Omni database cluster. Performing the tasks documented here requires basic knowledge about applying Kubernetes manifest files and using the `kubectl` command-line tool.\n\n\u003cbr /\u003e\n\nOverview\n\nYou can enable HA in your database cluster by directing the AlloyDB Omni Kubernetes Operator to create standby replicas of your\nprimary database instance. The AlloyDB Omni Operator configures your database cluster to\ncontinuously update the data on this replica, matching all changes to data on your primary instance.\n\nEnable HA\n\nBefore you enable HA on your database cluster, ensure that your Kubernetes\ncluster has the following:\n\n- Storage for two complete copies of your data\n- Compute resources for two database instances running in parallel\n\nTo enable HA, follow these steps:\n\n1. Modify the database cluster's manifest to include an `availability` section\n under its `spec` section. This section defines the number of standbys that you want to add by setting the `numberOfStandbys` parameter.\n\n spec:\n availability:\n numberOfStandbys: \u003cvar translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eNUMBER_OF_STANDBYS\u003c/span\u003e\u003c/var\u003e\n\n Replace \u003cvar translate=\"no\"\u003eNUMBER_OF_STANDBYS\u003c/var\u003e with the number of standbys you want to add. The maximum value is `5`. 
Disable HA

To disable HA, follow these steps:

1. Set `numberOfStandbys` to `0` in the cluster's manifest:

       spec:
         availability:
           numberOfStandbys: 0

2. Re-apply the manifest.

Verify HA on a database cluster

To view the current HA status of a database cluster, check the `HAReady` condition of that cluster's status. If this condition has a `status` of `True`, then HA is set up and working on the database cluster.

To check this value on the command line, run the following command:

    kubectl get dbcluster.alloydbomni.dbadmin.goog DB_CLUSTER_NAME -o jsonpath={.status.conditions[?(@.type == \'HAReady\')]} -n NAMESPACE

Replace the following:

- DB_CLUSTER_NAME: the name of the database cluster.

- NAMESPACE: the namespace of the database cluster.
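As a usage sketch, assuming a database cluster named `my-db-cluster` in the `db` namespace (both names are illustrative), the check looks like the following. This variant quotes the JSONPath expression for the shell and extracts only the condition's `status` field, which prints `True` when HA is ready:

    kubectl get dbcluster.alloydbomni.dbadmin.goog my-db-cluster \
        -n db \
        -o jsonpath='{.status.conditions[?(@.type=="HAReady")].status}'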
Fail over to a standby instance

If your primary instance becomes unavailable for more than 90 seconds, then the AlloyDB Omni Operator automatically fails over from the primary database instance to the standby instance.

Failovers are a good option when you want to quickly recover from an unexpected failure and minimize downtime, even if it means potentially losing a small amount of data if the primary database becomes unavailable before the standby has fully caught up.

The AlloyDB Omni Operator supports both automatic and manual failover. Automatic failover is enabled by default.

Failover results in the following sequence of events:

1. The AlloyDB Omni Operator takes the primary database instance offline.

2. The AlloyDB Omni Operator promotes the standby replica to be the new primary database instance.

3. The AlloyDB Omni Operator deletes the previous primary database instance.

4. The AlloyDB Omni Operator creates a new standby replica.

Disable automatic failover

Automatic failovers are enabled by default on database clusters.

To disable automatic failover, follow these steps:

1. Set `enableAutoFailover` to `false` in the cluster's manifest:

       spec:
         availability:
           enableAutoFailover: false

2. Re-apply the manifest.

Trigger a manual failover

To trigger a manual failover, create and apply a manifest for a new failover resource:

    apiVersion: alloydbomni.dbadmin.goog/v1
    kind: Failover
    metadata:
      name: FAILOVER_NAME
      namespace: NAMESPACE
    spec:
      dbclusterRef: DB_CLUSTER_NAME

Replace the following:

- FAILOVER_NAME: a name for this resource, for example `failover-1`.

- NAMESPACE: the namespace for this failover resource, which must match the namespace of the database cluster that it applies to.

- DB_CLUSTER_NAME: the name of the database cluster to fail over.

To monitor the failover, run the following command:

    kubectl get failover FAILOVER_NAME -o jsonpath={.status.state} -n NAMESPACE

Replace the following:

- FAILOVER_NAME: the name that you assigned to the failover resource when you created it.

- NAMESPACE: the namespace of the database cluster.

The command returns `Success` after the new primary database instance is ready for use. To monitor the status of the new standby instance, check the cluster's `HAReady` condition as described in [Verify HA on a database cluster](#monitor).
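For example, assuming again a database cluster named `my-db-cluster` in the `db` namespace (illustrative names), you could save the following as `failover-1.yaml`, apply it, and then poll its state:

    # failover-1.yaml -- requests a manual failover of my-db-cluster
    apiVersion: alloydbomni.dbadmin.goog/v1
    kind: Failover
    metadata:
      name: failover-1
      namespace: db
    spec:
      dbclusterRef: my-db-cluster

    # Apply the resource, then check its state until it reports Success
    kubectl apply -f failover-1.yaml
    kubectl get failover failover-1 -o jsonpath={.status.state} -n db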
Switchover to a standby instance

Perform a switchover when you want to test your disaster recovery setup or carry out other planned activities that require switching the roles of the primary database and the standby replica.

After the switchover completes, the roles of the primary database instance and the standby replica are reversed, along with the direction of replication. Opt for a switchover when you want better control over the process of testing your disaster recovery setup with zero data loss.

The AlloyDB Omni Operator supports manual switchover.

Switchover results in the following sequence of events:

1. The AlloyDB Omni Operator takes the primary database instance offline.

2. The AlloyDB Omni Operator promotes the standby replica to be the new primary database instance.

3. The AlloyDB Omni Operator switches the previous primary database instance to a standby replica.

| **Note:** You can configure a standby replica to be used as a read-only instance. For more information, see [Use a standby as a read-only instance](#create-read-only-instance).

Perform a switchover

Before you perform a switchover, do the following:

- Verify that both your primary database instance and the standby replica are in a healthy state. For more information, see [Manage and monitor AlloyDB Omni](/alloydb/omni/15.5.2/docs/manage).

- Verify that the cluster's `HAReady` condition has a `status` of `True`. For more information, see [Verify HA on a database cluster](#monitor).

To perform a switchover, create and apply a manifest for a new switchover resource:

    apiVersion: alloydbomni.dbadmin.goog/v1
    kind: Switchover
    metadata:
      name: SWITCHOVER_NAME
    spec:
      dbclusterRef: DB_CLUSTER_NAME
      NewPrimary: STANDBY_REPLICA_NAME

Replace the following:

- SWITCHOVER_NAME: a name for this switchover resource, for example `switchover-1`.

- DB_CLUSTER_NAME: the name of the database cluster that the switchover operation applies to.

- STANDBY_REPLICA_NAME: the name of the database instance that you want to promote as the new primary.

  To identify the standby replica name, run the following command:

      kubectl get instances.alloydbomni.internal.dbadmin.goog

| **Important:** Local backups taken before the switchover was performed cannot be automatically restored using the `Restore` custom resource. If you want to manually clone the backup, see [Clone a database cluster in Kubernetes using a local backup](/alloydb/omni/15.5.2/docs/kubernetes-dr-backup-k8-local).

Use a standby replica as a read-only instance

To use a standby replica as a read-only instance, complete the following steps:

1. Modify the database cluster's manifest to set the `enableStandbyAsReadReplica` parameter to `true`:

       spec:
         availability:
           enableStandbyAsReadReplica: true

2. Re-apply the manifest.

3. Verify that the read-only endpoint is reported in the `status` field of the `DBCluster` object:

       kubectl describe dbcluster -n NAMESPACE DB_CLUSTER_NAME

   The following example response shows the endpoint of the read-only instance:

       Status:
         [...]
         Primary:
           [...]
           Endpoints:
             Name:   Read-Write
             Value:  10.128.0.81:5432
             Name:   Read-Only
             Value:  10.128.0.82:5432
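To sanity-check the read-only endpoint, you can connect to it with any PostgreSQL client. The following sketch uses `psql` with the Read-Only address from the example output above; the `postgres` user and database names are illustrative assumptions, and the password is the one stored in your cluster's admin password secret.

    # Connect to the read-only endpoint (address taken from the example status output)
    psql -h 10.128.0.82 -p 5432 -U postgres -d postgres

Queries served through this endpoint are read-only; writes must go to the Read-Write endpoint.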