Upgrade SAPHanaSR to SAPHanaSR-angi in a scale-up HA cluster on SLES

This document describes how to upgrade the SAPHanaSR resource agent to the SAPHanaSR Advanced Next Generation Interface (SAPHanaSR-angi) resource agent in a SLES-based scale-up high-availability (HA) cluster running on Google Cloud.

SAPHanaSR-angi is the successor to SAPHanaSR. Also, starting with SLES for SAP 15 SP6, Google Cloud recommends that you use SAPHanaSR-angi. For information about this resource agent, including its benefits, see the SUSE page "What is SAPHanaSR-angi?".

To illustrate the upgrade procedure, this guide assumes an SAP HANA scale-up HA system that runs on SLES for SAP 15 SP4 with the SID ABC. The upgrade procedure is derived from the SUSE blog post "How to upgrade to SAPHanaSR-angi".

Before you begin

Before you upgrade from SAPHanaSR to SAPHanaSR-angi in a SLES-based SAP HANA scale-up HA cluster, make sure that you're using the latest patch of one of the following OS versions: SLES 15 SP4 or later.

If you're using an earlier OS version, then you first need to update to the latest patch of one of these OS versions. Only the latest patches of these OS versions include the SAPHanaSR-upgrade-to-angi-demo script that SUSE provides for this upgrade. The script is available in the /usr/share/SAPHanaSR/samples directory.
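
To confirm that an instance meets this prerequisite, you can check the OS release and look for the demo script. The following is a minimal check; the exact VERSION value depends on your service pack:

    # Show the installed OS name and version
    grep -E '^(NAME|VERSION)=' /etc/os-release

    # Confirm that the installed SAPHanaSR package ships the demo script
    ls -l /usr/share/SAPHanaSR/samples/SAPHanaSR-upgrade-to-angi-demo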

Upgrade to the SAPHanaSR-angi resource agent in a scale-up cluster

To upgrade to the SAPHanaSR-angi resource agent in a SLES-based SAP HANA scale-up HA cluster, complete the following steps:

  1. Prepare the instances for the upgrade
  2. Remove the resource configuration from the cluster
  3. Add the resource configuration to the cluster
  4. Verify the details of the HANA cluster attributes

Prepare the instances for the upgrade

  1. On the primary instance of your SAP HANA HA system, install the ClusterTools2 Linux package:

    zypper install -y ClusterTools2

    Verify the package installation

    To verify the installation of the ClusterTools2 package:

    zypper info ClusterTools2

    The output is similar to the following:

    Information for package ClusterTools2:
     --------------------------------------
     Repository     : SLE-Module-SAP-Applications15-SP4-Updates
     Name           : ClusterTools2
     Version        : 3.1.3-150100.8.12.1
     Arch           : noarch
     Vendor         : SUSE LLC <https://www.suse.com/>
     Support Level  : Level 3
     Installed Size : 340.1 KiB
     Installed      : Yes
     Status         : up-to-date
     Source package : ClusterTools2-3.1.3-150100.8.12.1.src
     Upstream URL   : http://www.suse.com
     Summary        : Tools for cluster management
     Description    :
         ClusterTools2 provides tools for setting up and managing a corosync/
         pacemaker cluster.
         There are some other commandline tools to make life easier.
         Starting with version 3.0.0 is support for SUSE Linux Enterprise Server 12.
  2. Copy the SUSE-provided SAPHanaSR-upgrade-to-angi-demo demo script to /root/bin. The script backs up the configuration files and generates the commands that are required for the upgrade.

    To copy the demo script to /root/bin:

    cp -p /usr/share/SAPHanaSR/samples/SAPHanaSR-upgrade-to-angi-demo /root/bin/
    cd /root/bin
    ls -lrt

    The output is similar to the following example:

    ...
    -r--r--r-- 1 root root   157 Nov  8 14:45 global.ini_susTkOver
    -r--r--r-- 1 root root   133 Nov  8 14:45 global.ini_susHanaSR
    -r--r--r-- 1 root root   157 Nov  8 14:45 global.ini_susCostOpt
    -r--r--r-- 1 root root   175 Nov  8 14:45 global.ini_susChkSrv
    -r-xr-xr-x 1 root root 22473 Nov  8 14:45 SAPHanaSR-upgrade-to-angi-demo
    drwxr-xr-x 3 root root    26 Nov  9 07:50 crm_cfg
  3. Repeat steps 1-2 on the secondary instance of your SAP HANA HA system.
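
    If root SSH access between the cluster nodes is configured (an assumption about your environment), you could instead run both steps on the secondary from the primary instance, for example:

    ssh sles-sp4-angi-vm2 "zypper install -y ClusterTools2"
    ssh sles-sp4-angi-vm2 "cp -p /usr/share/SAPHanaSR/samples/SAPHanaSR-upgrade-to-angi-demo /root/bin/"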

  4. From either instance of the HA system, run the demo script:

    ./SAPHanaSR-upgrade-to-angi-demo --upgrade > upgrade-to-angi.txt
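
    The script writes the generated upgrade commands to upgrade-to-angi.txt in the current directory. You can review them before proceeding, for example:

    less upgrade-to-angi.txt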
  5. Verify that the backup files exist:

    ls -l

    The output is similar to the following example:

    /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/*
    -rw-r--r-- 1 root   root     443 Dec 10 20:09 /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/20-saphana.sudo
    -rwxr-xr-x 1 root   root   22461 May 14  2024 /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/SAPHanaSR-upgrade-to-angi-demo
    -rw-r--r-- 1 root   root   28137 Jan  9 04:36 /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/cib.xml
    -rw-r--r-- 1 root   root    3467 Jan  9 04:36 /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/crm_configure.txt
    -rw-r--r-- 1 abcadm sapsys   929 Dec 10 20:09 /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/global.ini

Remove the resource configuration from the cluster

  1. From either instance of the HA system, place the SAPHana and SAPHanaTopology cluster resources in maintenance mode:

    cs_wait_for_idle -s 3 >/dev/null
    crm resource maintenance msl_SAPHana_ABC_HDB00 on
    cs_wait_for_idle -s 3 >/dev/null
    crm resource maintenance cln_SAPHanaTopology_ABC_HDB00 on
    cs_wait_for_idle -s 3 >/dev/null
    echo "property cib-bootstrap-options: stop-orphan-resources=false" | crm configure load update -

    Verify the status of the cluster resources

    To verify the SAPHana and SAPHanaTopology resources:

    crm status

    The output must show these resources as unmanaged:

    Cluster Summary:
     ...
     Full List of Resources:
       *  STONITH-sles-sp4-angi-vm1   (stonith:fence_gce):     Started sles-sp4-angi-vm2
       *  STONITH-sles-sp4-angi-vm2   (stonith:fence_gce):     Started sles-sp4-angi-vm1
       *  Resource Group: g-primary:
         *    rsc_vip_int-primary       (ocf::heartbeat:IPaddr2):        Started sles-sp4-angi-vm1
         *    rsc_vip_hc-primary        (ocf::heartbeat:anything):       Started sles-sp4-angi-vm1
       *  Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTopology_ABC_HDB00] (unmanaged):
         *    rsc_SAPHanaTopology_ABC_HDB00     (ocf::suse:SAPHanaTopology):     Started sles-sp4-angi-vm1 (unmanaged)
         *    rsc_SAPHanaTopology_ABC_HDB00     (ocf::suse:SAPHanaTopology):     Started sles-sp4-angi-vm2 (unmanaged)
       *  Clone Set: msl_SAPHana_ABC_HDB00 [rsc_SAPHana_ABC_HDB00] (promotable, unmanaged):
         *    rsc_SAPHana_ABC_HDB00     (ocf::suse:SAPHana):     Master sles-sp4-angi-vm1 (unmanaged)
         *    rsc_SAPHana_ABC_HDB00     (ocf::suse:SAPHana):     Slave sles-sp4-angi-vm2 (unmanaged)
  2. On the primary instance of the HA system, remove the existing HA/DR provider hook configuration:

    grep "^\[ha_dr_provider_" /hana/shared/ABC/global/hdb/custom/config/global.ini
    su - abcadm -c "/usr/sbin/SAPHanaSR-manageProvider --sid=ABC --show --provider=SAPHanaSR" > /run/SAPHanaSR-upgrade-to-angi-demo.12915.global.ini.SAPHanaSR
    su - abcadm -c "/usr/sbin/SAPHanaSR-manageProvider --sid=ABC --reconfigure --remove /run/SAPHanaSR-upgrade-to-angi-demo.12915.global.ini.SAPHanaSR"
    rm /run/SAPHanaSR-upgrade-to-angi-demo.12915.global.ini.SAPHanaSR
    rm /run/SAPHanaSR-upgrade-to-angi-demo.12915.global.ini.suschksrv
    su - abcadm -c "hdbnsutil -reloadHADRProviders"
    grep "^\[ha_dr_provider_" /hana/shared/ABC/global/hdb/custom/config/global.ini
    cp /etc/sudoers.d/20-saphana /run/SAPHanaSR-upgrade-to-angi-demo.12915.sudoers.classic
    grep -v "abcadm.*ALL..NOPASSWD.*crm_attribute.*abc" /run/SAPHanaSR-upgrade-to-angi-demo.12915.sudoers.classic > /etc/sudoers.d/20-saphana
    cp /etc/sudoers.d/20-saphana /run/SAPHanaSR-upgrade-to-angi-demo.12915.sudoers.classic
    grep -v "abcadm.*ALL..NOPASSWD.*SAPHanaSR-hookHelper.*sid=ABC" /run/SAPHanaSR-upgrade-to-angi-demo.12915.sudoers.classic > /etc/sudoers.d/20-saphana
    rm /run/SAPHanaSR-upgrade-to-angi-demo.12915.sudoers.classic
  3. On the secondary instance of the HA system, repeat the preceding step to remove the existing HA/DR provider hook configuration.

  4. From either instance of the HA system, remove the configuration of the SAPHana resource agent:

    cibadmin --delete --xpath "//rsc_colocation[@id='col_saphana_ip_ABC_HDB00']"
    cibadmin --delete --xpath "//rsc_order[@id='ord_SAPHana_ABC_HDB00']"
    cibadmin --delete --xpath "//master[@id='msl_SAPHana_ABC_HDB00']"
    cs_wait_for_idle -s 3 >/dev/null
    crm resource refresh rsc_SAPHana_ABC_HDB00

    Verify the status of the SAPHana resource agent

    To verify that the configuration of the SAPHana resource agent is removed:

    crm status

    The output must show the remaining resources as unmanaged:

    Cluster Summary:
     ...
     Full List of Resources:
       * STONITH-sles-sp4-angi-vm1   (stonith:fence_gce):     Started sles-sp4-angi-vm2
       * STONITH-sles-sp4-angi-vm2   (stonith:fence_gce):     Started sles-sp4-angi-vm1
       * Resource Group: g-primary:
         * rsc_vip_int-primary       (ocf::heartbeat:IPaddr2):        Started sles-sp4-angi-vm1
         * rsc_vip_hc-primary        (ocf::heartbeat:anything):       Started sles-sp4-angi-vm1
       * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTopology_ABC_HDB00] (unmanaged):
         * rsc_SAPHanaTopology_ABC_HDB00     (ocf::suse:SAPHanaTopology):     Started sles-sp4-angi-vm1 (unmanaged)
         * rsc_SAPHanaTopology_ABC_HDB00     (ocf::suse:SAPHanaTopology):     Started sles-sp4-angi-vm2 (unmanaged)
  5. From either instance of the HA system, remove the configuration of the SAPHanaTopology resource agent:

    cs_wait_for_idle -s 3 >/dev/null
    cibadmin --delete --xpath "//rsc_order[@id='ord_SAPHana_ABC_HDB00']"
    cibadmin --delete --xpath "//clone[@id='cln_SAPHanaTopology_ABC_HDB00']"
    cs_wait_for_idle -s 3 >/dev/null
    crm resource refresh rsc_SAPHanaTopology_ABC_HDB00

    Verify the status of the SAPHanaTopology resource agent

    To verify that the configuration of the SAPHanaTopology resource agent is removed:

    crm status

    The output is similar to the following:

    Cluster Summary:
     * Stack: corosync
     * Current DC: sles-sp4-angi-vm1 (version 2.1.2+20211124.ada5c3b36-150400.4.20.1-2.1.2+20211124.ada5c3b36) - partition with quorum
     * Last updated: Wed Jan 29 03:30:36 2025
     * Last change:  Wed Jan 29 03:30:32 2025 by hacluster via crmd on sles-sp4-angi-vm1
     * 2 nodes configured
     * 4 resource instances configured
    
     Node List:
       * Online: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
    
     Full List of Resources:
       * STONITH-sles-sp4-angi-vm1   (stonith:fence_gce):     Started sles-sp4-angi-vm2
       * STONITH-sles-sp4-angi-vm2   (stonith:fence_gce):     Started sles-sp4-angi-vm1
       * Resource Group: g-primary:
         * rsc_vip_int-primary       (ocf::heartbeat:IPaddr2):        Started sles-sp4-angi-vm1
         * rsc_vip_hc-primary        (ocf::heartbeat:anything):       Started sles-sp4-angi-vm1
  6. On the primary instance of the HA system, remove the other cluster attributes:

    cs_wait_for_idle -s 3 >/dev/null
    crm_attribute --delete --type crm_config --name hana_abc_site_srHook_sles-sp4-angi-vm2
    crm_attribute --delete --type crm_config --name hana_abc_site_srHook_sles-sp4-angi-vm1
    cs_wait_for_idle -s 3 >/dev/null
    crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_op_mode --delete
    crm_attribute --node sles-sp4-angi-vm1 --name lpa_abc_lpt --delete
    crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_srmode --delete
    crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_vhost --delete
    crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_remoteHost --delete
    crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_site --delete
    crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_sync_state --lifetime reboot --delete
    crm_attribute --node sles-sp4-angi-vm1 --name master-rsc_SAPHana_ABC_HDB00 --lifetime reboot --delete
    crm_attribute --node sles-sp4-angi-vm2 --name lpa_abc_lpt --delete
    crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_op_mode --delete
    crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_vhost --delete
    crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_site --delete
    crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_srmode --delete
    crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_remoteHost --delete
    crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_sync_state --lifetime reboot --delete
    crm_attribute --node sles-sp4-angi-vm2 --name master-rsc_SAPHana_ABC_HDB00 --lifetime reboot --delete
  7. On the primary instance of the HA system, remove the SAPHanaSR package:

    cs_wait_for_idle -s 3 >/dev/null
    crm cluster run "rpm -e --nodeps 'SAPHanaSR'"

    Verify the cluster status

    To check the status of the HA cluster:

    crm status

    The output is similar to the following example:

    Cluster Summary:
     ...
     Node List:
       * Online: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
    
     Full List of Resources:
       * STONITH-sles-sp4-angi-vm1   (stonith:fence_gce):     Started sles-sp4-angi-vm2
       * STONITH-sles-sp4-angi-vm2   (stonith:fence_gce):     Started sles-sp4-angi-vm1
       * Resource Group: g-primary:
         * rsc_vip_int-primary       (ocf::heartbeat:IPaddr2):        Started sles-sp4-angi-vm1
         * rsc_vip_hc-primary        (ocf::heartbeat:anything):       Started sles-sp4-angi-vm1
  8. On the primary instance of the HA system, remove the SAPHanaSR-doc package:

    zypper remove SAPHanaSR-doc
  9. On the secondary instance of the HA system, remove the SAPHanaSR-doc package.
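
    Alternatively, to remove the package from both instances in a single step, you could reuse the cluster-wide pattern from step 7, for example:

    crm cluster run "zypper --non-interactive remove SAPHanaSR-doc"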

Add the resource configuration to the cluster

  1. From either instance of the HA system, install the SAPHanaSR-angi resource package from the package management system:

    cs_wait_for_idle -s 3 >/dev/null
    crm cluster run "zypper --non-interactive in -l -f -y 'SAPHanaSR-angi'"
    crm cluster run "rpm -q 'SAPHanaSR-angi' --queryformat %{NAME}"
    hash -r
  2. On the primary instance of the HA system, add the HA/DR provider hook configuration:

    su - abcadm -c "/usr/bin/SAPHanaSR-manageProvider --sid=ABC --reconfigure --add /usr/share/SAPHanaSR-angi/samples/global.ini_susHanaSR"
    su - abcadm -c "/usr/bin/SAPHanaSR-manageProvider --sid=ABC --reconfigure --add /usr/share/SAPHanaSR-angi/samples/global.ini_susTkOver"
    su - abcadm -c "/usr/bin/SAPHanaSR-manageProvider --sid=ABC --reconfigure --add /usr/share/SAPHanaSR-angi/samples/global.ini_susChkSrv"
    su - abcadm -c "hdbnsutil -reloadHADRProviders"
    grep -A2 "^\[ha_dr_provider_" /hana/shared/ABC/global/hdb/custom/config/global.ini
    su - abcadm -c "/usr/bin/SAPHanaSR-manageProvider --sid=ABC --show --provider=SAPHanaSR"
    su - abcadm -c "/usr/bin/SAPHanaSR-manageProvider --sid=ABC --show --provider=suschksrv"
    echo "abcadm ALL=(ALL) NOPASSWD: /usr/bin/SAPHanaSR-hookHelper --sid=ABC *" >> /etc/sudoers.d/20-saphana
    echo "abcadm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_abc_*" >> /etc/sudoers.d/20-saphana
    sudo -l -U abcadm | grep -e crm_attribute -e SAPHanaSR-hookHelper
  3. On the secondary instance of the HA system, repeat the preceding step to add the HA/DR provider hook configuration.

  4. From either instance of the HA system, add the SAPHanaTopology resource agent configuration:

    cs_wait_for_idle -s 3 >/dev/null
    echo "
    #
    primitive rsc_SAPHanaTop_ABC_HDB00 ocf:suse:SAPHanaTopology op start interval=0 timeout=600 op stop interval=0 timeout=600 op monitor interval=50 timeout=600 params SID=ABC InstanceNumber=00
    #
    clone cln_SAPHanaTopology_ABC_HDB00 rsc_SAPHanaTop_ABC_HDB00 meta clone-node-max=1 interleave=true
    #
    " | crm configure load update -
    crm configure show cln_SAPHanaTopology_ABC_HDB00

    Verify the cluster status

    To check the status of the HA cluster:

    crm status

    The output is similar to the following example:

    Cluster Summary:
     ...
       * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTop_ABC_HDB00]:
            * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
    
  5. From either instance of the HA system, add SAPHanaCon as an unmanaged cluster resource:

    cs_wait_for_idle -s 3 >/dev/null
    echo "
    #
    primitive rsc_SAPHanaCon_ABC_HDB00 ocf:suse:SAPHanaController op start interval=0 timeout=3600 op stop interval=0 timeout=3600 op promote interval=0 timeout=900 op demote interval=0 timeout=320 op monitor interval=60 role=Promoted timeout=700 op monitor interval=61 role=Unpromoted timeout=700 params SID=ABC InstanceNumber=00 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true meta maintenance=true
    #
    clone mst_SAPHanaCon_ABC_HDB00 rsc_SAPHanaCon_ABC_HDB00 meta clone-node-max=1 promotable=true interleave=true maintenance=true
    #
    order ord_SAPHanaTop_first Optional: cln_SAPHanaTopology_ABC_HDB00 mst_SAPHanaCon_ABC_HDB00
    #
    colocation col_SAPHanaCon_ip_ABC_HDB00 2000: g-primary:Started mst_SAPHanaCon_ABC_HDB00:Promoted
    #
    " | crm configure load update -
    crm configure show mst_SAPHanaCon_ABC_HDB00

    Verify the cluster status

    To check the status of the HA cluster:

    crm status

    The output is similar to the following example:

    Cluster Summary:
    ...
     * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTop_ABC_HDB00]:
       * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
     * Clone Set: mst_SAPHanaCon_ABC_HDB00 [rsc_SAPHanaCon_ABC_HDB00] (promotable, unmanaged):
       * rsc_SAPHanaCon_ABC_HDB00  (ocf::suse:SAPHanaController):   Slave sles-sp4-angi-vm1 (unmanaged)
       * rsc_SAPHanaCon_ABC_HDB00  (ocf::suse:SAPHanaController):   Slave sles-sp4-angi-vm2 (unmanaged)
  6. From either instance of the HA system, add the HANA file system cluster resource:

    cs_wait_for_idle -s 3 >/dev/null
    echo "
    #
    primitive rsc_SAPHanaFil_ABC_HDB00 ocf:suse:SAPHanaFilesystem op start interval=0 timeout=10 op stop interval=0 timeout=20 on-fail=fence op monitor interval=120 timeout=120 params SID=ABC InstanceNumber=00
    #
    clone cln_SAPHanaFil_ABC_HDB00 rsc_SAPHanaFil_ABC_HDB00 meta clone-node-max=1 interleave=true
    #
    " | crm configure load update -
    crm configure show cln_SAPHanaFil_ABC_HDB00

    The output is similar to the following:

    clone cln_SAPHanaFil_ABC_HDB00 rsc_SAPHanaFil_ABC_HDB00 \
           meta clone-node-max=1 interleave=true

    Verify the cluster status

    To check the status of the HA cluster:

    crm status

    The output is similar to the following example:

    Cluster Summary:
     ...
       * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTop_ABC_HDB00]:
         * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
       * Clone Set: mst_SAPHanaCon_ABC_HDB00 [rsc_SAPHanaCon_ABC_HDB00] (promotable, unmanaged):
         * rsc_SAPHanaCon_ABC_HDB00  (ocf::suse:SAPHanaController):   Slave sles-sp4-angi-vm1 (unmanaged)
         * rsc_SAPHanaCon_ABC_HDB00  (ocf::suse:SAPHanaController):   Slave sles-sp4-angi-vm2 (unmanaged)
       * Clone Set: cln_SAPHanaFil_ABC_HDB00 [rsc_SAPHanaFil_ABC_HDB00]:
         * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
  7. From either instance of the HA system, take the HA cluster out of maintenance mode:

    cs_wait_for_idle -s 3 >/dev/null
    crm resource refresh cln_SAPHanaTopology_ABC_HDB00
    cs_wait_for_idle -s 3 >/dev/null
    crm resource maintenance cln_SAPHanaTopology_ABC_HDB00 off
    cs_wait_for_idle -s 3 >/dev/null
    crm resource refresh mst_SAPHanaCon_ABC_HDB00
    cs_wait_for_idle -s 3 >/dev/null
    crm resource maintenance mst_SAPHanaCon_ABC_HDB00 off
    cs_wait_for_idle -s 3 >/dev/null
    crm resource refresh cln_SAPHanaFil_ABC_HDB00
    cs_wait_for_idle -s 3 >/dev/null
    crm resource maintenance cln_SAPHanaFil_ABC_HDB00 off
    cs_wait_for_idle -s 3 >/dev/null
    echo "property cib-bootstrap-options: stop-orphan-resources=true" | crm configure load update -
  8. Check the status of the HA cluster:

    cs_wait_for_idle -s 3 >/dev/null
    crm_mon -1r --include=failcounts,fencing-pending;echo;SAPHanaSR-showAttr;cs_clusterstate -i|grep -v "#"

    The output is similar to the following example:

    Cluster Summary:
     * Stack: corosync
     * Current DC: sles-sp4-angi-vm1 (version 2.1.2+20211124.ada5c3b36-150400.4.20.1-2.1.2+20211124.ada5c3b36) - partition with quorum
     * Last updated: Wed Jan 29 05:21:05 2025
     * Last change:  Wed Jan 29 05:21:05 2025 by root via crm_attribute on sles-sp4-angi-vm1
     * 2 nodes configured
     * 10 resource instances configured
    
    Node List:
     * Online: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
    
    Full List of Resources:
     * STONITH-sles-sp4-angi-vm1   (stonith:fence_gce):     Started sles-sp4-angi-vm2
     * STONITH-sles-sp4-angi-vm2   (stonith:fence_gce):     Started sles-sp4-angi-vm1
     * Resource Group: g-primary:
       * rsc_vip_int-primary       (ocf::heartbeat:IPaddr2):        Started sles-sp4-angi-vm1
       * rsc_vip_hc-primary        (ocf::heartbeat:anything):       Started sles-sp4-angi-vm1
     * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTop_ABC_HDB00]:
       * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
     * Clone Set: mst_SAPHanaCon_ABC_HDB00 [rsc_SAPHanaCon_ABC_HDB00] (promotable):
       * Masters: [ sles-sp4-angi-vm1 ]
       * Slaves: [ sles-sp4-angi-vm2 ]
     * Clone Set: cln_SAPHanaFil_ABC_HDB00 [rsc_SAPHanaFil_ABC_HDB00]:
       * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
    
    Migration Summary:
    
    Global cib-update dcid prim              sec               sid topology
    ------------------------------------------------------------------------
    global 0.68471.0  1    sles-sp4-angi-vm1 sles-sp4-angi-vm2 ABC ScaleUp
    
    Resource                      maintenance promotable
    -----------------------------------------------------
    mst_SAPHanaCon_ABC_HDB00      false       true
    cln_SAPHanaTopology_ABC_HDB00 false
    
    Site              lpt        lss mns               opMode    srHook srMode  srPoll srr
    ---------------------------------------------------------------------------------------
    sles-sp4-angi-vm1 1738128065 4   sles-sp4-angi-vm1 logreplay PRIM   syncmem PRIM   P
    sles-sp4-angi-vm2 30         4   sles-sp4-angi-vm2 logreplay        syncmem SOK    S
    
    Host              clone_state roles                        score site              srah version     vhost
    ----------------------------------------------------------------------------------------------------------------------
    sles-sp4-angi-vm1 PROMOTED    master1:master:worker:master 150   sles-sp4-angi-vm1 -    2.00.073.00 sles-sp4-angi-vm1
    sles-sp4-angi-vm2 DEMOTED     master1:master:worker:master 100   sles-sp4-angi-vm2 -    2.00.073.00 sles-sp4-angi-vm2

Verify the details of the HANA cluster attributes

  • From either instance of the HA cluster, view the details of the HANA cluster attributes:

    SAPHanaSR-showAttr

    The output is similar to the following example:

    Global cib-update dcid prim              sec               sid topology
    ------------------------------------------------------------------------
    global 0.98409.1  1    sles-sp4-angi-vm2 sles-sp4-angi-vm1 ABC ScaleUp
    
    Resource                      maintenance promotable
    -----------------------------------------------------
    mst_SAPHanaCon_ABC_HDB00      false       true
    cln_SAPHanaTopology_ABC_HDB00 false
    
    Site              lpt        lss mns               opMode    srHook srMode  srPoll srr
    ---------------------------------------------------------------------------------------
    sles-sp4-angi-vm1 30         4   sles-sp4-angi-vm1 logreplay SOK    syncmem SOK    S
    sles-sp4-angi-vm2 1742448908 4   sles-sp4-angi-vm2 logreplay PRIM   syncmem PRIM   P
    
    Host              clone_state roles                        score site              srah version     vhost
    ----------------------------------------------------------------------------------------------------------------------
    sles-sp4-angi-vm1 DEMOTED     master1:master:worker:master 100   sles-sp4-angi-vm1 -    2.00.073.00 sles-sp4-angi-vm1
    sles-sp4-angi-vm2 PROMOTED    master1:master:worker:master 150   sles-sp4-angi-vm2 -    2.00.073.00 sles-sp4-angi-vm2

Get support

If you purchased a pay-as-you-go (PAYG) license for your SLES for SAP OS image from Compute Engine, then you can get help from Cloud Customer Care to engage SUSE. For more information, see "How is support provided on Compute Engine with a pay-as-you-go (PAYG) SLES license?".

For information about how to get support from Google Cloud, see Getting support for SAP on Google Cloud.