Upgrade the SAPHanaSR resource agent to the SAPHanaSR Advanced Next Generation Interface (SAPHanaSR-angi) resource agent

SAPHanaSR-angi is the successor to SAPHanaSR. Starting with SLES for SAP 15 SP6, Google Cloud recommends that you use SAPHanaSR-angi. For information about this resource agent, including its benefits, see the SUSE page "What is SAPHanaSR-angi?".

To illustrate the upgrade procedure, this guide assumes an SAP HANA scale-up HA system running on SLES for SAP 15 SP4, with the SID ABC. The procedure is adapted from the SUSE blog post "How to upgrade to SAPHanaSR-angi".
Before you begin

Before you upgrade from SAPHanaSR to SAPHanaSR-angi in an SLES-based SAP HANA scale-up HA cluster, make sure that you are using the latest patch of a supported OS version: SLES for SAP 15 SP4 or later.

If you are using an earlier OS version, you must first update to the latest patch of one of these OS versions. Only the latest patches of these OS versions include the SAPHanaSR-upgrade-to-angi-demo script that SUSE provides for this upgrade. The script is located in the /usr/share/SAPHanaSR/samples directory.
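Before you start, you can confirm the installed OS release. The following is a minimal sketch that reads /etc/os-release, which is present on SLES systems:

```shell
# Print the OS name and version so you can confirm that the host
# runs SLES for SAP 15 SP4 or later before starting the upgrade.
grep -E '^(PRETTY_NAME|VERSION)=' /etc/os-release
```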
Upgrade to the SAPHanaSR-angi resource agent in a scale-up cluster

To upgrade to the SAPHanaSR-angi resource agent in an SLES-based SAP HANA scale-up HA cluster, complete the following steps:
Prepare the instances for the upgrade

On the primary instance of the SAP HANA HA system, install the ClusterTools2 Linux package:

zypper install -y ClusterTools2
Verify the package installation

Verify the installation of the ClusterTools2 package:

zypper info ClusterTools2

The output is similar to the following:

Information for package ClusterTools2:
--------------------------------------
Repository     : SLE-Module-SAP-Applications15-SP4-Updates
Name           : ClusterTools2
Version        : 3.1.3-150100.8.12.1
Arch           : noarch
Vendor         : SUSE LLC <https://www.suse.com/>
Support Level  : Level 3
Installed Size : 340.1 KiB
Installed      : Yes
Status         : up-to-date
Source package : ClusterTools2-3.1.3-150100.8.12.1.src
Upstream URL   : http://www.suse.com
Summary        : Tools for cluster management
Description    :
    ClusterTools2 provides tools for setting up and managing a corosync/
    pacemaker cluster. There are some other commandline tools to make life
    easier. Starting with version 3.0.0 is support for SUSE Linux
    Enterprise Server 12.
Copy the SUSE-provided SAPHanaSR-upgrade-to-angi-demo demo script to /root/bin. The script backs up the configuration files and generates the commands that the upgrade requires. To copy the demo script to /root/bin:

cp -p /usr/share/SAPHanaSR/samples/SAPHanaSR-upgrade-to-angi-demo /root/bin/
cd /root/bin
ls -lrt

The output is similar to the following:

...
-r--r--r-- 1 root root   157 Nov  8 14:45 global.ini_susTkOver
-r--r--r-- 1 root root   133 Nov  8 14:45 global.ini_susHanaSR
-r--r--r-- 1 root root   157 Nov  8 14:45 global.ini_susCostOpt
-r--r--r-- 1 root root   175 Nov  8 14:45 global.ini_susChkSrv
-r-xr-xr-x 1 root root 22473 Nov  8 14:45 SAPHanaSR-upgrade-to-angi-demo
drwxr-xr-x 3 root root    26 Nov  9 07:50 crm_cfg
Repeat steps 1 and 2 on the secondary instance of the SAP HANA HA system.
On either instance of the HA system, run the demo script:

./SAPHanaSR-upgrade-to-angi-demo --upgrade > upgrade-to-angi.txt
Verify that the backup files exist:

ls -l

The output is similar to the following:

/root/SAPHanaSR-upgrade-to-angi-demo.1736397409/*
-rw-r--r-- 1 root   root     443 Dec 10 20:09 /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/20-saphana.sudo
-rwxr-xr-x 1 root   root   22461 May 14  2024 /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/SAPHanaSR-upgrade-to-angi-demo
-rw-r--r-- 1 root   root   28137 Jan  9 04:36 /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/cib.xml
-rw-r--r-- 1 root   root    3467 Jan  9 04:36 /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/crm_configure.txt
-rw-r--r-- 1 abcadm sapsys   929 Dec 10 20:09 /root/SAPHanaSR-upgrade-to-angi-demo.1736397409/global.ini
Remove the resource configuration from the cluster

From either instance of the HA system, put the SAPHana and SAPHanaTopology cluster resources into maintenance mode:

cs_wait_for_idle -s 3 >/dev/null
crm resource maintenance msl_SAPHana_ABC_HDB00 on
cs_wait_for_idle -s 3 >/dev/null
crm resource maintenance cln_SAPHanaTopology_ABC_HDB00 on
cs_wait_for_idle -s 3 >/dev/null
echo "property cib-bootstrap-options: stop-orphan-resources=false" | crm configure load update -
Verify the status of the cluster resources

To verify the SAPHana and SAPHanaTopology resources:

crm status

The output must show the preceding resources as unmanaged:

Cluster Summary:
...
Full List of Resources:
  * STONITH-sles-sp4-angi-vm1   (stonith:fence_gce):     Started sles-sp4-angi-vm2
  * STONITH-sles-sp4-angi-vm2   (stonith:fence_gce):     Started sles-sp4-angi-vm1
  * Resource Group: g-primary:
    * rsc_vip_int-primary       (ocf::heartbeat:IPaddr2):        Started sles-sp4-angi-vm1
    * rsc_vip_hc-primary        (ocf::heartbeat:anything):       Started sles-sp4-angi-vm1
  * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTopology_ABC_HDB00] (unmanaged):
    * rsc_SAPHanaTopology_ABC_HDB00     (ocf::suse:SAPHanaTopology):     Started sles-sp4-angi-vm1 (unmanaged)
    * rsc_SAPHanaTopology_ABC_HDB00     (ocf::suse:SAPHanaTopology):     Started sles-sp4-angi-vm2 (unmanaged)
  * Clone Set: msl_SAPHana_ABC_HDB00 [rsc_SAPHana_ABC_HDB00] (promotable, unmanaged):
    * rsc_SAPHana_ABC_HDB00     (ocf::suse:SAPHana):     Master sles-sp4-angi-vm1 (unmanaged)
    * rsc_SAPHana_ABC_HDB00     (ocf::suse:SAPHana):     Slave sles-sp4-angi-vm2 (unmanaged)
On the primary instance of the HA system, remove the existing HA/DR provider hook configuration:

grep "^\[ha_dr_provider_" /hana/shared/ABC/global/hdb/custom/config/global.ini
su - abcadm -c "/usr/sbin/SAPHanaSR-manageProvider --sid=ABC --show --provider=SAPHanaSR" > /run/SAPHanaSR-upgrade-to-angi-demo.12915.global.ini.SAPHanaSR
su - abcadm -c "/usr/sbin/SAPHanaSR-manageProvider --sid=ABC --reconfigure --remove /run/SAPHanaSR-upgrade-to-angi-demo.12915.global.ini.SAPHanaSR"
rm /run/SAPHanaSR-upgrade-to-angi-demo.12915.global.ini.SAPHanaSR
rm /run/SAPHanaSR-upgrade-to-angi-demo.12915.global.ini.suschksrv
su - abcadm -c "hdbnsutil -reloadHADRProviders"
grep "^\[ha_dr_provider_" /hana/shared/ABC/global/hdb/custom/config/global.ini
cp /etc/sudoers.d/20-saphana /run/SAPHanaSR-upgrade-to-angi-demo.12915.sudoers.classic
grep -v "abcadm.*ALL..NOPASSWD.*crm_attribute.*abc" /run/SAPHanaSR-upgrade-to-angi-demo.12915.sudoers.classic > /etc/sudoers.d/20-saphana
cp /etc/sudoers.d/20-saphana /run/SAPHanaSR-upgrade-to-angi-demo.12915.sudoers.classic
grep -v "abcadm.*ALL..NOPASSWD.*SAPHanaSR-hookHelper.*sid=ABC" /run/SAPHanaSR-upgrade-to-angi-demo.12915.sudoers.classic > /etc/sudoers.d/20-saphana
rm /run/SAPHanaSR-upgrade-to-angi-demo.12915.sudoers.classic
Repeat the preceding step on the secondary instance of the HA system to remove the existing HA/DR provider hook configuration.
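After running the removal commands on both nodes, you can confirm that no provider sections remain. This is a sketch that assumes the example SID ABC and the default global.ini path used throughout this guide:

```shell
# If the removal succeeded, grep finds no [ha_dr_provider_*] sections
# in global.ini and the fallback message is printed instead.
grep "^\[ha_dr_provider_" /hana/shared/ABC/global/hdb/custom/config/global.ini 2>/dev/null \
  || echo "no ha_dr_provider sections found"
```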
From either instance of the HA system, remove the configuration of the SAPHana resource agent:

cibadmin --delete --xpath "//rsc_colocation[@id='col_saphana_ip_ABC_HDB00']"
cibadmin --delete --xpath "//rsc_order[@id='ord_SAPHana_ABC_HDB00']"
cibadmin --delete --xpath "//master[@id='msl_SAPHana_ABC_HDB00']"
cs_wait_for_idle -s 3 >/dev/null
crm resource refresh rsc_SAPHana_ABC_HDB00
Verify the status of the SAPHana resource agent

To confirm that the SAPHana resource agent configuration is removed:

crm status

The output must show the remaining resource as unmanaged:

Cluster Summary:
...
Full List of Resources:
  * STONITH-sles-sp4-angi-vm1   (stonith:fence_gce):     Started sles-sp4-angi-vm2
  * STONITH-sles-sp4-angi-vm2   (stonith:fence_gce):     Started sles-sp4-angi-vm1
  * Resource Group: g-primary:
    * rsc_vip_int-primary       (ocf::heartbeat:IPaddr2):        Started sles-sp4-angi-vm1
    * rsc_vip_hc-primary        (ocf::heartbeat:anything):       Started sles-sp4-angi-vm1
  * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTopology_ABC_HDB00] (unmanaged):
    * rsc_SAPHanaTopology_ABC_HDB00     (ocf::suse:SAPHanaTopology):     Started sles-sp4-angi-vm1 (unmanaged)
    * rsc_SAPHanaTopology_ABC_HDB00     (ocf::suse:SAPHanaTopology):     Started sles-sp4-angi-vm2 (unmanaged)
From either instance of the HA system, remove the configuration of the SAPHanaTopology resource agent:

cs_wait_for_idle -s 3 >/dev/null
cibadmin --delete --xpath "//rsc_order[@id='ord_SAPHana_ABC_HDB00']"
cibadmin --delete --xpath "//clone[@id='cln_SAPHanaTopology_ABC_HDB00']"
cs_wait_for_idle -s 3 >/dev/null
crm resource refresh rsc_SAPHanaTopology_ABC_HDB00
Verify the status of the SAPHanaTopology resource agent

To confirm that the SAPHanaTopology resource agent configuration is removed:

crm status

The output is similar to the following:

Cluster Summary:
  * Stack: corosync
  * Current DC: sles-sp4-angi-vm1 (version 2.1.2+20211124.ada5c3b36-150400.4.20.1-2.1.2+20211124.ada5c3b36) - partition with quorum
  * Last updated: Wed Jan 29 03:30:36 2025
  * Last change:  Wed Jan 29 03:30:32 2025 by hacluster via crmd on sles-sp4-angi-vm1
  * 2 nodes configured
  * 4 resource instances configured

Node List:
  * Online: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]

Full List of Resources:
  * STONITH-sles-sp4-angi-vm1   (stonith:fence_gce):     Started sles-sp4-angi-vm2
  * STONITH-sles-sp4-angi-vm2   (stonith:fence_gce):     Started sles-sp4-angi-vm1
  * Resource Group: g-primary:
    * rsc_vip_int-primary       (ocf::heartbeat:IPaddr2):        Started sles-sp4-angi-vm1
    * rsc_vip_hc-primary        (ocf::heartbeat:anything):       Started sles-sp4-angi-vm1
On the primary instance of the HA system, remove the remaining cluster attributes:

cs_wait_for_idle -s 3 >/dev/null
crm_attribute --delete --type crm_config --name hana_abc_site_srHook_sles-sp4-angi-vm2
crm_attribute --delete --type crm_config --name hana_abc_site_srHook_sles-sp4-angi-vm1
cs_wait_for_idle -s 3 >/dev/null
crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_op_mode --delete
crm_attribute --node sles-sp4-angi-vm1 --name lpa_abc_lpt --delete
crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_srmode --delete
crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_vhost --delete
crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_remoteHost --delete
crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_site --delete
crm_attribute --node sles-sp4-angi-vm1 --name hana_abc_sync_state --lifetime reboot --delete
crm_attribute --node sles-sp4-angi-vm1 --name master-rsc_SAPHana_ABC_HDB00 --lifetime reboot --delete
crm_attribute --node sles-sp4-angi-vm2 --name lpa_abc_lpt --delete
crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_op_mode --delete
crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_vhost --delete
crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_site --delete
crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_srmode --delete
crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_remoteHost --delete
crm_attribute --node sles-sp4-angi-vm2 --name hana_abc_sync_state --lifetime reboot --delete
crm_attribute --node sles-sp4-angi-vm2 --name master-rsc_SAPHana_ABC_HDB00 --lifetime reboot --delete
On the primary instance of the HA system, remove the SAPHanaSR package:

cs_wait_for_idle -s 3 >/dev/null
crm cluster run "rpm -e --nodeps 'SAPHanaSR'"
Verify the cluster status

To check the status of the HA cluster:

crm status

The output is similar to the following:

Cluster Summary:
...
Node List:
  * Online: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]

Full List of Resources:
  * STONITH-sles-sp4-angi-vm1   (stonith:fence_gce):     Started sles-sp4-angi-vm2
  * STONITH-sles-sp4-angi-vm2   (stonith:fence_gce):     Started sles-sp4-angi-vm1
  * Resource Group: g-primary:
    * rsc_vip_int-primary       (ocf::heartbeat:IPaddr2):        Started sles-sp4-angi-vm1
    * rsc_vip_hc-primary        (ocf::heartbeat:anything):       Started sles-sp4-angi-vm1
On the primary instance of the HA system, remove the SAPHanaSR-doc package:

zypper remove SAPHanaSR-doc
Repeat the preceding step to remove the SAPHanaSR-doc package on the secondary instance of the HA system.
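As a quick sanity check, you can confirm on each node that the classic packages are gone before installing SAPHanaSR-angi. This is a minimal sketch; the check_removed helper is illustrative, not part of the SUSE tooling:

```shell
# Report whether a package is still installed. rpm -q exits non-zero
# when the package is absent, which is the expected result here.
check_removed() {
  if command -v rpm >/dev/null 2>&1 && rpm -q "$1" >/dev/null 2>&1; then
    echo "$1 still installed"
  else
    echo "$1 removed"
  fi
}
check_removed SAPHanaSR
check_removed SAPHanaSR-doc
```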
Add the resource configuration to the cluster

From either instance of the HA system, install the SAPHanaSR-angi resource agent package:

cs_wait_for_idle -s 3 >/dev/null
crm cluster run "zypper --non-interactive in -l -f -y 'SAPHanaSR-angi'"
crm cluster run "rpm -q 'SAPHanaSR-angi' --queryformat %{NAME}"
hash -r
On the primary instance of the HA system, add the HA/DR provider hook configuration:

su - abcadm -c "/usr/bin/SAPHanaSR-manageProvider --sid=ABC --reconfigure --add /usr/share/SAPHanaSR-angi/samples/global.ini_susHanaSR"
su - abcadm -c "/usr/bin/SAPHanaSR-manageProvider --sid=ABC --reconfigure --add /usr/share/SAPHanaSR-angi/samples/global.ini_susTkOver"
su - abcadm -c "/usr/bin/SAPHanaSR-manageProvider --sid=ABC --reconfigure --add /usr/share/SAPHanaSR-angi/samples/global.ini_susChkSrv"
su - abcadm -c "hdbnsutil -reloadHADRProviders"
grep -A2 "^\[ha_dr_provider_" /hana/shared/ABC/global/hdb/custom/config/global.ini
su - abcadm -c "/usr/bin/SAPHanaSR-manageProvider --sid=ABC --show --provider=SAPHanaSR"
su - abcadm -c "/usr/bin/SAPHanaSR-manageProvider --sid=ABC --show --provider=suschksrv"
echo "abcadm ALL=(ALL) NOPASSWD: /usr/bin/SAPHanaSR-hookHelper --sid=ABC *" >> /etc/sudoers.d/20-saphana
echo "abcadm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_abc_*" >> /etc/sudoers.d/20-saphana
sudo -l -U abcadm | grep -e crm_attribute -e SAPHanaSR-hookHelper
Repeat the preceding step on the secondary instance of the HA system to add the HA/DR provider hook configuration.
From either instance of the HA system, add the SAPHanaTopology resource agent configuration:

cs_wait_for_idle -s 3 >/dev/null
echo "
#
primitive rsc_SAPHanaTop_ABC_HDB00 ocf:suse:SAPHanaTopology op start interval=0 timeout=600 op stop interval=0 timeout=600 op monitor interval=50 timeout=600 params SID=ABC InstanceNumber=00
#
clone cln_SAPHanaTopology_ABC_HDB00 rsc_SAPHanaTop_ABC_HDB00 meta clone-node-max=1 interleave=true
#
" | crm configure load update -
crm configure show cln_SAPHanaTopology_ABC_HDB00
Verify the cluster status

To check the status of the HA cluster:

crm status

The output is similar to the following:

Cluster Summary:
...
  * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTop_ABC_HDB00]:
    * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
From either instance of the HA system, add SAPHanaController as an unmanaged cluster resource:

cs_wait_for_idle -s 3 >/dev/null
echo "
#
primitive rsc_SAPHanaCon_ABC_HDB00 ocf:suse:SAPHanaController op start interval=0 timeout=3600 op stop interval=0 timeout=3600 op promote interval=0 timeout=900 op demote interval=0 timeout=320 op monitor interval=60 role=Promoted timeout=700 op monitor interval=61 role=Unpromoted timeout=700 params SID=ABC InstanceNumber=00 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true meta maintenance=true
#
clone mst_SAPHanaCon_ABC_HDB00 rsc_SAPHanaCon_ABC_HDB00 meta clone-node-max=1 promotable=true interleave=true maintenance=true
#
order ord_SAPHanaTop_first Optional: cln_SAPHanaTopology_ABC_HDB00 mst_SAPHanaCon_ABC_HDB00
#
colocation col_SAPHanaCon_ip_ABC_HDB00 2000: g-primary:Started mst_SAPHanaCon_ABC_HDB00:Promoted
#
" | crm configure load update -
crm configure show mst_SAPHanaCon_ABC_HDB00
Verify the cluster status

To check the status of the HA cluster:

crm status

The output is similar to the following:

Cluster Summary:
...
  * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTop_ABC_HDB00]:
    * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
  * Clone Set: mst_SAPHanaCon_ABC_HDB00 [rsc_SAPHanaCon_ABC_HDB00] (promotable, unmanaged):
    * rsc_SAPHanaCon_ABC_HDB00  (ocf::suse:SAPHanaController):   Slave sles-sp4-angi-vm1 (unmanaged)
    * rsc_SAPHanaCon_ABC_HDB00  (ocf::suse:SAPHanaController):   Slave sles-sp4-angi-vm2 (unmanaged)
From either instance of the HA system, add the HANA filesystem cluster resource:

cs_wait_for_idle -s 3 >/dev/null
echo "
#
primitive rsc_SAPHanaFil_ABC_HDB00 ocf:suse:SAPHanaFilesystem op start interval=0 timeout=10 op stop interval=0 timeout=20 on-fail=fence op monitor interval=120 timeout=120 params SID=ABC InstanceNumber=00
#
clone cln_SAPHanaFil_ABC_HDB00 rsc_SAPHanaFil_ABC_HDB00 meta clone-node-max=1 interleave=true
#
" | crm configure load update -
crm configure show cln_SAPHanaFil_ABC_HDB00

The output of the show command is similar to the following:

clone cln_SAPHanaFil_ABC_HDB00 rsc_SAPHanaFil_ABC_HDB00 \
    meta clone-node-max=1 interleave=true
Verify the cluster status

To check the status of the HA cluster:

crm status

The output is similar to the following:

Cluster Summary:
...
  * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTop_ABC_HDB00]:
    * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
  * Clone Set: mst_SAPHanaCon_ABC_HDB00 [rsc_SAPHanaCon_ABC_HDB00] (promotable, unmanaged):
    * rsc_SAPHanaCon_ABC_HDB00  (ocf::suse:SAPHanaController):   Slave sles-sp4-angi-vm1 (unmanaged)
    * rsc_SAPHanaCon_ABC_HDB00  (ocf::suse:SAPHanaController):   Slave sles-sp4-angi-vm2 (unmanaged)
  * Clone Set: cln_SAPHanaFil_ABC_HDB00 [rsc_SAPHanaFil_ABC_HDB00]:
    * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
From either instance of the HA system, take the HA cluster out of maintenance mode:

cs_wait_for_idle -s 3 >/dev/null
crm resource refresh cln_SAPHanaTopology_ABC_HDB00
cs_wait_for_idle -s 3 >/dev/null
crm resource maintenance cln_SAPHanaTopology_ABC_HDB00 off
cs_wait_for_idle -s 3 >/dev/null
crm resource refresh mst_SAPHanaCon_ABC_HDB00
cs_wait_for_idle -s 3 >/dev/null
crm resource maintenance mst_SAPHanaCon_ABC_HDB00 off
cs_wait_for_idle -s 3 >/dev/null
crm resource refresh cln_SAPHanaFil_ABC_HDB00
cs_wait_for_idle -s 3 >/dev/null
crm resource maintenance cln_SAPHanaFil_ABC_HDB00 off
cs_wait_for_idle -s 3 >/dev/null
echo "property cib-bootstrap-options: stop-orphan-resources=true" | crm configure load update -
Check the status of the HA cluster:

cs_wait_for_idle -s 3 >/dev/null
crm_mon -1r --include=failcounts,fencing-pending;echo;SAPHanaSR-showAttr;cs_clusterstate -i|grep -v "#"

The output is similar to the following:

Cluster Summary:
  * Stack: corosync
  * Current DC: sles-sp4-angi-vm1 (version 2.1.2+20211124.ada5c3b36-150400.4.20.1-2.1.2+20211124.ada5c3b36) - partition with quorum
  * Last updated: Wed Jan 29 05:21:05 2025
  * Last change:  Wed Jan 29 05:21:05 2025 by root via crm_attribute on sles-sp4-angi-vm1
  * 2 nodes configured
  * 10 resource instances configured

Node List:
  * Online: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]

Full List of Resources:
  * STONITH-sles-sp4-angi-vm1   (stonith:fence_gce):     Started sles-sp4-angi-vm2
  * STONITH-sles-sp4-angi-vm2   (stonith:fence_gce):     Started sles-sp4-angi-vm1
  * Resource Group: g-primary:
    * rsc_vip_int-primary       (ocf::heartbeat:IPaddr2):        Started sles-sp4-angi-vm1
    * rsc_vip_hc-primary        (ocf::heartbeat:anything):       Started sles-sp4-angi-vm1
  * Clone Set: cln_SAPHanaTopology_ABC_HDB00 [rsc_SAPHanaTop_ABC_HDB00]:
    * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]
  * Clone Set: mst_SAPHanaCon_ABC_HDB00 [rsc_SAPHanaCon_ABC_HDB00] (promotable):
    * Masters: [ sles-sp4-angi-vm1 ]
    * Slaves: [ sles-sp4-angi-vm2 ]
  * Clone Set: cln_SAPHanaFil_ABC_HDB00 [rsc_SAPHanaFil_ABC_HDB00]:
    * Started: [ sles-sp4-angi-vm1 sles-sp4-angi-vm2 ]

Migration Summary:

Global cib-update dcid prim              sec               sid topology
------------------------------------------------------------------------
global 0.68471.0  1    sles-sp4-angi-vm1 sles-sp4-angi-vm2 ABC ScaleUp

Resource                      maintenance promotable
-----------------------------------------------------
mst_SAPHanaCon_ABC_HDB00      false       true
cln_SAPHanaTopology_ABC_HDB00 false

Site              lpt        lss mns               opMode    srHook srMode  srPoll srr
---------------------------------------------------------------------------------------
sles-sp4-angi-vm1 1738128065 4   sles-sp4-angi-vm1 logreplay PRIM   syncmem PRIM   P
sles-sp4-angi-vm2 30         4   sles-sp4-angi-vm2 logreplay        syncmem SOK    S

Host              clone_state roles                        score site              srah version     vhost
----------------------------------------------------------------------------------------------------------------------
sles-sp4-angi-vm1 PROMOTED    master1:master:worker:master 150   sles-sp4-angi-vm1 -    2.00.073.00 sles-sp4-angi-vm1
sles-sp4-angi-vm2 DEMOTED     master1:master:worker:master 100   sles-sp4-angi-vm2 -    2.00.073.00 sles-sp4-angi-vm2
Verify the details of the HANA cluster attributes

From either instance of the HA cluster, you can view the details of the HANA cluster attributes:

SAPHanaSR-showAttr

The output is similar to the following:

Global cib-update dcid prim              sec               sid topology
------------------------------------------------------------------------
global 0.98409.1  1    sles-sp4-angi-vm2 sles-sp4-angi-vm1 ABC ScaleUp

Resource                      maintenance promotable
-----------------------------------------------------
mst_SAPHanaCon_ABC_HDB00      false       true
cln_SAPHanaTopology_ABC_HDB00 false

Site              lpt        lss mns               opMode    srHook srMode  srPoll srr
---------------------------------------------------------------------------------------
sles-sp4-angi-vm1 30         4   sles-sp4-angi-vm1 logreplay SOK    syncmem SOK    S
sles-sp4-angi-vm2 1742448908 4   sles-sp4-angi-vm2 logreplay PRIM   syncmem PRIM   P

Host              clone_state roles                        score site              srah version     vhost
----------------------------------------------------------------------------------------------------------------------
sles-sp4-angi-vm1 DEMOTED     master1:master:worker:master 100   sles-sp4-angi-vm1 -    2.00.073.00 sles-sp4-angi-vm1
sles-sp4-angi-vm2 PROMOTED    master1:master:worker:master 150   sles-sp4-angi-vm2 -    2.00.073.00 sles-sp4-angi-vm2
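If you script a post-upgrade check, you might grep the replication status from this output. The following is a sketch, assuming SAPHanaSR-showAttr is on the PATH and that a healthy secondary reports SOK:

```shell
# Exit path depends on whether any site reports SOK in the attribute output.
SAPHanaSR-showAttr | grep -q "SOK" \
  && echo "replication healthy" \
  || echo "replication NOT healthy"
```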
Get support

If you purchased a PAYG license for your SLES for SAP OS image from Compute Engine, you can ask Cloud Customer Care to help you contact SUSE. For details, see "How is support provided for pay-as-you-go (PAYG) SLES licenses on Compute Engine?".

For information about getting support from Google Cloud, see "Getting support for SAP on Google Cloud".