This document shows how to migrate disks from one vSphere datastore to another vSphere datastore with Storage Policy Based Management (SPBM).

1.29: Generally available
1.28: Preview
1.16: Not available
You can migrate the following kinds of storage:

- Storage for system components managed by Google Distributed Cloud, including:

  - Data disks (VMDK files) used by the control-plane nodes of admin clusters and Controlplane V2 user clusters

  - Boot disks (VMDK files) used by all admin cluster and user cluster nodes

  - vSphere Volumes represented by PV/PVCs in the admin cluster and used by the control-plane components of kubeception user clusters

- Storage for workloads that you deploy on user cluster worker nodes with PV/PVCs provisioned by the in-tree vSphere volume plugin or the vSphere CSI driver (a quick inventory sketch follows this list)
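To see which plugin provisioned your existing workload volumes before you migrate, you can inventory the PVs and their provisioners. A minimal sketch, assuming jq is installed and that the provisioner is recorded in the standard pv.kubernetes.io/provisioned-by annotation:

# List each PV with the provisioner that created it.
kubectl --kubeconfig CLUSTER_KUBECONFIG get pv -o json \
    | jq -r '.items[] | [.metadata.name, (.metadata.annotations["pv.kubernetes.io/provisioned-by"] // "unknown")] | @tsv'

PVs showing kubernetes.io/vsphere-volume come from the in-tree plugin; csi.vsphere.vmware.com indicates the vSphere CSI driver.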
Prerequisites for an admin cluster
The admin cluster must have an HA control plane. If your admin cluster has a non-HA control plane, migrate to HA before you continue.

Verify that the admin cluster has an HA control plane:
kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get nodes
Replace ADMIN_CLUSTER_KUBECONFIG with the path of the admin cluster kubeconfig file.

In the output, make sure that you see three control-plane nodes. For example:

admin-cp-1   Ready   control-plane,master   ...
admin-cp-2   Ready   control-plane,master   ...
admin-cp-3   Ready   control-plane,master   ...
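As a quick check, you can count the control-plane nodes directly. A minimal sketch, assuming the nodes carry the standard node-role.kubernetes.io/control-plane label (visible in the ROLES column above):

# Should print 3 for an HA admin cluster control plane.
kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get nodes \
    -l node-role.kubernetes.io/control-plane --no-headers | wc -l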
Prerequisites for all clusters (admin and user)

The cluster must have node auto repair disabled. If node auto repair is enabled, disable node auto repair before you continue.

The cluster must use Storage Policy Based Management (SPBM). If your cluster doesn't use SPBM, create a storage policy before you continue.

Verify that the cluster uses SPBM:

kubectl --kubeconfig CLUSTER_KUBECONFIG get onpremadmincluster --namespace kube-system \
    -ojson | jq '{datastore: .items[0].spec.vCenter.datastore, storagePolicyName: .items[0].spec.vCenter.storagePolicyName}'

(User cluster only) Verify that the node pools use SPBM:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get onpremnodepools --namespace USER_CLUSTER_NAME-gke-onprem-mgmt \
    -ojson | jq '.items[] | {name: .metadata.name, datastore: .spec.vsphere.datastore, storagePolicyName: .spec.vsphere.storagePolicyName}'

Replace the following:

CLUSTER_KUBECONFIG: the path of the cluster kubeconfig file (admin or user)

ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file

USER_CLUSTER_NAME: the name of the user cluster

In the output, if the datastore field is empty and the storagePolicyName field is non-empty, then the cluster is using SPBM.
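For example, output along these lines indicates SPBM is in use (the policy name here is hypothetical):

{
  "datastore": "",
  "storagePolicyName": "my-storage-policy"
}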
The cluster must not use the vSphere in-tree volume plugin.

If your cluster was upgraded from an earlier version of Google Distributed Cloud, it might have PV/PVCs that were provisioned by the vSphere in-tree volume plugin. This kind of volume might be in use by a control-plane node of a kubeception user cluster or by a workload that you created on a worker node.

List all PVCs and their StorageClasses:

kubectl --kubeconfig CLUSTER_KUBECONFIG get pvc --all-namespaces \
    -ojson | jq '.items[] | {namespace: .metadata.namespace, name: .metadata.name, storageClassName: .spec.storageClassName}'
List all StorageClasses and see what provisioners they are using:
kubectl --kubeconfig CLUSTER_KUBECONFIG get storageclass
In the output, if the PROVISIONER column is kubernetes.io/vsphere-volume, then PVCs created with this StorageClass are using the vSphere in-tree volume plugin. For the StatefulSets that use these PV/PVCs, migrate them to the vSphere CSI driver.
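To pull out just the offending StorageClasses rather than scanning the table by eye, a minimal jq sketch (assuming jq is available):

# Print only StorageClasses backed by the in-tree vSphere plugin.
kubectl --kubeconfig CLUSTER_KUBECONFIG get storageclass -o json \
    | jq -r '.items[] | select(.provisioner == "kubernetes.io/vsphere-volume") | .metadata.name'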
Perform the storage migration
Google Distributed Cloud supports two categories of storage migration:

- Storage vMotion for VMs, which moves VM storage, including attached vSphere CNS volumes used by Pods running on a node, and the VMDKs used by these VM CNS volumes attached to the nodes

- CNS volume relocation, which moves specified vSphere CNS volumes to a compatible datastore without performing storage vMotion for VMs
Perform storage vMotion for VMs
Migration involves steps that you do in your vSphere environment and commands that you run on your admin workstation:

1. In your vSphere environment, add your target datastores to your storage policy.

2. In your vSphere environment, migrate the cluster VMs that use the old datastore to the new datastore. For instructions, see Migrate a Virtual Machine to a New Compute Resource and Storage.

3. On your admin workstation, verify that the VMs have been successfully migrated to the new datastore. Get the Machine objects in the cluster:

   kubectl --kubeconfig CLUSTER_KUBECONFIG get machines --output yaml

   In the output, under status.disks, you can see the disks attached to the VMs. For example:

   status:
     addresses:
     - address: 172.16.20.2
       type: ExternalIP
     disks:
     - bootdisk: true
       datastore: pf-ds06
       filepath: me-xvz2ccv28bf9wdbx-2/me-xvz2ccv28bf9wdbx-2.vmdk
       uuid: 6000C29d-8edb-e742-babc-9c124013ba54
     - datastore: pf-ds06
       filepath: anthos/gke-admin-nc4rk/me/ci-bluecwang-head-2-data.vmdk
       uuid: 6000C29e-cb12-8ffd-1aed-27f0438bb9d9

   Verify that all the disks of all the machines in the cluster have been migrated to the target datastore (see the one-liner after these steps).

4. On your admin workstation, run gkectl diagnose to verify that the cluster is healthy.
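To check every machine's disks at once instead of reading the YAML by eye, a minimal sketch, assuming jq is available and that each Machine reports its disks under status.disks as shown above:

# Print the set of datastores still referenced by any machine disk;
# after a complete migration, only the target datastore should appear.
kubectl --kubeconfig CLUSTER_KUBECONFIG get machines -o json \
    | jq -r '[.items[].status.disks[]?.datastore] | unique[]'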
Call CNS Relocation APIs for moving CNS volumes
If you only want to move CNS volumes provisioned by the vSphere CSI driver, you can follow the instructions in Migrating Container Volumes in vSphere. This might be simpler if you only have CNS volumes in the old datastore.
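To gather the CNS volume IDs to relocate, you can read them from the PVs backed by the vSphere CSI driver, whose registered name is csi.vsphere.vmware.com. A minimal sketch, assuming jq is available:

# List CSI-provisioned PVs with their CNS volume IDs (spec.csi.volumeHandle).
kubectl --kubeconfig CLUSTER_KUBECONFIG get pv -o json \
    | jq -r '.items[] | select(.spec.csi.driver == "csi.vsphere.vmware.com") | [.metadata.name, .spec.csi.volumeHandle] | @tsv'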
Update your storage policy if needed
In your vSphere environment, update the storage policy to exclude the old datastores. Otherwise, new volumes and re-created VMs might get assigned to an old datastore.
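To sanity-check the updated policy, you can provision a throwaway PVC and confirm in vSphere that it lands on a target datastore. A minimal sketch; the PVC name and the StorageClass name standard-rwo are hypothetical, so substitute your SPBM-backed class:

cat <<'EOF' | kubectl --kubeconfig CLUSTER_KUBECONFIG apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datastore-policy-smoke-test    # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard-rwo       # hypothetical SPBM-backed StorageClass
EOF

# After the claim binds, check the backing datastore in the vSphere CNS view, then clean up:
kubectl --kubeconfig CLUSTER_KUBECONFIG delete pvc datastore-policy-smoke-test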
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-08-31 UTC."],[],[],null,["This document shows how to migrate disks from one\n[vSphere datastore](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-D5AB2BAD-C69A-4B8D-B468-25D86B8D39CE.html)\nto another vSphere datastore with\n[Storage Policy Based Management (SPBM)](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/configure-storage-policy).\n\n1.29: Generally available \n\n1.28: Preview \n\n1.16: Not available\n\nYou can migrate the following kinds of storage:\n\n- Storage for system components managed by Google Distributed Cloud, including:\n\n - Data disks (VMDK files) used by the control-plane nodes of admin clusters\n and Controlplane V2 user clusters\n\n - Boot disks (VMDK files) used by all admin cluster and user cluster nodes\n\n - vSphere Volumes represented by PV/PVCs in the admin cluster and used by the\n control-plane components of\n [kubeception](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/plan-ip-addresses-kubeception)\n user clusters\n\n- Storage for workloads that you deploy on user cluster worker nodes with PV/PVCs\n provisioned by the in-tree vSphere volume plugin or the\n [vSphere CSI driver](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/using-vsphere-csi-driver)\n\nPrerequisites for an admin cluster\n\n1. The admin cluster must have an HA control plane. If your admin cluster has a\n non-HA control plane,\n [migrate to HA](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/migrate-admin-cluster-ha)\n before you continue.\n\n2. Verify that the admin cluster has an HA control plane:\n\n ```\n kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get nodes\n ```\n\n Replace \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e with the path of the admin cluster\n kubeconfig file.\n\n In the output, make sure that you see three control-plane nodes. For example: \n\n ```\n admin-cp-1 Ready control-plane,master ...\n admin-cp-2 Ready control-plane,master ...\n admin-cp-3 Ready control-plane,master ...\n ```\n\nPrerequisites for all clusters (admin and user)\n\n1. The cluster must have node auto repair disabled. If node auto repair is\n enabled,\n [disable node auto repair](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/node-auto-repair#disabling_node_repair_and_health_checking_for_a_user_cluster).\n\n2. The cluster must use\n [Storage Policy Based Management (SPBM)](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-720298C6-ED3A-4E80-87E8-076FFF02655A.html).\n If your cluster doesn't use SPBM,\n [create a storage policy](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/configure-storage-policy)\n before you continue.\n\n3. 
Verify that the cluster uses SPBM:\n\n ```\n kubectl --kubeconfig CLUSTER_KUBECONFIG get onpremadmincluster --namespace kube-system \\\n -ojson | jq '{datastore: .items[0].spec.vCenter.datastore, storagePolicyName: .items[0].spec.vCenter.storagePolicyName}'\n ```\n\n (User cluster only) Verify that the node pools use SPBM: \n\n ```\n kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get onpremnodepools --namespace USER_CLUSTER_NAME-gke-onprem-mgmt \\\n -ojson | jq '.items[] | {name: .metadata.name, datastore: .spec.vsphere.datastore, storagePolicyName: .spec.vsphere.storagePolicyName}'\n ```\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e: the path of the cluster kubeconfig file\n (admin or user).\n\n - \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e: the path of the admin cluster\n kubeconfig file\n\n - \u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e: the name of the user cluster\n\n In the output, if the `datastore` field is empty and the `storagePolicyName`\n field is non-empty, then the cluster is using SPBM.\n4. The cluster must not use the vSphere in-tree volume plugin.\n\n If your cluster was upgraded from an earlier version of Google Distributed Cloud,\n it might have PV/PVCs that were provisioned by the\n [vSphere in-tree volume plugin](/kubernetes-engine/distributed-cloud/vmware/docs/concepts/storage#kubernetes_in-tree_volume_plugins).\n This kind of volume might be in use by a control-plane node of a kubeception\n user cluster or by a workload that you created on a worker node.\n\n List of all PVCs and their StorageClasses: \n\n ```\n kubectl --kubeconfig CLUSTER_KUBECONFIG get pvc --all-namespaces \\\n -ojson | jq '.items[] | {namespace: .metadata.namespace, name: .metadata.name, storageClassName: .spec.storageClassName}'\n ```\n\n List all StorageClasses and see what provisioners they are using: \n\n ```\n kubectl --kubeconfig CLUSTER_KUBECONFIG get storageclass\n ```\n\n In the output, if the `PROVISIONER` column is `kubernetes.io/vsphere-volume`,\n then PVCs created with this StorageClass are using the vSphere in-tree volume\n plugin. For the StatefulSets using these PV/PVCs,\n [migrate them to the vSphere CSI driver](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/using-statefulset-csi-migration-tool).\n\nPerform the storage migration\n\nGoogle Distributed Cloud supports two categories of storage migration:\n\n- Storage vMotion for VMs, which moves VM storage, including attached vSphere\n CNS volumes used by Pods running on a node, and VMDKs used by these VM CNS\n volumes attached to the nodes\n\n- CNS volume relocation, which moves specified vSphere CNS volumes to a\n compatible datastore without performing storage vMotion for VMs\n\nPerform storage vMotion for VMs\n\nMigration involves steps that you do in your vSphere environment and commands\nthat you run on your admin workstation:\n\n1. In your vSphere environment, add your target datastores to your storage\n policy.\n\n2. In your vSphere environment, migrate cluster VMs using the old datastore to\n the new datastore. For instructions, see\n [Migrate a Virtual Machine to a New Compute Resource and Storage](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-23E67822-4559-4870-982A-BCE2BB88D491.html).\n\n3. 
On your admin workstation, verify that the VMs have been successfully\n migrated to the new datastore.\n\n Get the Machine objects in the cluster: \n\n ```\n kubectl --kubeconfig CLUSTER_KUBECONFIG get machines --output yaml\n ```\n\n In the output, under `status.disks`, you can see the disks attached to the\n VMs. For example: \n\n ```\n status:\n addresses:\n – address: 172.16.20.2\n type: ExternalIP\n disks:\n – bootdisk: true\n datastore: pf-ds06\n filepath: me-xvz2ccv28bf9wdbx-2/me-xvz2ccv28bf9wdbx-2.vmdk\n uuid: 6000C29d-8edb-e742-babc-9c124013ba54\n – datastore: pf-ds06\n filepath: anthos/gke-admin-nc4rk/me/ci-bluecwang-head-2-data.vmdk\n uuid: 6000C29e-cb12-8ffd-1aed-27f0438bb9d9\n ```\n\n Verify that all the disks of all the machines in the cluster have been\n migrated to the target datastore.\n4. On your admin workstation, run\n [`gkectl diagnose`](/kubernetes-engine/distributed-cloud/vmware/docs/troubleshooting/diagnose)\n to verify that the cluster is healthy.\n\nCall CNS Relocation APIs for moving CNS volumes\n\nIf you only want to move CNS volumes provisioned by the vSphere CSI driver, you\ncan follow the instructions in\n[Migrating Container Volumes in vSphere](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-536DEB75-84F5-48DC-A425-3BF703B8F54E.html).\nThis might be simpler if you only have CNS volumes in the old datastore.\n\nUpdate your storage policy if needed\n\nIn your vSphere environment, update the storage policy to exclude the old\ndatastores. Otherwise, new volumes and re-created VMs might get assigned to\nan old datastore."]]