Storage migration with Storage Policy Based Management

This document shows how to migrate disks from one vSphere datastore to
another vSphere datastore with Storage Policy Based Management (SPBM).

1.29: Generally available
1.28: Preview
1.16: Not available
You can migrate the following kinds of storage:

Storage for system components managed by Google Distributed Cloud, including:

- Data disks (VMDK files) used by the control-plane nodes of admin clusters
  and Controlplane V2 user clusters
- Boot disks (VMDK files) used by all admin cluster and user cluster nodes
- vSphere volumes represented by PV/PVCs in the admin cluster and used by the
  control-plane components of kubeception user clusters

Storage for workloads that you deploy on user cluster worker nodes with
PV/PVCs provisioned by the in-tree vSphere volume plugin or the vSphere CSI
driver
Prerequisites for an admin cluster

The admin cluster must have a highly available (HA) control plane. If your
admin cluster has a non-HA control plane, migrate to HA before you continue.
Verify that the admin cluster has an HA control plane:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get nodes

Replace ADMIN_CLUSTER_KUBECONFIG with the path of the admin cluster
kubeconfig file.

In the output, make sure that you see three control-plane nodes. For example:

admin-cp-1   Ready    control-plane,master   ...
admin-cp-2   Ready    control-plane,master   ...
admin-cp-3   Ready    control-plane,master   ...
Prerequisites for all clusters (admin and user)

The cluster must have node auto repair disabled. If node auto repair is
enabled, disable node auto repair.

The cluster must use Storage Policy Based Management (SPBM). If your cluster
doesn't use SPBM, create a storage policy before you continue.

Verify that the cluster uses SPBM:

kubectl --kubeconfig CLUSTER_KUBECONFIG get onpremadmincluster --namespace kube-system \
    -ojson | jq '{datastore: .items[0].spec.vCenter.datastore, storagePolicyName: .items[0].spec.vCenter.storagePolicyName}'

(User cluster only) Verify that the node pools use SPBM:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get onpremnodepools --namespace USER_CLUSTER_NAME-gke-onprem-mgmt \
    -ojson | jq '.items[] | {name: .metadata.name, datastore: .spec.vsphere.datastore, storagePolicyName: .spec.vsphere.storagePolicyName}'

Replace the following:

CLUSTER_KUBECONFIG: the path of the cluster kubeconfig file (admin or user)
ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file
USER_CLUSTER_NAME: the name of the user cluster

In the output, if the datastore field is empty and the storagePolicyName
field is non-empty, then the cluster is using SPBM.
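
For example, output like the following indicates that the cluster is using
SPBM (the values here are illustrative; the policy name is hypothetical):

{
  "datastore": "",
  "storagePolicyName": "my-storage-policy"
}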
The cluster must not use the vSphere in-tree volume plugin.

If your cluster was upgraded from an earlier version of Google Distributed
Cloud, it might have PV/PVCs that were provisioned by the vSphere in-tree
volume plugin. This kind of volume might be in use by a control-plane node of
a kubeception user cluster or by a workload that you created on a worker node.

List all PVCs and their StorageClasses:

kubectl --kubeconfig CLUSTER_KUBECONFIG get pvc --all-namespaces \
    -ojson | jq '.items[] | {namespace: .metadata.namespace, name: .metadata.name, storageClassName: .spec.storageClassName}'
List all StorageClasses and see which provisioners they use:
kubectl --kubeconfig CLUSTER_KUBECONFIG get storageclass
In the output, if the PROVISIONER column shows kubernetes.io/vsphere-volume,
then PVCs created with that StorageClass use the vSphere in-tree volume
plugin. For StatefulSets that use these PV/PVCs, migrate them to the vSphere
CSI driver.
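
To see which StatefulSets are affected before migrating, you can filter
StatefulSets by the StorageClass of their volume claim templates. This is a
minimal sketch, assuming the in-tree StorageClass is named standard-vsphere
(a hypothetical name); substitute the StorageClass names you found in the
previous output:

# List StatefulSets whose volumeClaimTemplates reference the in-tree StorageClass.
kubectl --kubeconfig CLUSTER_KUBECONFIG get statefulsets --all-namespaces \
    -ojson | jq '.items[]
      | select(any(.spec.volumeClaimTemplates[]?;
               .spec.storageClassName == "standard-vsphere"))
      | {namespace: .metadata.namespace, name: .metadata.name}'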
Perform the storage migration

Google Distributed Cloud supports two categories of storage migration:

- Storage vMotion for VMs, which moves VM storage, including attached vSphere
  CNS volumes used by Pods running on a node and the VMDKs backing those
  attached CNS volumes
- CNS volume relocation, which moves specified vSphere CNS volumes to a
  compatible datastore without performing storage vMotion for VMs
Perform storage vMotion for VMs

Migration involves steps that you do in your vSphere environment and commands
that you run on your admin workstation:

In your vSphere environment, add your target datastores to your storage
policy.
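
If your storage policy is tag-based, one way to do this from the command line
is with the govc CLI. This is a sketch under stated assumptions: govc is
installed and configured with your vCenter credentials, the policy matches on
a tag, and the tag name and datastore path below are hypothetical
placeholders:

# Attach the tag that the storage policy matches on to the new datastore.
govc tags.attach STORAGE_POLICY_TAG /DATACENTER/datastore/NEW_DATASTORE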
In your vSphere environment, migrate the cluster VMs that use the old
datastore to the new datastore. For instructions, see Migrate a Virtual
Machine to a New Compute Resource and Storage in the VMware documentation.

On your admin workstation, verify that the VMs have been successfully
migrated to the new datastore.

Get the Machine objects in the cluster:

kubectl --kubeconfig CLUSTER_KUBECONFIG get machines --output yaml

In the output, under status.disks, you can see the disks attached to the VMs.
For example:

status:
  addresses:
  - address: 172.16.20.2
    type: ExternalIP
  disks:
  - bootdisk: true
    datastore: pf-ds06
    filepath: me-xvz2ccv28bf9wdbx-2/me-xvz2ccv28bf9wdbx-2.vmdk
    uuid: 6000C29d-8edb-e742-babc-9c124013ba54
  - datastore: pf-ds06
    filepath: anthos/gke-admin-nc4rk/me/ci-bluecwang-head-2-data.vmdk
    uuid: 6000C29e-cb12-8ffd-1aed-27f0438bb9d9

Verify that all the disks of all the machines in the cluster have been
migrated to the target datastore.
On your admin workstation, run gkectl diagnose to verify that the cluster is
healthy.
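
For example, for a user cluster (a sketch; the flag values are placeholders,
and the exact invocation depends on your gkectl version):

gkectl diagnose cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --cluster-name USER_CLUSTER_NAME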
Call CNS relocation APIs to move CNS volumes

If you only want to move CNS volumes provisioned by the vSphere CSI driver,
you can follow the instructions in Migrating Container Volumes in vSphere.
This might be simpler if you only have CNS volumes in the old datastore.
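
Before calling the relocation APIs, you need to identify the CNS volumes to
move. A minimal sketch that lists PVs provisioned by the vSphere CSI driver
together with their volume handles (the CNS volume IDs), assuming the
standard driver name csi.vsphere.vmware.com:

# List CSI-provisioned PVs and their CNS volume IDs.
kubectl --kubeconfig CLUSTER_KUBECONFIG get pv \
    -ojson | jq '.items[]
      | select(.spec.csi != null and .spec.csi.driver == "csi.vsphere.vmware.com")
      | {name: .metadata.name, volumeHandle: .spec.csi.volumeHandle}'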
Update your storage policy if needed

In your vSphere environment, update the storage policy to exclude the old
datastores. Otherwise, new volumes and re-created VMs might get assigned to
an old datastore.
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],["Última atualização 2024-09-02 UTC."],[],[],null,["# Storage migration with Storage Policy Based Management\n\nThis document shows how to migrate disks from one\n[vSphere datastore](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-D5AB2BAD-C69A-4B8D-B468-25D86B8D39CE.html)\nto another vSphere datastore with\n[Storage Policy Based Management (SPBM)](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/configure-storage-policy).\n\n1.29: Generally available \n\n1.28: Preview \n\n1.16: Not available\n\nYou can migrate the following kinds of storage:\n\n- Storage for system components managed by Google Distributed Cloud, including:\n\n - Data disks (VMDK files) used by the control-plane nodes of admin clusters\n and Controlplane V2 user clusters\n\n - Boot disks (VMDK files) used by all admin cluster and user cluster nodes\n\n - vSphere Volumes represented by PV/PVCs in the admin cluster and used by the\n control-plane components of\n [kubeception](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/plan-ip-addresses-kubeception)\n user clusters\n\n- Storage for workloads that you deploy on user cluster worker nodes with PV/PVCs\n provisioned by the in-tree vSphere volume plugin or the\n [vSphere CSI driver](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/using-vsphere-csi-driver)\n\nPrerequisites for an admin cluster\n----------------------------------\n\n1. The admin cluster must have an HA control plane. If your admin cluster has a\n non-HA control plane,\n [migrate to HA](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/migrate-admin-cluster-ha)\n before you continue.\n\n2. Verify that the admin cluster has an HA control plane:\n\n ```\n kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get nodes\n ```\n\n Replace \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e with the path of the admin cluster\n kubeconfig file.\n\n In the output, make sure that you see three control-plane nodes. For example: \n\n ```\n admin-cp-1 Ready control-plane,master ...\n admin-cp-2 Ready control-plane,master ...\n admin-cp-3 Ready control-plane,master ...\n ```\n\nPrerequisites for all clusters (admin and user)\n-----------------------------------------------\n\n1. The cluster must have node auto repair disabled. If node auto repair is\n enabled,\n [disable node auto repair](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/node-auto-repair#disabling_node_repair_and_health_checking_for_a_user_cluster).\n\n2. The cluster must use\n [Storage Policy Based Management (SPBM)](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-720298C6-ED3A-4E80-87E8-076FFF02655A.html).\n If your cluster doesn't use SPBM,\n [create a storage policy](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/configure-storage-policy)\n before you continue.\n\n3. 
Verify that the cluster uses SPBM:\n\n ```\n kubectl --kubeconfig CLUSTER_KUBECONFIG get onpremadmincluster --namespace kube-system \\\n -ojson | jq '{datastore: .items[0].spec.vCenter.datastore, storagePolicyName: .items[0].spec.vCenter.storagePolicyName}'\n ```\n\n (User cluster only) Verify that the node pools use SPBM: \n\n ```\n kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get onpremnodepools --namespace USER_CLUSTER_NAME-gke-onprem-mgmt \\\n -ojson | jq '.items[] | {name: .metadata.name, datastore: .spec.vsphere.datastore, storagePolicyName: .spec.vsphere.storagePolicyName}'\n ```\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e: the path of the cluster kubeconfig file\n (admin or user).\n\n - \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e: the path of the admin cluster\n kubeconfig file\n\n - \u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e: the name of the user cluster\n\n In the output, if the `datastore` field is empty and the `storagePolicyName`\n field is non-empty, then the cluster is using SPBM.\n4. The cluster must not use the vSphere in-tree volume plugin.\n\n If your cluster was upgraded from an earlier version of Google Distributed Cloud,\n it might have PV/PVCs that were provisioned by the\n [vSphere in-tree volume plugin](/kubernetes-engine/distributed-cloud/vmware/docs/concepts/storage#kubernetes_in-tree_volume_plugins).\n This kind of volume might be in use by a control-plane node of a kubeception\n user cluster or by a workload that you created on a worker node.\n\n List of all PVCs and their StorageClasses: \n\n ```\n kubectl --kubeconfig CLUSTER_KUBECONFIG get pvc --all-namespaces \\\n -ojson | jq '.items[] | {namespace: .metadata.namespace, name: .metadata.name, storageClassName: .spec.storageClassName}'\n ```\n\n List all StorageClasses and see what provisioners they are using: \n\n ```\n kubectl --kubeconfig CLUSTER_KUBECONFIG get storageclass\n ```\n\n In the output, if the `PROVISIONER` column is `kubernetes.io/vsphere-volume`,\n then PVCs created with this StorageClass are using the vSphere in-tree volume\n plugin. For the StatefulSets using these PV/PVCs,\n [migrate them to the vSphere CSI driver](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/using-statefulset-csi-migration-tool).\n\nPerform the storage migration\n-----------------------------\n\nGoogle Distributed Cloud supports two categories of storage migration:\n\n- Storage vMotion for VMs, which moves VM storage, including attached vSphere\n CNS volumes used by Pods running on a node, and VMDKs used by these VM CNS\n volumes attached to the nodes\n\n- CNS volume relocation, which moves specified vSphere CNS volumes to a\n compatible datastore without performing storage vMotion for VMs\n\n### Perform storage vMotion for VMs\n\nMigration involves steps that you do in your vSphere environment and commands\nthat you run on your admin workstation:\n\n1. In your vSphere environment, add your target datastores to your storage\n policy.\n\n2. In your vSphere environment, migrate cluster VMs using the old datastore to\n the new datastore. For instructions, see\n [Migrate a Virtual Machine to a New Compute Resource and Storage](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-23E67822-4559-4870-982A-BCE2BB88D491.html).\n\n3. 
On your admin workstation, verify that the VMs have been successfully\n migrated to the new datastore.\n\n Get the Machine objects in the cluster: \n\n ```\n kubectl --kubeconfig CLUSTER_KUBECONFIG get machines --output yaml\n ```\n\n In the output, under `status.disks`, you can see the disks attached to the\n VMs. For example: \n\n ```\n status:\n addresses:\n – address: 172.16.20.2\n type: ExternalIP\n disks:\n – bootdisk: true\n datastore: pf-ds06\n filepath: me-xvz2ccv28bf9wdbx-2/me-xvz2ccv28bf9wdbx-2.vmdk\n uuid: 6000C29d-8edb-e742-babc-9c124013ba54\n – datastore: pf-ds06\n filepath: anthos/gke-admin-nc4rk/me/ci-bluecwang-head-2-data.vmdk\n uuid: 6000C29e-cb12-8ffd-1aed-27f0438bb9d9\n ```\n\n Verify that all the disks of all the machines in the cluster have been\n migrated to the target datastore.\n4. On your admin workstation, run\n [`gkectl diagnose`](/kubernetes-engine/distributed-cloud/vmware/docs/troubleshooting/diagnose)\n to verify that the cluster is healthy.\n\n### Call CNS Relocation APIs for moving CNS volumes\n\nIf you only want to move CNS volumes provisioned by the vSphere CSI driver, you\ncan follow the instructions in\n[Migrating Container Volumes in vSphere](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-536DEB75-84F5-48DC-A425-3BF703B8F54E.html).\nThis might be simpler if you only have CNS volumes in the old datastore.\n\nUpdate your storage policy if needed\n------------------------------------\n\nIn your vSphere environment, update the storage policy to exclude the old\ndatastores. Otherwise, new volumes and re-created VMs might get assigned to\nan old datastore."]]