# Create clones of persistent volumes

This document shows you how to use Kubernetes volume cloning to clone
[persistent volumes](/kubernetes-engine/docs/concepts/persistent-volumes)
in your Google Kubernetes Engine (GKE) clusters.

Overview
--------

A clone is a new independent volume that is a duplicate of an existing
Kubernetes volume. A clone is similar to a [volume snapshot](/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots)
in that it's a copy of a volume at a specific point in time. However, rather
than creating a snapshot object from the source volume, volume cloning
provisions the clone with all the data from the source volume.
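In a PersistentVolumeClaim manifest, this difference shows up in the `dataSource` field: a clone references another PersistentVolumeClaim directly, while a snapshot restore references a VolumeSnapshot object. The following is a minimal sketch of the two shapes; the resource names are illustrative:

```yaml
# Clone: the new PVC copies its data directly from an existing PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-clone       # illustrative name
spec:
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc         # illustrative source PVC in the same namespace
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
---
# Snapshot restore: the new PVC is populated from a VolumeSnapshot object.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-snapshot    # illustrative name
spec:
  dataSource:
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
    name: source-snapshot    # illustrative VolumeSnapshot
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
```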
Requirements
------------

To use volume cloning on GKE, you must meet the following
requirements:

- The source PersistentVolumeClaim must be in the same namespace as the
  destination PersistentVolumeClaim.
- Use a CSI driver that supports volume cloning. The in-tree persistent disk
  driver does not support volume cloning.
- You can create a regional disk clone from a zonal disk, but you should be
  aware of the [restrictions of this approach](/compute/docs/disks/create-disk-from-source#restrictions_2).
- Cloning must be done in a compatible zone. Use [allowedTopologies](https://kubernetes.io/docs/concepts/storage/storage-classes/#allowed-topologies)
  to restrict the topology of provisioned volumes to specific zones.
  Alternatively, you can use [nodeSelectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
  or [affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
  rules to constrain a Pod so that it runs only on a node in a compatible zone.
  - For zonal to zonal cloning, the clone zone must match the source disk zone.
  - For zonal to regional cloning, one of the replica zones of the clone must
    match the zone of the source disk.
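For example, `allowedTopologies` on a StorageClass can pin provisioned (and therefore cloned) volumes to the zone of the source disk. The following is a minimal sketch, assuming the Compute Engine persistent disk CSI driver; the class name and zone value are illustrative:

```yaml
# Sketch: a StorageClass that restricts dynamic provisioning to one zone,
# so clones created from it land in a zone compatible with the source disk.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: single-zone-balanced   # illustrative name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-central1-a            # illustrative zone; use the source disk's zone
```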
- The [Compute Engine persistent disk CSI Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver)
  version 1.4.0 and later supports volume cloning, and is installed by default
  on new Linux clusters running GKE version 1.22 or later. You can also
  [enable the Compute Engine persistent disk CSI Driver on an existing cluster](/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver#enabling_the_on_an_existing_cluster).

To verify the Compute Engine persistent disk CSI Driver version, run the
following command:

    kubectl describe daemonsets pdcsi-node --namespace=kube-system | grep "gke.gcr.io/gcp-compute-persistent-disk-csi-driver"

If the output shows a version earlier than `1.4.0`,
[manually upgrade your control plane](/kubernetes-engine/docs/how-to/upgrading-a-cluster#upgrade_cp)
to get the latest version.

Limitations
-----------

- Both volumes must use the same [volume mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#volume-mode).
  By default, GKE provisions volumes with the `Filesystem` volume mode
  (formatted as `ext4`).
- All [restrictions for creating a disk clone from an existing disk](/compute/docs/disks/create-disk-from-source#restrictions)
  on Compute Engine also apply to GKE.

Using volume cloning
--------------------

To provision a volume clone, you add a reference to an existing
PersistentVolumeClaim in the same namespace to the `dataSource` field of a
new PersistentVolumeClaim. The following exercise shows you how to provision
a source volume with data, create a volume clone, and consume the clone.

### Create a source volume

To create a source volume, follow the instructions in
[Using the Compute Engine persistent disk CSI Driver for Linux clusters](/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver#using_the_for_linux_clusters)
to create a StorageClass, a PersistentVolumeClaim, and a Pod to consume the
new volume. You'll use the PersistentVolumeClaim that you create as the source
for the volume clone.

### Add a test file to the source volume

Add a test file to the source volume. You can look for this test file in the
volume clone to verify that cloning was successful.

1. Create a test file in a Pod:

        kubectl exec POD_NAME \
            -- sh -c 'echo "Hello World!" > /var/lib/www/html/hello.txt'

   Replace POD_NAME with the name of a Pod that consumes the source volume.
   For example, if you followed the instructions in
   [Using the Compute Engine persistent disk CSI Driver for Linux clusters](/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver#using_the_for_linux_clusters),
   replace POD_NAME with `web-server`.

2. Verify that the file exists:

        kubectl exec POD_NAME \
            -- sh -c 'cat /var/lib/www/html/hello.txt'

   The output is similar to the following:

        Hello World!

### Clone the source volume

1. Save the following manifest as `podpvc-clone.yaml`:

        kind: PersistentVolumeClaim
        apiVersion: v1
        metadata:
          name: podpvc-clone
        spec:
          dataSource:
            name: PVC_NAME
            kind: PersistentVolumeClaim
          accessModes:
          - ReadWriteOnce
          storageClassName: STORAGE_CLASS_NAME
          resources:
            requests:
              storage: STORAGE

   Replace the following:
   - PVC_NAME: the name of the source PersistentVolumeClaim that you created
     in [Create a source volume](#create-source).
   - STORAGE_CLASS_NAME: the name of the StorageClass to use, which must be
     the same as the StorageClass of the source PersistentVolumeClaim.
   - STORAGE: the amount of storage to request, which must be at least the
     size of the source PersistentVolumeClaim.

2. Apply the manifest:

        kubectl apply -f podpvc-clone.yaml

### Create a Pod that consumes the cloned volume

The following example creates a Pod that consumes the volume clone that
you created.

1. Save the following manifest as `web-server-clone.yaml`:

        apiVersion: v1
        kind: Pod
        metadata:
          name: web-server-clone
        spec:
          containers:
          - name: web-server-clone
            image: nginx
            volumeMounts:
            - mountPath: /var/lib/www/html
              name: mypvc
          volumes:
          - name: mypvc
            persistentVolumeClaim:
              claimName: podpvc-clone
              readOnly: false

2. Apply the manifest:

        kubectl apply -f web-server-clone.yaml

3. Verify that the test file exists:

        kubectl exec web-server-clone \
            -- sh -c 'cat /var/lib/www/html/hello.txt'

   The output is similar to the following:

        Hello World!

Clean up
--------

To avoid incurring charges to your Google Cloud account for the resources used
on this page, follow these steps.

1. Delete the `Pod` objects:

        kubectl delete pod POD_NAME web-server-clone

2. Delete the `PersistentVolumeClaim` objects:

        kubectl delete pvc podpvc podpvc-clone

Last updated 2024-11-21 UTC.