This documentation covers the most recent version of GKE on Azure, released in November 2021. See the Release notes for more information.
Delete a node pool
==================

This page shows you how to delete node pools in GKE on Azure.

Delete a node pool
------------------

To delete a node pool, perform the following steps:

### Console

1. In the Google Cloud console, go to the **Google Kubernetes Engine clusters overview** page.

   [Go to GKE clusters](https://console.cloud.google.com/kubernetes/list/overview)

2. Select the Google Cloud project that the cluster is in.

3. In the cluster list, select the name of the cluster, and then select **View details** in the side panel.

4. Select the **Nodes** tab to see a list of all the node pools.

5. Select a node pool from the list.

6. Near the top of the window, click **Delete**.

   If the delete fails, follow the steps in the `gcloud` tab and add the `--ignore-errors` flag to the `gcloud container azure node-pools delete` command.

### gcloud

1. Get a list of your node pools:

       gcloud container azure node-pools list \
           --cluster CLUSTER_NAME \
           --location GOOGLE_CLOUD_LOCATION

2. Delete each node pool with the following command:

       gcloud container azure node-pools delete NODE_POOL_NAME \
           --cluster CLUSTER_NAME \
           --location GOOGLE_CLOUD_LOCATION

   Replace the following:

   - `NODE_POOL_NAME`: the name of the node pool to delete
   - `CLUSTER_NAME`: the name of the cluster that the node pool is attached to
   - `GOOGLE_CLOUD_LOCATION`: the Google Cloud location hosting the node pool
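To remove every node pool in a cluster, the `list` and `delete` commands can be combined in a small shell sketch. The helper below only *prints* each delete command so you can review it before running anything; the pool, cluster, and location names are hypothetical examples, not values from this page.

```shell
# Print (don't run) the delete command for one node pool.
# Feeding it real names from `gcloud container azure node-pools list
# --format="value(name)"` is left to the reader.
delete_cmd() {
  pool="$1"; cluster="$2"; location="$3"
  printf 'gcloud container azure node-pools delete %s --cluster %s --location %s\n' \
    "$pool" "$cluster" "$location"
}

# Hypothetical pool names, as the list command might return them.
for pool in pool-a pool-b; do
  delete_cmd "$pool" my-cluster us-east4
done
```

Swapping the `printf` for a real invocation turns this into a bulk-delete script, but keeping the dry run first is a safer default.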
If the command returns an error and the deletion fails, you can force the deletion by running the command again with the `--ignore-errors` flag. This flag is available in version 1.29 and later.

**Caution:** Adding the `--ignore-errors` flag to the delete command might result in orphaned Azure resources. If that happens, consult the Azure documentation on how to remove the orphaned resources.
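The retry-with-`--ignore-errors` pattern can be scripted. This is a sketch, not an official tool: the `run` wrapper echoes commands instead of executing them (swap the `echo` for `"$@"` to execute), and the cluster and location values are placeholders.

```shell
# Dry-run wrapper: prints the command instead of executing it.
# Replace `echo "$@"` with "$@" to actually run gcloud.
run() { echo "$@"; }

# Try a normal delete first; only on failure, retry with --ignore-errors.
delete_pool() {
  run gcloud container azure node-pools delete "$1" \
    --cluster my-cluster --location us-east4 ||
    run gcloud container azure node-pools delete "$1" \
      --cluster my-cluster --location us-east4 --ignore-errors
}

delete_pool my-pool
```

Because forcing the deletion can orphan Azure resources, reserving the `--ignore-errors` path for the failure branch keeps the safer command as the default.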
#### How GKE on Azure protects workloads during node pool deletion

During node pool deletion, GKE on Azure performs a graceful shutdown on each node without honoring PodDisruptionBudget. It takes the following steps:

1. Disable the cluster autoscaler, if it is enabled.
2. Set a deadline for the draining process. After this deadline, even if Pod objects still exist, GKE on Azure stops draining and proceeds to delete the underlying virtual machines. The default deadline is five minutes; for every 10 more nodes, five more minutes are added.
3. Cordon all the nodes in the node pool.
4. Before the deadline is reached, delete the Pod objects in the node pool on a best-effort basis.
5. Delete all the underlying compute resources.
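The drain deadline in step 2 scales with pool size. A minimal sketch of that arithmetic, assuming the deadline is 5 minutes plus 5 minutes per full group of 10 nodes (the exact rounding is not specified on this page):

```shell
# Drain deadline in minutes: 5 base + 5 per full group of 10 nodes.
# The rounding here is an assumption; the page only says "for every
# 10 more nodes, five more minutes are added".
drain_deadline_minutes() {
  echo $(( 5 + 5 * ($1 / 10) ))
}

drain_deadline_minutes 25   # under this reading: 5 + 5*2 = 15
```

So a 25-node pool would get roughly 15 minutes of draining under this reading before the virtual machines are deleted regardless of remaining Pods.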
What's next
-----------

- Try the [Quickstart](/kubernetes-engine/multi-cloud/docs/azure/quickstart) to launch your first workload on GKE on Azure.
- Learn about the [Cluster autoscaler](/kubernetes-engine/multi-cloud/docs/azure/concepts/cluster-autoscaler).
Last updated 2025-07-15 UTC.