# Troubleshooting managed instance groups

There are several issues that can prevent a
[managed instance group (MIG)](/compute/docs/instance-groups#managed_instance_groups)
from successfully creating or recreating a VM instance.

If logs are generated for a deleted MIG
---------------------------------------

The problem might be related to the following situations.

### Attached autoscaler still exists

If you [deleted](/compute/docs/instance-groups/delete-mig#delete_a_mig)
a MIG using the Compute Engine API and you did not issue a separate request
to delete the attached autoscaler, the Logs Explorer might show logs with
the following message.

```
The resource 'projects/PROJECT/zones/ZONE/instanceGroupManagers/DELETED_INSTANCE_GROUP_NAME' was not found.
```

**Resolution**:

To resolve this issue, delete the attached autoscaler using the Compute Engine
API methods:

- For an autoscaler of a zonal MIG, use the [`autoscalers.delete` method](/compute/docs/reference/rest/v1/autoscalers/delete).
- For an autoscaler of a regional MIG, use the [`regionAutoscalers.delete` method](/compute/docs/reference/rest/v1/regionAutoscalers/delete).

If your MIG cannot create or recreate instances
-----------------------------------------------

The problem might be related to the following situations.

### The boot disk already exists

By default, a new boot persistent disk is created when you create an instance.
The name of the boot disk matches the name of the VM. If you name a VM
`my-instance`, the disk is also named `my-instance`. If a persistent disk
already exists with that name, the request fails. To resolve this issue, you
can optionally [take a snapshot](/compute/docs/disks/create-snapshots), and
then delete the existing persistent disk.

### The instance template is not valid

If you updated your instance template recently, there could be an invalid
property that causes the MIG to fail VM creation. Examine the template's
properties for these common errors:

- You specified a resource that doesn't exist, such as a source image.
- You misspelled a resource name.
- You tried to attach an existing non-boot persistent disk in read/write mode
  but your group contains more than one VM. For groups with more than one VM,
  any additional disks you want to share between all of the VMs in the group
  can be attached only in read-only mode.

### Limit exceeded for resource type

The following error occurs when you try to create more than 2,000 VMs in a
regional MIG or more than 1,000 VMs in a zonal MIG. You have reached the size
limit for your instance group.

**Error message**:

```
ERROR: (gcloud.compute.<INSTANCE_GROUP_TYPE>.<METHOD>) Could not
fetch resource:

 - Exceeded limit 'MAX_INSTANCES_IN_INSTANCE_GROUP' on resource 'PROJECT_ID'.
 Limit: NUMBER
```

**Resolution**:

To resolve this issue, try one of the following:

- If you are using a zonal MIG, use a [regional MIG](/compute/docs/instance-groups#types_of_managed_instance_groups) instead.
- Create multiple MIGs and split your workload across them, for example by adjusting your [load balancing](/compute/docs/instance-groups/adding-an-instance-group-to-a-load-balancer) configuration.
- If you still need a bigger group, you can [increase the size limit of your MIG](/compute/docs/instance-groups/add-remove-vms-in-mig#increase_the_groups_size_limit) or [contact support](/support-hub) to make a request.

If you cannot delete your MIG or its instances
----------------------------------------------

The problem might be related to the following situations.

### Resource not found in zone or region

The following error occurs when you try to delete a regional MIG and you
specify the `--zone` flag, specify no region, or specify the wrong region.
A similar error can occur if you try to delete a zonal MIG and you specify
the `--region` flag.

**Error message**:

- ```
  ERROR: (gcloud.compute.instance-groups.managed.delete) Some requests did not succeed:
   - The resource 'projects/PROJECT/zones/ZONE/instanceGroupManagers/INSTANCE_GROUP_NAME' was not found
  ```

- ```
  ERROR: (gcloud.compute.instance-groups.managed.delete) Some requests did not succeed:
   - The resource 'projects/PROJECT/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME' was not found
  ```

**Resolution**:

To resolve this issue, try one of the following:

- Append the appropriate `--region` or `--zone` flag to your command.
- [Set a default region and zone](/compute/docs/gcloud-compute#default-region-zone).

### Resource is used by a backend service

You cannot remove an instance group while it is used by a load balancer's
backend service. You must remove the instance group from the backend service
before you can delete the instance group.

**Error message**:

- ```
  ERROR: (gcloud.compute.instance-groups.managed.delete) Some requests did not succeed:
   - The instance_group_manager resource 'projects/PROJECT/zones/ZONE/instanceGroupManagers/INSTANCE_GROUP_NAME' is already being used by 'projects/PROJECT/global/backendServices/BACKEND_SERVICE'
  ```

- ```
  ERROR: (gcloud.compute.instance-groups.managed.delete) Some requests did not succeed:
   - The instance_group_manager resource 'projects/PROJECT/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME' is already being used by 'projects/PROJECT/global/backendServices/BACKEND_SERVICE'
  ```

**Resolution**:

1. Optional: Drain the backend instance group.

   - For [proxy load balancers only](/load-balancing/docs/choosing-load-balancer#lb-summary),
     you can set the [capacity scaler](/load-balancing/docs/backend-service#capacity_scaler)
     to `0.0` before removing the instance group from a backend service. You can
     set the capacity scaler to zero by using the
     [`gcloud compute backend-services edit` command](/sdk/gcloud/reference/compute/backend-services/edit).

   - For both proxy and pass-through load balancers, if you
     [enable connection draining](/load-balancing/docs/enabling-connection-draining)
     on the backend service, Google Cloud attempts to allow existing
     connections to persist, complete, and drain whenever an instance group is
     removed from a backend service.

2. Remove the MIG from the regional or global backend service.

   - For a zonal MIG, run the following command:

     ```
     gcloud compute backend-services remove-backend BACKEND_SERVICE \
         --instance-group=INSTANCE_GROUP_NAME \
         --instance-group-zone=ZONE \
         [--region=REGION | --global]
     ```

   - For a regional MIG, run the following command:

     ```
     gcloud compute backend-services remove-backend BACKEND_SERVICE \
         --instance-group=INSTANCE_GROUP_NAME \
         --instance-group-region=REGION \
         [--region=REGION | --global]
     ```

3. Delete the MIG:

   ```
   gcloud compute instance-groups managed delete INSTANCE_GROUP_NAME
   ```

If your MIG continually tries to recreate instances
---------------------------------------------------

The problem might be related to the following situation.

### Health check probes cannot reach the instance

If you configured an autohealing policy but you did not configure, or
misconfigured, the firewall rule that lets the health check probes reach your
application, then your VMs
[appear unhealthy](/compute/docs/instance-groups/autohealing-instances-in-migs#checking_the_status),
and the MIG continuously tries to recreate them. For information about how to
configure a health check firewall rule, see
[Example health check set up](/compute/docs/instance-groups/autohealing-instances-in-migs#example_health_check_set_up).

Last updated (UTC): 2025-08-19.
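As a minimal sketch of such a firewall rule, the command below assumes an HTTP health check on port 80 and VMs in the `default` network; the rule name and port are illustrative, while `130.211.0.0/22` and `35.191.0.0/16` are Google Cloud's documented health check probe source ranges:

```shell
# Illustrative rule (name, port, and network are assumptions for this sketch):
# allow Google Cloud health check probes to reach port 80 on VMs in the
# default network.
gcloud compute firewall-rules create fw-allow-health-check \
    --network=default \
    --direction=INGRESS \
    --allow=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16
```

If your instances use target tags, add a `--target-tags` flag so the rule applies only to the autohealed VMs, and match the port to the one configured in your health check.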