Troubleshooting sole-tenancy
This page describes how to troubleshoot issues that might occur while using
sole-tenant nodes.
Node group size limitation
Problem: The size of a node group is limited to 100 nodes.
Solution: Create multiple node groups and use the same affinity label
for each node group. Then, when scheduling VMs on these node groups, use the
affinity label you assigned to the node groups.
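A minimal gcloud sketch of this workaround; the template name, group names, node type, zone, and the label workload=prod are all hypothetical:

```shell
# Create one node template that assigns the shared affinity label.
gcloud compute sole-tenant-node-templates create prod-template \
    --node-type=n1-node-96-624 \
    --node-affinity-labels=workload=prod \
    --region=us-central1

# Create multiple node groups from the same template; each inherits the label.
gcloud compute sole-tenant-node-groups create prod-group-1 \
    --node-template=prod-template --target-size=100 --zone=us-central1-a
gcloud compute sole-tenant-node-groups create prod-group-2 \
    --node-template=prod-template --target-size=100 --zone=us-central1-a

# Schedule a VM by affinity label rather than by a specific group.
cat > affinity.json <<'EOF'
[{"key": "workload", "operator": "IN", "values": ["prod"]}]
EOF
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --node-affinity-file=affinity.json
```

Because both groups carry the same workload=prod label from the shared template, the VM can be placed on either group.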
VM scheduling failures
Problem: Can't schedule a VM on a sole-tenant node.
Solution:
You can't schedule a sole-tenant VM if there's no node in the zone
that matches the VM's affinity or anti-affinity specification. Check
that you have specified the correct affinity labels and that none of
them conflict.
If you are using the restart in place maintenance policy, check that
the VM's OnHostMaintenance setting is set to terminate.
If you are using the migrate within node group maintenance policy,
check that you are scheduling VMs on a node group rather than on a
specific node or by using an affinity label.
Check that the specified node name matches the name of a node in the
zone.
Check that the specified node group name matches the name of a node
group in the zone.
You can't schedule a sole-tenant VM if the VM's minimum CPU platform
(--min-cpu-platform) is set to any value other than AUTOMATIC.
VM tenancy
Problem: Can't move a VM to a sole-tenant node.
Solution: A VM with a specified minimum CPU platform can't be moved to a
sole-tenant node by updating its tenancy. Because each sole-tenant node uses
a specific CPU platform, no VM running on the node can specify a minimum CPU
platform. Before you can move a VM to a sole-tenant node by updating its
tenancy, you must set the VM's --min-cpu-platform flag to AUTOMATIC.
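These checks can be sketched with gcloud; the VM, node group, and zone names are hypothetical, and the VM must be stopped before its minimum CPU platform can be changed:

```shell
# Verify that the node group and node names you specified exist in the zone.
gcloud compute sole-tenant-node-groups list
gcloud compute sole-tenant-node-groups list-nodes my-node-group \
    --zone=us-central1-a

# Clear the VM's minimum CPU platform before moving it to a sole-tenant node.
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances update my-vm --zone=us-central1-a \
    --min-cpu-platform=AUTOMATIC
```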
Autoscaling node groups
Problem: Can't enable the node group autoscaler.
Solution: You can only enable the node group autoscaler when you set
the node group maintenance policy to the Default maintenance policy.
Problem: Want to retain already reserved nodes with the migrate within
node group maintenance policy.
Solution: When using the Migrate within node group maintenance
policy, set the node group autoscaler to only scale out, which adds nodes to
the node group when it needs extra capacity.
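For example, assuming a hypothetical group name and zone, the autoscaler can be restricted to scaling out with gcloud:

```shell
# only-scale-out adds nodes when needed but never removes reserved nodes.
gcloud compute sole-tenant-node-groups update my-node-group \
    --zone=us-central1-a \
    --autoscaler-mode=only-scale-out \
    --min-nodes=2 --max-nodes=10
```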
Problem: No remaining CPU quota in the region.
Solution: Autoscaling might fail if you have no remaining CPU quota in
the region, the number of nodes in a group is at the maximum number allowed,
or there was a billing issue. Depending on the error, you might need to
request an increase in CPU quota or create a new sole-tenant node group.
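To see how much CPU quota remains, you can print the region's quota metrics with gcloud (the region name is hypothetical) and compare each metric's usage against its limit:

```shell
# Lists each quota metric for the region with its limit and current usage.
gcloud compute regions describe us-central1 --format="yaml(quotas)"
```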
Bringing your own licenses (BYOL)
Problem: Configuring the restart in place maintenance policy.
Solution: If using the restart in place maintenance policy, set the
VM's OnHostMaintenance setting to terminate.
Problem: Scheduling VMs on node groups with the migrate within node group
maintenance policy.
Solution:
Schedule VMs onto a node group, not on a specific node or by using a
customized affinity label.
Create a node group with at least 2 nodes and enable the autoscaler;
otherwise, if you create a node group of size 1, the node is reserved for
holdback.
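A sketch of such a setup with gcloud; the template and group names, zone, and size limits are hypothetical:

```shell
# Start with two nodes and enable the autoscaler so no node is held back.
gcloud compute sole-tenant-node-groups create byol-group \
    --node-template=byol-template \
    --target-size=2 \
    --zone=us-central1-a \
    --maintenance-policy=migrate-within-node-group \
    --autoscaler-mode=on --min-nodes=2 --max-nodes=4
```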
Capacity issues
Problem: Not enough capacity on a node or in a node group.
Solution:
If you reschedule a VM onto a node that is scheduling other VMs in
parallel, in rare situations there might not be capacity.
If you reschedule a VM onto a node in a node group on which you
haven't enabled
autoscaling,
there might not be capacity.
If you reschedule a VM onto a node in a node group on which you have
enabled autoscaling but have exceeded your CPU quota, there might not be
capacity.
CPU overcommit
Problem: An error indicating that no sole-tenant node group was specified
when you set the value for the minimum number of CPUs:
Invalid value for field 'resource.scheduling.minNodeCpus': '2'. Node virtual
CPU count may only be specified for sole-tenant instances.
Solution: Specify a sole-tenant node group when setting the value for
the minimum number of CPUs.
Problem: An error indicating that the total of the minimum number of CPUs
for all sole-tenant VMs on a node is greater than the CPU capacity of the node
type.
Node virtual CPU count must not be greater than the guest virtual CPU count.
No feasible nodes found for the instance given its node affinities and other
constraints.
Solution: Specify values for the minimum number of CPUs for each VM so
that the total for all VMs does not exceed the number of CPUs specified by
the sole-tenant node type.
Problem: An error indicating that the total number of CPUs specified by
the machine types for all VMs on a node is more than twice the minimum number of
CPUs specified for all VMs on a node.
Guest virtual CPU count must not be greater than [~2.0] times the node
virtual CPU count.
Solution: Increase the value for the minimum number of CPUs for VMs on
this node until the total minimum number of CPUs is greater than or equal to
half the value for the total number of CPUs determined by the machine types.
Problem: An error indicating that the value for the minimum number of CPUs
is not an even number greater than or equal to 2.
Invalid value for field 'resource.scheduling.minNodeCpus': '3'. Node virtual
CPU count must be even.
Solution: Specify a value for the minimum number of CPUs that is an
even number greater than or equal to 2.
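The overcommit constraints above can be summarized in a short validation sketch; the helper functions and all numbers here are illustrative, not part of any API:

```shell
# A VM's minimum node CPU count must be an even number >= 2.
is_valid_min() { [ "$1" -ge 2 ] && [ $(( $1 % 2 )) -eq 0 ]; }

# check_overcommit TOTAL_GUEST_VCPUS TOTAL_MIN_NODE_CPUS NODE_TYPE_VCPUS
# Prints "ok" if the totals for one node satisfy both documented rules.
check_overcommit() {
  local guest=$1 min=$2 cap=$3
  if [ "$min" -gt "$cap" ]; then
    echo "total min node CPUs exceed the node type's capacity"; return 1
  fi
  if [ "$guest" -gt $(( 2 * min )) ]; then
    echo "guest vCPU total exceeds 2x the min node CPU total"; return 1
  fi
  echo "ok"
}

is_valid_min 4 && echo "4 is a valid minimum"
check_overcommit 24 12 96   # prints "ok": 12 <= 96 and 24 <= 2 * 12
```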
GPUs
Problem: An error indicating that instance creation failed because of
node property incompatibility.
Instance could not be scheduled due to no matching node with property compatibility.
Solution: GPU-enabled sole-tenant nodes only support VMs that have GPUs
attached. To resolve this issue, provision a sole-tenant VM with GPUs.
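A hypothetical example of provisioning such a VM with gcloud, assuming an existing GPU-enabled node group named gpu-node-group:

```shell
# GPU VMs can't live-migrate, so host maintenance must terminate the VM.
gcloud compute instances create gpu-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --node-group=gpu-node-group \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE
```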
What's next
Provisioning VMs on sole-tenant nodes
Bringing your own licenses
Analyzing sole-tenant node usage
Autoscaling node groups
Overcommitting CPUs on sole-tenant VMs
Updating VM tenancy
Last updated 2025-08-26 UTC.