Google Cloud accounts for bandwidth per compute instance, not per virtual network interface (vNIC) or IP address. An instance's machine type defines its maximum possible egress rate; however, you can achieve that maximum rate only in specific situations.
This page outlines expectations, which are useful when planning your deployments. It categorizes bandwidth using two dimensions:
- Egress or ingress: As used on this page, egress and ingress are always
from the perspective of a Google Cloud instance:
- Packets sent from a Google Cloud instance compose its egress (outbound) traffic.
- Packets sent to a Google Cloud instance compose its ingress (inbound) traffic.
- How the packet is routed: A packet can be routed from a sending instance or to a receiving instance using routes whose next hops are within a VPC network or routes outside of a VPC network.
Neither additional virtual network interfaces (vNICs) nor additional IP addresses per vNIC increase ingress or egress bandwidth for a compute instance. For example, a C3 VM with 22 vCPUs is limited to 23 Gbps total egress bandwidth. If you configure the C3 VM with two vNICs, the VM is still limited to 23 Gbps total egress bandwidth, not 23 Gbps bandwidth per vNIC.
All of the information on this page is applicable to Compute Engine compute instances, as well as products that depend on Compute Engine instances. For example, a Google Kubernetes Engine node is a Compute Engine instance.
Bandwidth summary
The following table illustrates bandwidth expectations based on whether a packet is sent from (egress) or received by (ingress) a compute instance and the packet routing method.
Egress

Packet routing | Bandwidth expectations |
---|---|
Routing within a VPC network | Limited by the per-instance maximum egress bandwidth, which is based on the sending instance's machine type. |
Routing outside a VPC network | Limited by the per-instance maximum egress bandwidth, additional per-instance and per-flow rates for external destinations, and the per-project internet egress bandwidth quota. |

Ingress

Packet routing | Bandwidth expectations |
---|---|
Routing within a VPC network | Not purposely restricted by Google Cloud. The receiving instance can handle as many packets as its machine type, operating system, and other network conditions permit. |
Routing outside a VPC network | Limited to 1,800,000 packets per second and 30 Gbps per instance, applied per physical NIC on machine series that support multiple physical NICs. |
Egress bandwidth
Google Cloud limits outbound (egress) bandwidth using per-instance maximum egress rates. These rates are based on the machine type of the compute instance that is sending the packet and whether the packet's destination is accessible using routes within a VPC network or routes outside of a VPC network. Outbound bandwidth includes packets emitted by all of the instance's NICs and data transferred to all Hyperdisk and Persistent Disk volumes connected to the instance.
Per-instance maximum egress bandwidth
Per-instance maximum egress bandwidth is generally 2 Gbps per vCPU, but there are some differences and exceptions, depending on the machine series. The following table shows the range of per-instance maximum egress bandwidth limits for traffic routed within a VPC network for the standard networking tier only, not for per VM Tier_1 networking performance.
Machine series | Lowest per-instance maximum egress limit for standard | Highest per-instance maximum egress limit for standard |
---|---|---|
C4 and C4A | 10 Gbps | 100 Gbps |
C3 | 23 Gbps | 100 Gbps |
C3D | 20 Gbps | 100 Gbps |
C2 and C2D | 10 Gbps | 32 Gbps |
E2 | 1 Gbps | 16 Gbps |
H3 | N/A | 200 Gbps |
M3 and M1 | 32 Gbps | 32 Gbps |
M2 | 32 Gbps | 32 Gbps on the Intel Cascade Lake CPU platform; 16 Gbps on other CPU platforms |
N4 | 10 Gbps | 50 Gbps |
N2 and N2D | 10 Gbps | 32 Gbps |
N1 (excluding VMs with 1 vCPU) | 10 Gbps | 32 Gbps on the Intel Skylake CPU platform; 16 Gbps on CPU platforms older than Intel Skylake |
N1 machine types with 1 vCPU, f1-micro, and g1-small | 2 Gbps | 2 Gbps |
T2D | 10 Gbps | 32 Gbps |
X4 | N/A | 100 Gbps |
Z3 | 23 Gbps | 100 Gbps |
You can find the per-instance maximum egress bandwidth for every machine type listed on its specific machine family page:
- C, E, N, and T series: General-purpose machine family
- Z series: Storage-optimized machine family
- C2 and H series: Compute-optimized machine family
- M and X series: Memory-optimized machine family
Per-instance maximum egress bandwidth is not a guarantee. The actual egress bandwidth can be lowered according to factors such as the following non-exhaustive list:
- Using VirtIO instead of gVNIC with compute instances that support both
- Packet size
- Protocol overhead
- The number of flows
- Ethernet driver settings of the compute instance's guest OS, such as checksum offload and TCP segmentation offload (TSO)
- Network congestion
- In a situation where Persistent Disk I/Os compete with other network egress traffic, 60% of the maximum network bandwidth is given to Persistent Disk writes, leaving 40% for other network egress traffic. See Factors that affect disk performance for more details.
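The 60/40 contention split described in the last item is simple arithmetic; the following minimal sketch models it (the function name and the 10 Gbps example figure are illustrative assumptions, not part of any API):

```python
def egress_split_gbps(max_egress_gbps: float) -> tuple[float, float]:
    """Model the documented contention split: when Persistent Disk I/O
    competes with other network egress traffic, PD writes receive 60%
    of the instance's maximum network bandwidth and other egress
    traffic receives the remaining 40%."""
    pd_writes = max_egress_gbps * 0.6
    other_egress = max_egress_gbps * 0.4
    return pd_writes, other_egress

# Example: an assumed instance with a 10 Gbps egress limit under contention.
pd, other = egress_split_gbps(10)
print(pd, other)  # 6.0 4.0
```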
To get the largest possible per-instance maximum egress bandwidth:
- Enable per VM Tier_1 networking performance with larger machine types.
- Use the largest VPC network maximum transmission unit (MTU) supported by your network topology. Larger MTUs can reduce packet-header overhead and increase payload data throughput.
- Use the latest gVNIC driver version.
- Use third generation or later machine series that use Titanium to offload network processing from the host CPU.
Egress to destinations routable within a VPC network
From the perspective of a sending instance and for destination IP addresses accessible by means of routes within a VPC network, Google Cloud limits outbound traffic using these rules:
- Per-VM maximum egress bandwidth: The per-instance maximum egress bandwidth described in the Per-instance maximum egress bandwidth section.
- Per-project inter-regional egress bandwidth: If a sending instance and an internal destination or its next hop are in different regions, Google Cloud enforces a maximum inter-regional egress bandwidth limit. Most customers are unlikely to reach this limit. For questions about this limit, file a support case.
- Cloud VPN and Cloud Interconnect limits: When sending
traffic from an instance to an internal IP address destination routable by a
next hop Cloud VPN tunnel or Cloud Interconnect VLAN
attachment, egress bandwidth is limited by:
- Maximum packet rate and bandwidth per Cloud VPN tunnel
- Maximum packet rate and bandwidth per VLAN attachment
- To fully use the bandwidth of multiple next hop Cloud VPN tunnels or Cloud Interconnect VLAN attachments using ECMP routing, you must use multiple TCP connections (unique 5-tuples).
Destinations routable within a VPC network include all of the following destinations, each of which is accessible from the perspective of the sending instance by a route whose next hop is not the default internet gateway:
- Regional internal IPv4 addresses in
subnet primary IPv4 and subnet secondary IPv4 address ranges,
including private IPv4 address ranges and privately used public IPv4 address
ranges, used by these destination resources:
- The primary internal IPv4 address of a receiving instance's network interface (vNIC). (When a sending instance connects to another instance's vNIC external IPv4 address, packets are routed using a next hop default internet gateway, so Egress to destinations outside of a VPC network applies instead.)
- An internal IPv4 address in an alias IP range of a receiving instance's vNIC.
- An internal IPv4 address of an internal forwarding rule for either protocol forwarding or for an internal passthrough Network Load Balancer.
- Global internal IPv4 addresses for these destination resources:
- Internal IPv6 subnet address ranges used by these destination resources:
  - An IPv6 address from the `/96` IPv6 address range assigned to a dual-stack or IPv6-only (Preview) receiving instance's vNIC.
  - An IPv6 address from the `/96` IPv6 address range of an internal forwarding rule for either protocol forwarding or for an internal passthrough Network Load Balancer.
- External IPv6 subnet address ranges used by these destination resources when packets are routed using subnet routes or peering subnet routes within the VPC network or by custom routes within the VPC network that do not use the default internet gateway next hop:
  - An IPv6 address from the `/96` IPv6 address range assigned to a dual-stack or IPv6-only (Preview) receiving instance's vNIC.
  - An IPv6 address from the `/96` IPv6 address range of an external forwarding rule for either protocol forwarding or for an external passthrough Network Load Balancer.
- Other destinations accessible using the following VPC network
routes:
- Dynamic routes
- Static routes except those that use a default internet gateway next hop
- Peering custom routes
The following list ranks traffic from sending instances to internal destinations, from highest possible bandwidth to lowest:
- Between compute instances in the same zone
- Between compute instances in different zones of the same region
- Between compute instances in different regions
- From a compute instance to Google Cloud APIs and services using Private Google Access or accessing Google APIs from an instance's external IP address. This includes Private Service Connect endpoints for Google APIs.
Egress to destinations outside of a VPC network
From the perspective of a sending instance and for destination IP addresses outside of a VPC network, Google Cloud limits outbound traffic to whichever of the following rates is reached first:
Per-instance egress bandwidth: The maximum bandwidth for all connections from a compute instance to destinations outside of a VPC network is the smaller of the Per-instance maximum egress bandwidth and one of these rates:
- 25 Gbps, if Tier_1 networking is enabled
- 7 Gbps, if Tier_1 networking isn't enabled
- 1 Gbps for H3 instances
- 7 Gbps per physical NIC for machine series that support multiple physical NICs, such as A3.
For example, even though a `c3-standard-44` instance has a per-VM maximum egress bandwidth of 32 Gbps, the per-VM egress bandwidth from a `c3-standard-44` VM to external destinations is either 25 Gbps or 7 Gbps, depending on whether Tier_1 networking is enabled.

Per-flow maximum egress rate: The maximum bandwidth for each unique 5-tuple connection from a compute instance to a destination outside of a VPC network is 3 Gbps, except on H3, where it is 1 Gbps.
Per-project internet egress bandwidth: The maximum bandwidth for all connections from compute instances in each region of a project to destinations outside of a VPC network is defined by the project's Internet egress bandwidth quotas.
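The per-instance cap described above can be sketched as taking the smaller of the machine type's maximum egress bandwidth and the applicable external rate. This is an illustrative helper (the function name and signature are assumptions; the 25, 7, and 1 Gbps figures are the documented rates):

```python
def external_egress_cap_gbps(per_instance_max: float,
                             tier1_enabled: bool,
                             is_h3: bool = False) -> float:
    """Per-instance cap for egress to destinations outside a VPC network:
    the smaller of the per-instance maximum egress bandwidth and the
    applicable external rate (25 Gbps with Tier_1 networking enabled,
    7 Gbps without, 1 Gbps for H3 instances)."""
    if is_h3:
        external_rate = 1.0
    elif tier1_enabled:
        external_rate = 25.0
    else:
        external_rate = 7.0
    return min(per_instance_max, external_rate)

# The c3-standard-44 example from the text (32 Gbps per-instance maximum):
print(external_egress_cap_gbps(32, tier1_enabled=True))   # 25.0
print(external_egress_cap_gbps(32, tier1_enabled=False))  # 7.0
```

Note that this sketch models only the per-instance rate; the per-flow 3 Gbps rate and the per-project internet egress quota can each bind first.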
Destinations outside of a VPC network include all of the following destinations, each of which is accessible by a route in the sending instance's VPC network whose next hop is the default internet gateway:
- Global external IPv4 and IPv6 addresses for external proxy Network Load Balancers and external Application Load Balancers
- Regional external IPv4 addresses for Google Cloud resources, including VM vNIC external IPv4 addresses, external IPv4 addresses for external protocol forwarding, external passthrough Network Load Balancers, and response packets to Cloud NAT gateways.
- Regional external IPv6 addresses in dual-stack or IPv6-only subnets (Preview) with external IPv6 address ranges used by external IPv6 addresses of dual-stack or IPv6-only instances (Preview), external protocol forwarding, and external passthrough Network Load Balancers. The subnet must be located in a separate, non-peered VPC network. The destination IPv6 address range must be accessible using a route in the sending instance's VPC network whose next hop is the default internet gateway. If a dual-stack or IPv6-only subnet with an external IPv6 address range is located in the same VPC network or in a peered VPC network, see Egress to destinations routable within a VPC network instead.
- Other external destinations accessible using a static route in the sending instance's VPC network provided that the next hop for the route is the default internet gateway.
For details about which Google Cloud resources use what types of external IP addresses, see External IP addresses.
Ingress bandwidth
Google Cloud handles inbound (ingress) bandwidth depending on how the incoming packet is routed to a receiving compute instance.
Ingress to destinations routable within a VPC network
A receiving instance can handle as many incoming packets as its machine type, operating system, and other network conditions permit. Google Cloud does not implement any purposeful bandwidth restriction on incoming packets delivered to an instance if the incoming packet is delivered using routes within a VPC network:
- Subnet routes in the receiving instance's VPC network
- Peering subnet routes in a peered VPC network
- Routes in another network whose next hops are Cloud VPN tunnels, Cloud Interconnect (VLAN) attachments, or Router appliance instances located in the receiving instance's VPC network
Destinations for packets that are routed within a VPC network include:
- The primary internal IPv4 address of the receiving instance's network interface (NIC). Primary internal IPv4 addresses are regional internal IPv4 addresses that come from a subnet's primary IPv4 address range.
- An internal IPv4 address from an alias IP range of the receiving instance's NIC. Alias IP ranges can come from either a subnet's primary IPv4 address range or one of its secondary IPv4 address ranges.
- An IPv6 address from the `/96` IPv6 address range assigned to a dual-stack or IPv6-only (Preview) receiving instance's NIC. Compute instance IPv6 ranges can come from these subnet IPv6 ranges:
  - An internal IPv6 address range.
  - An external IPv6 address range when the incoming packet is routed internally to the receiving instance using one of the VPC network routes listed previously in this section.
- An internal IPv4 address of a forwarding rule used by internal protocol forwarding to the receiving instance or by an internal passthrough Network Load Balancer where the receiving instance is a backend of the load balancer. Internal forwarding rule IPv4 addresses come from a subnet's primary IPv4 address range.
- An internal IPv6 address from the `/96` IPv6 range of a forwarding rule used by internal protocol forwarding to the receiving instance or by an internal passthrough Network Load Balancer where the receiving instance is a backend of the load balancer. Internal forwarding rule IPv6 addresses come from a subnet's internal IPv6 address range.
- An external IPv6 address from the `/96` IPv6 range of a forwarding rule used by external protocol forwarding to the receiving instance or by an external passthrough Network Load Balancer. The receiving instance is a backend of the load balancer when the incoming packet is routed within the VPC network using one of the routes listed previously in this section. External forwarding rule IPv6 addresses come from a subnet's external IPv6 address range.
- An IP address within the destination range of a custom static route that uses the receiving instance as a next hop instance (`next-hop-instance` or `next-hop-address`).
- An IP address within the destination range of a custom static route that uses an internal passthrough Network Load Balancer (`next-hop-ilb`) next hop, if the receiving instance is a backend for that load balancer.
Ingress to destinations outside of a VPC network
Google Cloud implements the following bandwidth limits for incoming packets delivered to a receiving instance using routes outside a VPC network. When load balancing is involved, the bandwidth limits are applied individually to each receiving instance.
For machine series that don't support multiple physical NICs, the applicable inbound bandwidth restriction applies collectively to all virtual network interfaces (vNICs). The limit is the first of the following rates encountered:
- 1,800,000 packets per second
- 30 Gbps
For machine series that support multiple physical NICs, such as A3, the applicable inbound bandwidth restriction applies individually to each physical NIC. The limit is the first of the following rates encountered:
- 1,800,000 packets per second per physical NIC
- 30 Gbps per physical NIC
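Which of the two limits above is reached first depends on packet size. A rough back-of-the-envelope model (an illustrative helper, assuming the stated 1,800,000 pps and 30 Gbps figures and ignoring protocol overhead):

```python
def binding_ingress_limit(packet_bytes: int,
                          pps_limit: int = 1_800_000,
                          gbps_limit: float = 30.0) -> str:
    """Estimate which documented ingress limit binds first for a given
    packet size by comparing the throughput achievable at the
    packet-rate limit against the bandwidth limit."""
    gbps_at_pps_limit = pps_limit * packet_bytes * 8 / 1e9
    return "pps" if gbps_at_pps_limit < gbps_limit else "bandwidth"

# Standard-MTU packets hit the packet-rate limit first (~21.6 Gbps at
# 1.8 Mpps); jumbo frames can reach the 30 Gbps bandwidth limit instead.
print(binding_ingress_limit(1500))  # pps
print(binding_ingress_limit(8896))  # bandwidth
```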
Destinations for packets that are routed using routes outside of a VPC network include:
- An external IPv4 address assigned in a one-to-one NAT access configuration on one of the receiving instance's network interfaces (NICs).
- An external IPv6 address from the `/96` IPv6 address range assigned to a vNIC of a dual-stack or IPv6-only (Preview) receiving instance when the incoming packet is routed using a route outside of the receiving instance's VPC network.
- An external IPv4 address of a forwarding rule used by external protocol forwarding to the receiving instance or by an external passthrough Network Load Balancer where the receiving instance is a backend of the load balancer.
- An external IPv6 address from the `/96` IPv6 range of a forwarding rule used by external protocol forwarding to the receiving instance or by an external passthrough Network Load Balancer. The receiving instance must be a backend of the load balancer when the incoming packet is routed using a route outside of a VPC network.
- Established inbound responses processed by Cloud NAT.
Jumbo frames
To receive and send jumbo frames, configure the VPC network used by your compute instances to use a larger maximum transmission unit (MTU), up to 8896 bytes.
Higher MTU values increase the packet size and reduce the packet-header overhead, which increases payload data throughput.
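The header-overhead savings can be estimated with simple arithmetic. This sketch assumes a 40-byte TCP/IPv4 header (no options) and compares the default 1460-byte VPC MTU with the 8896-byte jumbo-frame maximum; real overhead varies with protocol and encapsulation:

```python
def payload_efficiency(mtu: int, header_bytes: int = 40) -> float:
    """Estimate the fraction of each packet that carries payload,
    assuming a fixed per-packet header size (40 bytes models a
    TCP/IPv4 header without options)."""
    return (mtu - header_bytes) / mtu

# Default VPC MTU vs. the jumbo-frame maximum:
print(round(payload_efficiency(1460), 4))  # 0.9726
print(round(payload_efficiency(8896), 4))  # 0.9955
```

Larger frames also reduce the per-packet processing cost (fewer packets per gigabyte), which often matters more in practice than the header savings alone.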
You can use jumbo frames with the gVNIC driver version 1.3 or later on VM instances, or with the IDPF driver on bare metal instances. Not all Google Cloud public images include these drivers. For more information about operating system support for jumbo frames, see the Networking features tab on the Operating system details page.
If you are using an OS image that doesn't have full support for jumbo frames,
you can manually install gVNIC driver version v1.3.0 or later. Google
recommends installing the gVNIC driver version marked Latest
to benefit from
additional features and bug fixes. You can download the gVNIC drivers from
GitHub.
To manually update the gVNIC driver version in your guest OS, see Use on non-supported operating systems.
Receive and transmit queues
Each NIC or vNIC for a compute instance is assigned a number of receive and transmit queues for processing packets from the network.
- Receive Queue (RX): Queue to receive packets. When the NIC receives a packet from the network, the NIC selects the descriptor for an incoming packet from the queue, processes it, and hands the packet to the guest OS over a packet queue attached to a vCPU core using an interrupt. If the RX queue is full and no buffer is available to place a packet, the packet is dropped. This typically happens when an application is over-utilizing a vCPU core that is also attached to the selected packet queue.
- Transmit Queue (TX): Queue to transmit packets. When the guest OS sends a packet, a descriptor is allocated and placed in the TX queue. The NIC then processes the descriptor and transmits the packet.
Default queue allocation
Unless you explicitly assign queue counts for NICs, you can model the algorithm Google Cloud uses to assign a fixed number of RX and TX queues per NIC in this way:
- Bare metal instances
- For bare metal instances, there is only one NIC, so the maximum queue count is 16.
- VM instances that use the gVNIC network interface
For C4 instances, to improve performance, the following configurations use a fixed number of queues:
- For Linux instances with 2 vCPUs, the queue count is 1.
- For Linux instances with 4 vCPUs, the queue count is 2.
For the other machine series, the queue count depends on whether the machine series uses Titanium or not.
For third generation (excluding M3) and later instances that use Titanium: divide the number of vCPUs by the number of vNICs (`num_vcpus/num_vnics`) and discard any remainder.

For first and second generation VMs that don't use Titanium: divide the number of vCPUs by the number of vNICs, and then divide the result by 2 (`num_vcpus/num_vnics/2`). Discard any remainder.
To finish the default queue count calculation:

- If the calculated number is less than 1, assign each vNIC one queue instead.
- Determine if the calculated number is greater than the maximum number of queues per vNIC, which is `16`. If the calculated number is greater than `16`, ignore the calculated number, and assign each vNIC 16 queues instead.

- VM instances using the VirtIO network interface or a custom driver

  Divide the number of vCPUs by the number of vNICs, and discard any remainder: `⌊number of vCPUs/number of vNICs⌋`.

  - If the calculated number is less than 1, assign each vNIC one queue instead.
  - Determine if the calculated number is greater than the maximum number of queues per vNIC, which is `32`. If the calculated number is greater than `32`, ignore the calculated number, and assign each vNIC 32 queues instead.
Examples
The following examples show how to calculate the default number of queues for a VM instance:
- If a VM instance uses VirtIO and has 16 vCPUs and 4 vNICs, the calculated number is `⌊16/4⌋ = 4`. Google Cloud assigns each vNIC four queues.
- If a VM instance uses gVNIC and has 128 vCPUs and two vNICs, the calculated number is `⌊128/2/2⌋ = 32`. Because this is greater than the maximum of `16` queues per vNIC, Google Cloud assigns each vNIC `16` queues.
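The default allocation rules above can be modeled in a few lines. This is a sketch under stated assumptions (the function name is illustrative, and special cases such as bare metal instances and small C4 shapes are omitted):

```python
def default_queue_count(num_vcpus: int, num_vnics: int,
                        driver: str = "gvnic",
                        uses_titanium: bool = True) -> int:
    """Model the default per-vNIC queue allocation: integer-divide vCPUs
    across vNICs (halved again for first/second generation gVNIC VMs),
    then clamp to the per-driver cap and a floor of one queue."""
    if driver == "virtio":
        queues, cap = num_vcpus // num_vnics, 32
    elif uses_titanium:   # third generation (excluding M3) and later
        queues, cap = num_vcpus // num_vnics, 16
    else:                 # first and second generation gVNIC VMs
        queues, cap = num_vcpus // num_vnics // 2, 16
    return max(1, min(queues, cap))

# The two worked examples from the text:
print(default_queue_count(16, 4, driver="virtio"))               # 4
print(default_queue_count(128, 2, driver="gvnic",
                          uses_titanium=False))                  # 16
```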
On Linux systems, you can use `ethtool` to configure a vNIC with fewer queues than the number of queues Google Cloud assigns per vNIC.
Custom queue allocation for VM instances
Instead of the default queue allocation, you can assign a custom queue count (total of both RX and TX) to each vNIC when you create a new compute instance by using the Compute Engine API.
The number of custom queues you specify must adhere to the following rules:
- The minimum queue count you can assign per vNIC is one.
- The maximum queue count you can assign to each vNIC of a VM instance is the lower of the vCPU count or the per-vNIC maximum queue count, based on the driver type:
  - Using virtIO or a custom driver, the maximum queue count is `32`.
  - Using gVNIC, the maximum queue count is `16`, except for the following, where the maximum queue count is 32:
    - A2 or G2 instances
    - TPU instances
    - C2, C2D, N2, or N2D instances with Tier_1 networking enabled

    For the following Confidential VM configurations, the maximum queue count is `8`:
    - AMD SEV on C2D and N2D machine types
    - AMD SEV-SNP on N2D machine types
If you assign custom queue counts to all NICs of the compute instance, the sum of your queue count assignments must be less than or equal to the number of vCPUs assigned to the instance.
You can oversubscribe the custom queue count for your vNICs. In other words, you can have a sum of the queue counts assigned to all NICs for your VM instance that is greater than the number of vCPUs for your instance. To oversubscribe the custom queue count, the VM instance must satisfy the following conditions:
- Uses gVNIC as the vNIC type for all NICs configured for the instance.
- Uses a machine type that supports Tier_1 networking.
- Has Tier_1 networking enabled.
- Has a custom queue count specified for all NICs configured for the instance.
With queue oversubscription, the maximum queue count for the VM instance is 16 times the number of NICs. So, if you have 6 NICs configured for an instance with 30 vCPUs, you can configure a maximum of (16 * 6), or 96 custom queues for your instance.
Examples
- If a VM instance has 8 vCPUs and 3 vNICs, the maximum queue count for the instance is the number of vCPUs, or 8. You can assign 1 queue to `nic0`, 4 queues to `nic1`, and 3 queues to `nic2`. In this example, you can't subsequently assign 4 queues to `nic2` while keeping the other two vNIC queue assignments because the sum of assigned queues cannot exceed the number of vCPUs.
- If you have an N2 VM with 96 vCPUs and 2 vNICs, you can assign both vNICs up to 32 queues each when using the virtIO driver, or up to 16 queues each when using the gVNIC driver. If you enable Tier_1 networking for the N2 VM, then you can assign up to 32 queues to each vNIC. In this example, the sum of assigned queues is always less than or equal to the number of vCPUs.
It's also possible to assign a custom queue count for only some NICs, letting Google Cloud assign queues to the remaining NICs. The number of queues you can assign per vNIC is still subject to rules mentioned previously. You can model the feasibility of your configuration, and, if your configuration is possible, the number of queues that Google Cloud assigns to the remaining vNICs with this process:
1. Calculate the sum of queues for the vNICs using custom queue assignment. For an example VM with 20 vCPUs and 6 vNICs, suppose you assign `nic0` 5 queues, `nic1` 6 queues, `nic2` 4 queues, and let Google Cloud assign queues for `nic3`, `nic4`, and `nic5`. In this example, the sum of custom-assigned queues is `5+6+4 = 15`.

2. Subtract the sum of custom-assigned queues from the number of vCPUs. If the difference is less than the number of remaining vNICs for which Google Cloud must assign queues, Google Cloud returns an error because each vNIC must have at least one queue. Continuing the example with a VM that has 20 vCPUs and a sum of `15` custom-assigned queues, Google Cloud has `20-15 = 5` queues left to assign to the remaining vNICs (`nic3`, `nic4`, `nic5`).

3. Divide the difference from the previous step by the number of remaining vNICs and discard any remainder: `⌊(number of vCPUs - sum of assigned queues)/(number of remaining vNICs)⌋`. This calculation always results in a whole number (not a fraction) that is at least one because of the constraint explained in the previous step. Google Cloud assigns each remaining vNIC a queue count matching the calculated number as long as the calculated number is not greater than the maximum number of queues per vNIC. The maximum number of queues per vNIC depends on the driver type:
   - Using virtIO or a custom driver, if the calculated number of queues for each remaining vNIC is greater than `32`, Google Cloud assigns each remaining vNIC `32` queues.
   - Using gVNIC, if the calculated number of queues for each remaining vNIC is greater than the limit of `16` or `32` (depending on the VM configuration), Google Cloud assigns each remaining vNIC `16` queues.
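The steps above can be sketched as follows. This is a model for reasoning about the process, not Google Cloud's implementation; the function name is illustrative, and `per_vnic_max` stands in for the driver- and configuration-dependent cap:

```python
def assign_remaining_queues(num_vcpus: int, custom: dict[str, int],
                            remaining: list[str],
                            per_vnic_max: int = 16) -> dict[str, int]:
    """Fill in queue counts for vNICs without a custom assignment:
    divide the leftover vCPU budget evenly across the remaining vNICs,
    discard any remainder, and clamp to the per-vNIC maximum."""
    leftover = num_vcpus - sum(custom.values())
    if leftover < len(remaining):
        # Mirrors the documented error: each vNIC needs at least one queue.
        raise ValueError("each remaining vNIC needs at least one queue")
    per_vnic = min(leftover // len(remaining), per_vnic_max)
    return {**custom, **{nic: per_vnic for nic in remaining}}

# The worked example above: 20 vCPUs, nic0-nic2 custom, nic3-nic5 defaulted.
result = assign_remaining_queues(
    20, {"nic0": 5, "nic1": 6, "nic2": 4}, ["nic3", "nic4", "nic5"])
print(result)  # each of nic3, nic4, nic5 gets floor(5/3) = 1 queue
```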
Configure custom queue counts
To create a compute instance that uses a custom queue count for one or more NICs or vNICs, complete the following steps.
In the following code examples, the VM is created with the network interface type set to `GVNIC` and per VM Tier_1 networking performance enabled. You can use these code examples to specify the maximum queue counts and queue oversubscription available for the supported machine types.
gcloud
- If you don't already have a VPC network with a subnet for each vNIC interface you plan to configure, create them.
- Use the `gcloud compute instances create` command to create the compute instance. Repeat the `--network-interface` flag for each vNIC that you want to configure for the instance, and include the `queue-count` option.
```
gcloud compute instances create INSTANCE_NAME \
    --zone=ZONE \
    --machine-type=MACHINE_TYPE \
    --network-performance-configs=total-egress-bandwidth-tier=TIER_1 \
    --network-interface=network=NETWORK_NAME_1,subnet=SUBNET_1,nic-type=GVNIC,queue-count=QUEUE_SIZE_1 \
    --network-interface=network=NETWORK_NAME_2,subnet=SUBNET_2,nic-type=GVNIC,queue-count=QUEUE_SIZE_2
```
Replace the following:
- `INSTANCE_NAME`: a name for the new compute instance
- `ZONE`: the zone to create the instance in
- `MACHINE_TYPE`: the machine type of the instance. To oversubscribe the queue count, the machine type you specify must support gVNIC and Tier_1 networking.
- `NETWORK_NAME_*`: the name of a network created previously
- `SUBNET_*`: the name of one of the subnets created previously
- `QUEUE_SIZE_*`: the number of queues for the vNIC, subject to the rules discussed in Custom queue allocation
Terraform
- If you don't already have a VPC network with a subnet for each vNIC interface you plan to configure, create them.
Create a compute instance with specific queue counts for vNICs using the `google_compute_instance` resource. Repeat the `network_interface` block for each vNIC you want to configure for the compute instance, and include the `queue_count` argument.

```
# Queue oversubscription instance
resource "google_compute_instance" "VM_NAME" {
  project      = "PROJECT_ID"
  machine_type = "MACHINE_TYPE"
  name         = "VM_NAME"
  zone         = "ZONE"

  boot_disk {
    auto_delete = true
    device_name = "DEVICE_NAME"
    initialize_params {
      image = "IMAGE_NAME"
      size  = DISK_SIZE
      type  = "DISK_TYPE"
    }
  }

  network_performance_config {
    total_egress_bandwidth_tier = "TIER_1"
  }

  network_interface {
    nic_type           = "GVNIC"
    queue_count        = QUEUE_COUNT_1
    subnetwork_project = "PROJECT_ID"
    subnetwork         = "SUBNET_1"
  }

  network_interface {
    nic_type           = "GVNIC"
    queue_count        = QUEUE_COUNT_2
    subnetwork_project = "PROJECT_ID"
    subnetwork         = "SUBNET_2"
  }

  network_interface {
    nic_type           = "GVNIC"
    queue_count        = QUEUE_COUNT_3
    subnetwork_project = "PROJECT_ID"
    subnetwork         = "SUBNET_3"
  }

  network_interface {
    nic_type           = "GVNIC"
    queue_count        = QUEUE_COUNT_4
    subnetwork_project = "PROJECT_ID"
    subnetwork         = "SUBNET_4"
  }
}
```
Replace the following:
- `VM_NAME`: a name for the new compute instance
- `PROJECT_ID`: the ID of the project to create the instance in. Unless you are using a Shared VPC network, the project you specify must be the same one in which all the subnets and networks were created.
- `DEVICE_NAME`: the name to associate with the boot disk in the guest OS
- `IMAGE_NAME`: the name of an image, for example, `"projects/debian-cloud/global/images/debian-11-bullseye-v20231010"`
- `DISK_SIZE`: the size of the boot disk, in GiB
- `DISK_TYPE`: the type of disk to use for the boot disk, for example, `pd-standard`
- `MACHINE_TYPE`: the machine type of the instance. To oversubscribe the queue count, the machine type you specify must support gVNIC and Tier_1 networking.
- `ZONE`: the zone to create the instance in
- `QUEUE_COUNT_*`: the number of queues for the vNIC, subject to the rules discussed in Custom queue allocation
- `SUBNET_*`: the name of the subnet that the network interface connects to
REST
- If you don't already have a VPC network with a subnet for each vNIC interface you plan to configure, create them.
Create a compute instance with specific queue counts for NICs using the `instances.insert` method. Add an entry to the `networkInterfaces` array for each network interface that you want to configure.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "VM_NAME",
  "machineType": "machineTypes/MACHINE_TYPE",
  "networkPerformanceConfig": {
    "totalEgressBandwidthTier": "TIER_1"
  },
  "networkInterfaces": [
    {
      "nicType": "GVNIC",
      "subnetwork": "regions/REGION/subnetworks/SUBNET_1",
      "queueCount": "QUEUE_COUNT_1"
    },
    {
      "nicType": "GVNIC",
      "subnetwork": "regions/REGION/subnetworks/SUBNET_2",
      "queueCount": "QUEUE_COUNT_2"
    }
  ]
}
```
Replace the following:
- `PROJECT_ID`: the ID of the project to create the compute instance in
- `ZONE`: the zone to create the compute instance in
- `REGION`: the region that contains the subnets
- `VM_NAME`: a name for the new compute instance
- `MACHINE_TYPE`: the machine type, predefined or custom, for the new compute instance. To oversubscribe the queue count, the machine type must support gVNIC and Tier_1 networking.
- `SUBNET_*`: the name of the subnet that the network interface connects to
- `QUEUE_COUNT_*`: the number of queues for the vNIC, subject to the rules discussed in Custom queue allocation
Queue allocations and changing the machine type
Compute instances are created with a default queue allocation, or you can assign a custom queue count to each virtual network interface card (vNIC) when you create a new compute instance by using the Compute Engine API. The default or custom vNIC queue assignments are only set when creating a compute instance. If your instance has vNICs that use default queue counts, you can change its machine type. If the machine type that you are changing to has a different number of vCPUs, the default queue counts for your instance are recalculated based on the new machine type.
If your VM has vNICs that use custom, non-default queue counts, then you can change the machine type by using the Google Cloud CLI or Compute Engine API to update the instance properties. The conversion succeeds if the new machine type supports the same queue count per vNIC as the original instance. For VMs that use the VirtIO-Net interface and have a custom queue count higher than 16 per vNIC, you can't change the machine type to a third generation or later machine type, because third generation and later machine types use only gVNIC. Instead, you can migrate your VM to a third generation or later machine type by following the instructions in Move your workload to a new compute instance.
What's next
- Learn more about machine types.
- Learn more about Virtual machine instances.
- Create and start a VM instance.
- Configure per VM Tier_1 networking performance for a compute instance.
- Complete the quickstart tutorial Create a Linux VM instance in Compute Engine.
- Complete the quickstart tutorial Create a Windows Server VM instance in Compute Engine.