Multiple network interfaces
This page provides an overview of multiple network interfaces for Compute Engine VM instances. Instances with multiple network interfaces are referred to as multi-NIC instances.
An instance always has at least one virtual network interface (vNIC). Depending on the machine type, you can configure additional network interfaces.
Use cases
Multi-NIC instances are useful in the following scenarios:
To connect to resources in separate VPC networks: multi-NIC instances can connect to resources located in different VPC networks that aren't connected to each other through VPC Network Peering or Network Connectivity Center.
Because each interface of a multi-NIC instance is in a separate VPC network, you can use each interface for a unique purpose. For example, you can use some interfaces to route packets between VPC networks that carry production traffic and another interface for management or configuration purposes.
Within the guest OS of each multi-NIC instance, you must configure route policies and local route tables.
Routing packets between VPC networks: multi-NIC instances can be used as next hops for routes to connect two or more VPC networks.
Software running within the guest OS of a multi-NIC instance can perform packet inspection, network address translation (NAT), or another network security function.
When connecting VPC networks using multi-NIC instances, it's a best practice to configure two or more multi-NIC instances, using them as backends for an internal passthrough Network Load Balancer in each VPC network. For more information, see Use cases in the Internal passthrough Network Load Balancers as next hops documentation.
You can also use multi-NIC instances with Private Service Connect interfaces to connect service producer and consumer networks in different projects.
Network interface types
Google Cloud supports the following types of network interfaces:
vNICs: the virtual network interfaces of Compute Engine VM instances. Each instance must have at least one vNIC. vNICs in regular VPC networks can be either `GVNIC` or `VIRTIO_NET`. You can only configure vNICs when creating an instance.
Dynamic NICs (Preview): a child interface of a parent vNIC. You can configure Dynamic NICs when you create an instance, or add them later. For more information, see Dynamic NICs.
You can also configure multi-NIC instances using machine types that include RDMA network interfaces (`MRDMA`), which must be attached to a VPC network with an RDMA network profile. Other network interface types, including Dynamic NICs, aren't supported in VPC networks with an RDMA network profile.
Specifications
The following specifications apply to instances with multiple network interfaces:
Instances and network interfaces: every instance has a `nic0` interface. The maximum number of network interfaces varies depending on the instance's machine type.
- Each interface has an associated stack type, which determines the supported subnet stack types and IP address versions. For more information, see Stack types and IP addresses.
Unique network for each network interface: except for VPC networks that are created with an RDMA network profile, each network interface must use a subnet in a unique VPC network.
For VPC networks created with an RDMA network profile, multiple RDMA NICs can use the same VPC network, as long as each RDMA NIC uses a unique subnet.
A VPC network and subnet must exist before you can create an instance whose network interface uses the network and subnet. For more information about creating networks and subnets, see Create and manage VPC networks.
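As a sketch of that ordering, the following hypothetical gcloud commands create two custom-mode networks and subnets first, and then a two-NIC instance. All names, the region, ranges, and the machine type are placeholders; adjust them for your project.

```shell
# Hypothetical networks and subnets; both must exist before the instance.
gcloud compute networks create net-a --subnet-mode=custom
gcloud compute networks create net-b --subnet-mode=custom
gcloud compute networks subnets create sub-a \
    --network=net-a --region=us-central1 --range=10.10.0.0/24
gcloud compute networks subnets create sub-b \
    --network=net-b --region=us-central1 --range=10.20.0.0/24

# Each --network-interface flag attaches one vNIC; the first one is nic0.
gcloud compute instances create multi-nic-vm \
    --zone=us-central1-a --machine-type=e2-standard-4 \
    --network-interface=network=net-a,subnet=sub-a \
    --network-interface=network=net-b,subnet=sub-b
```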
Project of instance and subnets: for multi-NIC instances in standalone projects, each network interface must use a subnet located in the same project as the instance.
For instances in Shared VPC host or service projects, see Shared VPC.
Private Service Connect interfaces provide a way for a multi-NIC instance to have network interfaces in subnets in different projects. For more information, see About network attachments.
IP forwarding, MTU, and routing considerations: multi-NIC instances require careful planning for the following instance-specific and interface-specific configuration options:
The IP forwarding option is configurable on a per-instance basis and applies to all network interfaces. For more information, see Enable IP forwarding for instances.
Each network interface can use a unique maximum transmission unit (MTU), matching the MTU of the associated VPC network. For more information, see Maximum transmission unit.
Each instance receives a default route using DHCP Option 121, as defined by RFC 3442. The default route is associated with `nic0`. Unless manually configured otherwise, any traffic leaving an instance for any destination other than a directly connected subnet leaves the instance using the default route on `nic0`.
On Linux systems, you can configure custom rules and routes within the guest OS by using the `/etc/iproute2/rt_tables` file and the `ip rule` and `ip route` commands. For more information, consult the guest OS documentation. For an example, see the following tutorial: Configure routing for an additional interface.
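A minimal sketch of that guest-OS configuration follows. The interface name (`ens5`), addresses, gateway, and table name are all hypothetical placeholders; your subnet ranges and device names will differ.

```shell
# Add a named routing table for the secondary interface
# (100 and rt-nic1 are arbitrary placeholder values).
echo "100 rt-nic1" | sudo tee -a /etc/iproute2/rt_tables

# In table rt-nic1, route the secondary subnet and a default route
# through nic1's gateway (10.20.0.0/24, 10.20.0.5, and 10.20.0.1
# are placeholders for the subnet, interface address, and gateway).
sudo ip route add 10.20.0.0/24 dev ens5 src 10.20.0.5 table rt-nic1
sudo ip route add default via 10.20.0.1 dev ens5 table rt-nic1

# Policy rules: traffic from or to nic1's address uses table rt-nic1.
sudo ip rule add from 10.20.0.5/32 table rt-nic1
sudo ip rule add to 10.20.0.5/32 table rt-nic1
```

Without rules like these, reply traffic arriving on the secondary interface would be sent back out through the default route on `nic0`.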
Dynamic NICs
Dynamic NICs are useful in the following scenarios:
You need to add or remove network interfaces to or from existing instances. Adding or removing Dynamic NICs doesn't require restarting or recreating the instance.
You need more network interfaces. The maximum number of vNICs for most machine types in Google Cloud is 10; however, you can configure up to 16 total interfaces by using Dynamic NICs. For more information, see Maximum number of network interfaces.
You need to configure multi-NIC Compute Engine bare metal instances, which only have one vNIC.
Properties of Dynamic NICs
Dynamic NICs have the following properties:
Dynamic NICs are VLAN interfaces that use the IEEE 802.1Q standard packet format. See the following considerations:
- The VLAN ID of a Dynamic NIC must be an integer from 2 to 255.
- The VLAN ID of a Dynamic NIC must be unique within a parent vNIC. However, Dynamic NICs that belong to different parent vNICs can use the same VLAN ID.
Google Cloud uses the following format for the name of a Dynamic NIC: `nicNUMBER.VLAN_ID`, where:
- `nicNUMBER` is the name of the parent vNIC, such as `nic0`.
- `VLAN_ID` is the VLAN ID that you set, such as `4`.
An example Dynamic NIC name is `nic0.4`.
Creating an instance with Dynamic NICs or adding Dynamic NICs to an existing instance requires additional steps to install and manage the corresponding VLAN interfaces in the guest OS. You can use one of the following methods:
- Configure automatic management of Dynamic NICs by using the Google guest agent.
- Configure the guest OS manually.
For more information, see Configure the guest OS for Dynamic NICs.
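For the manual method, a Dynamic NIC corresponds to a standard Linux 802.1Q VLAN subinterface. The following is a hypothetical sketch for a Dynamic NIC with VLAN ID 4 on parent `nic0`, assuming the parent appears as `eth0` in the guest; addressing is normally handled by DHCP.

```shell
# Create an 802.1Q VLAN subinterface on the parent NIC (eth0 assumed)
# to match the Dynamic NIC name nic0.4.
sudo ip link add link eth0 name eth0.4 type vlan id 4
sudo ip link set dev eth0.4 up

# Addressing is normally assigned by DHCP; a static assignment
# would look like this (placeholder address):
# sudo ip addr add 10.30.0.5/24 dev eth0.4
```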
Dynamic NICs share the bandwidth of their parent vNIC, and there is no traffic isolation within a parent vNIC. To prevent any of the network interfaces from consuming all of the bandwidth, you must create an application-specific traffic policy in the guest OS to prioritize or distribute traffic, such as by using Linux Traffic Control (TC).
Dynamic NICs share the same receive and transmit queues as their parent vNIC.
Limitations of Dynamic NICs
Dynamic NICs have the following limitations:
You can't modify the following properties of a Dynamic NIC after it is created:
- The parent vNIC to which the Dynamic NIC belongs.
- The VLAN ID of the Dynamic NIC.
Dynamic NICs don't support the following:
- Advanced network DDoS protection and network edge security policies for Google Cloud Armor
- Per-instance configurations for MIGs
- IPv6-only interfaces (Preview)
- Features that rely on packet intercept, such as firewall endpoints
- Windows operating systems (OS)
A Dynamic NIC with a parent vNIC whose type is `GVNIC` might experience packet loss with some custom MTU sizes. To avoid packet loss, don't use the following MTU sizes: 1986 bytes, 3986 bytes, 5986 bytes, and 7986 bytes.
For third generation VMs, a Dynamic NIC with a VLAN ID of `255` can't access the metadata server IP address. If you need to access the metadata server, ensure that you use a different VLAN ID.
For third generation VMs, deleting and adding a Dynamic NIC that has the same VLAN ID might allow unauthorized access across different VPC networks. For more information, see Known issues.
Stack types and IP addresses
When you create a vNIC, you specify one of the following interface stack types:
- IPv4-only
- Dual-stack
- IPv6-only (Preview)
The following table describes supported subnet stack types and IP address details for each interface stack type:
Interface stack type | Supported subnet stack types | IP address details |
---|---|---|
IPv4-only (single-stack) | IPv4-only or dual-stack | IPv4 addresses only. See IPv4 address details. |
IPv4 and IPv6 (dual-stack) | Dual-stack | Both IPv4 and IPv6 addresses. See IPv4 address details and IPv6 address details. |
IPv6-only (single-stack) (Preview) | Dual-stack or IPv6-only (Preview) | IPv6 addresses only. See IPv6 address details. |
Changing network interface stack type
You can change the stack type of a network interface as follows:
You can convert an IPv4-only interface to dual-stack if the interface's subnet is a dual-stack subnet or if you stop the instance and assign the interface to a dual-stack subnet.
You can convert a dual-stack interface to IPv4-only.
You can't change the stack type of an IPv6-only interface. IPv6-only interfaces (Preview) are only supported when creating instances.
IPv4 address details
Each IPv4-only or dual-stack network interface receives a primary internal IPv4 address. Each interface optionally supports alias IP ranges and an external IPv4 address. The following are the IPv4 specifications and requirements:
Primary internal IPv4 address: Compute Engine assigns the network interface a primary internal IPv4 address from the primary IPv4 address range of the interface's subnet. The primary internal IPv4 address is allocated by DHCP.
You can control which primary internal IPv4 address is assigned by configuring a static internal IPv4 address or by specifying a custom ephemeral internal IPv4 address.
Within a VPC network, the primary internal IPv4 address of each VM network interface is unique.
Alias IP ranges: optionally, you can assign the interface one or more alias IP ranges. Each alias IP range can come from either the primary IPv4 address range or a secondary IPv4 address range of the interface's subnet.
- Within a VPC network, each interface's alias IP range must be unique.
External IPv4 address: optionally, you can assign the interface an ephemeral or reserved external IPv4 address. Google Cloud ensures the uniqueness of each external IPv4 address.
IPv6 address details
Compute Engine assigns each dual-stack or IPv6-only network interface (Preview) a `/96` IPv6 address range from the `/64` IPv6 address range of the interface's subnet:
- Whether the `/96` IPv6 address range is internal or external depends on the IPv6 access type of the interface's subnet. Google Cloud ensures the uniqueness of each internal and external IPv6 address range. For more information, see IPv6 specifications.
- If an instance needs both an internal IPv6 address range and an external IPv6 address range, you must configure two dual-stack interfaces, two IPv6-only interfaces, or one dual-stack interface and one IPv6-only interface. The subnet used by one interface must have an external IPv6 address range, and the subnet used by the other interface must have an internal IPv6 address range.
- The first IPv6 address (`/128`) is configured on the interface by DHCP. For more information, see IPv6 address assignment.
- You can control which `/96` IPv6 address range is assigned by configuring a static internal or external IPv6 address range. For internal IPv6 addresses, you can specify a custom ephemeral internal IPv6 address.
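To put those prefix lengths in perspective, the following shell arithmetic (plain bash, no Google Cloud calls) computes how many `/96` ranges fit in a subnet's `/64` and how many addresses each `/96` holds:

```shell
# Each interface's /96 assignment is one of 2^(96-64) ranges
# carved out of the subnet's /64.
ranges_per_64=$(( 1 << (96 - 64) ))

# Each /96 range contains 2^(128-96) IPv6 addresses; the first
# /128 in the range is the one configured on the interface by DHCP.
addresses_per_96=$(( 1 << (128 - 96) ))

echo "/96 ranges per /64: ${ranges_per_64}"
echo "addresses per /96:  ${addresses_per_96}"
```

Both values are 2^32, which is why a single `/64` subnet can hand out a unique `/96` to billions of interfaces.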
If you are connecting an instance to multiple networks by using IPv6 addresses, install `google-guest-agent` version `20220603.00` or later. For more information, see I can't connect to a secondary interface's IPv6 address.
Maximum number of network interfaces
For most machine types, the maximum number of network interfaces that you can attach to an instance scales with the number of vCPUs as described in the following tables.
The following are machine-specific exceptions:
Compute Engine bare metal instances support a single vNIC.
The maximum number of vNICs is different for some accelerator optimized machine types, such as A3, A4, and A4X. For more information, see Accelerator-optimized machine family.
Max interface numbers
Use the following table to determine how many network interfaces can be attached to an instance.
Number of vCPUs | Maximum number of vNICs | Maximum number of Dynamic NICs | Maximum number of network interfaces (vNICs + Dynamic NICs) |
---|---|---|---|
2 or fewer | 2 | 1 | 2 |
4 | 4 | 3 | 4 |
6 | 6 | 5 | 6 |
8 | 8 | 7 | 8 |
10 | 10 | 9 | 10 |
12 | 10 | 10 | 11 |
14 | 10 | 11 | 12 |
16 | 10 | 12 | 13 |
18 | 10 | 13 | 14 |
20 | 10 | 14 | 15 |
22 or more | 10 | 15 | 16 |
Reference formulas
The following table provides the formulas used to calculate the maximum number of network interfaces for an instance. The formula depends on the number of vCPUs.
Number of vCPUs (X) | Maximum number of vNICs | Maximum number of Dynamic NICs | Maximum number of network interfaces (vNICs + Dynamic NICs) |
---|---|---|---|
X = 1 | 2 | 1 | 2 |
2 ≤ X ≤ 10 | X | X - 1 | X |
X ≥ 12 | 10 | min(15, (X-10)/2 + 9) | min(16, (X-10)/2 + 10) |
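The formulas above can be sketched as shell functions. This is a minimal sketch under the assumption (matching the table) of integer division and even vCPU counts in the X ≥ 12 range; the function names are hypothetical.

```shell
# Maximum vNICs for X vCPUs.
max_vnics() {
  local x=$1
  if [ "$x" -le 1 ]; then echo 2
  elif [ "$x" -le 10 ]; then echo "$x"
  else echo 10
  fi
}

# Maximum Dynamic NICs for X vCPUs: min(15, (X-10)/2 + 9) when X >= 12.
max_dynamic_nics() {
  local x=$1
  if [ "$x" -le 1 ]; then echo 1
  elif [ "$x" -le 10 ]; then echo $(( x - 1 ))
  else
    local d=$(( (x - 10) / 2 + 9 ))
    if [ "$d" -gt 15 ]; then d=15; fi
    echo "$d"
  fi
}

# Maximum total interfaces (vNICs + Dynamic NICs): min(16, (X-10)/2 + 10).
max_total_nics() {
  local x=$1
  if [ "$x" -le 1 ]; then echo 2
  elif [ "$x" -le 10 ]; then echo "$x"
  else
    local t=$(( (x - 10) / 2 + 10 ))
    if [ "$t" -gt 16 ]; then t=16; fi
    echo "$t"
  fi
}
```

For example, `max_total_nics 14` returns 12 and `max_total_nics 22` returns 16, matching the rows in the first table.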
Example distributions of Dynamic NICs
You don't have to distribute Dynamic NICs evenly across vNICs. However, you might want an even distribution because Dynamic NICs share the bandwidth of their parent vNIC.
An instance must have at least one vNIC. For example, an instance that has 2 vCPUs can have one of the following configurations:
- 1 vNIC
- 2 vNICs
- 1 vNIC and 1 Dynamic NIC
The following tables provide example configurations that evenly distribute Dynamic NICs across vNICs while using the maximum number of network interfaces for a given number of vCPU.
2 vCPUs, 2 NICs
The following table provides examples for an instance with 2 vCPUs that show how many Dynamic NICs you can have for a given number of vNICs.
Number of vCPUs | Number of vNICs | Number of Dynamic NICs per vNIC | Total number of network interfaces (vNICs + Dynamic NICs) |
---|---|---|---|
2 | 1 | 1 | 2 |
2 | 2 | 0 | 2 |
4 vCPUs, 4 NICs
The following table provides examples for an instance with 4 vCPUs that show how many Dynamic NICs you can have for a given number of vNICs.
Number of vCPUs | Number of vNICs | Number of Dynamic NICs per vNIC | Total number of network interfaces (vNICs + Dynamic NICs) |
---|---|---|---|
4 | 1 | 3 | 4 |
4 | 2 | 1 | 4 |
4 | 4 | 0 | 4 |
8 vCPUs, 8 NICs
The following table provides examples for an instance with 8 vCPUs that show how many Dynamic NICs you can have for a given number of vNICs.
Number of vCPUs | Number of vNICs | Number of Dynamic NICs per vNIC | Total number of network interfaces (vNICs + Dynamic NICs) |
---|---|---|---|
8 | 1 | 7 | 8 |
8 | 2 | 3 | 8 |
8 | 4 | 1 | 8 |
8 | 8 | 0 | 8 |
14 vCPUs, 12 NICs
The following table provides examples for an instance with 14 vCPUs that show how many Dynamic NICs you can have for a given number of vNICs.
Number of vCPUs | Number of vNICs | Number of Dynamic NICs per vNIC | Total number of network interfaces (vNICs + Dynamic NICs) |
---|---|---|---|
14 | 1 | 11 | 12 |
14 | 2 | 5 | 12 |
14 | 4 | 2 | 12 |
14 | 6 | 1 | 12 |
22 vCPUs, 16 NICs
The following table provides examples for an instance with 22 vCPUs that show how many Dynamic NICs you can have for a given number of vNICs.
Number of vCPUs | Number of vNICs | Number of Dynamic NICs per vNIC | Total number of network interfaces (vNICs + Dynamic NICs) |
---|---|---|---|
22 | 1 | 15 | 16 |
22 | 2 | 7 | 16 |
22 | 4 | 3 | 16 |
22 | 8 | 1 | 16 |
Product interactions
This section describes interactions between multi-NIC instances and other products and features in Google Cloud.
Shared VPC
Except for Private Service Connect interfaces, the subnet and project relationship of a multi-NIC instance in a Shared VPC host or service project is as follows:
Each network interface of a multi-NIC instance located in a Shared VPC host project must use a subnet of a Shared VPC network in the host project.
Each network interface of a multi-NIC instance located in a Shared VPC service project can use either of the following:
- A subnet of a VPC network in the service project.
- A subnet of a Shared VPC network in the host project.
For more information, see the Shared VPC documentation.
Compute Engine internal DNS
Compute Engine creates internal DNS name A and PTR records only for the primary internal IPv4 address of the `nic0` network interface of an instance. Compute Engine doesn't create internal DNS records for any IPv4 or IPv6 address associated with a network interface other than `nic0`.
For more information, see Compute Engine internal DNS.
Static routes
Static routes can be scoped to specific instances by using network tags. When a network tag is associated with an instance, the tag applies to all network interfaces of the instance. Consequently, adding a network tag to or removing a network tag from an instance might change which static routes apply to any of the instance's network interfaces.
Load balancers
Instance group backends and zonal NEG backends each have an associated VPC network as follows:
For managed instance groups (MIGs), the VPC network for the instance group is the VPC network assigned to the `nic0` interface in the instance template.
For unmanaged instance groups, the VPC network for the instance group is the VPC network used by the `nic0` network interface of the first instance that you add to the unmanaged instance group.
The following table shows which backends support distributing connections or requests to any network interface.
Load balancer | Instance groups | `GCE_VM_IP` NEGs | `GCE_VM_IP_PORT` NEGs |
---|---|---|---|
Backend service-based External passthrough Network Load Balancer. The backend service isn't associated with a VPC network. For more information, see Backend services and VPC networks. | `nic0` only | Any NIC | N/A |
Internal passthrough Network Load Balancer. The backend service is associated with a VPC network. For more information, see Backend service network specification and Backend service network rules. | Any NIC | Any NIC | N/A |
External proxy Network Load Balancer. For more information about backend service and network requirements, see Backends and VPC networks. | `nic0` only | N/A | Any NIC |
Internal proxy Network Load Balancer. For more information about backend service and network requirements, see Backends and VPC networks. | `nic0` only | N/A | Any NIC |
External Application Load Balancer. For more information about backend service and network requirements, see Backends and VPC networks. | `nic0` only | N/A | Any NIC |
Internal Application Load Balancer. For more information about backend service and network requirements, see Backends and VPC networks. | `nic0` only | N/A | Any NIC |

Target pool-based External passthrough Network Load Balancers don't use instance groups or NEGs and only support load balancing to `nic0` network interfaces.
Firewall rules
Firewall rules from hierarchical firewall policies, global network firewall policies, regional network firewall policies, and VPC firewall rules apply to each network interface separately. Ensure that each network has appropriate firewall rules to allow the intended traffic to and from a multi-NIC instance. To determine which firewall rules apply to a network interface, and the source for each rule, see Get effective firewall rules for a VM interface.
Firewall rules can be scoped to specific VM instances by using network tags or secure tags, both of which apply to all network interfaces of an instance. For more information, see Comparison of secure tags and network tags.
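As a sketch, you can inspect the effective rules for one interface with gcloud; the instance name, zone, and interface here are placeholders.

```shell
# List the effective firewall rules for a specific network interface
# of an instance (placeholder instance name, zone, and interface).
gcloud compute instances network-interfaces get-effective-firewalls \
    multi-nic-vm --zone=us-central1-a --network-interface=nic1
```

Running this once per interface shows how each VPC network contributes a different rule set to the same instance.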
Known issues
This section describes known issues related to using multiple network interfaces in Google Cloud.
Firewall interactions when reusing a VLAN ID with Dynamic NICs
For third generation VMs, deleting and adding a Dynamic NIC that has the same VLAN ID might allow unauthorized access across different VPC networks.
Consider the following scenario that includes two networks (`network-1` and `network-2`) and a VLAN ID `A`:
- You delete a Dynamic NIC with VLAN ID `A` from `network-1`.
- Within the 10-minute Cloud NGFW connection tracking period, you create a new Dynamic NIC with the same VLAN ID `A` in `network-2`.
- Traffic originating from the new Dynamic NIC in `network-2` might match an existing connection tracking entry that was previously created by the deleted Dynamic NIC in `network-1`.

If this happens, the traffic sent from or received by the new Dynamic NIC in `network-2` might be allowed if it matches an entry in the Cloud NGFW connection tracking table, where the entry was created for a connection used by the deleted Dynamic NIC in `network-1`. To avoid this issue, see the following workaround.
Workaround:
To avoid this issue, do one of the following:
- After you delete a Dynamic NIC, don't reuse its VLAN ID when creating a new Dynamic NIC.
- After you delete a Dynamic NIC, wait at least 10 minutes to create a new Dynamic NIC that uses the same VLAN ID.
For more information about connection tracking and firewall rules, see Specifications in the Cloud Next Generation Firewall documentation.