This page explains concepts about how external passthrough Network Load Balancers distribute traffic.
Backend selection and connection tracking
Backend selection and connection tracking work together to balance multiple connections across different backends and to route all packets for each connection to the same backend. This is accomplished with a two-part strategy. First, a backend is selected using consistent hashing. Then, this selection is recorded in a connection tracking table.
The following steps capture the backend selection and connection tracking process.
1. Check for a connection tracking table entry to use a previously selected backend.
For an existing connection, the load balancer uses the connection tracking table to identify the previously selected backend for that connection.
The load balancer attempts to match each load-balanced packet with an entry in its connection tracking table using the following process:
If the packet is a TCP packet with the `SYN` flag:

- If the load balancer's connection tracking mode is `PER_CONNECTION`, continue to the Identify eligible backends step. In `PER_CONNECTION` tracking mode, a TCP packet with the `SYN` flag always represents a new connection, regardless of the configured session affinity.
- If the load balancer's connection tracking mode is `PER_SESSION` and the session affinity is either `NONE` or `CLIENT_IP_PORT_PROTO`, continue to the Identify eligible backends step. In `PER_SESSION` tracking mode, a TCP packet with the `SYN` flag represents a new connection only when using one of the 5-tuple session affinity options (`NONE` or `CLIENT_IP_PORT_PROTO`).
If the configured session affinity doesn't support connection tracking for the packet's protocol, continue to the Identify eligible backends step. For information about which protocols are connection trackable, see the table in the Connection tracking mode section.
For all other packets, the load balancer checks if the packet matches an existing connection tracking table entry. The connection tuple (a set of packet characteristics) used to compare the packet to existing connection tracking table entries depends on the connection tracking mode and session affinity you configured. For information about which connection tuple is used for connection tracking, see the table in the Connection tracking mode section.
If the packet matches a connection tracking table entry, the load balancer sends the packet to the previously selected backend.
If the packet doesn't match a connection tracking table entry, continue to the Identify eligible backends step.
For information about how long a connection tracking table entry persists and under what conditions it persists, see the Create a connection tracking table entry step.
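To make this decision flow concrete, the following Python sketch models the lookup step under stated assumptions: the simplified packet dictionary, the table keyed by connection tuples, and the function names are all illustrative, not part of any Google Cloud API. For brevity, the sketch builds only the `PER_CONNECTION`-style tuple; the Connection tracking mode section describes how `PER_SESSION` changes the tuple.

```python
# Minimal sketch of the connection tracking lookup described above.
# All names and the packet dict are illustrative, not a Google Cloud API.

FIVE_TUPLE_AFFINITIES = {"NONE", "CLIENT_IP_PORT_PROTO"}

def is_trackable(protocol: str, affinity: str) -> bool:
    """TCP is always trackable; UDP, ESP, and GRE are trackable unless affinity is NONE."""
    if protocol == "TCP":
        return True
    if protocol in ("UDP", "ESP", "GRE"):
        return affinity != "NONE"
    return False  # ICMP, ICMPv6, and all other protocols

def lookup_backend(packet: dict, table: dict, mode: str, affinity: str):
    """Return the previously selected backend, or None to select a new backend."""
    if packet["protocol"] == "TCP" and packet.get("syn"):
        # A SYN means "new connection" in PER_CONNECTION mode, and in
        # PER_SESSION mode when a 5-tuple session affinity is configured.
        if mode == "PER_CONNECTION" or affinity in FIVE_TUPLE_AFFINITIES:
            return None
    if not is_trackable(packet["protocol"], affinity):
        return None
    # PER_CONNECTION-style tuple: 3-tuple core, plus ports when present.
    key = (packet["src_ip"], packet["dst_ip"], packet["protocol"])
    if packet.get("src_port") is not None:
        key += (packet["src_port"], packet["dst_port"])
    return table.get(key)  # None if there is no matching entry
```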
2. Select an eligible backend for a new connection.
For a new connection, the load balancer uses the consistent hashing algorithm to select a backend from among the eligible backends.
The following steps outline the process to select an eligible backend for a new connection and then record that connection in a connection tracking table.
2.1 Identify eligible backends.
This step models which backends are candidates to receive new connections, taking into consideration health, weighted load balancing, and failover policy configuration.
The following table outlines how failover policy and weighted load balancing influence which backends are eligible backends.
Failover policy | Weighted load balancing | Eligible backends |
---|---|---|
Not configured | Disabled | See No failover policy, weighted load balancing disabled |
Not configured | Enabled | See No failover policy, weighted load balancing enabled |
Configured | Disabled | See Failover policy configured, weighted load balancing disabled |
Configured | Enabled | See Failover policy configured, weighted load balancing enabled |
No failover policy, weighted load balancing disabled
The set of eligible backends depends only on health checks:
When at least one backend is healthy, the set of eligible backends consists of all healthy backends.
When all backends are unhealthy, the set of eligible backends consists of all backends.
No failover policy, weighted load balancing enabled
The set of eligible backends depends both on health checks and weights, and consists of the first of the following that isn't empty:
- All healthy, non-zero weight backends
- All unhealthy, non-zero weight backends
- All healthy, zero weight backends
- All unhealthy, zero weight backends
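This first-non-empty fallback is easy to express in code. The following sketch is illustrative; the `Backend` record and its `healthy` and `weight` fields are hypothetical stand-ins for health check status and reported weight.

```python
# Sketch of the "first non-empty set" rule for no failover policy with
# weighted load balancing enabled. The Backend record is hypothetical.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    healthy: bool
    weight: int

def eligible_backends(backends: list[Backend]) -> list[Backend]:
    tiers = [
        [b for b in backends if b.healthy and b.weight > 0],       # 1st choice
        [b for b in backends if not b.healthy and b.weight > 0],   # 2nd choice
        [b for b in backends if b.healthy and b.weight == 0],      # 3rd choice
        [b for b in backends if not b.healthy and b.weight == 0],  # 4th choice
    ]
    return next((tier for tier in tiers if tier), [])
```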
Failover policy configured, weighted load balancing disabled
The set of eligible backends depends on health checks and failover policy configuration:
When at least one backend is healthy, the set of eligible backends is defined as follows, using the first condition that is true from this ordered list:
- If there are no healthy primary backends, the eligible backends are all healthy failover backends.
- If there are no healthy failover backends, the eligible backends are all healthy primary backends.
- If the failover ratio is set to `0.0` (the default value), the eligible backends are all healthy primary backends.
- If the ratio of the number of healthy primary backends to the total number of primary backends is greater than or equal to the configured failover ratio, the eligible backends consist of all healthy primary backends.
- Otherwise, the eligible backends consist of all healthy failover backends.
When there are no healthy backends, the set of eligible backends is defined as follows:
- If the load balancer's failover policy is configured to drop new connections when no backends are healthy, the set of eligible backends is empty. The load balancer drops the packets for the connection.
- If the load balancer's failover policy is not configured to drop new connections when no backends are healthy, health checks aren't relevant. Eligible backends consist of all primary backends.
Failover policy configured, weighted load balancing enabled
The set of eligible backends depends on health checks, weights, and failover policy configuration:
When at least one backend is healthy and has a non-zero weight, the set of eligible backends is defined as follows, using the first condition that is true from this ordered list:
- If there are no healthy, non-zero weight primary backends, the eligible backends are all healthy, non-zero weight failover backends.
- If there are no healthy, non-zero weight failover backends, the eligible backends are all healthy, non-zero weight primary backends.
- If the failover ratio is set to `0.0` (the default value), the eligible backends are all healthy, non-zero weight primary backends.
- If the ratio of the number of healthy, non-zero weight primary backends to the total number of primary backends is greater than or equal to the configured failover ratio, the eligible backends consist of all healthy, non-zero weight primary backends.
- Otherwise, the eligible backends consist of all healthy, non-zero weight failover backends.
When there are no healthy, non-zero weight backends, the set of eligible backends is defined as follows:
- If the load balancer's failover policy is configured to drop new connections for this situation, the set of eligible backends is empty. The load balancer drops the packets for the connection.
- If the load balancer's failover policy is not configured to drop new connections for this situation, health checks aren't relevant. The set of eligible backends is the first of the following that isn't empty:
- All unhealthy, non-zero weight primary backends
- All unhealthy, non-zero weight failover backends
- All healthy, zero weight primary backends
- All healthy, zero weight failover backends
- All unhealthy, zero weight primary backends
- All unhealthy, zero weight failover backends
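The ordered conditions for the non-weighted failover case can be sketched the same way. This is illustrative only; it reuses the hypothetical `Backend` record from the earlier sketch and assumes the failover ratio and drop setting are passed in as plain values.

```python
# Sketch of the ordered failover conditions (failover policy configured,
# weighted load balancing disabled). Reuses the hypothetical Backend record.

def eligible_with_failover(primary: list, failover: list,
                           failover_ratio: float,
                           drop_when_all_unhealthy: bool) -> list:
    healthy_primary = [b for b in primary if b.healthy]
    healthy_failover = [b for b in failover if b.healthy]

    if healthy_primary or healthy_failover:
        if not healthy_primary:
            return healthy_failover
        if not healthy_failover:
            return healthy_primary
        if failover_ratio == 0.0:  # the default value
            return healthy_primary
        if len(healthy_primary) / len(primary) >= failover_ratio:
            return healthy_primary
        return healthy_failover

    # No healthy backends at all.
    if drop_when_all_unhealthy:
        return []          # packets for new connections are dropped
    return list(primary)   # health checks aren't relevant
```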
2.2 Select an eligible backend.
The load balancer uses consistent hashing to select an eligible backend. The load balancer maintains hashes of eligible backends, mapped to a unit circle. Weighted load balancing alters how eligible backends are mapped to the circle such that backends with higher weights are more likely to be selected, proportional to their weights.
When processing a packet for a connection that's not in the connection tracking table, the load balancer computes a hash of the packet characteristics and maps that hash to the same unit circle, selecting an eligible backend on the circle's circumference. The set of packet characteristics used to calculate the packet hash is defined by the session affinity setting.
If a session affinity isn't explicitly configured, the `NONE` session affinity is the default.

The following two examples show how weighted load balancing affects the selection of an eligible backend:
- If the backend service has two eligible backends, the first having weight `1` and the second having weight `4`, the first eligible backend has a 20% (1 ÷ (1+4)) selection probability, and the second eligible backend has an 80% (4 ÷ (1+4)) selection probability.
- If the backend service has three eligible backends, with eligible backend `a` having weight `0`, eligible backend `b` having weight `2`, and eligible backend `c` having weight `6`, backend `a` has a 0% (0 ÷ (0+2+6)) selection probability, backend `b` has a 25% (2 ÷ (0+2+6)) selection probability, and backend `c` has a 75% (6 ÷ (0+2+6)) selection probability.
The load balancer assigns new connections to eligible backends in a way that is as consistent as possible even if the number of eligible backends or their weights change. The following benefits of consistent hashing show how the load balancer selects eligible backends for possible new connections that do not have connection tracking table entries:
The load balancer selects the same backend for all possible new connections that have identical packet characteristics, as defined by session affinity, in the following situations:
If weighted load balancing isn't configured, when the set of eligible backends does not change.
If weighted load balancing is configured, when the set of eligible backends does not change, and the weight of each eligible backend remains constant.
The load balancer distributes possible new connections among eligible backends as fairly as possible:
- If weighted load balancing isn't configured, approximately 1/N possible new connections map to each eligible backend, where N is the count of eligible backends.
- If weighted load balancing is configured, the ratio of possible new connections that map to each eligible backend is approximately the weight of that eligible backend divided by the sum of all eligible backend weights.
If an eligible backend is added, removed, or has its weight change, consistent hashing aims to minimize the disruption of mappings to the other eligible backends—that is, most connections that map to other eligible backends continue to map to the same eligible backend.
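These properties can be observed with a small simulation. The following sketch uses weight-proportional virtual nodes on a unit circle; it is an illustration of consistent hashing in general, not Google's production algorithm, and the flow strings and node counts are arbitrary choices.

```python
# Illustrative weighted consistent hashing on a unit circle using virtual
# nodes. This demonstrates the distribution properties described above; it
# is not Google's production implementation.
import bisect
import hashlib

def point(value: str) -> float:
    """Hash a string to a position on the unit circle [0, 1)."""
    digest = hashlib.sha256(value.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def build_ring(weights: dict[str, int], nodes_per_unit: int = 100) -> list:
    """Place weight-proportional virtual nodes for each backend on the circle."""
    return sorted(
        (point(f"{backend}#{i}"), backend)
        for backend, weight in weights.items()
        for i in range(weight * nodes_per_unit)
    )

def select(ring: list, flow: str) -> str:
    """Select the first virtual node at or after the flow's hash position."""
    positions = [p for p, _ in ring]
    return ring[bisect.bisect(positions, point(flow)) % len(ring)][1]

ring = build_ring({"backend-a": 1, "backend-b": 4})
flows = [f"198.51.100.{i % 250}:{40000 + i}->203.0.113.9:80/TCP" for i in range(5000)]
picks = [select(ring, flow) for flow in flows]
print(picks.count("backend-a") / len(picks))  # roughly 0.2 = 1 / (1 + 4)
print(picks.count("backend-b") / len(picks))  # roughly 0.8 = 4 / (1 + 4)
```

Rebuilding the ring without one backend remaps only the flows whose hash positions fell on that backend's virtual nodes, which mirrors the minimal-disruption property described above.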
2.3 Create a connection tracking table entry.
After selecting a backend, the load balancer creates a connection tracking table entry if the configured session affinity supports connection tracking for the packet's protocol.
If the configured session affinity doesn't support connection tracking for the packet's protocol, skip this step.
If the configured session affinity supports connection tracking for the packet's protocol, the connection tracking table entry that's created maps packet characteristics to the selected backend. The packet header fields used for this mapping depend on the connection tracking mode and session affinity you configured.
For information about which protocols are connection trackable based on your configuration choices, and what packet characteristics are used for the hash, see the Connection tracking mode section.
The load balancer removes connection tracking table entries according to the following rules:
- A connection tracking table entry is removed after the connection has been idle for 60 seconds. For more information, see Idle timeout.
- Connection tracking table entries are not removed when a TCP connection is closed with a `FIN` or `RST` packet. Any new TCP connection always carries the `SYN` flag and is processed as described in the Check for a connection tracking table entry step.
- If a failover policy is configured and the connection draining on failover and failback setting is disabled, the load balancer removes all entries in the connection tracking table when the eligible backends switch from primary to failover backends (failover) or from failover to primary backends (failback). For more information, see Connection draining on failover and failback.
- Entries in the connection tracking table can be removed if a backend becomes unhealthy. This behavior depends on the connection tracking mode, the protocol, and the connection persistence on unhealthy backends setting. For more information, see Connection persistence on unhealthy backends.
- Entries in the connection tracking table are removed after the connection draining timeout that follows an event such as deleting a backend VM or removing a backend VM from an instance group or NEG. For more information, see Enable connection draining.
Session affinity
Session affinity controls the distribution of new connections from clients to the load balancer's backends. External passthrough Network Load Balancers use session affinity to select a backend from a set of eligible backends as described in the Identify eligible backends and Select an eligible backend steps in the Backend selection and connection tracking section. You configure session affinity on the backend service, not on each backend instance group or NEG.
External passthrough Network Load Balancers support the following session affinity settings. Each session affinity setting uses consistent hashing to select an eligible backend. The session affinity setting determines which fields from the IP header and TCP/UDP headers are used to calculate the hash.
Hash method for backend selection | Session affinity setting¹ |
---|---|
5-tuple hash (source IP address, source port, protocol, destination IP address, and destination port) for non-fragmented packets that include port information, such as TCP packets and non-fragmented UDP packets, or 3-tuple hash (source IP address, destination IP address, and protocol) for fragmented UDP packets and packets of all other protocols | `NONE`² |
5-tuple hash (source IP address, source port, protocol, destination IP address, and destination port) for non-fragmented packets that include port information, such as TCP packets and non-fragmented UDP packets, or 3-tuple hash (source IP address, destination IP address, and protocol) for fragmented UDP packets and packets of all other protocols | `CLIENT_IP_PORT_PROTO` |
3-tuple hash (source IP address, destination IP address, and protocol) | `CLIENT_IP_PROTO` |
2-tuple hash (source IP address and destination IP address) | `CLIENT_IP` |
¹ For target pool-based external passthrough Network Load Balancers, set the session affinity parameter on each target pool.
² A session affinity setting of `NONE` does not mean that there is no session affinity. It means that no session affinity option is explicitly configured. Hashing is always performed to select a backend. A session affinity setting of `NONE` means that the load balancer uses a 5-tuple hash or a 3-tuple hash to select backends, which is functionally the same behavior as when `CLIENT_IP_PORT_PROTO` is set.
For information about how the different session affinity settings affect the backend selection and connection tracking methods, see the table in the Connection tracking mode section.
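In code form, the mapping from session affinity setting to hashed fields looks like the following sketch. The simplified packet dictionary and function name are illustrative assumptions, not a Google Cloud API; absent ports model fragmented UDP and portless protocols.

```python
# Illustrative mapping from session affinity setting to the packet fields
# that are hashed for backend selection. Not a Google Cloud API.

def affinity_hash_fields(packet: dict, affinity: str) -> tuple:
    three_tuple = (packet["src_ip"], packet["dst_ip"], packet["protocol"])
    if affinity == "CLIENT_IP":
        return (packet["src_ip"], packet["dst_ip"])               # 2-tuple
    if affinity == "CLIENT_IP_PROTO":
        return three_tuple                                         # 3-tuple
    # NONE and CLIENT_IP_PORT_PROTO behave identically for backend selection:
    if packet.get("src_port") is not None:                         # TCP, unfragmented UDP
        return three_tuple + (packet["src_port"], packet["dst_port"])  # 5-tuple
    return three_tuple  # fragmented UDP and all other protocols
```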
Connection tracking policy
This section describes the settings that control the connection tracking behavior of external passthrough Network Load Balancers. A connection tracking policy includes the following settings:
Connection tracking mode
When connection tracking is possible, the load balancer's connection tracking table maps connection tuples to previously selected backends in a hash table. The set of packet characteristics that compose each connection tuple depends on the connection tracking mode and session affinity.
External passthrough Network Load Balancers support connection tracking based on protocol and session affinity options:
- TCP connections are always connection trackable, for all session affinity options.
- UDP, ESP, and GRE connections are connection trackable for all session affinity options except for `NONE`.
- All other protocols, such as ICMP and ICMPv6, aren't connection trackable.
The connection tracking mode refers to the granularity of each connection tuple in the load balancer's connection tracking table. The connection tuple can be 5-tuple or 3-tuple (`PER_CONNECTION` mode), or it can match the session affinity setting (`PER_SESSION` mode).

- `PER_CONNECTION`. This is the default connection tracking mode. This connection tracking mode uses a 5-tuple hash or a 3-tuple hash. Non-fragmented packets that include port information, such as TCP packets and non-fragmented UDP packets, are tracked with 5-tuple hashes. All other packets are tracked with 3-tuple hashes.
- `PER_SESSION`. This connection tracking mode uses a hash consisting of the same packet characteristics used by the session affinity hash. Depending on the chosen session affinity, `PER_SESSION` can result in connections that more frequently match an existing connection tracking table entry, reducing how often a backend must be selected by the session affinity hash.
The following table summarizes how connection tracking mode and session affinity work together to route all packets for each connection to the same backend.
Session affinity setting | Hash method for backend selection | Connection tracking tuple: `PER_CONNECTION` mode (default) | Connection tracking tuple: `PER_SESSION` mode |
---|---|---|---|
Default (`NONE`) | TCP and unfragmented UDP: 5-tuple hash. Fragmented UDP and all other protocols: 3-tuple hash. | TCP: 5-tuple. All other protocols: not connection trackable with `NONE`. | TCP: 5-tuple. All other protocols: not connection trackable with `NONE`. |
Client IP, Destination IP (`CLIENT_IP`) | All protocols: 2-tuple hash | TCP and unfragmented UDP: 5-tuple. Fragmented UDP, ESP, and GRE: 3-tuple. | TCP, UDP, ESP, and GRE: 2-tuple. |
Client IP, Destination IP, Protocol (`CLIENT_IP_PROTO`) | All protocols: 3-tuple hash | TCP and unfragmented UDP: 5-tuple. Fragmented UDP, ESP, and GRE: 3-tuple. | TCP, UDP, ESP, and GRE: 3-tuple. |
Client IP, Client Port, Destination IP, Destination Port, Protocol (`CLIENT_IP_PORT_PROTO`) | TCP and unfragmented UDP: 5-tuple hash. Fragmented UDP and all other protocols: 3-tuple hash. | TCP and unfragmented UDP: 5-tuple. Fragmented UDP, ESP, and GRE: 3-tuple. | TCP and unfragmented UDP: 5-tuple. Fragmented UDP, ESP, and GRE: 3-tuple. |
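Expressed as code, the difference between the two modes reduces to which fields form the table key. This sketch extends the hypothetical `affinity_hash_fields()` function from the Session affinity section and assumes the packet's protocol is connection trackable for the configured affinity.

```python
# Illustrative tracking-key derivation for the two connection tracking modes,
# building on the affinity_hash_fields() sketch shown earlier. Assumes the
# packet's protocol is connection trackable for the configured affinity.

def tracking_key(packet: dict, mode: str, affinity: str) -> tuple:
    if mode == "PER_SESSION":
        # The tracking tuple matches the session affinity hash fields.
        return affinity_hash_fields(packet, affinity)
    # PER_CONNECTION: 5-tuple when port information is present, else 3-tuple,
    # which is the same field set that the NONE affinity hashes.
    return affinity_hash_fields(packet, "NONE")
```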
To learn how to change the connection tracking mode, see Configure a connection tracking policy.
Connection persistence on unhealthy backends
The connection persistence settings control whether an existing connection persists on a selected backend VM or endpoint after that backend becomes unhealthy, as long as the backend remains in the load balancer's configured backend group (in an instance group or a NEG).
The following connection persistence options are available:
- `DEFAULT_FOR_PROTOCOL` (default)
- `NEVER_PERSIST`
- `ALWAYS_PERSIST`
The following table summarizes connection persistence options and how connections persist for different protocols, session affinity options, and tracking modes.
Connection persistence on unhealthy backends option | `PER_CONNECTION` tracking mode | `PER_SESSION` tracking mode |
---|---|---|
`DEFAULT_FOR_PROTOCOL` | TCP: connections persist on unhealthy backends (all session affinities). All other protocols: connections never persist on unhealthy backends. | TCP: connections persist on unhealthy backends if session affinity is `NONE` or `CLIENT_IP_PORT_PROTO`. All other protocols: connections never persist on unhealthy backends. |
`NEVER_PERSIST` | All protocols: connections never persist on unhealthy backends. | All protocols: connections never persist on unhealthy backends. |
`ALWAYS_PERSIST` | TCP: connections persist on unhealthy backends (all session affinities). ESP, GRE, UDP: connections persist on unhealthy backends if session affinity is not `NONE`. ICMP, ICMPv6: not applicable because they aren't connection trackable. This option should be used only for advanced use cases. | Configuration not possible |
TCP connection persistence behavior on unhealthy backends
Whenever a TCP connection with 5-tuple tracking persists on an unhealthy backend:
- If the unhealthy backend continues to respond to packets, the connection continues until it is reset or closed (by either the unhealthy backend or the client).
- If the unhealthy backend sends a TCP reset (RST) packet or does not respond to packets, then the client might retry with a new connection, letting the load balancer select a different, healthy backend. TCP SYN packets always select a new, healthy backend.
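The persistence rules above can be summarized as a decision function. The following sketch is illustrative only; the option, mode, and affinity names mirror the documented settings, and the function assumes validated inputs.

```python
# Sketch of the connection persistence rules as a decision function.
# Illustrative only; it mirrors the table in this section.

def persists_on_unhealthy(option: str, mode: str,
                          protocol: str, affinity: str) -> bool:
    if option == "NEVER_PERSIST":
        return False
    if option == "DEFAULT_FOR_PROTOCOL":
        if protocol != "TCP":
            return False
        # PER_SESSION additionally requires a 5-tuple session affinity.
        return mode == "PER_CONNECTION" or affinity in ("NONE", "CLIENT_IP_PORT_PROTO")
    if option == "ALWAYS_PERSIST":  # valid only with PER_CONNECTION
        if protocol == "TCP":
            return True
        if protocol in ("UDP", "ESP", "GRE"):
            return affinity != "NONE"
        return False  # ICMP and ICMPv6 aren't connection trackable
    raise ValueError(f"unknown option: {option}")
```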
To learn how to change connection persistence behavior, see Configure a connection tracking policy.
Idle timeout
Entries in connection tracking tables expire 60 seconds after the load balancer processes the last packet that matched the entry. This idle timeout value can't be modified.
Weighted load balancing
Weighted load balancing for external passthrough Network Load Balancers uses weight information reported by an HTTP health check. Weighted load balancing requires that you configure all of the following:
- The backend service's load balancer locality policy (`localityLbPolicy`) must be `WEIGHTED_MAGLEV`.
- The backend service must use an HTTP health check.
- Health check responses from each backend VM or backend endpoint must contain a custom HTTP response header. The response header field name is `X-Load-Balancing-Endpoint-Weight`, and valid field values range from `0` to `1000`.
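For example, a backend could report its weight with a minimal health check responder like the following sketch. The listening port, response body, and weight value are hypothetical; only the `X-Load-Balancing-Endpoint-Weight` header name and its `0` to `1000` range come from the requirements above.

```python
# Minimal sketch of a backend answering HTTP health checks with a weight
# header. Port, path handling, and the weight value are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

CURRENT_WEIGHT = 500  # 0 stops new connections; values range up to 1000

class HealthCheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("X-Load-Balancing-Endpoint-Weight", str(CURRENT_WEIGHT))
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthCheckHandler).serve_forever()
```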
If you need to use the same instance group or NEG as a backend for two or more backend services, we recommend the following strategy so that each of the common backend instances or endpoints can provide unique weight information for each backend service:
- Use a unique HTTP health check for each backend service. For example, use a unique `request-path` health check parameter.
- Configure backend instances or endpoints to respond with appropriate weight information for each health check.
When using weighted load balancing, the load balancer ranks backend VMs or endpoints first by whether the weight is greater than zero, and then by health check status. Health check status is determined by the success criteria for HTTP, HTTPS, and HTTP/2 health checks.
Weight | Healthy or unhealthy | Rank |
---|---|---|
Weight greater than zero | Healthy | First choice |
Weight greater than zero | Unhealthy | Second choice |
Weight equals zero | Healthy | Third choice |
Weight equals zero | Unhealthy | Fourth (last) choice |
For more information about how weighted load balancing influences which backends are eligible backends, see the Identify eligible backends and Select an eligible backend steps in the Backend selection and connection tracking section.
Weighted load balancing can be used in the following scenarios:
- If some connections process more data than others, or some connections live longer than others, the backend load distribution can become uneven. By signaling a lower per-instance weight, an instance with high load can reduce its share of new connections while it keeps servicing existing connections.
- If a backend is overloaded, and assigning more connections might break existing connections, the backend can assign a weight of zero to itself. By signaling zero weight, the backend instance stops receiving new connections but continues to service existing ones.
- If a backend is draining existing connections before maintenance, it can assign a weight of zero to itself. By signaling zero weight, the backend instance stops receiving new connections but continues to service existing ones.
For more information, see Configure weighted load balancing.
Connection draining
Connection draining provides a configurable amount of additional time for established connections to persist in the load balancer's connection tracking table when one of the following actions takes place:
- A virtual machine (VM) instance is removed from a backend instance group (this includes abandoning an instance in a backend managed instance group)
- A VM is stopped or deleted (this includes automatic actions like rolling updates or scaling down a backend managed instance group)
- An endpoint is removed from a backend network endpoint group (NEG)
By default, connection draining for the aforementioned actions is disabled. For information about how connection draining is triggered and how to enable connection draining, see Enabling connection draining.
UDP fragmentation
Backend service-based external passthrough Network Load Balancers can process both fragmented and unfragmented UDP packets. If your application uses fragmented UDP packets, keep the following in mind:
- UDP packets might become fragmented before reaching a Google Cloud VPC network.
- Google Cloud VPC networks forward UDP fragments as they arrive (without waiting for all fragments to arrive).
- Non-Google Cloud networks and on-premises network equipment might forward UDP fragments as they arrive, delay fragmented UDP packets until all fragments have arrived, or discard fragmented UDP packets. For details, see the documentation for the network provider or network equipment.
If you expect fragmented UDP packets and need to route them to the same backends, use the following forwarding rule and backend service configuration parameters:
- Forwarding rule configuration: Use only one `UDP` or `L3_DEFAULT` forwarding rule per load-balanced IP address, and configure the forwarding rule to accept traffic on all ports. This ensures that all fragments arrive at the same forwarding rule. Even though fragmented packets (other than the first fragment) lack a destination port, configuring the forwarding rule to process traffic for all ports also configures it to receive UDP fragments that have no port information. To configure all ports, either use the Google Cloud CLI to set `--ports=ALL` or use the API to set `allPorts` to `True`.
- Backend service configuration: Set the backend service's session affinity to `CLIENT_IP` (2-tuple hash) or `CLIENT_IP_PROTO` (3-tuple hash) so that the same backend is selected for UDP packets that include port information and UDP fragments (other than the first fragment) that lack port information. Set the backend service's connection tracking mode to `PER_SESSION` so that the connection tracking table entries are built by using the same 2-tuple or 3-tuple hashes. The sketch after this list illustrates why port-free hashes keep fragments together.
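UDP fragments after the first carry no port information, so only hashes that ignore ports map all fragments of a datagram to the same tuple. The packet dictionaries and helper below are illustrative assumptions, not a Google Cloud API.

```python
# Why 2-tuple or 3-tuple session affinity keeps UDP fragments together:
# fragments after the first have no UDP ports, so only port-free hashes
# are stable across fragments. Illustrative; not a Google Cloud API.

first_fragment = {"src_ip": "198.51.100.7", "dst_ip": "203.0.113.1",
                  "protocol": "UDP", "src_port": 49152, "dst_port": 53}
later_fragment = {"src_ip": "198.51.100.7", "dst_ip": "203.0.113.1",
                  "protocol": "UDP", "src_port": None, "dst_port": None}

def hash_tuple(packet: dict, affinity: str) -> tuple:
    if affinity == "CLIENT_IP":                        # 2-tuple
        return (packet["src_ip"], packet["dst_ip"])
    if affinity == "CLIENT_IP_PROTO":                  # 3-tuple
        return (packet["src_ip"], packet["dst_ip"], packet["protocol"])
    # 5-tuple affinities fall back to 3-tuple when ports are absent:
    if packet["src_port"] is None:
        return (packet["src_ip"], packet["dst_ip"], packet["protocol"])
    return (packet["src_ip"], packet["dst_ip"], packet["protocol"],
            packet["src_port"], packet["dst_port"])

# CLIENT_IP_PROTO hashes both fragments identically; a 5-tuple affinity doesn't.
assert hash_tuple(first_fragment, "CLIENT_IP_PROTO") == hash_tuple(later_fragment, "CLIENT_IP_PROTO")
assert hash_tuple(first_fragment, "NONE") != hash_tuple(later_fragment, "NONE")
```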
Failover
You can configure an external passthrough Network Load Balancer to distribute connections among VM instances or endpoints in primary backends (instance groups or NEGs), and then switch, if needed, to using failover backends. Failover provides yet another method of increasing availability, while also giving you greater control over how to manage your workload when your primary backends aren't healthy.
By default, when you add a backend to an external passthrough Network Load Balancer's backend service, that backend is a primary backend. You can designate a backend to be a failover backend when you add it to the load balancer's backend service, or by editing the backend service later.
For more information about how failover is used for backend selection and connection tracking, see the Identify eligible backends and Create a connection tracking table entry steps in the Backend selection and connection tracking section.
For more information about how failover works, see Failover overview for external passthrough Network Load Balancers.
What's next
- To configure an external passthrough Network Load Balancer with a backend service for TCP or UDP traffic only (supporting IPv4 and IPv6 traffic), see Set up an external passthrough Network Load Balancer with a backend service.
- To configure an external passthrough Network Load Balancer for multiple IP protocols (supporting IPv4 and IPv6 traffic), see Set up an external passthrough Network Load Balancer for multiple IP protocols.
- To configure an external passthrough Network Load Balancer with a zonal NEG backend, see Set up an external passthrough Network Load Balancer with zonal NEGs.
- To learn how to transition an external passthrough Network Load Balancer from a target pool backend to a regional backend service, see Transitioning an external passthrough Network Load Balancer from a target pool to a backend service.