Cloud NAT product interactions
This page describes the important interactions between Cloud NAT and other Google Cloud products.
Routes interactions
A Public NAT gateway can only use routes whose next hops are the default internet gateway. Each Virtual Private Cloud (VPC) network starts with a default route whose destination is 0.0.0.0/0 and whose next hop is the default internet gateway. For important background information, see the routes overview.
The following examples illustrate situations that could cause Public NAT gateways to become inoperable:
- If you create a static route whose next hop is any other type of static route next hop, packets with destination IP addresses matching the route's destination are sent to that next hop instead of to the default internet gateway. For example, if you use virtual machine (VM) instances running NAT gateway, firewall, or proxy software, you create static routes to direct traffic to those VMs as the next hop. The next-hop VMs require external IP addresses, so neither the next-hop VMs themselves nor the traffic that relies on them can use Public NAT gateways.
- If you create a custom static route whose next hop is a Cloud VPN tunnel, Public NAT does not use that route. For example, a static route with destination 0.0.0.0/0 and a next hop Cloud VPN tunnel directs traffic to that tunnel, not to the default internet gateway. Therefore, Public NAT gateways cannot use that route. Similarly, Public NAT gateways cannot use static routes with more specific destinations, including 0.0.0.0/1 and 128.0.0.0/1.
- If an on-premises router advertises a dynamic route to a Cloud Router managing a Cloud VPN tunnel or VLAN attachment, Public NAT gateways cannot use that route. For example, if your on-premises router advertises a dynamic route with destination 0.0.0.0/0, traffic to 0.0.0.0/0 would be directed to the Cloud VPN tunnel or VLAN attachment. This behavior holds true even for more specific destinations, including 0.0.0.0/1 and 128.0.0.0/1.
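To check whether a VPC network still has routes that a Public NAT gateway can use, you can list the routes whose next hop is the default internet gateway. A minimal sketch, assuming a network named my-vpc (a hypothetical name):

```
# List routes in "my-vpc" whose next hop is the default internet gateway.
# A Public NAT gateway can only use routes that appear in this output.
gcloud compute routes list \
    --filter="network=my-vpc AND nextHopGateway:default-internet-gateway"
```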
Private NAT uses the following routes:
- For Network Connectivity Center spokes, Private NAT uses subnet routes and dynamic routes:
- For traffic between two VPC spokes attached to a Network Connectivity Center hub that contains only VPC spokes, Private NAT uses the subnet routes exchanged by the attached VPC spokes. For information about VPC spokes, see VPC spokes overview.
- If a Network Connectivity Center hub contains both VPC spokes and hybrid spokes such as VLAN attachments for Cloud Interconnect, Cloud VPN tunnels, or Router appliance VMs, Private NAT uses the dynamic routes learned by the hybrid spokes through BGP (Preview) and subnet routes exchanged by the attached VPC spokes. For information about hybrid spokes, see Hybrid spokes.
- For Hybrid NAT, Private NAT uses dynamic routes learned by Cloud Router over Cloud Interconnect or Cloud VPN.
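For reference, creating a Private NAT gateway follows the same gcloud pattern as a Public NAT gateway, with the gateway type set to PRIVATE. A minimal sketch, assuming an existing Cloud Router named my-router and a subnet named my-subnet (both hypothetical):

```
# Create a Private NAT gateway on an existing Cloud Router.
# --type=PRIVATE distinguishes it from a Public NAT gateway.
gcloud compute routers nats create my-private-nat \
    --router=my-router \
    --region=us-central1 \
    --type=PRIVATE \
    --nat-custom-subnet-ip-ranges=my-subnet
```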
Private Google Access interaction
A Public NAT gateway never performs NAT for traffic sent to the select external IP addresses for Google APIs and services. Instead, Google Cloud automatically enables Private Google Access for a subnet IP address range when you configure a Public NAT gateway to apply to that subnet range, either primary or secondary. As long as the gateway provides NAT for a subnet's range, Private Google Access is in effect for that range and cannot be disabled manually.
A Public NAT gateway does not change the way that Private Google Access works. For more information, see Private Google Access.
Private NAT gateways don't apply to Private Google Access.
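Note that a subnet's own Private Google Access flag reflects only the manually configured setting; when a Public NAT gateway serves a range, Private Google Access is in effect for that range regardless of the flag's value. A minimal sketch for inspecting the flag, assuming a subnet named my-subnet (hypothetical):

```
# Show the manually configured Private Google Access setting for a subnet.
# Ranges served by a Public NAT gateway have Private Google Access in
# effect even if this field is false.
gcloud compute networks subnets describe my-subnet \
    --region=us-central1 \
    --format="get(privateIpGoogleAccess)"
```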
Shared VPC interaction
Shared VPC enables multiple service projects in a single organization to use a common, Shared VPC network in a host project. To provide NAT for VMs in service projects that use a Shared VPC network, you must create Cloud NAT gateways in the host project.
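A sketch of what this looks like in practice, assuming a host project named host-project and a Shared VPC network named shared-net (both hypothetical):

```
# Create the Cloud Router and NAT gateway in the Shared VPC host project,
# not in the service projects whose VMs need NAT.
gcloud compute routers create shared-router \
    --project=host-project \
    --network=shared-net \
    --region=us-central1

gcloud compute routers nats create shared-nat \
    --project=host-project \
    --router=shared-router \
    --region=us-central1 \
    --nat-all-subnet-ip-ranges \
    --auto-allocate-nat-external-ips
```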
VPC Network Peering interaction
Cloud NAT gateways are associated with subnet IP address ranges in a single region and a single VPC network. A Cloud NAT gateway created in one VPC network cannot provide NAT to VMs in other VPC networks that are connected by using VPC Network Peering, even if the VMs in peered networks are in the same region as the gateway.
GKE interaction
A Public NAT gateway can perform NAT for nodes and Pods in a private cluster, which is a type of VPC-native cluster. The Public NAT gateway must be configured to apply to at least the following subnet IP address ranges for the subnet that your cluster uses:
- Subnet primary IP address range (used by nodes)
- Subnet secondary IP address range used for Pods in the cluster
- Subnet secondary IP address range used for Services in the cluster
The simplest way to provide NAT for an entire private cluster is to configure a Public NAT gateway to apply to all the subnet IP address ranges of the cluster's subnet.
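A minimal sketch of that configuration, assuming a Cloud Router named my-router already exists in the cluster's region (hypothetical names):

```
# Apply NAT to all IP address ranges (primary and secondary) of all
# subnets in the region, which covers the cluster's node, Pod, and
# Service ranges.
gcloud compute routers nats create gke-nat \
    --router=my-router \
    --region=us-central1 \
    --nat-all-subnet-ip-ranges \
    --auto-allocate-nat-external-ips
```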
For background information about how VPC-native clusters utilize subnet IP address ranges, see IP ranges for VPC-native clusters.
When a Public NAT gateway is configured to provide NAT for a private cluster, it reserves NAT source IP addresses and source ports for each node VM. Those NAT source IP addresses and source ports are usable by Pods because Pod IP addresses are implemented as alias IP ranges assigned to each node VM.
Google Kubernetes Engine (GKE) VPC-native clusters always assign each node an alias IP range that contains more than one IP address (netmask smaller than /32).
If static port allocation is configured, the Public NAT port reservation procedure reserves at least 1,024 source ports per node. If the specified value for minimum ports per VM is greater than 1,024, that value is used.
If dynamic port allocation is configured, the specified value for minimum ports per VM is initially allocated per node. The number of allocated ports subsequently varies between the specified values for minimum and maximum ports per VM, based on demand.
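A sketch of each allocation mode with illustrative values, reusing the hypothetical gke-nat gateway from the earlier example:

```
# Static port allocation: reserve a fixed number of source ports per
# node VM (values greater than 1,024 override the default reservation).
gcloud compute routers nats update gke-nat \
    --router=my-router \
    --region=us-central1 \
    --min-ports-per-vm=2048

# Dynamic port allocation: allocation starts at the minimum and grows
# toward the maximum per node VM based on demand.
gcloud compute routers nats update gke-nat \
    --router=my-router \
    --region=us-central1 \
    --enable-dynamic-port-allocation \
    --min-ports-per-vm=64 \
    --max-ports-per-vm=4096
```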
For information about Pod IP address ranges and VPC-native clusters, see Subnet secondary IP address range for Pods.
Independent of Public NAT, Google Kubernetes Engine performs source network address translation (source NAT or SNAT) by using software running on each node when Pods send packets to the internet, unless you've changed the cluster's IP masquerade configuration. If you need granular control over egress traffic from Pods, you can use a network policy.
Under certain circumstances, Public NAT can be useful to non-private VPC-native clusters as well. Because the nodes in a non-private cluster have external IP addresses, packets sent from the node's primary internal IP address are never processed by Cloud NAT. However, if both of the following are true, packets sent from Pods in a non-private cluster can be processed by a Public NAT gateway:
- For VPC-native clusters, the Public NAT gateway is configured to apply to the secondary IP address range for the cluster's Pods.
- The cluster's IP masquerade configuration is not configured to perform SNAT within the cluster for packets sent from Pods to the internet.
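A minimal sketch of applying NAT to just the Pod secondary range of a non-private cluster's subnet, assuming a subnet named gke-subnet with a secondary range named pods (hypothetical names):

```
# Apply NAT only to the "pods" secondary range of "gke-subnet".
# Packets sent from the nodes' primary IP addresses are never
# processed by Cloud NAT because those nodes have external IPs.
gcloud compute routers nats create pod-nat \
    --router=my-router \
    --region=us-central1 \
    --nat-custom-subnet-ip-ranges=gke-subnet:pods \
    --auto-allocate-nat-external-ips
```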
The following example shows the interaction of Public NAT with GKE:
In this example, you want your containers to be NAT translated. To enable NAT for all the containers and the GKE node, you must choose all the IP address ranges of Subnet 1 as the NAT candidate:
- Subnet primary IP address range: 10.240.0.0/24
- Subnet secondary IP address range used for Pods: 10.0.0.0/16
It is not possible to enable NAT for only Pod1 or Pod2.
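As a sketch, assuming Subnet 1 is named subnet-1 and its Pod range is named pod-range (hypothetical names), the matching gateway configuration would look like this:

```
# Include both the primary range (10.240.0.0/24, used by nodes) and the
# Pod secondary range (10.0.0.0/16) of subnet-1. NAT applies to whole
# ranges, so individual Pods such as Pod1 or Pod2 can't be singled out.
gcloud compute routers nats create example-nat \
    --router=my-router \
    --region=us-central1 \
    --nat-custom-subnet-ip-ranges=subnet-1,subnet-1:pod-range \
    --auto-allocate-nat-external-ips
```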
A Private NAT gateway can perform NAT for nodes and Pods in a private cluster and in a non-private cluster. The Private NAT gateway automatically applies to all the subnet IP address ranges for the private subnet that your cluster uses.
Direct VPC egress interactions
Public NAT gateways can perform NAT for Cloud Run services or jobs that are configured with Direct VPC egress. To enable Cloud Run to use a Public NAT gateway, configure your Public NAT gateway with the following settings:
- To configure which subnets and subnet IP address ranges associated with your Cloud Run instances can use the Public NAT gateway, specify the --nat-all-subnet-ip-ranges or --nat-custom-subnet-ip-ranges flag:
  - To let all IP address ranges of all subnets in the region use the Public NAT gateway, specify the --nat-all-subnet-ip-ranges flag.
  - To let only specific subnets and subnet IP address ranges use the Public NAT gateway, specify them with the --nat-custom-subnet-ip-ranges flag.
- Set the value of the --endpoint-types flag to ENDPOINT_TYPE_VM. This value ensures that only VMs and Direct VPC egress VM endpoints can use the Public NAT gateway.
- If you use static port allocation, set the value of the --min-ports-per-vm flag to four times the number of ports required by a single Cloud Run instance.
- If you use manual NAT IP address allocation, assign enough IP addresses to your Public NAT gateway to account for the sum of the number of Google Cloud instances and the number of Cloud Run instances deployed in your VPC network.
In addition to the gateway configuration, to send egress traffic from a Cloud Run service or job, you must set the --vpc-egress flag to all-traffic when you create the service or job.
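A minimal sketch of deploying such a service, assuming a service named my-service, the hypothetical my-vpc network and run-subnet from earlier, and Google's public sample container image:

```
# Route all outbound traffic from the service through the VPC network
# so that the Public NAT gateway can process it.
gcloud run deploy my-service \
    --image=us-docker.pkg.dev/cloudrun/container/hello \
    --region=us-central1 \
    --network=my-vpc \
    --subnet=run-subnet \
    --vpc-egress=all-traffic
```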
If you have configured a Cloud Run service or job that has the --vpc-egress flag set to private-ranges-only, then the service or job sends traffic only to internal IP addresses. You don't need a Public NAT gateway for routing traffic to internal destinations.
To prevent Cloud Run services or jobs that have the --vpc-egress flag set to private-ranges-only from using a Public NAT gateway, do the following:
- Configure the Public NAT gateway with the --nat-custom-subnet-ip-ranges flag.
- Set the value of the --nat-custom-subnet-ip-ranges flag to the subnet names where you have deployed Cloud Run services or jobs with the --vpc-egress flag set to all-traffic.
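In other words, enumerate only the subnets that carry all-traffic workloads. A sketch, reusing the hypothetical run-nat gateway and run-subnet from the earlier example:

```
# Limit NAT to the subnet that hosts all-traffic Cloud Run workloads;
# subnets with private-ranges-only workloads are simply not listed.
gcloud compute routers nats update run-nat \
    --router=my-router \
    --region=us-central1 \
    --nat-custom-subnet-ip-ranges=run-subnet
```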
The following limitations apply to Cloud Run services and jobs that use Public NAT gateways:
- Cloud NAT metrics for Direct VPC egress endpoints aren't exported to Cloud Monitoring.
- Cloud NAT logs for Direct VPC egress don't display the name of a Cloud Run service, revision, or job.
You can't use Private NAT gateways with Direct VPC egress endpoints.
Connectivity Tests interactions
You can use Connectivity Tests to check connectivity between network endpoints that use Cloud NAT configurations. You can run Connectivity Tests on networks that use either Public NAT gateways or Private NAT gateways, or both.
View the NAT configuration details in the Configuration analysis trace pane on the Connectivity test details page.
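A minimal sketch of creating such a test, assuming a source VM named vm-1 in project my-project and an external destination address (hypothetical values):

```
# Test connectivity from a VM to an external address. The configuration
# analysis trace in the results shows any Cloud NAT gateway on the path.
gcloud network-management connectivity-tests create nat-test \
    --source-instance=projects/my-project/zones/us-central1-a/instances/vm-1 \
    --destination-ip-address=203.0.113.10 \
    --destination-port=443 \
    --protocol=TCP
```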
Cloud Load Balancing interactions
Google Cloud regional internal Application Load Balancers and regional external Application Load Balancers communicate with multiple regional internet network endpoint group (NEG) backends. By configuring Cloud NAT gateways for the regional internet NEGs, you can allocate your own set of external IP address ranges from which the Google Cloud traffic originates. Health checks and data plane traffic are sourced from the NAT IP addresses that you allocate.
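A sketch of a gateway scoped to these proxy-based load balancer paths, assuming a router named my-router and a reserved address named neg-nat-ip (hypothetical names); treat the ENDPOINT_TYPE_MANAGED_PROXY_LB endpoint type as an assumption to verify against the current gcloud reference:

```
# Reserve a static external IP address to serve as the NAT source IP.
gcloud compute addresses create neg-nat-ip --region=us-central1

# NAT gateway for regional load balancers that reach internet NEG
# backends; health checks and data plane traffic then originate from
# the reserved address.
gcloud compute routers nats create neg-nat \
    --router=my-router \
    --region=us-central1 \
    --nat-all-subnet-ip-ranges \
    --endpoint-types=ENDPOINT_TYPE_MANAGED_PROXY_LB \
    --nat-external-ip-pool=neg-nat-ip
```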
Other Google Cloud external load balancers and health check systems communicate with VMs by using special routing paths. Backend VMs don't require external IP addresses, nor does a Cloud NAT gateway manage communication for load balancers and health checks. For more information, see Cloud Load Balancing overview and Health checks overview.
What's next
- Learn about Cloud NAT addresses and ports.
- Set up a Public NAT gateway.
- Configure Cloud NAT rules.
- Set up a Private NAT gateway.