Create a VPC-native cluster


This page explains how to configure VPC-native clusters in Google Kubernetes Engine (GKE).

To learn more about the benefits and requirements of VPC-native clusters, see the overview for VPC-native clusters.

For GKE Autopilot clusters, VPC-native networks are enabled by default and cannot be overridden.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.

Limitations

  • You cannot convert a VPC-native cluster into a routes-based cluster, and you cannot convert a routes-based cluster into a VPC-native cluster.
  • VPC-native clusters require VPC networks. Legacy networks are not supported.
  • As with any GKE cluster, Service (ClusterIP) addresses are only available from within the cluster. If you need to access a Kubernetes Service from VM instances outside of the cluster, but within the cluster's VPC network and region, create an internal passthrough Network Load Balancer (see the example after this list).
  • If you use all of the Pod IP addresses in a subnet, you cannot replace the subnet's secondary IP address range without putting the cluster into an unstable state. However, you can create additional Pod IP address ranges using discontiguous multi-Pod CIDR.
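
For example, the following manifest is a minimal sketch that exposes Pods labeled app: my-app (a hypothetical label) through an internal passthrough Network Load Balancer. On GKE, the networking.gke.io/load-balancer-type annotation on a LoadBalancer Service requests an internal rather than external load balancer:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal-lb    # hypothetical name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app               # assumes Pods carry this label
  ports:
  - port: 80
    targetPort: 8080          # assumes the container listens on 8080
EOF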

Create a cluster

This section shows you how to complete the following tasks at cluster creation time:

  • Create a cluster and subnet simultaneously.
  • Create a cluster in an existing subnet.
  • Create a cluster and select the control plane IP address range.
  • Create a cluster with dual-stack networking in a new subnet (available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later).
  • Create a dual-stack cluster and a dual-stack subnet simultaneously (available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later).

Create a cluster and subnet simultaneously

The following directions demonstrate how to create a VPC-native GKE cluster and subnet at the same time. The secondary range assignment method is managed by GKE when you perform these two steps with one command.

gcloud

To create a VPC-native cluster and subnet simultaneously:

gcloud container clusters create CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --enable-ip-alias \
    --create-subnetwork name=SUBNET_NAME,range=NODE_IP_RANGE \
    --cluster-ipv4-cidr=POD_IP_RANGE \
    --services-ipv4-cidr=SERVICES_IP_RANGE

Replace the following:

  • CLUSTER_NAME: the name of the GKE cluster.
  • COMPUTE_LOCATION: the Compute Engine location for the cluster.
  • SUBNET_NAME: the name of the subnet to create. The subnet's region is the same region as the cluster (or the region containing the zonal cluster). Use an empty string (name="") if you want GKE to generate a name for you.
  • NODE_IP_RANGE: an IP address range in CIDR notation, such as 10.5.0.0/20, or the size of a CIDR block's subnet mask, such as /20. This is used to create the subnet's primary IP address range for nodes. If omitted, GKE chooses an available IP range in the VPC with a size of /20.
  • POD_IP_RANGE: an IP address range in CIDR notation, such as 10.0.0.0/14, or the size of a CIDR block's subnet mask, such as /14. This is used to create the subnet's secondary IP address range for Pods. If omitted, GKE uses a randomly chosen /14 range containing 2^18 addresses. The automatically chosen range is randomly selected from 10.0.0.0/8 (a range of 2^24 addresses) and does not include IP address ranges allocated to VMs, existing routes, or ranges allocated to other clusters. The automatically chosen range might conflict with reserved IP addresses, dynamic routes, or routes within VPCs that peer with this cluster. If you use any of these, you should specify --cluster-ipv4-cidr to prevent conflicts.
  • SERVICES_IP_RANGE: an IP address range in CIDR notation, such as 10.4.0.0/19, or the size of a CIDR block's subnet mask, such as /19. This is used to create the subnet's secondary IP address range for Services. If omitted, GKE uses /20, the default Services IP address range size.
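
For example, the following command (all names and ranges are hypothetical) creates a cluster and subnet with explicitly sized ranges: a /20 for nodes, a /14 for Pods, and a /19 for Services:

gcloud container clusters create example-cluster \
    --location=us-central1 \
    --enable-ip-alias \
    --create-subnetwork name=example-subnet,range=10.5.0.0/20 \
    --cluster-ipv4-cidr=10.0.0.0/14 \
    --services-ipv4-cidr=10.4.0.0/19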

Console

You cannot create a cluster and subnet simultaneously using the Google Cloud console. Instead, first create a subnet, and then create the cluster in the existing subnet.

API

To create a VPC-native cluster, define an IPAllocationPolicy object in your cluster resource:

{
  "name": CLUSTER_NAME,
  "description": DESCRIPTION,
  ...
  "ipAllocationPolicy": {
    "useIpAliases": true,
    "createSubnetwork": true,
    "subnetworkName": SUBNET_NAME
  },
  ...
}

The createSubnetwork field automatically creates and provisions a subnetwork for the cluster. The subnetworkName field is optional; if left empty, a name is automatically chosen for the subnetwork.
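
If you call the API directly, the cluster definition is wrapped in a CreateClusterRequest and sent as a POST to the clusters collection. The following sketch assumes a hypothetical project my-project and location us-central1-a, and authenticates with an access token from the gcloud CLI:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
          "cluster": {
            "name": "example-cluster",
            "ipAllocationPolicy": {
              "useIpAliases": true,
              "createSubnetwork": true
            }
          }
        }' \
    "https://container.googleapis.com/v1/projects/my-project/locations/us-central1-a/clusters"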

Create a cluster in an existing subnet

The following instructions demonstrate how to create a VPC-native GKE cluster in an existing subnet with your choice of secondary range assignment method.

gcloud

  • To use a secondary range assignment method of managed by GKE:

    gcloud container clusters create CLUSTER_NAME \
        --location=COMPUTE_LOCATION \
        --enable-ip-alias \
        --subnetwork=SUBNET_NAME \
        --cluster-ipv4-cidr=POD_IP_RANGE \
        --services-ipv4-cidr=SERVICES_IP_RANGE
    
  • To use a secondary range assignment method of user-managed:

    gcloud container clusters create CLUSTER_NAME \
        --location=COMPUTE_LOCATION \
        --enable-ip-alias \
        --subnetwork=SUBNET_NAME \
        --cluster-secondary-range-name=SECONDARY_RANGE_PODS \
        --services-secondary-range-name=SECONDARY_RANGE_SERVICES
    

Replace the following:

  • CLUSTER_NAME: the name of the GKE cluster.
  • COMPUTE_LOCATION: the Compute Engine location for the cluster.
  • SUBNET_NAME: the name of an existing subnet. The subnet's primary IP address range is used for nodes. The subnet must exist in the same region as the one used by the cluster. If omitted, GKE attempts to use a subnet in the default VPC network in the cluster's region.
  • If the secondary range assignment method is managed by GKE:
    • POD_IP_RANGE: an IP address range in CIDR notation, such as 10.0.0.0/14, or the size of a CIDR block's subnet mask, such as /14. This is used to create the subnet's secondary IP address range for Pods. If you omit the --cluster-ipv4-cidr option, GKE chooses a /14 range (2^18 addresses) automatically. The automatically chosen range is randomly selected from 10.0.0.0/8 (a range of 2^24 addresses) and won't include IP address ranges allocated to VMs, existing routes, or ranges allocated to other clusters. The automatically chosen range might conflict with reserved IP addresses, dynamic routes, or routes within VPCs that peer with this cluster. If you use any of these, you should specify --cluster-ipv4-cidr to prevent conflicts.
    • SERVICES_IP_RANGE: an IP address range in CIDR notation (for example, 10.4.0.0/19) or the size of a CIDR block's subnet mask (for example, /19). This is used to create the subnet's secondary IP address range for Services.
  • If the secondary range assignment method is user-managed:
    • SECONDARY_RANGE_PODS: the name of an existing secondary IP address range in the specified SUBNET_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Pods.
    • SECONDARY_RANGE_SERVICES: the name of an existing secondary IP address range in the specified SUBNET_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Services.
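
For the user-managed method, both secondary ranges must exist on the subnet before you create the cluster. As a sketch (names and ranges are hypothetical), the following command creates a subnet with two named secondary ranges that you could then pass as SECONDARY_RANGE_PODS and SECONDARY_RANGE_SERVICES:

gcloud compute networks subnets create example-subnet \
    --network=example-network \
    --region=us-central1 \
    --range=10.5.0.0/20 \
    --secondary-range=pods-range=10.0.0.0/14,services-range=10.4.0.0/19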

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create then in the Standard or Autopilot section, click Configure.

  3. From the navigation pane, under Cluster, click Networking.

  4. In the Network drop-down list, select a VPC.

  5. In the Node subnet drop-down list, select a subnet for the cluster.

  6. Ensure the Enable VPC-native traffic routing (uses alias IP) checkbox is selected.

  7. Select the Automatically create secondary ranges checkbox if you want the secondary range assignment method to be managed by GKE. Clear this checkbox if you have already created secondary ranges for the chosen subnet and would like the secondary range assignment method to be user-managed.

  8. In the Pod address range field, enter a pod range, such as 10.0.0.0/14.

  9. In the Service address range field, enter a service range, such as 10.4.0.0/19.

  10. Configure your cluster.

  11. Click Create.

Terraform

You can create a VPC-native cluster with Terraform using a Terraform module.

For example, you can add the following block to your Terraform configuration:

module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google"
  version = "~> 12.0"

  project_id        = "PROJECT_ID"
  name              = "CLUSTER_NAME"
  region            = "COMPUTE_LOCATION"
  network           = "NETWORK_NAME"
  subnetwork        = "SUBNET_NAME"
  ip_range_pods     = "SECONDARY_RANGE_PODS"
  ip_range_services = "SECONDARY_RANGE_SERVICES"
}

Replace the following:

  • PROJECT_ID: your project ID.
  • CLUSTER_NAME: the name of the GKE cluster.
  • COMPUTE_LOCATION: the Compute Engine location for the cluster. For Terraform, the Compute Engine region.
  • NETWORK_NAME: the name of an existing network.
  • SUBNET_NAME: the name of an existing subnet. The subnet's primary IP address range is used for nodes. The subnet must exist in the same region as the one used by the cluster.
  • SECONDARY_RANGE_PODS: the name of an existing secondary IP address range in the specified SUBNET_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Pods.
  • SECONDARY_RANGE_SERVICES: the name of an existing secondary IP address range in the specified SUBNET_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Services.
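
After you add the module block and replace the values, a standard Terraform workflow applies the configuration:

terraform init    # download the module and the Google provider
terraform plan    # preview the resources Terraform will create
terraform apply   # create the cluster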

API

When you create a VPC-native cluster, you define an IPAllocationPolicy object. You can reference existing subnet secondary IP address ranges or you can specify CIDR blocks. Reference existing subnet secondary IP address ranges to create a cluster whose secondary range assignment method is user-managed. Provide CIDR blocks if you want the range assignment method to be managed by GKE.

{
  "name": CLUSTER_NAME,
  "description": DESCRIPTION,
  ...
  "ipAllocationPolicy": {
    "useIpAliases": true,
    "clusterIpv4CidrBlock"      : string,
    "servicesIpv4CidrBlock"     : string,
    "clusterSecondaryRangeName" : string,
    "servicesSecondaryRangeName": string,

  },
  ...
}

This definition includes the following fields:

  • "clusterIpv4CidrBlock": the CIDR range for Pods. This determines the size of the secondary range for Pods, and can be in CIDR notation, such as 10.0.0.0/14. An empty space with the given size is chosen from the available space in your VPC. If left blank, a valid range is found and created with a default size.
  • "servicesIpv4CidrBlock": the CIDR range for Services. See description of "clusterIpv4CidrBlock".
  • "clusterSecondaryRangeName": the name of the secondary range for Pods. The secondary range must already exist and belong to the subnetwork associated with the cluster.
  • "serviceSecondaryRangeName": the name of the secondary range for Services. The secondary range must already exist and belong to the subnetwork associated with the cluster.

Create a cluster and select the control plane IP address range

By default, clusters with Private Service Connect use the primary subnet range to provision the internal IP address assigned to the control plane endpoint. You can override this default setting by selecting a different subnet range at cluster creation time only. The following sections show you how to create a cluster with Private Service Connect and override the subnet range.

gcloud

Create a cluster with Private Service Connect defined as public:

gcloud container clusters create CLUSTER_NAME \
    --private-endpoint-subnetwork=SUBNET_NAME \
    --location=COMPUTE_LOCATION

Add the --enable-private-nodes flag to create the Private Service Connect cluster as private.

Replace the following:

  • CLUSTER_NAME: the name of the GKE cluster.
  • SUBNET_NAME: the name of an existing subnet.
  • COMPUTE_LOCATION: the Compute Engine location for the cluster.

GKE creates a cluster with Private Service Connect.
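
To confirm which endpoints were provisioned for the control plane, you can inspect the cluster's privateClusterConfig block; the --format filter shown here is one way to narrow the output:

gcloud container clusters describe CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --format="yaml(privateClusterConfig)"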

Create a cluster defined as private:

In GKE version 1.29 and later, you can create a cluster with Private Service Connect. When you create a subnet in a private cluster using Private Service Connect, the --master-ipv4-cidr parameter is optional. To create a private cluster:

gcloud container clusters create CLUSTER_NAME --enable-ip-alias \
    --enable-private-nodes  \
    --private-endpoint-subnetwork=SUBNET_NAME \
    --region=COMPUTE_REGION

Replace the following:

  • CLUSTER_NAME: the name of the GKE cluster.
  • SUBNET_NAME: the name of an existing subnet. If you don't provide a value for the --private-endpoint-subnetwork flag but you use the --master-ipv4-cidr flag, GKE creates a new subnet that uses the values that you defined in --master-ipv4-cidr. GKE uses the new subnet to provision the internal IP address for the control plane.
  • COMPUTE_REGION: the Compute Engine region for the cluster.

Console

Create a cluster defined as public:

To assign a subnet to the control plane of a new cluster, you must add a subnet first. Complete the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. In the Standard or Autopilot section, click Configure.

  4. For the Name, enter your cluster name.

  5. For Standard clusters, from the navigation pane, under Cluster, click Networking.

  6. In the IPv4 network access section, do the following:

    1. To create a GKE cluster as public, select Public cluster.
    2. To create a GKE cluster as private, select Private cluster.

    In both cases, you can later change the cluster isolation mode when editing the cluster configuration.

  7. In the Advanced networking options section, select the Override control plane's default private endpoint subnet checkbox.

  8. In the Private endpoint subnet list, select the subnet that you created.

  9. Click Done. Add additional authorized networks as needed.

Create a cluster with dual-stack networking

You can create a cluster with IPv4/IPv6 dual-stack networking on a new or existing dual-stack subnet. Dual-stack subnets are available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later. Dual-stack subnets are not supported with Windows Server node pools.

In this section, you create a dual-stack subnet first and use this subnet to create a cluster.

  1. To create a dual-stack subnet, run the following command:

    gcloud compute networks subnets create SUBNET_NAME \
        --stack-type=ipv4-ipv6 \
        --ipv6-access-type=ACCESS_TYPE \
        --network=NETWORK_NAME \
        --range=PRIMARY_RANGE \
        --region=COMPUTE_REGION
    

    Replace the following:

    • SUBNET_NAME: the name of the subnet that you choose.
    • ACCESS_TYPE: the routability to the public internet. Use INTERNAL for private clusters and EXTERNAL for public clusters. If --ipv6-access-type is not specified, the default access type is EXTERNAL.
    • NETWORK_NAME: the name of the network that will contain the new subnet. This network must be a custom mode VPC network. For more information, see how to switch a VPC network from auto mode to custom mode.
    • PRIMARY_RANGE: the primary IPv4 IP address range for the new subnet, in CIDR notation. For more information, see Subnet ranges.
    • COMPUTE_REGION: the compute region for the cluster.
  2. To create a cluster with a dual-stack subnet, either use the gcloud CLI or the Google Cloud console:

gcloud

  • For Autopilot clusters, run the following command:

      gcloud container clusters create-auto CLUSTER_NAME \
          --location=COMPUTE_LOCATION \
          --network=NETWORK_NAME \
          --subnetwork=SUBNET_NAME
    

    Replace the following:

    • CLUSTER_NAME: the name of your new Autopilot cluster.
    • COMPUTE_LOCATION: the Compute Engine location for the cluster.
    • NETWORK_NAME: the name of a VPC network that contains the subnet. This VPC network must be a custom mode VPC network. For more information, see how to switch a VPC network from auto mode to custom mode.
    • SUBNET_NAME: the name of the dual-stack subnet.

      GKE Autopilot clusters default to a dual-stack cluster when you use a dual-stack subnet. After cluster creation, you can update the Autopilot cluster to be IPv4-only.

  • For Standard clusters, run the following command:

    gcloud container clusters create CLUSTER_NAME \
        --enable-ip-alias \
        --enable-dataplane-v2 \
        --stack-type=ipv4-ipv6 \
        --network=NETWORK_NAME \
        --subnetwork=SUBNET_NAME \
        --location=COMPUTE_LOCATION
    

    Replace the following:

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. In the Standard or Autopilot section, click Configure.

  4. Configure your cluster as needed.

  5. From the navigation pane, under Cluster, click Networking.

  6. In the Network list, select the name of your network.

  7. In the Node subnet list, select the name of your dual-stack subnet.

  8. For Standard clusters, select the IPv4 and IPv6 (dual stack) radio button. This option is available only if you selected a dual-stack subnet.

    Autopilot clusters default to a dual-stack cluster when you use a dual-stack subnet.

  9. Click Create.

Create a dual-stack cluster and a subnet simultaneously

You can create a subnet and a dual-stack cluster simultaneously. GKE creates an IPv6 subnet and assigns an external IPv6 primary range to the subnet.

If using Shared VPC, you cannot simultaneously create the cluster and subnet. Instead, a Network Admin in the Shared VPC host project must create the dual-stack subnet first.

  • For Autopilot clusters, run the following command:

    gcloud container clusters create-auto CLUSTER_NAME \
        --location=COMPUTE_LOCATION \
        --network=NETWORK_NAME \
        --create-subnetwork name=SUBNET_NAME
    

    Replace the following:

    • CLUSTER_NAME: the name of your new Autopilot cluster.
    • COMPUTE_LOCATION: the Compute Engine location for the cluster.
    • NETWORK_NAME: the name of a VPC network that contains the subnet. This VPC network must be a custom mode VPC network that uses Unique Local IPv6 Unicast Addresses (ULA). For more information, see how to switch a VPC network from auto mode to custom mode.
    • SUBNET_NAME: the name of the new subnet. GKE can create the subnet based on your organization policies:
      • If your organization policies allow dual-stack, and the network is custom mode, GKE creates a dual-stack subnet and assigns an external IPv6 primary range to the subnet.
      • If your organization policies don't allow dual-stack, or if the network is in auto mode, GKE creates a single stack (IPv4) subnet.
  • For Standard clusters, run the following command:

    gcloud container clusters create CLUSTER_NAME \
        --enable-ip-alias \
        --stack-type=ipv4-ipv6 \
        --ipv6-access-type=ACCESS_TYPE \
        --network=NETWORK_NAME \
        --create-subnetwork name=SUBNET_NAME,range=PRIMARY_RANGE \
        --location=COMPUTE_LOCATION
    

    Replace the following:

    • CLUSTER_NAME: the name of the new cluster that you choose.
    • ACCESS_TYPE: the routability to the public internet. Use INTERNAL for private clusters and EXTERNAL for public clusters. If --ipv6-access-type is not specified, the default access type is EXTERNAL.
    • NETWORK_NAME: the name of the network that will contain the new subnet. This network must be a custom mode VPC network. For more information, see how to switch a VPC network from auto mode to custom mode.
    • SUBNET_NAME: the name of the new subnet that you choose.
    • PRIMARY_RANGE: the primary IPv4 address range for the new subnet, in CIDR notation. For more information, see Subnet ranges.
    • COMPUTE_LOCATION: the Compute Engine location for the cluster.

Update the stack type

You can change the stack type of an existing cluster or update an existing subnet to a dual-stack subnet.

Update the stack type on an existing cluster

Before you change the stack type on an existing cluster, consider the following limitations:

  • Changing the stack type is supported in new GKE clusters running version 1.25 or later. GKE clusters that were upgraded from version 1.24 to version 1.25 or 1.26 might get validation errors when enabling dual-stack networking. In case of errors, contact the Google Cloud support team.

  • Changing the stack type is a disruptive operation because GKE restarts components in both the control plane and nodes.

  • GKE respects your configured maintenance windows when recreating nodes. This means that the new stack type is not operational on the cluster until the next maintenance window occurs. If you prefer not to wait, you can manually upgrade the node pool by setting the --cluster-version flag to the same GKE version that the control plane is already running. You must use the gcloud CLI if you use this workaround. For more information, see caveats for maintenance windows.

  • Changing the stack type does not automatically change the IP family of existing Services. The following conditions apply:

    • If you change a single-stack cluster to dual-stack, the existing Services remain single stack.
    • If you change a dual-stack cluster to single stack, existing Services with IPv6 addresses enter an error state. Delete the Service and create one with the correct ipFamilies, as shown in the sketch after this list. To learn more, see an example of how to set up a Deployment.
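
As a minimal sketch of that recovery step (the name web and its selector are hypothetical), the following manifest recreates a Service whose ipFamilies match an IPv4-only cluster:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web          # hypothetical name
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
  - IPv4             # match the cluster's new stack type
  selector:
    app: web         # assumes Pods carry this label
  ports:
  - port: 80
EOF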

To update an existing VPC-native cluster, you can use gcloud CLI or the Google Cloud console:

gcloud

Run the following command:

  gcloud container clusters update CLUSTER_NAME \
      --stack-type=STACK_TYPE \
      --location=COMPUTE_LOCATION

Replace the following:

  • CLUSTER_NAME: the name of the cluster you want to update.
  • STACK_TYPE: the stack type. Replace with one of the following values:
    • ipv4: to update a dual-stack cluster to IPv4 only cluster. GKE uses the primary IPv4 address range of the cluster's subnet.
    • ipv4-ipv6: to update an existing IPv4 cluster to dual-stack. You can only change a cluster to dual-stack if the underlying subnet supports dual-stack. To learn more, see Update an existing subnet to a dual-stack subnet.
  • COMPUTE_LOCATION: the Compute Engine location for the cluster.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Next to the cluster you want to edit, click Actions, then click Edit.

  3. In the Networking section, next to Stack type, click Edit.

  4. In the Edit stack type dialog, select the checkbox for the cluster stack type you need.

  5. Click Save Changes.

Update an existing subnet to a dual-stack subnet

Updating a subnet to dual-stack is available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later. Updating a subnet does not affect any existing IPv4 clusters in the subnet. To update an existing subnet to a dual-stack subnet, run the following command:

gcloud compute networks subnets update SUBNET_NAME \
    --stack-type=ipv4-ipv6 \
    --ipv6-access-type=ACCESS_TYPE \
    --region=COMPUTE_REGION

Replace the following:

  • SUBNET_NAME: the name of the subnet.
  • ACCESS_TYPE: the routability to the public internet. Use INTERNAL for private clusters and EXTERNAL for public clusters. If --ipv6-access-type is not specified, the default access type is EXTERNAL.
  • COMPUTE_REGION: the compute region for the cluster.
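
To confirm that the update took effect, describe the subnet; the stackType and ipv6AccessType fields report the dual-stack configuration:

gcloud compute networks subnets describe SUBNET_NAME \
    --region=COMPUTE_REGION \
    --format="value(stackType,ipv6AccessType)"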

Verify the stack type, Pod, and Service IP address ranges

After you create a VPC-native cluster, you can verify its Pod and Service ranges.

gcloud

To verify the cluster, run the following command:

gcloud container clusters describe CLUSTER_NAME \
    --location=COMPUTE_LOCATION

The output has an ipAllocationPolicy block. The stackType field describes the type of network definition. For each type, you can see the following network information:

  • IPv4 network information:

    • clusterIpv4Cidr is the secondary range for Pods.
    • servicesIpv4Cidr is the secondary range for Services.
  • IPv6 network information (if a cluster has dual-stack networking):

    • ipv6AccessType: The routability to the public internet. INTERNAL for private IPv6 addresses and EXTERNAL for public IPv6 addresses.
    • subnetIpv6CidrBlock: The secondary IPv6 address range for the new subnet.
    • servicesIpv6CidrBlock: The address range assigned for the IPv6 Services on the dual-stack cluster.
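
To print only the relevant block instead of the full cluster description, you can filter the output with --format:

gcloud container clusters describe CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --format="yaml(ipAllocationPolicy)"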

Console

To verify the cluster, perform the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to inspect.

The secondary ranges are displayed in the Networking section:

  • Pod address range is the secondary range for Pods
  • Service address range is the secondary range for Services

Advanced configuration for internal IP addresses

The following sections show how to use non-RFC 1918 private IP address ranges and how to enable privately used public IP address ranges.

Use non-RFC 1918 IP address ranges

GKE clusters can use IP address ranges outside of the RFC 1918 ranges for nodes, Pods, and Services. See valid ranges in the VPC network documentation for a list of non-RFC 1918 private ranges that can be used as internal IP addresses for subnet ranges.

This feature is not supported with Windows Server node pools.

Non-RFC 1918 IP address ranges are compatible with both private clusters and non-private clusters.

Non-RFC 1918 private ranges are subnet ranges — you can use them exclusively or in conjunction with RFC 1918 subnet ranges. Nodes, Pods, and Services continue to use subnet ranges as described in IP ranges for VPC-native clusters. If you use non-RFC 1918 ranges, keep the following in mind:

  • Subnet ranges, even those using non-RFC 1918 ranges, must be assigned manually or by GKE before the cluster's nodes are created. You cannot switch to or cease using non-RFC 1918 subnet ranges unless you replace the cluster.

  • Internal passthrough Network Load Balancers only use IP addresses from the subnet's primary IP address range. To create an internal passthrough Network Load Balancer with a non-RFC 1918 address, your subnet's primary IP address range must be non-RFC 1918.

Destinations outside your cluster might have difficulty receiving traffic from private, non-RFC 1918 ranges. For example, the class E range 240.0.0.0/4 (reserved by RFC 1112) is rejected by many operating systems and network devices. If a destination outside of your cluster cannot process packets whose sources are private IP addresses outside of the RFC 1918 range, you can do the following:

  • Use an RFC 1918 range for the subnet's primary IP address range. This way, nodes in the cluster use RFC 1918 addresses.
  • Ensure that your cluster is running the IP masquerade agent and that the destinations are not in the nonMasqueradeCIDRs list (a configuration sketch follows this list). This way, packets sent from Pods have their sources changed (SNAT) to node addresses, which are RFC 1918 addresses.
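
As a sketch of that second option, the following commands create the ip-masq-agent ConfigMap in the kube-system namespace. The CIDR listed in nonMasqueradeCIDRs is hypothetical; list the destination ranges for which Pod source addresses should be preserved:

cat <<EOF > config
nonMasqueradeCIDRs:
  - 10.0.0.0/8    # example: preserve Pod source IPs toward these destinations
resyncInterval: 60s
EOF

kubectl create configmap ip-masq-agent \
    --namespace=kube-system \
    --from-file=config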

Enable privately used external IP address ranges

GKE clusters can privately use certain external IP address ranges as internal, subnet IP address ranges. You can privately use any external IP address except for certain restricted ranges, as described in the VPC network documentation. This feature is not supported with Windows Server node pools.

Your cluster must be a VPC-native cluster to use privately used external IP address ranges. Routes-based clusters are not supported.

Privately used external ranges are subnet ranges. You can use them exclusively or in conjunction with other subnet ranges that use private addresses. Nodes, Pods, and Services continue to use subnet ranges as described in IP ranges for VPC-native clusters. Keep the following in mind when reusing external IP addresses privately:

  • When you use an external IP address range as a subnet range, your cluster can no longer communicate with systems on the internet that use that external range. The range becomes an internal IP address range in the cluster's VPC network.

  • Subnet ranges, even those that privately use external IP address ranges, must be assigned manually or by GKE before the cluster's nodes are created. You cannot switch to or cease using privately used external IP addresses unless you replace the cluster.

By default, GKE implements SNAT on the nodes for traffic to external IP destinations. If you have configured the Pod CIDR to use external IP addresses, the SNAT rules apply to Pod-to-Pod traffic. To avoid this, you have two options:

  • Create the cluster with the --disable-default-snat flag so that GKE does not masquerade Pod traffic to external destinations.
  • Configure the ip-masq-agent so that your Pod and subnet ranges are included in the nonMasqueradeCIDRs list.

For Standard clusters running version 1.14 or later, both options work. If your cluster version is earlier than 1.14, you can only use the second option (configuring the ip-masq-agent).
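
As a sketch of the first option (the cluster name and location are hypothetical, and the flag is typically used with private clusters), you can disable default SNAT at creation time:

gcloud container clusters create example-cluster \
    --enable-ip-alias \
    --enable-private-nodes \
    --disable-default-snat \
    --location=us-central1
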

What's next