Ingress for internal Application Load Balancers

This page explains how Ingress for internal Application Load Balancers works in Google Kubernetes Engine (GKE). You can also learn how to set up and use Ingress for internal Application Load Balancers.

In GKE, the internal Application Load Balancer is a proxy-based, regional, Layer 7 load balancer that enables you to run and scale your services behind an internal load balancing IP address. GKE supports the internal Application Load Balancer natively: when you create an Ingress object on a GKE cluster, GKE provisions the load balancer for you.

For general information about using Ingress for load balancing in GKE, see HTTP(S) load balancing with Ingress.
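As an illustration, a minimal Ingress manifest for an internal Application Load Balancer might look like the following sketch. The `gce-internal` class annotation is what selects the internal load balancer; the Service name and port are hypothetical placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    # Selects the internal Application Load Balancer.
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: my-service   # hypothetical backend Service
      port:
        number: 80
```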

Benefits of using Ingress for internal Application Load Balancers

Using GKE Ingress for internal Application Load Balancers provides the following benefits:

  • A highly available, GKE-managed Ingress controller.
  • Load balancing for internal, service-to-service communication.
  • Container-native load balancing with Network Endpoint Groups (NEG).
  • Application routing with HTTP and HTTPS support.
  • High-fidelity Compute Engine health checks for resilient services.
  • Envoy-based proxies that are deployed on-demand to meet traffic capacity needs.
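To sketch the container-native load balancing benefit above: annotating a backend Service tells GKE to create network endpoint groups (NEGs) for it, so the load balancer sends traffic to Pods directly. The Service name and labels below are hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical Service name
  annotations:
    # Ask GKE to create NEGs for this Service so the load
    # balancer targets Pod endpoints directly.
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-app             # hypothetical Pod label
  ports:
  - port: 80
    targetPort: 8080
```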

Support for Google Cloud features

Ingress for internal Application Load Balancers supports a variety of additional features.

Required networking environment for internal Application Load Balancers

The internal Application Load Balancer provides a pool of proxies for your network. The proxies evaluate where each HTTP(S) request should go based on factors such as the URL map, the BackendService's session affinity, and the balancing mode of each backend NEG.

A region's internal Application Load Balancer uses the proxy-only subnet for that region in your VPC network to assign internal IP addresses to each proxy created by Google Cloud.

By default, the IP address assigned to a load balancer's forwarding rule comes from the node's subnet range assigned by GKE instead of from the proxy-only subnet. You can also manually specify an IP address for the forwarding rule from any subnet when you create the rule.
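For example, assuming you have already reserved a regional internal IP address named my-internal-ip, you could reference it from the Ingress with an annotation like the following. This is a sketch; the address and Service names are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    # Use a previously reserved regional internal IP address
    # for the forwarding rule (the name is hypothetical).
    kubernetes.io/ingress.regional-static-ip-name: "my-internal-ip"
spec:
  defaultBackend:
    service:
      name: my-service     # hypothetical backend Service
      port:
        number: 80
```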

The following diagram provides an overview of the traffic flow for an internal Application Load Balancer, as described in the preceding paragraph.


Here's how the internal Application Load Balancer works:

  1. A client makes a connection to the IP address and port of the load balancer's forwarding rule.
  2. A proxy receives and terminates the client's network connection.
  3. The proxy establishes a connection to the appropriate endpoint (Pod) in a NEG, as determined by the load balancer's URL map, and backend services.

Each proxy listens on the IP address and port specified by the corresponding load balancer's forwarding rule. The source IP address of each packet sent from a proxy to an endpoint is the internal IP address assigned to that proxy from the proxy-only subnet.
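The routing decision in step 3 is driven by the Ingress rules, which GKE translates into the load balancer's URL map. As a hypothetical sketch, an Ingress like the following routes requests with a /api path prefix to one Service and all other requests to another; the Service names are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  rules:
  - http:
      paths:
      # Requests whose path starts with /api go to api-service
      # (Service names are hypothetical).
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      # All other requests go to web-service.
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```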

HTTPS (TLS) between load balancer and your application

An internal Application Load Balancer acts as a proxy between your clients and your application. Clients can use HTTP or HTTPS to communicate with the load balancer proxy. The connection from the load balancer proxy to your application uses HTTP by default. However, if your application runs in a GKE Pod and can receive HTTPS requests, you can configure the load balancer to use HTTPS when it forwards requests to your application.

To configure the protocol used between the load balancer and your application, use the cloud.google.com/app-protocols annotation in your Service manifest.

The following Service manifest specifies two ports. The annotation specifies that an internal Application Load Balancer should use HTTP when it targets port 80 of the Service, and HTTPS when it targets port 443 of the Service.

You must use the port's name field in the annotation. Do not use a different field such as targetPort.

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  type: NodePort
  selector:
    app: metrics
    department: sales
  ports:
  - name: my-https-port
    port: 443
    targetPort: 8443
  - name: my-http-port
    port: 80
    targetPort: 50001

What's next