External load balancers (ELB) expose services for access from outside the organization, using IP addresses from a pool assigned to the organization out of the larger instance-external IP pool.
ELB Virtual IP (VIP) addresses don't conflict between organizations and are unique across all organizations. For this reason, you must use ELB services only for services that clients outside the organization need to access.
Workloads running inside the organization can access ELB services as long as you enable the workloads to exit the organization. This traffic pattern effectively requires outbound traffic from the organization before returning to the internal service.
Before you begin
To configure ELB services, you must have the following:
- Own the project you are configuring the load balancer for. For more information, see Create a project.
- A customized `ProjectNetworkPolicy` (PNP) ingress policy to allow traffic to this ELB service. For more information, see Configure PNP to allow traffic to ELB.
- The necessary identity and access roles:
  - Project NetworkPolicy Admin: has access to manage project network policies in the project namespace. Ask your Organization IAM Admin to grant you the Project NetworkPolicy Admin (`project-networkpolicy-admin`) role.
  - Load Balancer Admin: Ask your Organization IAM Admin to grant you the Load Balancer Admin (`load-balancer-admin`) role.
  - Global Load Balancer Admin: For global ELBs, ask your Organization IAM Admin to grant you the Global Load Balancer Admin (`global-load-balancer-admin`) role.

For more information, see Predefined role descriptions.
Configure PNP to allow traffic to ELB
For ELB services to function, you must configure and apply your own customized `ProjectNetworkPolicy` ingress policy to allow traffic to the workloads of this ELB service. Network policies control access to your workloads, not the load balancer itself. ELBs expose workloads to your customer network, so they require explicit network policies that allow external traffic to the workload port, such as 8080.
Specify the external CIDR address to allow traffic to the workloads of this ELB:
```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT
  name: allow-inbound-traffic-from-external
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - ipBlock:
        cidr: CIDR
    ports:
    - protocol: TCP
      port: PORT
EOF
```
Replace the following:
- `MANAGEMENT_API_SERVER`: the kubeconfig path of the Management API server. If you have not yet generated a kubeconfig file for the API server in your targeted zone, see Sign in for details.
- `PROJECT`: the name of your GDC project.
- `CIDR`: the external CIDR that the ELB needs to be accessed from. This policy is required because the external load balancer uses Direct Server Return (DSR), which preserves the source external IP address and bypasses the load balancer on the return path. For more information, see Create a global ingress firewall rule for organization-external traffic.
- `PORT`: the backend port on the pods behind the load balancer. This value is found in the `.spec.ports[].targetPort` field of the manifest for the `Service` resource. This field is optional.
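As a concrete sketch, the following shows the same policy with hypothetical values filled in: a project namespace `my-project`, external clients in `203.0.113.0/24`, and a workload port of 8080. Substitute values that match your environment.

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: my-project   # example project namespace
  name: allow-inbound-traffic-from-external
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - ipBlock:
        cidr: 203.0.113.0/24   # example external client range
    ports:
    - protocol: TCP
      port: 8080               # example workload port
EOF
```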
Create an external load balancer
You can create global or zonal ELBs. The scope of a global ELB spans a GDC universe, while the scope of a zonal ELB is limited to the zone specified at creation time. For more information, see Global and zonal load balancers.
You can create ELBs in GDC using three different methods:
- Use the gdcloud CLI to create global or zonal ELBs.
- Use the Networking Kubernetes Resource Model (KRM) API to create global or zonal ELBs.
- Use the Kubernetes Service directly in the Kubernetes cluster. This method is only available for zonal ELBs.

You can target pod or VM workloads using the KRM API and gdcloud CLI. When you use the Kubernetes Service directly in the Kubernetes cluster, you can only target workloads in the cluster where the Service object is created.
Create a zonal ELB
Create a zonal ELB using the gdcloud CLI, the KRM API, or the Kubernetes Service in the Kubernetes cluster:
gdcloud
Create an ELB that targets pod or VM workloads using the gdcloud CLI.
This ELB targets all of the workloads in the project matching the
label defined in the Backend object.
To create an ELB using the gdcloud CLI, follow these steps:
Create a `Backend` resource to define the endpoint for the ELB:

```
gdcloud compute backends create BACKEND_NAME \
  --labels=LABELS \
  --project=PROJECT_NAME \
  --zone=ZONE \
  --cluster=CLUSTER_NAME
```

Replace the following:

- `BACKEND_NAME`: your chosen name for the backend resource, such as `my-backend`.
- `LABELS`: a selector defining which endpoints between pods and VMs to use for this backend resource, for example `app=web`.
- `PROJECT_NAME`: the name of your project.
- `ZONE`: the zone to use for this invocation. To preset the zone flag for all commands that require it, run `gdcloud config set core/zone ZONE`. The zone flag is available only in multi-zone environments. This field is optional.
- `CLUSTER_NAME`: the cluster to which the scope of the defined selectors is limited. If this field is not specified, all of the endpoints with the given label are selected. This field is optional.
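For illustration, a hypothetical invocation that creates a backend named `my-backend` selecting endpoints labeled `app=web` might look like the following; the project, zone, and cluster names are placeholders for your environment:

```
gdcloud compute backends create my-backend \
  --labels=app=web \
  --project=my-project \
  --zone=example-zone-a \
  --cluster=user-cluster-1
```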
Skip this step if this ELB is for pod workloads. If you are configuring an ELB for VM workloads, define a health check for the ELB:
```
gdcloud compute health-checks create tcp HEALTH_CHECK_NAME \
  --check-interval=CHECK_INTERVAL \
  --healthy-threshold=HEALTHY_THRESHOLD \
  --timeout=TIMEOUT \
  --unhealthy-threshold=UNHEALTHY_THRESHOLD \
  --port=PORT \
  --zone=ZONE
```

Replace the following:

- `HEALTH_CHECK_NAME`: your chosen name for the health check resource, such as `my-health-check`.
- `CHECK_INTERVAL`: the amount of time in seconds from the start of one probe to the start of the next one. The default value is 5. This field is optional.
- `HEALTHY_THRESHOLD`: the number of sequential probes that must pass for the endpoint to be considered healthy. The default value is 5. This field is optional.
- `TIMEOUT`: the amount of time in seconds to wait before claiming failure. The default value is 5. This field is optional.
- `UNHEALTHY_THRESHOLD`: the number of sequential probes that must fail for the endpoint to be considered unhealthy. The default value is 2. This field is optional.
- `PORT`: the port on which the health check is performed. The default value is 80. This field is optional.
- `ZONE`: the zone you are creating this ELB in.
Create a `BackendService` resource:

```
gdcloud compute backend-services create BACKEND_SERVICE_NAME \
  --project=PROJECT_NAME \
  --target-ports=TARGET_PORTS \
  --zone=ZONE \
  --health-check=HEALTH_CHECK_NAME
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the chosen name for this backend service.
- `TARGET_PORTS`: a comma-separated list of target ports that this backend service translates, where each target port specifies the protocol, the port on the forwarding rule, and the port on the backend instance. You can specify multiple target ports. This field must be in the format `protocol:port:targetport`, such as `TCP:80:8080`. This field is optional.
- `HEALTH_CHECK_NAME`: the name of the health check resource. This field is optional. Only include this field if you are configuring an ELB for VM workloads.
Add the previously created `Backend` resource to the `BackendService` resource:

```
gdcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
  --backend=BACKEND_NAME \
  --project=PROJECT_NAME \
  --zone=ZONE
```

Optional: Use session affinity for ELBs to ensure that requests from the same client are consistently routed to the same backend. To enable session affinity for load balancers, create a backend service policy using the `gdcloud compute load-balancer-policy create` command:

```
gdcloud compute load-balancer-policy create POLICY_NAME \
  --session-affinity=MODE \
  --selectors=RESOURCE_LABEL
```

Replace the following:

- `POLICY_NAME`: your chosen name for the backend service policy.
- `MODE`: the session affinity mode. Two modes are supported:
  - `NONE`: session affinity is disabled. Requests are routed to any available backend. This is the default mode.
  - `CLIENT_IP_DST_PORT_PROTO`: requests from the same four-tuple (source IP address, destination IP address, destination port, and protocol) are routed to the same backend.
- `RESOURCE_LABEL`: the label selector that selects which backend service the `BackendServicePolicy` resource is applied to in the project namespace. If multiple `BackendServicePolicy` resources match the same backend service, and at least one of these policies has session affinity enabled, then session affinity for this `BackendService` resource becomes enabled.
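For example, a hypothetical policy that pins clients by four-tuple to backend services labeled `app=web` might look like the following; the policy name and selector are examples only:

```
gdcloud compute load-balancer-policy create my-affinity-policy \
  --session-affinity=CLIENT_IP_DST_PORT_PROTO \
  --selectors=app=web
```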
Create an external `ForwardingRule` resource that defines the VIP the service is available at:

```
gdcloud compute forwarding-rules create FORWARDING_RULE_EXTERNAL_NAME \
  --backend-service=BACKEND_SERVICE_NAME \
  --cidr=CIDR \
  --ip-protocol-port=PROTOCOL_PORT \
  --load-balancing-scheme=EXTERNAL \
  --zone=ZONE \
  --project=PROJECT_NAME
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the name of your backend service.
- `FORWARDING_RULE_EXTERNAL_NAME`: your chosen name for the forwarding rule.
- `CIDR`: this field is optional. If not specified, an IPv4 `/32` CIDR is automatically reserved from the zonal IP pool. Specify the name of a `Subnet` resource in the same namespace as this forwarding rule. A `Subnet` resource represents the request and allocation information of a zonal subnet. For more information on `Subnet` resources, see Example custom resources.
- `PROTOCOL_PORT`: the protocol and port to expose on the forwarding rule. This field must be in the format `ip-protocol=TCP:80`. The exposed port must be the same as what the actual application is exposing inside of the container.
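For instance, a hypothetical rule exposing TCP port 80 for a backend service named `my-backend-service` might look like the following. The names and zone are examples, and the `PROTOCOL_PORT` value here assumes the `TCP:80` form; `--cidr` is omitted so a `/32` VIP is automatically reserved from the zonal IP pool:

```
gdcloud compute forwarding-rules create my-elb-rule \
  --backend-service=my-backend-service \
  --ip-protocol-port=TCP:80 \
  --load-balancing-scheme=EXTERNAL \
  --zone=example-zone-a \
  --project=my-project
```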
To validate the configured ELB, confirm the `Ready` condition on each of the created objects. Then verify the traffic with a `curl` request to the VIP:

To get the assigned VIP, describe the forwarding rule:

```
gdcloud compute forwarding-rules describe FORWARDING_RULE_EXTERNAL_NAME
```

Verify the traffic with a `curl` request to the VIP at the port specified in the `PROTOCOL_PORT` field in the forwarding rule:

```
curl http://FORWARDING_RULE_VIP:PORT
```

Replace the following:

- `FORWARDING_RULE_VIP`: the VIP of the forwarding rule.
- `PORT`: the port number from the `PROTOCOL_PORT` field in the forwarding rule.
API
Create an ELB that targets pod or VM workloads using the KRM API.
This ELB targets all of the workloads in the project matching the
label defined in the Backend object.
To create a zonal ELB using the KRM API, follow these steps:
Create a `Backend` resource to define the endpoints for the ELB. Create `Backend` resources for each zone the workloads are placed in:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: Backend
metadata:
  namespace: PROJECT_NAME
  name: BACKEND_NAME
spec:
  clusterName: CLUSTER_NAME
  endpointsLabels:
    matchLabels:
      app: server
EOF
```

Replace the following:

- `MANAGEMENT_API_SERVER`: the kubeconfig path of the zonal Management API server. For more information, see Switch to a zonal context.
- `PROJECT_NAME`: the name of your project.
- `BACKEND_NAME`: the name of the `Backend` resource.
- `CLUSTER_NAME`: this field is optional. It specifies the cluster to which the scope of the defined selectors is limited. This field does not apply to VM workloads. If a `Backend` resource doesn't include the `clusterName` field, the specified labels apply to all of the workloads in the project.
Skip this step if this ELB is for pod workloads. If you are configuring an ELB for VM workloads, define a health check for the ELB:
```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: HealthCheck
metadata:
  namespace: PROJECT_NAME
  name: HEALTH_CHECK_NAME
spec:
  tcpHealthCheck:
    port: PORT
    timeoutSec: TIMEOUT
    checkIntervalSec: CHECK_INTERVAL
    healthyThreshold: HEALTHY_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
EOF
```

Replace the following:

- `HEALTH_CHECK_NAME`: your chosen name for the health check resource, such as `my-health-check`.
- `PORT`: the port on which the health check is performed. The default value is 80.
- `TIMEOUT`: the amount of time in seconds to wait before claiming failure. The default value is 5.
- `CHECK_INTERVAL`: the amount of time in seconds from the start of one probe to the start of the next one. The default value is 5.
- `HEALTHY_THRESHOLD`: the number of sequential probes that must pass for the endpoint to be considered healthy. The default value is 2.
- `UNHEALTHY_THRESHOLD`: the number of sequential probes that must fail for the endpoint to be considered unhealthy. The default value is 2.
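For example, a TCP health check that probes port 8080 every 5 seconds might look like the following; the project and resource names are hypothetical:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: HealthCheck
metadata:
  namespace: my-project        # example project namespace
  name: my-health-check
spec:
  tcpHealthCheck:
    port: 8080                 # example backend port to probe
    timeoutSec: 5
    checkIntervalSec: 5
    healthyThreshold: 2
    unhealthyThreshold: 2
EOF
```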
Create a `BackendService` object using the previously created `Backend` resource. If you are configuring an ELB for VM workloads, include the `HealthCheck` resource:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: BackendService
metadata:
  namespace: PROJECT_NAME
  name: BACKEND_SERVICE_NAME
spec:
  backendRefs:
  - name: BACKEND_NAME
  healthCheckName: HEALTH_CHECK_NAME
EOF
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the chosen name for your `BackendService` resource.
- `HEALTH_CHECK_NAME`: the name of your previously created `HealthCheck` resource. Don't include this field if you are configuring an ELB for pod workloads.
Optional: Use session affinity for ELBs to ensure that requests from the same client are consistently routed to the same backend. To enable session affinity for load balancers, create a `BackendServicePolicy` resource. This resource defines the session affinity settings and applies the `BackendServicePolicy` resource to the `BackendService` resource. Create and apply the `BackendServicePolicy` resource:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.global.gdc.goog/v1
kind: BackendServicePolicy
metadata:
  namespace: PROJECT_NAME
  name: POLICY_NAME
spec:
  sessionAffinity: MODE
  selector:
    matchLabels:
      RESOURCE_LABEL
EOF
```

Replace the following:

- `POLICY_NAME`: your chosen name for the backend service policy.
- `MODE`: the session affinity mode. Two modes are supported:
  - `NONE`: session affinity is disabled. Requests are routed to any available backend. This is the default mode.
  - `CLIENT_IP_DST_PORT_PROTO`: requests from the same four-tuple (source IP address, destination IP address, destination port, and protocol) are routed to the same backend.
- `RESOURCE_LABEL`: the label selector that selects which backend service the `BackendServicePolicy` resource is applied to in the project namespace. If multiple `BackendServicePolicy` resources match the same backend service, and at least one of these policies has session affinity enabled, then session affinity for this `BackendService` resource becomes enabled.
Create an external `ForwardingRule` resource defining the VIP the service is available at:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ForwardingRuleExternal
metadata:
  namespace: PROJECT_NAME
  name: FORWARDING_RULE_EXTERNAL_NAME
spec:
  cidrRef: CIDR
  ports:
  - port: PORT
    protocol: PROTOCOL
  backendServiceRef:
    name: BACKEND_SERVICE_NAME
EOF
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the name of your `BackendService` resource.
- `FORWARDING_RULE_EXTERNAL_NAME`: your chosen name for your `ForwardingRuleExternal` resource.
- `CIDR`: this field is optional. If not specified, an IPv4 `/32` CIDR is automatically reserved from the zonal IP pool. Specify the name of a `Subnet` resource in the same namespace as this forwarding rule. A `Subnet` resource represents the request and allocation information of a zonal subnet. For more information on `Subnet` resources, see Example custom resources.
- `PORT`: use the `ports` field to specify an array of L4 ports for which packets are forwarded to the backends configured with this forwarding rule. At least one port has to be specified. Use the `port` field to specify a port number. The exposed port must be the same as what the actual application is exposing inside of the container.
- `PROTOCOL`: the protocol to use for the forwarding rule, such as `TCP`. An entry in the `ports` array must look like the following:

  ```
  ports:
  - port: 80
    protocol: TCP
  ```
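Putting it together, a hypothetical rule exposing TCP port 80 for a backend service named `my-backend-service` in project `my-project` might look like the following; because `cidrRef` is omitted, a `/32` VIP is automatically reserved from the zonal IP pool:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ForwardingRuleExternal
metadata:
  namespace: my-project      # example project namespace
  name: my-elb-rule
spec:
  ports:
  - port: 80                 # example frontend port
    protocol: TCP
  backendServiceRef:
    name: my-backend-service # example BackendService name
EOF
```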
To validate the configured ELB, confirm the `Ready` condition on each of the created objects. Then verify the traffic with a `curl` request to the VIP:

To get the VIP, use `kubectl get`:

```
kubectl get forwardingruleexternal -n PROJECT_NAME
```

The output looks like the following:

```
NAME       BACKENDSERVICE         CIDR              READY
elb-name   BACKEND_SERVICE_NAME   10.200.32.59/32   True
```

Verify the traffic with a `curl` request to the VIP at the port specified in the `PORT` field in the forwarding rule:

```
curl http://FORWARDING_RULE_VIP:PORT
```

Replace `FORWARDING_RULE_VIP` with the VIP of the forwarding rule.
Kubernetes Service
You can create ELBs in GDC by creating a Kubernetes Service of type `LoadBalancer` in a Kubernetes cluster.
To create an ELB service, do the following:
Create a YAML file for the `Service` definition of type `LoadBalancer`. The following `Service` object is an example of an ELB service:

```
apiVersion: v1
kind: Service
metadata:
  name: ELB_SERVICE_NAME
  namespace: PROJECT_NAME
spec:
  ports:
  - port: 1235
    protocol: TCP
    targetPort: 1235
  selector:
    k8s-app: my-app
  type: LoadBalancer
```

Replace the following:

- `ELB_SERVICE_NAME`: the name of the ELB service.
- `PROJECT_NAME`: the namespace of your project that contains the backend workloads.
The `port` field configures the frontend port you expose on the VIP address. The `targetPort` field configures the backend port to which you want to forward the traffic on the backend workloads. The load balancer supports Network Address Translation (NAT), so the frontend and backend ports can be different.

In the `selector` field of the `Service` definition, specify pods or virtual machines as the backend workloads. The selector defines which workloads to take as backend workloads for this service, based on matching the labels you specify with labels on the workloads. The `Service` can only select backend workloads in the same project and same cluster where you define the `Service`. For more information about service selection, see https://kubernetes.io/docs/concepts/services-networking/service/.
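For the example selector above to match, the backend pods must carry the same label. A minimal sketch of a hypothetical Deployment whose pods the example `Service` would select might look like the following; the workload name and image are placeholders:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # hypothetical workload name
  namespace: PROJECT_NAME
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: my-app        # must match the Service selector
  template:
    metadata:
      labels:
        k8s-app: my-app
    spec:
      containers:
      - name: app
        image: IMAGE         # your application image
        ports:
        - containerPort: 1235   # matches the Service targetPort
```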
Save the `Service` definition file in the same project as the backend workloads.

Apply the `Service` definition file to the cluster:

```
kubectl apply -f ELB_FILE
```

Replace `ELB_FILE` with the name of the `Service` definition file for the ELB service.

When you create an ELB, the service gets two IP addresses. One is an internal IP address accessible only from within the same cluster. The other is the external IP address, accessible from inside and outside the organization. You can obtain the IP addresses of the ELB service by viewing the service status:

```
kubectl -n PROJECT_NAME get svc ELB_SERVICE_NAME
```

Replace the following:

- `PROJECT_NAME`: the namespace of your project that contains the backend workloads.
- `ELB_SERVICE_NAME`: the name of the ELB service.
The output is similar to the following example:

```
NAME          TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
elb-service   LoadBalancer   10.0.0.1     20.12.1.11    1235:31931/TCP   22h
```

The `EXTERNAL-IP` is the IP address of the service that is accessible from outside the organization. If you don't get this output, ensure that you created the ELB service successfully.
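As a quick check against the example output above, a client outside the organization would reach the service at the external IP on the frontend port:

```
curl http://20.12.1.11:1235
```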
Create a global ELB
Create a global ELB using the gdcloud CLI or the KRM API.
gdcloud
Create an ELB that targets pod or VM workloads using the gdcloud CLI.
This ELB targets all of the workloads in the project matching the
label defined in the Backend object. The Backend custom resource must be scoped to a zone.
To create an ELB using the gdcloud CLI, follow these steps:
Create a `Backend` resource to define the endpoint for the ELB:

```
gdcloud compute backends create BACKEND_NAME \
  --labels=LABELS \
  --project=PROJECT_NAME \
  --cluster=CLUSTER_NAME \
  --zone=ZONE
```

Replace the following:

- `BACKEND_NAME`: your chosen name for the backend resource, such as `my-backend`.
- `LABELS`: a selector defining which endpoints between pods and VMs to use for this backend resource, for example `app=web`.
- `PROJECT_NAME`: the name of your project.
- `CLUSTER_NAME`: the cluster to which the scope of the defined selectors is limited. If this field is not specified, all of the endpoints with the given label are selected. This field is optional.
- `ZONE`: the zone to use for this invocation. To preset the zone flag for all commands that require it, run `gdcloud config set core/zone ZONE`. The zone flag is available only in multi-zone environments. This field is optional.
Skip this step if this ELB is for pod workloads. If you are configuring an ELB for VM workloads, define a health check for the ELB:
```
gdcloud compute health-checks create tcp HEALTH_CHECK_NAME \
  --check-interval=CHECK_INTERVAL \
  --healthy-threshold=HEALTHY_THRESHOLD \
  --timeout=TIMEOUT \
  --unhealthy-threshold=UNHEALTHY_THRESHOLD \
  --port=PORT \
  --global
```

Replace the following:

- `HEALTH_CHECK_NAME`: your chosen name for the health check resource, such as `my-health-check`.
- `CHECK_INTERVAL`: the amount of time in seconds from the start of one probe to the start of the next one. The default value is 5. This field is optional.
- `HEALTHY_THRESHOLD`: the number of sequential probes that must pass for the endpoint to be considered healthy. The default value is 5. This field is optional.
- `TIMEOUT`: the amount of time in seconds to wait before claiming failure. The default value is 5. This field is optional.
- `UNHEALTHY_THRESHOLD`: the number of sequential probes that must fail for the endpoint to be considered unhealthy. The default value is 2. This field is optional.
- `PORT`: the port on which the health check is performed. The default value is 80. This field is optional.
Create a `BackendService` resource:

```
gdcloud compute backend-services create BACKEND_SERVICE_NAME \
  --project=PROJECT_NAME \
  --target-ports=TARGET_PORTS \
  --health-check=HEALTH_CHECK_NAME \
  --global
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the chosen name for this backend service.
- `TARGET_PORTS`: a comma-separated list of target ports that this backend service translates, where each target port specifies the protocol, the port on the forwarding rule, and the port on the backend instance. You can specify multiple target ports. This field must be in the format `protocol:port:targetport`, such as `TCP:80:8080`. This field is optional.
- `HEALTH_CHECK_NAME`: the name of the health check resource. This field is optional. Only include this field if you are configuring an ELB for VM workloads.
Add the previously created `Backend` resource to the `BackendService` resource:

```
gdcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
  --backend=BACKEND_NAME \
  --backend-zone BACKEND_ZONE \
  --project=PROJECT_NAME \
  --global
```

Replace `BACKEND_ZONE` with the zone in which the `Backend` resource was created.

Optional: Use session affinity for ELBs to ensure that requests from the same client are consistently routed to the same backend. To enable session affinity for load balancers, create a backend service policy using the `gdcloud compute load-balancer-policy create` command:

```
gdcloud compute load-balancer-policy create POLICY_NAME \
  --session-affinity=MODE \
  --selectors=RESOURCE_LABEL
```

Replace the following:

- `POLICY_NAME`: your chosen name for the backend service policy.
- `MODE`: the session affinity mode. Two modes are supported:
  - `NONE`: session affinity is disabled. Requests are routed to any available backend. This is the default mode.
  - `CLIENT_IP_DST_PORT_PROTO`: requests from the same four-tuple (source IP address, destination IP address, destination port, and protocol) are routed to the same backend.
- `RESOURCE_LABEL`: the label selector that selects which backend service the `BackendServicePolicy` resource is applied to in the project namespace. If multiple `BackendServicePolicy` resources match the same backend service, and at least one of these policies has session affinity enabled, then session affinity for this `BackendService` resource becomes enabled.
Create an external `ForwardingRule` resource that defines the VIP the service is available at:

```
gdcloud compute forwarding-rules create FORWARDING_RULE_EXTERNAL_NAME \
  --backend-service=BACKEND_SERVICE_NAME \
  --cidr=CIDR \
  --ip-protocol-port=PROTOCOL_PORT \
  --load-balancing-scheme=EXTERNAL \
  --project=PROJECT_NAME \
  --global
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the name of your backend service.
- `FORWARDING_RULE_EXTERNAL_NAME`: your chosen name for the forwarding rule.
- `CIDR`: this field is optional. If not specified, an IPv4 `/32` CIDR is automatically reserved from the global IP pool. Specify the name of a `Subnet` resource in the same namespace as this forwarding rule. A `Subnet` resource represents the request and allocation information of a global subnet. For more information on `Subnet` resources, see Example custom resources.
- `PROTOCOL_PORT`: the protocol and port to expose on the forwarding rule. This field must be in the format `ip-protocol=TCP:80`. The exposed port must be the same as what the actual application is exposing inside of the container.
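For instance, a hypothetical global rule exposing TCP port 80 for a backend service named `my-backend-service` might look like the following. The names are examples, and the `PROTOCOL_PORT` value here assumes the `TCP:80` form; `--cidr` is omitted so a `/32` VIP is automatically reserved from the global IP pool:

```
gdcloud compute forwarding-rules create my-global-elb-rule \
  --backend-service=my-backend-service \
  --ip-protocol-port=TCP:80 \
  --load-balancing-scheme=EXTERNAL \
  --project=my-project \
  --global
```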
To validate the configured ELB, confirm the `Ready` condition on each of the created objects. Then verify the traffic with a `curl` request to the VIP:

To get the assigned VIP, describe the forwarding rule:

```
gdcloud compute forwarding-rules describe FORWARDING_RULE_EXTERNAL_NAME --global
```

Verify the traffic with a `curl` request to the VIP at the port specified in the `PROTOCOL_PORT` field in the forwarding rule:

```
curl http://FORWARDING_RULE_VIP:PORT
```

Replace the following:

- `FORWARDING_RULE_VIP`: the VIP of the forwarding rule.
- `PORT`: the port number from the `PROTOCOL_PORT` field in the forwarding rule.
API
Create an ELB that targets pod or VM workloads using the KRM API. This ELB targets all of the workloads in the project matching the label defined in the `Backend` object. To create a global ELB using the KRM API, follow these steps:
Create a `Backend` resource to define the endpoints for the ELB. Create `Backend` resources for each zone the workloads are placed in:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: Backend
metadata:
  namespace: PROJECT_NAME
  name: BACKEND_NAME
spec:
  clusterName: CLUSTER_NAME
  endpointsLabels:
    matchLabels:
      app: server
EOF
```

Replace the following:

- `MANAGEMENT_API_SERVER`: the kubeconfig path of the global Management API server. For more information, see Switch to the global context.
- `PROJECT_NAME`: the name of your project.
- `BACKEND_NAME`: the name of the `Backend` resource.
- `CLUSTER_NAME`: this field is optional. It specifies the cluster to which the scope of the defined selectors is limited. This field does not apply to VM workloads. If a `Backend` resource doesn't include the `clusterName` field, the specified labels apply to all of the workloads in the project.
Skip this step if this ELB is for pod workloads. If you are configuring an ELB for VM workloads, define a health check for the ELB:
Because this is a global ELB, create the health check in the global API:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.global.gdc.goog/v1
kind: HealthCheck
metadata:
  namespace: PROJECT_NAME
  name: HEALTH_CHECK_NAME
spec:
  tcpHealthCheck:
    port: PORT
    timeoutSec: TIMEOUT
    checkIntervalSec: CHECK_INTERVAL
    healthyThreshold: HEALTHY_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
EOF
```

Replace the following:

- `HEALTH_CHECK_NAME`: your chosen name for the health check resource, such as `my-health-check`.
- `PORT`: the port on which the health check is performed. The default value is 80.
- `TIMEOUT`: the amount of time in seconds to wait before claiming failure. The default value is 5.
- `CHECK_INTERVAL`: the amount of time in seconds from the start of one probe to the start of the next one. The default value is 5.
- `HEALTHY_THRESHOLD`: the number of sequential probes that must pass for the endpoint to be considered healthy. The default value is 2.
- `UNHEALTHY_THRESHOLD`: the number of sequential probes that must fail for the endpoint to be considered unhealthy. The default value is 2.
Create a `BackendService` object using the previously created `Backend` resource. If you are configuring an ELB for VM workloads, include the `HealthCheck` resource:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.global.gdc.goog/v1
kind: BackendService
metadata:
  namespace: PROJECT_NAME
  name: BACKEND_SERVICE_NAME
spec:
  backendRefs:
  - name: BACKEND_NAME
    zone: ZONE
  healthCheckName: HEALTH_CHECK_NAME
  targetPorts:
  - port: PORT
    protocol: PROTOCOL
    targetPort: TARGET_PORT
EOF
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the chosen name for your `BackendService` resource.
- `HEALTH_CHECK_NAME`: the name of your previously created `HealthCheck` resource. Don't include this field if you are configuring an ELB for pod workloads.
- `ZONE`: the zone in which the `Backend` resource is created. You can specify multiple backends in the `backendRefs` field, for example:

  ```
  - name: my-be
    zone: Zone-A
  - name: my-be
    zone: Zone-B
  ```

The `targetPorts` field is optional. It lists the ports that this `BackendService` resource translates. If you are using this field, provide values for the following:

- `PORT`: the port exposed by the service.
- `PROTOCOL`: the Layer 4 protocol that traffic must match. Only TCP and UDP are supported.
- `TARGET_PORT`: the port to which the `PORT` value is translated, such as `8080`. The value of `TARGET_PORT` can't be repeated in a given object. An example for `targetPorts` might look like the following:

  ```
  targetPorts:
  - port: 80
    protocol: TCP
    targetPort: 8080
  ```
Optional: Use session affinity for ELBs to ensure that requests from the same client are consistently routed to the same backend. To enable session affinity for load balancers, create a `BackendServicePolicy` resource. This resource defines the session affinity settings and applies the `BackendServicePolicy` resource to the `BackendService` resource. Create and apply the `BackendServicePolicy` resource:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.global.gdc.goog/v1
kind: BackendServicePolicy
metadata:
  namespace: PROJECT_NAME
  name: POLICY_NAME
spec:
  sessionAffinity: MODE
  selector:
    matchLabels:
      RESOURCE_LABEL
EOF
```

Replace the following:

- `POLICY_NAME`: your chosen name for the backend service policy.
- `MODE`: the session affinity mode. Two modes are supported:
  - `NONE`: session affinity is disabled. Requests are routed to any available backend. This is the default mode.
  - `CLIENT_IP_DST_PORT_PROTO`: requests from the same four-tuple (source IP address, destination IP address, destination port, and protocol) are routed to the same backend.
- `RESOURCE_LABEL`: the label selector that selects which backend service the `BackendServicePolicy` resource is applied to in the project namespace. If multiple `BackendServicePolicy` resources match the same backend service, and at least one of these policies has session affinity enabled, then session affinity for this `BackendService` resource becomes enabled.
Create an external `ForwardingRule` resource defining the VIP the service is available at:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.global.gdc.goog/v1
kind: ForwardingRuleExternal
metadata:
  namespace: PROJECT_NAME
  name: FORWARDING_RULE_EXTERNAL_NAME
spec:
  cidrRef: CIDR
  ports:
  - port: PORT
    protocol: PROTOCOL
  backendServiceRef:
    name: BACKEND_SERVICE_NAME
EOF
```

Replace the following:

- `FORWARDING_RULE_EXTERNAL_NAME`: the chosen name for your `ForwardingRuleExternal` resource.
- `BACKEND_SERVICE_NAME`: the name of your `BackendService` resource.
- `CIDR`: this field is optional. If not specified, an IPv4 `/32` CIDR is automatically reserved from the global IP pool. Specify the name of a `Subnet` resource in the same namespace as this forwarding rule. A `Subnet` resource represents the request and allocation information of a global subnet. For more information on `Subnet` resources, see Example custom resources.
- `PORT`: use the `ports` field to specify an array of L4 ports for which packets are forwarded to the backends configured with this forwarding rule. At least one port has to be specified. Use the `port` field to specify a port number. The exposed port must be the same as what the actual application is exposing inside of the container.
- `PROTOCOL`: the protocol to use for the forwarding rule, such as `TCP`. An entry in the `ports` array must look like the following:

  ```
  ports:
  - port: 80
    protocol: TCP
  ```
To validate the configured ELB, confirm the `Ready` condition on each of the created objects. Then verify the traffic with a `curl` request to the VIP:

To get the VIP, use `kubectl get`:

```
kubectl get forwardingruleexternal -n PROJECT_NAME
```

The output looks like the following:

```
NAME       BACKENDSERVICE         CIDR              READY
elb-name   BACKEND_SERVICE_NAME   10.200.32.59/32   True
```

Verify the traffic with a `curl` request to the VIP at the port specified in the `PORT` field in the forwarding rule:

```
curl http://FORWARDING_RULE_VIP:PORT
```

Replace `FORWARDING_RULE_VIP` with the VIP of the forwarding rule.