Set up F5 BIG-IP virtual servers for manual load balancing

This tutorial shows how to set up an F5 BIG-IP system when you integrate it with Google Distributed Cloud using manual load-balancing mode.

The F5 BIG-IP platform provides various services to help you enhance the security, availability, and performance of your apps. These services include L7 load balancing, network firewalling, web application firewall (WAF) services, DNS services, and more. For Google Distributed Cloud, BIG-IP provides external access and L3/L4 load-balancing services.

Additional configuration

After the Setup utility completes, you need to create an administrative partition for each user cluster that you intend to expose and access.

Initially, you define a partition for the first user cluster. Don't use cluster partitions for anything else; each cluster must have a partition dedicated to its sole use.
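If you prefer to script this step, you can create the partition from the BIG-IP command line by using tmsh. The following is a minimal sketch; the partition name user-cluster-1 is a hypothetical example, and the commands run on the BIG-IP system as an administrative user:

    # Create a dedicated administrative partition for the first user cluster.
    # The name user-cluster-1 is an example; use your own naming scheme.
    tmsh create auth partition user-cluster-1

    # Persist the change to the stored configuration.
    tmsh save sys config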

Configure the BIG-IP for Google Distributed Cloud external endpoints

If you didn't disable bundled ingress, you must configure the BIG-IP with virtual servers (VIPs) corresponding to the following Google Distributed Cloud endpoints:

  • User partition

    • VIP for user cluster ingress controller (port exposed: 443)
    • VIP for user cluster ingress controller (port exposed: 80)
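To confirm these endpoints against a running user cluster, you can inspect the bundled ingress Service. This sketch assumes that you have kubectl access to the user cluster:

    # Show the bundled ingress Service; the EXTERNAL-IP column is the
    # ingress VIP, and the port 80 and 443 listeners are the ports that
    # the BIG-IP virtual servers front.
    kubectl get service istio-ingress -n gke-system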

Create node objects

The cluster node external IP addresses are in turn used to configure node objects on the BIG-IP system. You create a node object for each Google Distributed Cloud cluster node. The nodes are added to backend pools that are then associated with virtual servers. An example command for listing the node addresses follows these steps.

  1. To sign in to the BIG-IP management console, go to the management IP address that was provided during installation.
  2. Click the User partition that you previously created.
  3. Go to Local Traffic > Nodes > Node List.
  4. Click Create.
  5. Enter a name and IP address for each cluster host and click Finished.

    Configuration of partitions.
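To gather the node addresses referenced above, and optionally to script the node object creation, you can adapt the following sketch. It assumes kubectl access to the user cluster and SSH access to the BIG-IP; the partition name user-cluster-1, node name node-1, and address 10.200.0.2 are hypothetical examples, and depending on your environment the node addresses may be reported under InternalIP rather than ExternalIP:

    # List each cluster node with its external IP address.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'

    # On the BIG-IP, create one node object per cluster node inside the
    # user cluster's partition.
    tmsh create ltm node /user-cluster-1/node-1 address 10.200.0.2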

Create backend pools

You create a backend pool for each nodePort.

  1. In the BIG-IP management console, click the User partition that you previously created.
  2. Go to Local Traffic > Pools > Pool List.
  3. Click Create.
  4. In the Configuration drop-down list, click Advanced.
  5. In the Name field, enter istio-80-pool.
  6. To verify pool member accessibility, under Health Monitor, click tcp. Optional: Because this is a manual configuration, you can also use more advanced monitors as appropriate for your deployment.
  7. For Action on Service Down, click Reject.

  8. For this tutorial, in the Load Balancing Method drop-down list, click Round Robin.

  9. In the New Members section, click Node List and then select the previously created node.

  10. In the Service Port field, enter the appropriate nodePort from the configuration file, or spec.ports[?].nodePort in the runtime istio ingress Kubernetes Service (name: istio-ingress, namespace: gke-system). An example lookup command follows these steps.

  11. Click Add.

  12. Repeat steps 9-11 to add each remaining cluster node instance.

    Configuration of cluster node.

  13. Click Finished.

  14. Repeat all of the steps in this section for each remaining user cluster nodePort.
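To look up the nodePort from step 10, or to script the pool creation instead of using the console, you can adapt the following sketch. The partition name, node names, and the 30080 nodePort value are hypothetical examples carried over from the earlier sketches:

    # Look up the nodePort that backs the ingress Service's port 80 listener.
    kubectl get service istio-ingress -n gke-system \
        -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'

    # On the BIG-IP, create the equivalent pool: TCP health monitor,
    # round-robin load balancing, and one member per cluster node.
    # service-down-action reset corresponds to the GUI's Reject option.
    tmsh create ltm pool /user-cluster-1/istio-80-pool \
        monitor tcp load-balancing-mode round-robin \
        service-down-action reset \
        members add { /user-cluster-1/node-1:30080 /user-cluster-1/node-2:30080 }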

Create virtual servers

You create a total of two virtual servers on the BIG-IP for the first user cluster. The virtual servers correspond to the VIP and port combinations listed earlier: port 80 and port 443 on the user cluster ingress VIP.

  1. In the BIG-IP management console, click the User partition that you previously created.
  2. Go to Local Traffic > Virtual Servers > Virtual Server List.
  3. Click Create.
  4. In the Name field, enter istio-ingress-80.
  5. In the Destination Address/Mask field, enter the IP address for the VIP. For this tutorial, use the HTTP ingress VIP from the configuration file, or spec.loadBalancerIP in the runtime istio ingress Kubernetes Service (name: istio-ingress, namespace: gke-system). An example lookup command follows these steps.

  6. In the Service Port field, enter the appropriate listener port for the VIP. For this tutorial, use port 80, or spec.ports[?].port in the runtime istio ingress Kubernetes Service (name: istio-ingress, namespace: gke-system).

    Configuration of virtual servers.

    There are several configuration options for enhancing your app's endpoint, such as associating protocol-specific profiles, certificate profiles, and WAF policies.

  7. For Source Address Translation, click Auto Map.

  8. For Default Pool, select the appropriate pool that you previously created (for example, istio-80-pool for port 80).

  9. Click Finished.

  10. Create and download an archive of the current configuration.
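These steps can also be scripted. The sketch below reuses the hypothetical partition and pool names from the earlier examples and uses 203.0.113.10 as a placeholder for the ingress VIP; substitute the values from your own configuration:

    # Look up the ingress VIP from the runtime Service.
    kubectl get service istio-ingress -n gke-system \
        -o jsonpath='{.spec.loadBalancerIP}'

    # On the BIG-IP, create the port 80 virtual server with automap
    # source address translation and the previously created default pool.
    tmsh -c "cd /user-cluster-1; create ltm virtual istio-ingress-80 \
        destination 203.0.113.10:80 ip-protocol tcp \
        source-address-translation { type automap } \
        pool istio-80-pool"

    # Save the running configuration and archive it to a UCS file,
    # which you can then download from the BIG-IP.
    tmsh save sys config
    tmsh save sys ucs /var/tmp/gdc-bigip.ucs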

What's next