This document shows you two sample configurations for setting up a global external Application Load Balancer with VM instance group backends in a Shared VPC environment:
- In the first example, the load balancer's frontend and backend components are created in one service project.
- In the second example, the load balancer's frontend components and URL map are created in one service project, while the load balancer's backend service and backends are created in a different service project. This type of deployment, where the URL map in one project references a backend service in another project, is referred to as cross-project service referencing.
Both examples require the same initial configuration to grant permissions, configure the network and subnet in the host project, and set up Shared VPC before you can start creating load balancers.
These are not the only Shared VPC configurations supported by the global external Application Load Balancer. For other valid Shared VPC architectures, see Shared VPC architecture.
If you don't want to use a Shared VPC network, see Set up a global external Application Load Balancer with VM instance group backends.
Before you begin
- Read the Shared VPC overview.
- Read the External Application Load Balancer overview, including the Shared VPC architecture section.
Permissions required
Setting up a load balancer on a Shared VPC network requires some initial setup and provisioning by an administrator. After the initial setup, a service project owner can do one of the following:
- Deploy all the load balancer's components and its backends in a service project.
- Deploy the load balancer's backend components (backend service and backends) in service projects that can be referenced by a URL map in another service or host project.
This section summarizes the permissions required to follow this guide to set up a load balancer on a Shared VPC network.
Set up Shared VPC
The following roles are required for these tasks:
- Perform one-off administrative tasks such as setting up the Shared VPC and enabling a host project.
- Perform administrative tasks that must be repeated every time you want to onboard a new service project. This includes attaching the service project, provisioning and configuring networking resources, and granting access to the service project administrator.
These tasks must be performed in the Shared VPC host project. We recommend that the Shared VPC Admin also be the owner of the Shared VPC host project. This automatically grants the Network Admin and Security Admin roles.
Task | Required role |
---|---|
Set up Shared VPC, enable host project, and grant access to service project administrators | Shared VPC Admin |
Create subnets in the Shared VPC host project and grant access to service project administrators | Network Admin |
Add and remove firewall rules | Security Admin |
After the subnets have been provisioned, the host project owner must grant the Network User role in the host project to anyone (typically service project administrators, developers, or service accounts) who needs to use these resources.
Task | Required role |
---|---|
Use VPC networks and subnets belonging to the host project | Network User |
This role can be granted on the project level or for individual subnets. We recommend that you grant the role on individual subnets. Granting the role on the project provides access to all current and future subnets in the VPC network of the host project.
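As a sketch, a subnet-level grant of the Network User role might look like the following. The member email address is a placeholder; substitute the service project administrator, developer, or service account that needs the subnet:

```shell
# Grant roles/compute.networkUser on a single subnet in the host project.
# The member address below is an example placeholder.
gcloud compute networks subnets add-iam-policy-binding lb-backend-subnet \
    --region=us-west1 \
    --project=HOST_PROJECT_ID \
    --member="user:service-project-admin@example.com" \
    --role="roles/compute.networkUser"
```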
Deploy load balancer and backends
Service project administrators need the following roles in the service project to create load balancing resources and backends. These permissions are granted automatically to the service project owner or editor.
Task | Required role |
---|---|
Create load balancer components | Network Admin |
Create instances | Instance Admin |
Create and modify SSL certificates | Security Admin |
Reference cross-project backend services
If your load balancer needs to reference backend services in other service projects, also known as cross-project service referencing, load balancer administrators require the following role in the service project where the backend service is created.
Task | Required role |
---|---|
Permissions to use services in other projects | Load Balancer Services User |
This role can be granted either at the project level or for individual backend services. For instructions about how to grant this role, see the cross-project service referencing example on this page.
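For illustration, a project-level grant of this role might be sketched as follows. The member email address is a placeholder, and `SERVICE_PROJECT_ID` stands for the service project that contains the backend service:

```shell
# Grant roles/compute.loadBalancerServiceUser at the project level so the
# principal can reference backend services in this project from elsewhere.
# The member address below is an example placeholder.
gcloud projects add-iam-policy-binding SERVICE_PROJECT_ID \
    --member="user:lb-admin@example.com" \
    --role="roles/compute.loadBalancerServiceUser"
```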
For more information about IAM, see the following guides:
Prerequisites
In this section, you need to perform the following steps:
The steps in this section do not need to be performed every time you want to create a new load balancer. However, you must ensure that you have access to the resources described here before you proceed to creating the load balancer.
Configure the network and subnets in the host project
You need a Shared VPC network with a subnet for the load balancer's backends. This example uses the following network, region, and subnet:
- Network. The network is named `lb-network`.
- Subnet for the load balancer's backends. A subnet named `lb-backend-subnet` in the `us-west1` region uses `10.1.2.0/24` for its primary IP range.
Configure the subnet for the load balancer's backends
This step does not need to be performed every time you want to create a new load balancer. You only need to ensure that the service project has access to a subnet in the Shared VPC network. All the steps in this section must be performed in the host project.
Console
- In the Google Cloud console, go to the VPC networks page.
- Click Create VPC network.
- For Name, enter `lb-network`.
- In the Subnets section, do the following:
  - Set the Subnet creation mode to Custom.
  - In the New subnet section, enter the following information:
    - Name: `lb-backend-subnet`
    - Region: `us-west1`
    - IP address range: `10.1.2.0/24`
  - Click Done.
- Click Create.
gcloud
Create a VPC network with the `gcloud compute networks create` command:

```
gcloud compute networks create lb-network --subnet-mode=custom
```

Create a subnet in the `lb-network` network in the `us-west1` region:

```
gcloud compute networks subnets create lb-backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-west1
```
Give service project admins access to the backend subnet
Service project administrators require access to the `lb-backend-subnet` subnet so that they can provision the load balancer's backends.
A Shared VPC Admin must grant access to the backend subnet to service project administrators (or developers who deploy resources and backends that use the subnet). For instructions, see Service Project Admins for some subnets.
Configure firewall rules in the host project
This example uses the following firewall rule:
- `fw-allow-health-check`. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems in `130.211.0.0/22` and `35.191.0.0/16`. This example uses the target tag `load-balanced-backend` to identify the instances to which the rule applies.
All the steps in this section must be performed in the host project.
Console
- In the Google Cloud console, go to the Firewall policies page.
- Click Create firewall rule to create the rule to allow Google Cloud health checks, and enter the following values:
  - Name: `fw-allow-health-check`
  - Network: `lb-network`
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: `load-balanced-backend`
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: `130.211.0.0/22` and `35.191.0.0/16`
  - Protocols and ports:
    - Choose Specified protocols and ports.
    - Check TCP and enter `80` for the port number.

    As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use `tcp:80` for the protocol and port, Google Cloud can use HTTP on port `80` to contact your VMs, but it cannot use HTTPS on port `443` to contact them.
- Click Create.
gcloud
Create the `fw-allow-health-check` firewall rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers. However, you can configure a narrower set of ports to meet your needs.

```
gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=load-balanced-backend \
    --rules=tcp
```
Set up Shared VPC in the host project
This step entails enabling a Shared VPC host project, sharing subnets of the host project, and attaching service projects to the host project so that the service projects can use the Shared VPC network. To set up Shared VPC in the host project, see the following pages:
The rest of these instructions assume that you have already set up Shared VPC. This includes setting up IAM policies for your organization and designating the host and service projects.
Don't proceed until you have set up Shared VPC and enabled the host and service projects.
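Assuming you have the necessary permissions, one way to sanity-check the Shared VPC attachment is to query the association from either side:

```shell
# List the service projects attached to the Shared VPC host project.
gcloud compute shared-vpc associated-projects list HOST_PROJECT_ID

# Show the host project that a given service project is attached to.
gcloud compute shared-vpc get-host-project SERVICE_PROJECT_ID
```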
After completing the steps defined in this prerequisites section, you can pursue either of the following setups:
Configure a load balancer in one service project
After you have configured the VPC network in the host project and set up Shared VPC, you can switch your attention to the service project, in which you need to create all the load balancing components (backend service, URL map, target proxy, and forwarding rule) and the backends.
This section assumes that you have carried out the prerequisite steps described in the previous section in the host project. In this section, the load balancer's frontend and backend components along with the backends are created in one service project.
The following figure depicts the components of a global external Application Load Balancer in one service project, which is attached to the host project in a Shared VPC network.
These steps should be carried out by the service project administrator (or a developer operating within the service project) and do not require involvement from the host project administrator. The steps in this section are similar to the standard steps to set up a global external Application Load Balancer.
The example on this page explicitly sets a reserved IP address for the global external Application Load Balancer's forwarding rule, rather than allowing an ephemeral IP address to be allocated. As a best practice, we recommend reserving IP addresses for forwarding rules.
Create a managed instance group backend
The precursor to creating a managed instance group is the creation of an instance template, which is a resource that you can use to create virtual machine (VM) instances. Traffic from clients is load balanced to VMs in an instance group. The managed instance group provides VMs that run the backend servers of an external Application Load Balancer. In this example, the backends serve their own hostnames.
Console
Create an instance template
- In the Google Cloud console, go to the Compute Engine Instance templates page.
- Click Create instance template.
- For Name, enter `backend-template`.
- In the Boot disk section, ensure that the boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). Click Change to change the image if necessary.
- Expand the Advanced options section.
- Expand the Networking section, and in the Network tags field, enter `load-balanced-backend`.
- For Network interfaces, select Networks shared with me (from host project: `HOST_PROJECT_ID`).
- In the Shared subnetwork list, select the `lb-backend-subnet` subnet from the `lb-network` network.
- Expand the Management section, and in the Automation field, specify the following startup script:

  ```
  #! /bin/bash
  apt-get update
  apt-get install apache2 -y
  a2ensite default-ssl
  a2enmod ssl
  vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
  echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
  systemctl restart apache2
  ```

- Click Create.
Create a managed instance group
- In the Google Cloud console, go to the Compute Engine Instance groups page.
- Click Create Instance Group.
- From the options, select New managed instance group (stateless).
- For the name of the instance group, enter `lb-backend`.
- In the Instance template list, select the `backend-template` instance template that you created in the previous step.
- In the Location section, select Single zone, and enter the following values:
  - For Region, select `us-west1`.
  - For Zone, select `us-west1-a`.
- In the Autoscaling section, enter the following values:
  - For Autoscaling mode, select On: add and remove instances to the group.
  - For Minimum number of instances, select `2`.
  - For Maximum number of instances, select `3`.
- In the Port mapping section, click Add port, and enter the following values:
  - For Port name, enter `http`.
  - For Port number, enter `80`.
- Click Create.
gcloud
Create an instance template:
```
gcloud compute instance-templates create backend-template \
    --region=us-west1 \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-backend-subnet \
    --tags=load-balanced-backend \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2' \
    --project=SERVICE_PROJECT_ID
```
Create a managed instance group and select the instance template that you created in the preceding step:
```
gcloud compute instance-groups managed create lb-backend \
    --zone=us-west1-a \
    --size=2 \
    --template=backend-template \
    --project=SERVICE_PROJECT_ID
```
Add a named port to the instance group:
```
gcloud compute instance-groups set-named-ports lb-backend \
    --named-ports=http:80 \
    --zone=us-west1-a \
    --project=SERVICE_PROJECT_ID
```
Create a health check
Health checks are tests that confirm the availability of backends. Create a health check that uses the HTTP protocol and probes on port `80`. Later, you'll attach this health check to the backend service referenced by the load balancer.
Console
- In the Google Cloud console, go to the Compute Engine Health checks page.
- Click Create health check.
- For the name of the health check, enter `lb-health-check`.
- Set the protocol to HTTP.
- Click Create.
gcloud
Create an HTTP health check.
```
gcloud compute health-checks create http lb-health-check \
    --use-serving-port \
    --project=SERVICE_PROJECT_ID
```
Reserve the load balancer's IP address
Reserve a global static external IP address that can be assigned to the forwarding rule of the load balancer.
Console
- In the Google Cloud console, go to the VPC IP addresses page.
- Click Reserve external static IP address.
- For Name, enter `lb-ipv4-1`.
- Set Network Service Tier to Premium.
- Set IP version to IPv4.
- Set Type to Global.
- Click Reserve.
gcloud
Create a global static external IP address.
```
gcloud compute addresses create lb-ipv4-1 \
    --ip-version=IPV4 \
    --network-tier=PREMIUM \
    --global \
    --project=SERVICE_PROJECT_ID
```
Set up an SSL certificate resource
For a load balancer that uses HTTPS as the request-and-response protocol, create an SSL certificate resource as described in the following resources:
We recommend using a Google-managed certificate.
This example assumes that you have created an SSL certificate named `lb-ssl-cert`. The SSL certificate is attached to the target proxy that you create in a later step.
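As an illustrative sketch, creating a Google-managed certificate with that name could look like the following. The domain is a placeholder; you must control it and point its DNS record at the load balancer's IP address before the certificate can be provisioned:

```shell
# Create a Google-managed SSL certificate.
# www.example.com is a placeholder domain for this sketch.
gcloud compute ssl-certificates create lb-ssl-cert \
    --domains=www.example.com \
    --global \
    --project=SERVICE_PROJECT_ID
```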
Configure the load balancer
This section shows you how to create the following resources for a global external Application Load Balancer:
- Backend service with a managed instance group as the backend
- URL map
- SSL certificate (required only for HTTPS traffic)
- Target proxy
- Forwarding rule
In this example, you can use HTTP or HTTPS as the request-and-response protocol between the client and the load balancer. For HTTPS, you need an SSL certificate resource to configure the proxy. We recommend using a Google-managed certificate.
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
- For Public facing or internal, select Public facing (external) and click Next.
- For Global or single region deployment, select Best for global workloads and click Next.
- For Load balancer generation, select Global external Application Load Balancer and click Next.
- Click Configure.
Basic configuration
- For the load balancer name, enter `l7-xlb-shared-vpc`.
Configure the load balancer frontend
For HTTP traffic:
- Click Frontend configuration.
- For the name of the load balancer frontend, enter `http-fw-rule`.
- For Protocol, select HTTP.
- Set IP version to IPv4.
- For IP address, select `lb-ipv4-1`, which is the IP address that you reserved earlier.
- Set the Port to `80` to allow HTTP traffic.
- To complete the frontend configuration, click Done.
- Verify that there is a blue check mark next to Frontend configuration before continuing.
For HTTPS traffic:
- Click Frontend configuration.
- For the name of the load balancer frontend, enter `https-fw-rule`.
- For Protocol, select HTTPS.
- Set IP version to IPv4.
- For IP address, select `lb-ipv4-1`, which is the IP address that you reserved earlier.
- Set the Port to `443` to allow HTTPS traffic.
- In the Certificate list, select the SSL certificate that you created.
- To complete the frontend configuration, click Done.
- Verify that there is a blue check mark next to Frontend configuration before continuing.
Configure the backend
- Click Backend configuration.
- In the Backend services and backend buckets menu, click Create a backend service.
- For the name of the backend service, enter `lb-backend-service`.
- For Backend type, select Instance group.
- Set Protocol to HTTP.
- In the Named port field, enter `http`. This is the same port name that you entered while creating the managed instance group.
- To add backends to the backend service, do the following:
  - In the Backends section, set the Instance group to `lb-backend`, which is the managed instance group that you created in an earlier step.
  - For Port numbers, enter `80`.
  - To add the backend, click Done.
- To add a health check, in the Health check list, select `lb-health-check`, which is the health check that you created earlier.
- To create the backend service, click Create.
- Verify that there is a blue check mark next to Backend configuration before continuing.
Configure the routing rules
- Click Routing rules. Ensure that `lb-backend-service` is the default backend service for any unmatched host and any unmatched path.
For information about traffic management, see Set up traffic management.
Review and finalize the configuration
Click Review and finalize.
Review the frontend and backend settings of the load balancer to ensure that it is configured as desired.
Click Create, and then wait for the load balancer to be created.
gcloud
Create a backend service to distribute traffic among backends:
```
gcloud compute backend-services create lb-backend-service \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --port-name=http \
    --health-checks=lb-health-check \
    --global \
    --project=SERVICE_PROJECT_ID
```
Add your instance group as the backend to the backend service:
```
gcloud compute backend-services add-backend lb-backend-service \
    --instance-group=lb-backend \
    --instance-group-zone=us-west1-a \
    --global \
    --project=SERVICE_PROJECT_ID
```
Create a URL map to route incoming requests to the backend service:
```
gcloud compute url-maps create lb-map \
    --default-service=lb-backend-service \
    --global \
    --project=SERVICE_PROJECT_ID
```
Create a target proxy.
For HTTP traffic, create a target HTTP proxy to route requests to the URL map:
```
gcloud compute target-http-proxies create http-proxy \
    --url-map=lb-map \
    --global \
    --project=SERVICE_PROJECT_ID
```
For HTTPS traffic, create a target HTTPS proxy to route requests to the URL map. The proxy is the part of the load balancer that holds the SSL certificate for an HTTPS load balancer, so you also load your SSL certificate in this step:
```
gcloud compute target-https-proxies create https-proxy \
    --url-map=lb-map \
    --ssl-certificates=lb-ssl-cert \
    --global \
    --project=SERVICE_PROJECT_ID
```
Create a forwarding rule.
For HTTP traffic, create a global forwarding rule to route incoming requests to the target proxy:
```
gcloud compute forwarding-rules create http-fw-rule \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --address=lb-ipv4-1 \
    --global \
    --target-http-proxy=http-proxy \
    --ports=80 \
    --project=SERVICE_PROJECT_ID
```
For HTTPS traffic, create a global forwarding rule to route incoming requests to the target proxy:
```
gcloud compute forwarding-rules create https-fw-rule \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --address=lb-ipv4-1 \
    --global \
    --target-https-proxy=https-proxy \
    --ports=443 \
    --project=SERVICE_PROJECT_ID
```
Test the load balancer
When the load balancing service is running, you can send traffic to the forwarding rule and watch the traffic be dispersed to different instances.
Console
- In the Google Cloud console, go to the Load balancing page.
- Click the load balancer that you just created.
- Note the load balancer's IP address. This IP address is referred to as `LB_IP_ADDRESS` in the following steps.
- In the Backend section, confirm that the VMs are healthy. The Healthy column should be populated, indicating that the VMs are healthy. For example, if two instances are created, you should see a message indicating `2 of 2` with a green check mark next to it. If you see otherwise, first try reloading the page. It can take a few minutes for the Google Cloud console to indicate that the VMs are healthy. If the backends do not appear healthy after a few minutes, review the firewall configuration and the network tag assigned to your backend VMs.
- After the Google Cloud console shows that the backend instances are healthy, test your load balancer by pointing your web browser to `https://LB_IP_ADDRESS` (or `http://LB_IP_ADDRESS`). Replace `LB_IP_ADDRESS` with the load balancer's IP address.
- If you used a self-signed certificate for testing HTTPS, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate.
- Your browser should render a page with content showing the name of the instance that served the page (for example, `Page served from: lb-backend-example-xxxx`). If your browser doesn't render this page, review the configuration settings in this guide.
gcloud
Note the IP address that was reserved:

```
gcloud compute addresses describe lb-ipv4-1 \
    --format="get(address)" \
    --global \
    --project=SERVICE_PROJECT_ID
```

You can test your load balancer by pointing your web browser to `https://LB_IP_ADDRESS` (or `http://LB_IP_ADDRESS`). Replace `LB_IP_ADDRESS` with the load balancer's IP address.
If you used a self-signed certificate for testing HTTPS, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate.
Your browser should render a page with minimal information about the backend instance. If your browser doesn't render this page, review the configuration settings in this guide.
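To see traffic being spread across backends from the command line, a loop like the following can be used. Replace `LB_IP_ADDRESS` with the load balancer's IP address; for HTTPS with a self-signed certificate, use `https://` and add `--insecure`:

```shell
# Send several requests; the response body reports which backend VM
# served each one (for example, "Page served from: lb-backend-xxxx").
for i in $(seq 1 6); do
  curl --silent http://LB_IP_ADDRESS/
done
```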
Configure a load balancer with a cross-project backend service
This section shows you how to configure a load balancer with a cross-project backend service in a Shared VPC environment.
Before you begin
The previous example on this page shows you how to set up a Shared VPC deployment where all the load balancer components and its backends are created in one service project. Global external Application Load Balancers also let you configure Shared VPC deployments where a URL map in one host or service project references backend services (and backends) located across multiple service projects in the Shared VPC environment.
You can use the steps in this section as a reference to configure any of the supported combinations listed here:
- Forwarding rule, target proxy, and URL map in the host project, and backend service in a service project
- Forwarding rule, target proxy, and URL map in one service project, and backend service in another service project
While this section uses a Shared VPC environment to configure a cross-project deployment, a Shared VPC environment isn't required. For global external Application Load Balancers, your load balancer frontend can reference backend services or backend buckets from any project within the same organization.
Setup requirements
If you haven't already done so, you must complete all of the prerequisite steps to set up Shared VPC and configure the network, subnets, and firewall rules required for this example. For instructions, see the following sections at the beginning of this page:
In this setup, the forwarding rule, target proxy, and URL map are located in one service project, and the backend service and backends are located in another service project.
- In service project B, you'll configure the following backend resources:
  - Managed instance group backend
  - Health check
  - Global backend service
- In service project A, you'll configure the following frontend resources:
  - IP address
  - SSL certificate
  - URL map
  - Target proxy
  - Forwarding rule
The following figure depicts a global external Application Load Balancer in which the load balancer's backend service in one service project is referenced by a URL map in another service project.
Cross-project service referencing with a global external Application Load Balancer does not require backend instances to be a part of the same VPC network or a Shared VPC network.
In this example, the backend VMs in the service project are a part of the Shared VPC network that is created in the host project. However, you can also set up a standalone VPC network (that is, an unshared VPC network), along with the required firewall rules, in a service project. You can then create backend instances (for example, an instance group) that are a part of this standalone VPC network. After you create the backend instances, you can follow the remaining steps, as depicted in this example, to create a backend service in the service project and connect it to a URL map in another service project by using cross-project service referencing.
Configure the load balancer's backend components in service project B
In this section, you need to configure the following backend resources in service project B:
- Managed instance group
- Health check
- Global backend service
Create a managed instance group backend
The precursor to creating a managed instance group is the creation of an instance template, which is a resource that you can use to create virtual machine (VM) instances. Traffic from clients is load balanced to VMs in an instance group. The managed instance group provides VMs that run the backend servers of an external Application Load Balancer. In this example, the backends serve their own hostnames.
Console
Create an instance template
- In the Google Cloud console, go to the Compute Engine Instance templates page.
- Click Create instance template.
- For Name, enter `backend-template`.
- In the Boot disk section, ensure that the boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). Click Change to change the image if necessary.
- Expand the Advanced options section.
- Expand the Networking section, and in the Network tags field, enter `load-balanced-backend`.
- For Network interfaces, select Networks shared with me (from host project: `HOST_PROJECT_ID`).
- In the Shared subnetwork list, select the `lb-backend-subnet` subnet from the `lb-network` network.
- Expand the Management section, and in the Automation field, specify the following startup script:

  ```
  #! /bin/bash
  apt-get update
  apt-get install apache2 -y
  a2ensite default-ssl
  a2enmod ssl
  vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
  echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
  systemctl restart apache2
  ```

- Click Create.
Create a managed instance group
- In the Google Cloud console, go to the Compute Engine Instance groups page.
- Click Create Instance Group.
- From the options, select New managed instance group (stateless).
- For the name of the instance group, enter `lb-backend`.
- In the Instance template list, select the `backend-template` instance template that you created in the previous step.
- In the Location section, select Single zone, and enter the following values:
  - For Region, select `us-west1`.
  - For Zone, select `us-west1-a`.
- In the Autoscaling section, enter the following values:
  - For Autoscaling mode, select On: add and remove instances to the group.
  - For Minimum number of instances, select `2`.
  - For Maximum number of instances, select `3`.
- In the Port mapping section, click Add port, and enter the following values:
  - For Port name, enter `http`.
  - For Port number, enter `80`.
- Click Create.
gcloud
Create an instance template:
```
gcloud compute instance-templates create backend-template \
    --region=us-west1 \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-backend-subnet \
    --tags=load-balanced-backend \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2' \
    --project=SERVICE_PROJECT_B_ID
```
Create a managed instance group and select the instance template that you created in the preceding step:
```
gcloud compute instance-groups managed create lb-backend \
    --zone=us-west1-a \
    --size=2 \
    --template=backend-template \
    --project=SERVICE_PROJECT_B_ID
```
Add a named port to the instance group:
```
gcloud compute instance-groups set-named-ports lb-backend \
    --named-ports=http:80 \
    --zone=us-west1-a \
    --project=SERVICE_PROJECT_B_ID
```
Create a health check
Health checks are tests that confirm the availability of backends. Create a health check that uses the HTTP protocol and probes on port `80`. Later, you'll attach this health check to the backend service referenced by the load balancer.
Console
- In the Google Cloud console, go to the Compute Engine Health checks page.
- Click Create health check.
- For the name of the health check, enter `lb-health-check`.
- Set the protocol to HTTP.
- Click Create.
gcloud
Create an HTTP health check.
```
gcloud compute health-checks create http lb-health-check \
    --use-serving-port \
    --project=SERVICE_PROJECT_B_ID
```
Create a global backend service
Create a global backend service to distribute traffic among backends. As a part of this step, you need to assign the health check that you created to the backend service and add the instance group as the backend to the backend service.
Console
- In the Google Cloud console, go to the Load balancing page.
- Go to the Backends section.
- Click Create backend service.
- For Global backend service, click the Create button next to it.
- For the name of the backend service, enter `cross-ref-backend-service`.
- For Backend type, select Instance group.
- Set Protocol to HTTP.
- In the Named port field, enter `http`. This is the same port name that you entered while creating the managed instance group.
- To add backends to the backend service, do the following:
  - In the Backends section, set the Instance group to `lb-backend`, which is the managed instance group that you created in an earlier step.
  - For Port numbers, enter `80`.
  - To add the backend, click Done.
- To add a health check, in the Health check list, select `lb-health-check`, which is the health check that you created earlier.
- Optional: In the Add permissions section, enter the IAM principals from other projects (typically an email address) that have the Compute Load Balancer Admin role (`roles/compute.loadBalancerAdmin`) so that they can use this backend service for load balancers in their own projects. Without this permission, you can't use cross-project service referencing.

  If you don't have permission to set access control policies for backend services in this project, you can still create the backend service now, and an authorized user can perform this step later as described in the section Grant permissions to the Compute Load Balancer Admin to use the backend service. That section also describes how to grant access to all the backend services in this project, so that you don't have to grant access every time you create a new backend service.
- To create the backend service, click Create.
gcloud
Create a global backend service to distribute traffic among backends:
gcloud compute backend-services create cross-ref-backend-service \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --port-name=http \
    --health-checks=lb-health-check \
    --global \
    --project=SERVICE_PROJECT_B_ID
Add your instance group as the backend to the backend service:
gcloud compute backend-services add-backend cross-ref-backend-service \
    --instance-group=lb-backend \
    --instance-group-zone=us-west1-a \
    --global \
    --project=SERVICE_PROJECT_B_ID
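After adding the backend, you can optionally confirm that the instances pass the health check before you configure the frontend. A minimal sketch, using the resource names created above:

```shell
# Check the health status of each instance behind the backend service.
# Instances should report HEALTHY once the health check probes succeed;
# this can take a minute or two after the backend is added.
gcloud compute backend-services get-health cross-ref-backend-service \
    --global \
    --project=SERVICE_PROJECT_B_ID
```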
Configure the load balancer's frontend components in service project A
In this section, you need to configure the following frontend resources in service project A:
- IP address
- SSL certificate
- URL map
- Target proxy
- Forwarding rule
Reserve the load balancer's IP address
Reserve a global static external IP address that can be assigned to the forwarding rule of the load balancer.
Console
In the Google Cloud console, go to the VPC IP addresses page.
Click Reserve external static IP address.
For Name, enter cross-ref-ip-address.
Set Network Service Tier to Premium.
Set IP version to IPv4.
Set Type to Global.
Click Reserve.
gcloud
Create a global static external IP address:
gcloud compute addresses create cross-ref-ip-address \
    --ip-version=IPV4 \
    --network-tier=PREMIUM \
    --global \
    --project=SERVICE_PROJECT_A_ID
Set up an SSL certificate resource
For a load balancer that uses HTTPS as the request-and-response protocol, create an SSL certificate resource. We recommend using a Google-managed certificate.
This example assumes that you have created an SSL certificate named lb-ssl-cert. The SSL certificate is attached to the target proxy that you create in a later step.
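As an illustration, a Google-managed certificate named lb-ssl-cert could be created as follows. The domain shown is a placeholder for a domain that you control; a Google-managed certificate finishes provisioning only after the domain's DNS records point at the load balancer's IP address:

```shell
# Create a Google-managed SSL certificate (sketch; replace the domain
# with one that you control).
gcloud compute ssl-certificates create lb-ssl-cert \
    --domains=www.example.com \
    --global \
    --project=SERVICE_PROJECT_A_ID
```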
Create the frontend components
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
- For Public facing or internal, select Public facing (external) and click Next.
- For Global or single region deployment, select Best for global workloads and click Next.
- For Load balancer generation, select Global external Application Load Balancer and click Next.
- Click Configure.
Basic configuration
- Enter the Name of the load balancer: cross-ref-lb-shared-vpc.
- Keep the page open to continue.
Configure the frontend
For HTTP:
- Click Frontend configuration.
- Enter a Name for the forwarding rule: cross-ref-http-forwarding-rule.
- Set the Protocol to HTTP.
- Select the IP address that you created in Reserve the load balancer's IP address, called cross-ref-ip-address.
- Set the Port to 80.
- Click Done.
For HTTPS:
If you are using HTTPS between the client and the load balancer, you need one or more SSL certificate resources to configure the proxy. For information about how to create SSL certificate resources, see SSL certificates.
- Click Frontend configuration.
- Enter a Name for the forwarding rule: cross-ref-https-forwarding-rule.
- In the Protocol field, select HTTPS (includes HTTP/2).
- Select the IP address that you created in Reserve the load balancer's IP address, called cross-ref-ip-address.
- Ensure that the Port is set to 443 to allow HTTPS traffic.
- Click the Certificate list.
- If you already have a self-managed SSL certificate resource that you want to use as the primary SSL certificate, select it from the menu.
- Otherwise, select Create a new certificate.
- Enter a Name for the SSL certificate.
- In the appropriate fields, upload your PEM-formatted files:
- Public key certificate
- Certificate chain
- Private key
- Click Create.
- To add certificate resources in addition to the primary SSL certificate resource:
- Click Add certificate.
- Select a certificate from the Certificates list or click Create a new certificate and follow the previous instructions.
- Click Done.
Configure the backend
- Click Backend configuration.
- Click Cross-project backend services.
- For Project ID, enter the project ID for service project B.
- From the Select backend services list, select the backend services from service project B that you want to use. For this example, select cross-ref-backend-service.
- Click OK.
Configure the routing rules
- Click Routing rules. Ensure that the cross-ref-backend-service is the only backend service for any unmatched host and any unmatched path.
For information about traffic management, see Set up traffic management.
Review and finalize the configuration
- Click Create.
gcloud
Optional: Before creating a load balancer with cross-project backend services, find out whether the backend services that you want to reference can be used by a URL map:
gcloud compute backend-services list-usable \
    --global \
    --project=SERVICE_PROJECT_B_ID
Create a URL map to route incoming requests to the backend service:
gcloud compute url-maps create cross-ref-url-map \
    --default-service=projects/SERVICE_PROJECT_B_ID/global/backendServices/cross-ref-backend-service \
    --global \
    --project=SERVICE_PROJECT_A_ID
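To confirm that the cross-project reference took effect, you can optionally inspect the URL map's default service; the output should show the backend service path in service project B:

```shell
# Print the default service that the URL map routes to.
gcloud compute url-maps describe cross-ref-url-map \
    --global \
    --project=SERVICE_PROJECT_A_ID \
    --format="get(defaultService)"
```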
Create a target proxy.
For HTTP traffic, create a target HTTP proxy to route requests to the URL map:
gcloud compute target-http-proxies create cross-ref-http-proxy \
    --url-map=cross-ref-url-map \
    --global \
    --project=SERVICE_PROJECT_A_ID
For HTTPS traffic, create a target HTTPS proxy to route requests to the URL map. The proxy is the part of the load balancer that holds the SSL certificate for an HTTPS load balancer, so you also load your SSL certificate in this step:
gcloud compute target-https-proxies create cross-ref-https-proxy \
    --url-map=cross-ref-url-map \
    --ssl-certificates=lb-ssl-cert \
    --global \
    --project=SERVICE_PROJECT_A_ID
Create a forwarding rule.
For HTTP traffic, create a global forwarding rule to route incoming requests to the target proxy:
gcloud compute forwarding-rules create cross-ref-http-forwarding-rule \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --address=cross-ref-ip-address \
    --global \
    --target-http-proxy=cross-ref-http-proxy \
    --ports=80 \
    --project=SERVICE_PROJECT_A_ID
For HTTPS traffic, create a global forwarding rule to route incoming requests to the target proxy:
gcloud compute forwarding-rules create cross-ref-https-forwarding-rule \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --address=cross-ref-ip-address \
    --global \
    --target-https-proxy=cross-ref-https-proxy \
    --ports=443 \
    --project=SERVICE_PROJECT_A_ID
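Optionally, verify the frontend by describing one of the forwarding rules; the IPAddress field in the output should match the reserved address:

```shell
# Confirm the forwarding rule's external IP address and target proxy.
gcloud compute forwarding-rules describe cross-ref-http-forwarding-rule \
    --global \
    --project=SERVICE_PROJECT_A_ID
```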
Grant permissions to the Compute Load Balancer Admin to use the backend service
If you want load balancers to reference backend services in other service projects, the Load Balancer Admin must have the compute.backendServices.use permission. To grant this permission, you can use the predefined IAM role called Compute Load Balancer Services User (roles/compute.loadBalancerServiceUser). This role must be granted by the Service Project Admin and can be applied at the project level or at the individual backend service level.
This step is not required if you already granted the required permissions at the backend service level when you created the backend service. In that case, you can skip this section, or continue reading to learn how to grant access to all the backend services in this project so that you don't have to grant access every time you create a new backend service.
In this example, a Service Project Admin from service project B must run one of the following commands to grant the compute.backendServices.use permission to a Load Balancer Admin from service project A. This can be done either at the project level (for all backend services in the project) or per backend service.
Console
Project-level permissions
Use the following steps to grant permissions to all backend services in your project.
You require the compute.backendServices.setIamPolicy and the resourcemanager.projects.setIamPolicy permissions to complete this step.
In the Google Cloud console, go to the IAM page.
Select your project.
Click Grant access.
In the New principals field, enter the principal's email address or other identifier.
In the Select a role list, select the Compute Load Balancer Services User role.
Optional: Add a condition to the role.
Click Save.
Resource-level permissions for individual backend services
Use the following steps to grant permissions to individual backend services in your project.
You require the compute.backendServices.setIamPolicy permission to complete this step.
In the Google Cloud console, go to the Backends page.
From the backends list, select the backend service that you want to grant access to, and then click Permissions.
Click Add principal.
In the New principals field, enter the principal's email address or other identifier.
In the Select a role list, select the Compute Load Balancer Services User role.
Click Save.
gcloud
Project-level permissions
Use the following steps to grant permissions to all backend services in your project.
You require the compute.backendServices.setIamPolicy and the resourcemanager.projects.setIamPolicy permissions to complete this step.
gcloud projects add-iam-policy-binding SERVICE_PROJECT_B_ID \
    --member="user:LOAD_BALANCER_ADMIN" \
    --role="roles/compute.loadBalancerServiceUser"
Resource-level permissions for individual backend services
At the backend service level, Service Project Admins can use either of the following commands to grant the Compute Load Balancer Services User role (roles/compute.loadBalancerServiceUser).
You require the compute.backendServices.setIamPolicy permission to complete this step.
gcloud projects add-iam-policy-binding SERVICE_PROJECT_B_ID \
    --member="user:LOAD_BALANCER_ADMIN" \
    --role="roles/compute.loadBalancerServiceUser" \
    --condition='expression=resource.name=="projects/SERVICE_PROJECT_B_ID/global/backend-services/BACKEND_SERVICE_NAME",title=Shared VPC condition'
or
gcloud compute backend-services add-iam-policy-binding BACKEND_SERVICE_NAME \
    --member="user:LOAD_BALANCER_ADMIN" \
    --role="roles/compute.loadBalancerServiceUser" \
    --project=SERVICE_PROJECT_B_ID \
    --global
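To verify the grant at the backend service level, you can optionally inspect the backend service's IAM policy; the output should include a binding for roles/compute.loadBalancerServiceUser:

```shell
# Inspect the IAM policy on the backend service to confirm the binding.
gcloud compute backend-services get-iam-policy cross-ref-backend-service \
    --global \
    --project=SERVICE_PROJECT_B_ID
```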
To use these commands, replace LOAD_BALANCER_ADMIN with the user's principal, for example, test-user@gmail.com.
You can also configure IAM permissions so that they apply only to a subset of backend services by using conditions and specifying condition attributes.
Test the load balancer
It can take several minutes for the load balancer to be configured. When the load balancing service is running, you can send traffic to the forwarding rule in service project A and watch the traffic be dispersed to different VM instances in service project B.
Console
In the Google Cloud console, go to the Load balancing page in service project A.
Click the load balancer that you just created.
Note the load balancer's IP address. This IP address is referred to as LB_IP_ADDRESS in the following steps.
You can test your load balancer by pointing your web browser to https://LB_IP_ADDRESS (or http://LB_IP_ADDRESS). Replace LB_IP_ADDRESS with the load balancer's IP address.
If you used a self-signed certificate for testing HTTPS, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate.
Your browser should render a page with content showing the name of the instance that served the page (for example, Page served from: lb-backend-example-xxxx). If your browser doesn't render this page, review the configuration settings in this guide.
gcloud
Note the IP address that was reserved:
gcloud compute addresses describe cross-ref-ip-address \
    --format="get(address)" \
    --global \
    --project=SERVICE_PROJECT_A_ID
You can test your load balancer by pointing your web browser to https://LB_IP_ADDRESS (or http://LB_IP_ADDRESS). Replace LB_IP_ADDRESS with the load balancer's IP address.
If you used a self-signed certificate for testing HTTPS, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate.
Your browser should render a page with minimal information about the backend instance. If your browser doesn't render this page, review the configuration settings in this guide.
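The same check can be scripted from a shell. A sketch, assuming the HTTP frontend from this guide is configured; it sends several requests so that responses served by different backend instances appear:

```shell
# Look up the reserved address, then send repeated requests to the load
# balancer and print each response. It can take several minutes after
# configuration before requests succeed.
LB_IP_ADDRESS=$(gcloud compute addresses describe cross-ref-ip-address \
    --format="get(address)" --global --project=SERVICE_PROJECT_A_ID)
for i in $(seq 1 10); do
  curl -s "http://${LB_IP_ADDRESS}/"
done
```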
What's next
- Restrict how Shared VPC features such as cross-project service referencing are used in your project by using organization policy constraints for Cloud Load Balancing.
- Learn how to troubleshoot issues with a global external Application Load Balancer.
- Clean up your load balancing setup.