Monitor infrastructure resources using an App Hub global application


App Hub enables you to manage and view infrastructure resources from Google Cloud projects through App Hub applications. To create these App Hub applications, you require an App Hub host project to which you can attach service projects that contain Google Cloud resources.

This tutorial shows you how to set up a global App Hub application for multiple projects and then view the application's resources. Using multiple service projects, you set up an internal Application Load Balancer in a Shared VPC environment. Then, in a global application on the App Hub host project, you register and monitor all the infrastructure resources from the service projects as App Hub services and workloads.

This tutorial is intended for people who set up and administer App Hub. You should have some experience with Cloud Load Balancing.

Objectives

  • Set up a Global App Hub application that contains resources spanning multiple projects.
  • Monitor the resources through system metrics for the application.

Costs

For an estimate of the cost of the Google Cloud resources that the load balanced managed VM solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator.

Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution.

The precalculated estimate is based on assumptions for certain factors, including the following:

  • The Google Cloud locations where the resources are deployed.
  • The amount of time that the resources are used.

For more information on App Hub costs, see the Pricing page.

Before you begin

Before you set up this tutorial, decide on the roles and permissions for your projects and then create four Google Cloud projects. One of these projects is the App Hub host project and the other three are App Hub service projects.

Required roles and permissions

If you are the project creator, you are granted the basic Owner role (roles/owner). By default, this Identity and Access Management (IAM) role includes the permissions necessary for full access to most Google Cloud resources.

If you are not the project creator, required permissions must be granted on the project to the appropriate principal. For example, a principal can be a Google Account (for end users) or a service account (for applications and workloads).

To get the permissions that you need to manage access to a project, folder, or organization, ask your administrator to grant you the appropriate IAM roles on the resource that you want to manage access for (project, folder, or organization).

For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.

Create App Hub host and service projects

Console

  1. In the Google Cloud console, go to the project selector page.

    Go to project selector

  2. Select or create a Google Cloud project to be your App Hub host project.

  3. Enable the App Hub, Compute Engine, Service Management, and Service Usage APIs.

    Enable the APIs

  4. In the same folder as the App Hub host project, create three new Google Cloud projects. These are the App Hub service projects for the App Hub host project.

  5. Make sure that billing is enabled for all your Google Cloud projects.

  6. In each of the service projects, enable the App Hub, Compute Engine, Service Management, and Service Usage APIs.

    Enable the APIs

gcloud

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Make sure that the most recent version of the Google Cloud CLI is installed. Run the following command in Cloud Shell:

      gcloud components update

  3. Create or select a project, HOST_PROJECT_ID, to be the App Hub host project.

    • Create a Google Cloud project:

      gcloud projects create HOST_PROJECT_ID
    • Select the Google Cloud project that you created:

      gcloud config set project HOST_PROJECT_ID
  4. Make sure that billing is enabled for all your Google Cloud projects.

  5. Enable the App Hub, Compute Engine, Service Management, and Service Usage APIs:

    gcloud services enable apphub.googleapis.com \
       compute.googleapis.com \
       servicemanagement.googleapis.com \
       serviceusage.googleapis.com
  6. Create three new Google Cloud projects to be the App Hub service projects for the App Hub host project.

    1. Create a service project:

      gcloud projects create SERVICE_PROJECT_1_ID

      Replace SERVICE_PROJECT_1_ID with the ID of Service Project 1.

    2. Select the service project that you created:

      gcloud config set project SERVICE_PROJECT_1_ID
    3. Enable the Compute Engine, Service Management, and Service Usage APIs:

      gcloud services enable compute.googleapis.com \
        servicemanagement.googleapis.com \
        serviceusage.googleapis.com
    4. Set the configuration variable used in this tutorial:

      export SERVICE_PROJECT_1_NUMBER=$(gcloud projects describe $(gcloud config get-value project) --format='value(projectNumber)')
    5. Repeat the previous steps to create SERVICE_PROJECT_2_ID and SERVICE_PROJECT_3_ID, enable the APIs, and set the configuration variables, as shown in the sketch after this list.
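
If you prefer to script the repetition, the following sketch creates Service Projects 2 and 3, enables the APIs, and sets the configuration variables. The project IDs are placeholders that you replace with your own values.

# Create and configure Service Projects 2 and 3.
for PROJECT_ID in SERVICE_PROJECT_2_ID SERVICE_PROJECT_3_ID; do
  gcloud projects create "${PROJECT_ID}"
  gcloud config set project "${PROJECT_ID}"
  gcloud services enable compute.googleapis.com \
      servicemanagement.googleapis.com \
      serviceusage.googleapis.com
done

# Record the project numbers for later use.
export SERVICE_PROJECT_2_NUMBER=$(gcloud projects describe SERVICE_PROJECT_2_ID --format='value(projectNumber)')
export SERVICE_PROJECT_3_NUMBER=$(gcloud projects describe SERVICE_PROJECT_3_ID --format='value(projectNumber)')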

Prepare the environment

If you already have an internal Application Load Balancer in a Shared VPC environment set up in multiple projects, proceed to the Grant IAM permissions section in this document.

If not, to set up an internal Application Load Balancer in a Shared VPC environment, follow these steps:

Figure 1. App Hub workloads and services with a load balancer and managed instance groups in a Shared VPC.
  1. In Service Project 1, configure a Shared VPC network and two subnets.
  2. In Service Project 2, create the load balancer's backend service with two managed instance groups as the backends.
  3. In Service Project 3, create another load balancer's backend service with two managed instance groups as the backends.
  4. In Service Project 1, create the load balancer's frontend components and URL map.

The following is the request processing flow of the topology that the load balanced managed VM solution deploys.

  1. From the Shared VPC network, the client VM makes an HTTP request to the internal Application Load Balancer in Service Project 1.

  2. The load balancer uses the information in the URL map and backend services to route the request to its managed instance group backends.

Configure the network and subnets in the Shared VPC host project

You need a Shared VPC network with two subnets: one for the load balancer's frontend and backends and one for the load balancer's proxies.

This example uses the following network, region, and subnets:

  • Network. The network is named lb-network.

  • Subnet for load balancer's frontend and backends. A subnet named lb-frontend-and-backend-subnet in the us-west1 region uses 10.1.2.0/24 for its primary IP range.

  • Subnet for proxies. A subnet named proxy-only-subnet in the us-west1 region uses 10.129.0.0/23 for its primary IP range.

In this tutorial, designate Service Project 1 as the Shared VPC host project. All the steps in this section must be performed in Service Project 1.

Configure the subnet for the load balancer's frontend and backends

This step does not need to be performed every time you want to create a new load balancer. You only need to ensure that the service projects have access to a subnet in the Shared VPC network (in addition to the proxy-only subnet).

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.
  3. In the Name field, enter lb-network.
  4. Set the Subnet creation mode to Custom.
  5. In the New subnet section, enter the following information:

    • Name: lb-frontend-and-backend-subnet

    • Region: us-west1

    • IP address range: 10.1.2.0/24

  6. Click Done.

  7. Click Create.

gcloud

  1. Set the project as Service Project 1:

      gcloud config set project SERVICE_PROJECT_1_ID

  2. Create a VPC network with the gcloud compute networks create command:

    gcloud compute networks create lb-network --subnet-mode=custom
    
  3. Create a subnet in the lb-network network in the us-west1 region:

    gcloud compute networks subnets create lb-frontend-and-backend-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=us-west1
    

Configure the proxy-only subnet

The proxy-only subnet is used by all regional Envoy-based load balancers in the us-west1 region, in the lb-network VPC network. There can only be one active proxy-only subnet per region, per network.

Don't perform this step if there is already a proxy-only subnet reserved in the us-west1 region in this network.
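
To check whether a proxy-only subnet already exists, you can list the subnets in the network and filter on their purpose; a minimal sketch, assuming gcloud is configured for Service Project 1:

gcloud compute networks subnets list \
    --network=lb-network \
    --regions=us-west1 \
    --filter="purpose=REGIONAL_MANAGED_PROXY" \
    --project=SERVICE_PROJECT_1_ID

If the command returns a subnet, skip the rest of this section.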

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click the name of the Shared VPC network: lb-network.
  3. Click the Subnets tab and click Add subnet.
  4. In the Add a subnet pane, in the Name field, enter proxy-only-subnet.
  5. In the Region list, select us-west1.
  6. Set Purpose to Regional Managed Proxy.
  7. In the IP address range field, enter 10.129.0.0/23.
  8. Click Add.

gcloud

Create the proxy-only subnet with the gcloud compute networks subnets create command:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-west1 \
    --network=lb-network \
    --range=10.129.0.0/23

Give service project admins access to the backend subnet

Service project administrators require access to the lb-frontend-and-backend-subnet subnet so that they can provision the load balancer's backends.

A Shared VPC Admin must grant access to the backend subnet to service project administrators (or developers who deploy resources and backends that use the subnet). For instructions, see Service Project Admins for some subnets.
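
As a sketch of that procedure, a Shared VPC Admin can grant the Compute Network User role on just the backend subnet with a subnet-level IAM binding. The SERVICE_PROJECT_ADMIN placeholder is an assumption for the administrator's email address.

gcloud compute networks subnets add-iam-policy-binding lb-frontend-and-backend-subnet \
    --project=SERVICE_PROJECT_1_ID \
    --region=us-west1 \
    --member='user:SERVICE_PROJECT_ADMIN' \
    --role='roles/compute.networkUser'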

Configure firewall rules in Service Project 1

This example uses the following firewall rules:

  • fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems in 130.211.0.0/22 and 35.191.0.0/16. This example uses the target tag load-balanced-backend to identify the instances to which it should apply.

  • fw-allow-proxies. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the load balancer's managed proxies. This example uses the target tag load-balanced-backend to identify the instances to which it should apply.

  • fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule. For example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the virtual machines (VMs) to which the firewall rule applies.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. Click Create firewall rule to create the rule to allow Google Cloud health checks:
    • Name: fw-allow-health-check
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox and enter 80 for the port number.
      • As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it can't use HTTPS on port 443 to contact them.

  3. Click Create.
  4. Click Create firewall rule again to create the rule to allow traffic from the load balancer's proxy-only subnet:
    • Name: fw-allow-proxies
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 10.129.0.0/23
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox and enter 80, 443, 8080 for the port numbers.
  5. Click Create.
  6. Click Create firewall rule again to create the rule to allow SSH connections to the backend VMs:
    • Name: fw-allow-ssh
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 0.0.0.0/0
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox and enter 22 for the port number.
  7. Click Create.

gcloud

  1. Create the fw-allow-health-check firewall rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers. However, you can configure a narrower set of ports to meet your needs.

    gcloud compute firewall-rules create fw-allow-health-check \
       --network=lb-network \
       --action=allow \
       --direction=ingress \
       --source-ranges=130.211.0.0/22,35.191.0.0/16 \
       --target-tags=load-balanced-backend \
       --rules=tcp
    
  2. Create the fw-allow-proxies firewall rule to allow traffic from the Envoy proxy-only subnet to reach your backends:

    gcloud compute firewall-rules create fw-allow-proxies \
       --network=lb-network \
       --action=allow \
       --direction=ingress \
       --source-ranges=10.129.0.0/23 \
       --target-tags=load-balanced-backend \
       --rules=tcp:80,tcp:443,tcp:8080
    
  3. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh:

    gcloud compute firewall-rules create fw-allow-ssh \
       --network=lb-network \
       --action=allow \
       --direction=ingress \
       --target-tags=allow-ssh \
       --rules=tcp:22
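
To confirm that all three rules were created in the Shared VPC network, you can optionally list them; a sketch:

gcloud compute firewall-rules list \
    --filter="network:lb-network" \
    --project=SERVICE_PROJECT_1_ID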
    

Reserve a static internal IPv4 address

Service Project Admins can reserve an internal IPv4 or IPv6 address in a subnet of a Shared VPC network. The IP address configuration object is created in the service project, while its value comes from the range of available IPv4 addresses in the chosen shared subnet.

To reserve a standalone internal IP address in the service project, complete the following steps.

Console

  1. Set up Shared VPC.
  2. In the Google Cloud console, go to the Shared VPC page.

    Go to Shared VPC

  3. Sign in as a Shared VPC Admin.

  4. Select the service project from the project picker.

  5. Go to VPC network > IP addresses.

  6. In the IP addresses page, click Reserve internal static IP address.

  7. In the Name field, enter l7-ilb-ip-address as the IP address name.

  8. In the IP version list, select IPv4.

  9. In the Network list, select lb-network.

  10. In the Subnetwork list, select lb-frontend-and-backend-subnet.

  11. Click Reserve.

gcloud

  1. If you have not already, authenticate to the Google Cloud CLI as a Shared VPC Service Project Admin.

    gcloud auth login SERVICE_PROJECT_ADMIN
    

     Replace SERVICE_PROJECT_ADMIN with the name of the Shared VPC Service Project Admin. This value must have the format username@yourdomain, for example, 222larabrown@gmail.com.

  2. Use the compute addresses create command to reserve an IP address:

    
    gcloud compute addresses create l7-ilb-ip-address \
        --project SERVICE_PROJECT_1_ID \
        --subnet=lb-frontend-and-backend-subnet \
        --region=us-west1 \
        --ip-version=IPV4
    

For more information about reserving IP addresses, see the gcloud CLI reference for the compute addresses create command.
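
To confirm the reservation, you can describe the address; a minimal sketch:

gcloud compute addresses describe l7-ilb-ip-address \
    --region=us-west1 \
    --project=SERVICE_PROJECT_1_ID \
    --format='value(address)'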

Set up Shared VPC in Service Project 1

To set up Shared VPC in Service Project 1, you designate Service Project 1 as the Shared VPC host project and Service Projects 2 and 3 as the Shared VPC service projects. Later in this tutorial, when you create the MIG backends in Service Projects 2 and 3, you'll be able to use the same VPC network and subnets previously created in Service Project 1.

  1. Enable a host project.
  2. Attach a service project. (For a gcloud sketch of steps 1 and 2, see the example after this list.)
  3. Grant the Compute Network User role (roles/compute.networkUser) to Service Projects 2 and 3:

    Console

    1. In the Google Cloud console, go to the IAM page.

      Go to IAM

    2. Click Grant access. The Grant access pane opens.

    3. In the New principals field, enter SERVICE_PROJECT_2_NUMBER@cloudservices.gserviceaccount.com.

      Note that you can obtain the service project numbers from the Dashboard of the project:

      Go to Dashboard

    4. Click Select a role and in the Filter field, enter Compute Network User.

    5. Select the Compute Network User role and click Save.

    6. Repeat the preceding steps to grant the Compute Network User role to Service Project 3 (SERVICE_PROJECT_3_NUMBER@cloudservices.gserviceaccount.com).

    gcloud

    1. In Service Project 1, grant the Compute Network User role to Service Project 2.

      gcloud projects add-iam-policy-binding SERVICE_PROJECT_1_ID \
       --member='serviceAccount:SERVICE_PROJECT_2_NUMBER@cloudservices.gserviceaccount.com' \
       --role='roles/compute.networkUser'
      

      Replace SERVICE_PROJECT_2_NUMBER with the project number of Service Project 2.

    2. In Service Project 1, grant the Compute Network User role to Service Project 3.

      gcloud projects add-iam-policy-binding SERVICE_PROJECT_1_ID \
       --member='serviceAccount:SERVICE_PROJECT_3_NUMBER@cloudservices.gserviceaccount.com' \
       --role='roles/compute.networkUser'
      

      Replace SERVICE_PROJECT_3_NUMBER with the project number of Service Project 3.
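
For steps 1 and 2, the linked guides describe the full procedure. As a minimal gcloud sketch, a Shared VPC Admin (a principal with the Compute Shared VPC Admin role granted at the organization or folder level) can run the following:

# Designate Service Project 1 as the Shared VPC host project.
gcloud compute shared-vpc enable SERVICE_PROJECT_1_ID

# Attach Service Projects 2 and 3 as Shared VPC service projects.
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_2_ID \
    --host-project=SERVICE_PROJECT_1_ID
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_3_ID \
    --host-project=SERVICE_PROJECT_1_ID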

Create a backend service and MIGs in Service Project 2

All the steps in this section must be performed in Service Project 2.

Console

  1. Create an instance template. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click Create instance template.
    2. Enter a Name for the instance template: tutorial-ig-template-sp2.
    3. In the Location section, select Regional (recommended) and us-west1 (Oregon) as the Region.
    4. In the Machine configuration section, select N2 as the series.
    5. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get. If you need to change the Boot disk, click Change.
      1. For Operating System, select Debian.
      2. For Version, select one of the available Debian images such as Debian GNU/Linux 12 (bookworm).
      3. Click Select.
    6. Click Advanced options, and then click Networking.
    7. Enter the following Network tags: allow-ssh,load-balanced-backend.
    8. In the Network interfaces section, select Networks shared with me (from host project: SERVICE_PROJECT_1_ID).
    9. Select the lb-frontend-and-backend-subnet subnet from the lb-network network.
    10. Click Management. For Management, insert the following script into the Startup script field.
      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | tee /var/www/html/index.html
      systemctl restart apache2
      
    11. Click Create.
  2. Create a managed instance group. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

    1. Click Create instance group.
    2. Choose New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
    3. Enter a Name for the instance group: tutorial-sp2-mig-a.
    4. For Instance template, select tutorial-ig-template-sp2.
    5. For Location, select Single zone.
    6. For Region, select us-west1.
    7. Specify the number of instances that you want to create in the group.

      For this example, specify the following options for Autoscaling:

       • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.
    8. Click Create.

  3. Create a regional backend service. As part of this step, you also create the health check and add backends to the backend service. In the Google Cloud console, go to the Backends page.

    Go to Backends

    1. Click Create backend service.
    2. In the Create backend service dialog, click Create beside Regional backend service.
    3. Enter a Name for the backend service: tutorial-backend-service-sp2.
    4. For Region, select us-west1.
    5. For Load balancer type, select Regional internal Application Load Balancer (INTERNAL_MANAGED).
    6. Set Backend type to Instance group.
    7. In the Backends section, set the following fields:
      1. Set Instance group to tutorial-sp2-mig-a.
      2. Enter the Port numbers: 80.
      3. Set Balancing mode to Utilization.
      4. Click Done.
    8. In the Health check section, click Create a health check and set the following fields:
      1. Name: tutorial-regional-health-check
      2. Protocol: HTTP
      3. Port: 80
      4. Click Save.
    9. Click Continue.
    10. Click Create.
  4. Repeat the earlier steps to create a second managed instance group, tutorial-sp2-mig-b, and add it to the backend service, tutorial-backend-service-sp2.

gcloud

  1. Select the service project that you created:
    gcloud config set project SERVICE_PROJECT_2_ID
  2. Create a VM instance template, tutorial-ig-template-sp2 with an HTTP server:

    gcloud compute instance-templates create tutorial-ig-template-sp2 \
        --region=us-west1 \
        --network=projects/SERVICE_PROJECT_1_ID/global/networks/lb-network \
        --subnet=projects/SERVICE_PROJECT_1_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
        --tags=allow-ssh,load-balanced-backend \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
        apt-get update
        apt-get install apache2 -y
        a2ensite default-ssl
        a2enmod ssl
        vm_hostname="$(curl -H "Metadata-Flavor:Google" \
        http://metadata.google.internal/computeMetadata/v1/instance/name)"
        echo "Page served from: $vm_hostname" | \
        tee /var/www/html/index.html
        systemctl restart apache2' \
        --project=SERVICE_PROJECT_2_ID
    

    Replace the following:

    • SERVICE_PROJECT_1_ID: the project ID for the Shared VPC host project.
    • SERVICE_PROJECT_2_ID: the project ID for the service project, where the load balancer's backends and the backend service are being created.
  3. Create a managed instance group, tutorial-sp2-mig-a in the region:

    gcloud compute instance-groups managed create tutorial-sp2-mig-a \
        --region=us-west1 \
        --size=2 \
        --template=tutorial-ig-template-sp2 \
        --project=SERVICE_PROJECT_2_ID
    
  4. Define the HTTP health check, tutorial-regional-health-check:

    gcloud compute health-checks create http tutorial-regional-health-check \
      --region=us-west1 \
      --use-serving-port \
      --project=SERVICE_PROJECT_2_ID
    
  5. Define the backend service, tutorial-backend-service-sp2:

    gcloud compute backend-services create tutorial-backend-service-sp2 \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --protocol=HTTP \
      --health-checks=tutorial-regional-health-check \
      --health-checks-region=us-west1 \
      --region=us-west1 \
      --project=SERVICE_PROJECT_2_ID
    
  6. Add backends to the backend service with the gcloud compute backend-services add-backend command:

    gcloud compute backend-services add-backend tutorial-backend-service-sp2 \
      --balancing-mode=UTILIZATION \
      --instance-group=tutorial-sp2-mig-a \
      --instance-group-region=us-west1 \
      --region=us-west1 \
      --project=SERVICE_PROJECT_2_ID
    
  7. Create another managed instance group, tutorial-sp2-mig-b in the region:

    gcloud compute instance-groups managed create tutorial-sp2-mig-b \
        --region=us-west1 \
        --size=2 \
        --template=tutorial-ig-template-sp2 \
        --project=SERVICE_PROJECT_2_ID
    
  8. Add backends to the backend service:

    gcloud compute backend-services add-backend tutorial-backend-service-sp2 \
      --balancing-mode=UTILIZATION \
      --instance-group=tutorial-sp2-mig-b \
      --instance-group-region=us-west1 \
      --region=us-west1 \
      --project=SERVICE_PROJECT_2_ID
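
Optionally, to check that both MIGs are attached to the backend service and to see their health status once the health checks start probing, you can run the following sketch:

gcloud compute backend-services get-health tutorial-backend-service-sp2 \
    --region=us-west1 \
    --project=SERVICE_PROJECT_2_ID

You can repeat the same check for tutorial-backend-service-sp3 in Service Project 3 after the next section.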
    

Create a backend service and MIGs in Service Project 3

All the steps in this section must be performed in Service Project 3.

Console

  1. Create an instance template. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click Create instance template.
    2. Enter a Name for the instance template: tutorial-ig-template-sp3.
    3. In the Location section, select Regional (recommended) and us-west1 (Oregon) as the Region.
    4. In the Machine configuration section, select N2 as the series.
    5. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get. If you need to change the Boot disk, click Change.
      1. For Operating System, select Debian.
      2. For Version, select one of the available Debian images such as Debian GNU/Linux 12 (bookworm).
      3. Click Select.
    6. Click Advanced options, and then click Networking.
    7. Enter the following Network tags: allow-ssh,load-balanced-backend.
    8. In the Network interfaces section, select Networks shared with me (from host project: SERVICE_PROJECT_1_ID).
    9. Select the lb-frontend-and-backend-subnet subnet from the lb-network network.
    10. Click Management. For Management, insert the following script into the Startup script field.
      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | tee /var/www/html/index.html
      systemctl restart apache2
      
    11. Click Create.
  2. Create a managed instance group. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

    1. Click Create instance group.
    2. Choose New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
    3. Enter a Name for the instance group: tutorial-sp3-mig-a.
    4. For Instance template, select tutorial-ig-template-sp3.
    5. For Location, select Single zone.
    6. For Region, select us-west1.
    7. Specify the number of instances that you want to create in the group.

      For this example, specify the following options for Autoscaling:

       • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.
    8. Click Create.

  3. Create a regional backend service. As part of this step, you also create the health check and add backends to the backend service. In the Google Cloud console, go to the Backends page.

    Go to Backends

    1. Click Create backend service.
    2. In the Create backend service dialog, click Create beside Regional backend service.
    3. Enter a Name for the backend service: tutorial-backend-service-sp3.
    4. For Region, select us-west1.
    5. For Load balancer type, select Regional internal Application Load Balancer (INTERNAL_MANAGED).
    6. Set Backend type to Instance group.
    7. In the Backends section, set the following fields:
      1. Set Instance group to tutorial-sp3-mig-a.
      2. Enter the Port numbers: 80.
      3. Set Balancing mode to Utilization.
      4. Click Done.
    8. In the Health check section, click Create a health check and set the following fields:
      1. Name: tutorial-regional-health-check
      2. Protocol: HTTP
      3. Port: 80
      4. Click Save.
    9. Click Continue.
    10. Click Create.
  4. Repeat the earlier steps to create a second managed instance group, tutorial-sp3-mig-b, and add it to the backend service, tutorial-backend-service-sp3.

gcloud

  1. Select the service project that you created:
    gcloud config set project SERVICE_PROJECT_3_ID
  2. Create a VM instance template, tutorial-ig-template-sp3 with an HTTP server:

    gcloud compute instance-templates create tutorial-ig-template-sp3 \
        --region=us-west1 \
        --network=projects/SERVICE_PROJECT_1_ID/global/networks/lb-network \
        --subnet=projects/SERVICE_PROJECT_1_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
        --tags=allow-ssh,load-balanced-backend \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
        apt-get update
        apt-get install apache2 -y
        a2ensite default-ssl
        a2enmod ssl
        vm_hostname="$(curl -H "Metadata-Flavor:Google" \
        http://metadata.google.internal/computeMetadata/v1/instance/name)"
        echo "Page served from: $vm_hostname" | \
        tee /var/www/html/index.html
        systemctl restart apache2' \
        --project=SERVICE_PROJECT_3_ID
    

    Replace the following:

    • SERVICE_PROJECT_1_ID: the project ID for the Shared VPC host project.
    • SERVICE_PROJECT_3_ID: the project ID for the service project, where the load balancer's backends and the backend service are being created.
  3. Create a managed instance group, tutorial-sp3-mig-a in the region:

    gcloud compute instance-groups managed create tutorial-sp3-mig-a \
        --region=us-west1 \
        --size=2 \
        --template=tutorial-ig-template-sp3 \
        --project=SERVICE_PROJECT_3_ID
    
  4. Define the HTTP health check, tutorial-regional-health-check:

    gcloud compute health-checks create http tutorial-regional-health-check \
      --region=us-west1 \
      --use-serving-port \
      --project=SERVICE_PROJECT_3_ID
    
  5. Define the backend service, tutorial-backend-service-sp3:

    gcloud compute backend-services create tutorial-backend-service-sp3 \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --protocol=HTTP \
      --health-checks=tutorial-regional-health-check \
      --health-checks-region=us-west1 \
      --region=us-west1 \
      --project=SERVICE_PROJECT_3_ID
    
  6. Add backends to the backend service with the gcloud compute backend-services add-backend command:

    gcloud compute backend-services add-backend tutorial-backend-service-sp3 \
      --balancing-mode=UTILIZATION \
      --instance-group=tutorial-sp3-mig-a \
      --instance-group-region=us-west1 \
      --region=us-west1 \
      --project=SERVICE_PROJECT_3_ID
    
  7. Create another managed instance group, tutorial-sp3-mig-b in the region:

    gcloud compute instance-groups managed create tutorial-sp3-mig-b \
        --region=us-west1 \
        --size=2 \
        --template=tutorial-ig-template-sp3 \
        --project=SERVICE_PROJECT_3_ID
    
  8. Add backends to the backend service:

    gcloud compute backend-services add-backend tutorial-backend-service-sp3 \
      --balancing-mode=UTILIZATION \
      --instance-group=tutorial-sp3-mig-b \
      --instance-group-region=us-west1 \
      --region=us-west1 \
      --project=SERVICE_PROJECT_3_ID
    

Create the URL Map and forwarding rule in Service Project 1

All the steps in this section must be performed in Service Project 1.

Console

Start your configuration

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
  4. For Public facing or internal, select Internal and click Next.
  5. For Cross-region or single region deployment, select Best for regional workloads and click Next.
  6. Click Configure.

Basic configuration

  1. Enter a Name for the load balancer, tutorial-url-maps.
  2. In the Region list, select us-west1.
  3. In the Network list, select lb-network (from Project: SERVICE_PROJECT_1_ID).

    If you see a Proxy-only subnet required in Shared VPC network warning, confirm that the host project administrator has created the proxy-only-subnet in the us-west1 region in the lb-network Shared VPC network. Load balancer creation succeeds even if you don't have permission to view the proxy-only subnet on this page.

  4. Keep the window open to continue.

Configure the backend

  1. Click Backend configuration.
  2. Click Cross-project backend services.
  3. In the Project ID field, enter the project ID for Service Project 2.
  4. In the Backend service name field, enter the name of the backend service from Service Project 2 that you want to use. For this example, it's tutorial-backend-service-sp2.
  5. Click Add backend service.
  6. In the Project ID field, enter the project ID for Service Project 3.
  7. In the Backend service name field, enter the name of the backend service from Service Project 3 that you want to use. For this example, it's tutorial-backend-service-sp3.
  8. Click Add backend service.

Configure the routing rules

  1. Click Routing rules.
  2. In the Host and path rules section, in the Host 2 field, enter *.
  3. In the Paths 2 field, enter /*.
  4. From the Backend 2 drop-down list, select tutorial-backend-service-sp2.
  5. Click Add host and path rule.
  6. In the Host 3 field, enter tutorial-host.
  7. In the Paths 3 field, enter /*.
  8. From the Backend 3 drop-down list, select tutorial-backend-service-sp3.

  9. Confirm that a blue checkmark appears to the left of Host and path rules, and then click Update.

    For information about traffic management, see Setting up traffic management.

Configure the frontend

For cross-project service referencing to work, the frontend must use the same network (lb-network) from the Shared VPC host project that was used to create the backend service.

  1. Click Frontend configuration.
  2. Enter a Name for the forwarding rule: l7-ilb-forwarding-rule.
  3. Set the Protocol to HTTP.
  4. Set the Subnetwork to lb-frontend-and-backend-subnet. Don't select the proxy-only subnet for the frontend even if it is an option in the list.
  5. Set the Port to 80.
  6. In the IP address field, retain the default option, Ephemeral (Automatic).
  7. Click Done.

Review and finalize the configuration

  • Click Create.

gcloud

  1. Select the service project that you created:

    gcloud config set project SERVICE_PROJECT_1_ID

  2. Create the URL map, tutorial-url-maps, and set the default service to the backend service that you created in Service Project 2:

    gcloud compute url-maps create tutorial-url-maps \
        --default-service=projects/SERVICE_PROJECT_2_ID/regions/us-west1/backendServices/tutorial-backend-service-sp2 \
        --region=us-west1 \
        --project=SERVICE_PROJECT_1_ID
    

    Replace the following:

    • SERVICE_PROJECT_2_ID: the project ID for Service Project 2, where the load balancer's backends and the backend service are created.
    • SERVICE_PROJECT_1_ID: the project ID for Service Project 1, where the load balancer's frontend is being created.
  3. Create the target proxy, tutorial-http-proxy:

    gcloud compute target-http-proxies create tutorial-http-proxy \
      --url-map=tutorial-url-maps \
      --url-map-region=us-west1 \
      --region=us-west1 \
      --project=SERVICE_PROJECT_1_ID
    
  4. Create the forwarding rule, l7-ilb-forwarding-rule to handle HTTP traffic. For cross-project service referencing to work, the forwarding rule must use the same network (lb-network) from the Shared VPC host project that was used to create the backend service.

    gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --network=projects/SERVICE_PROJECT_1_ID/global/networks/lb-network \
      --subnet=projects/SERVICE_PROJECT_1_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
      --address=l7-ilb-ip-address \
      --ports=80 \
      --region=us-west1 \
      --target-http-proxy=tutorial-http-proxy \
      --target-http-proxy-region=us-west1 \
      --project=SERVICE_PROJECT_1_ID
    
  5. To send traffic to the backend service, link the URL map with the backend service. For more information, see Use URL maps.

    1. Link the backend service tutorial-backend-service-sp2 in Service Project 2 with the URL map, tutorial-url-maps and path matcher name, tutorial-path-matcher-sp2:

      gcloud compute url-maps add-path-matcher tutorial-url-maps \
         --path-matcher-name=tutorial-path-matcher-sp2 \
         --default-service=projects/SERVICE_PROJECT_2_ID/regions/us-west1/backendServices/tutorial-backend-service-sp2 \
         --region=us-west1
      
    2. Link the backend service, tutorial-backend-service-sp3, in Service Project 3 with the URL map, tutorial-url-maps, and the path matcher name, tutorial-path-matcher-sp3. Add a new host rule, tutorial-host, so that the path matcher is tied to the new host rule:

      gcloud compute url-maps add-path-matcher tutorial-url-maps \
        --path-matcher-name=tutorial-path-matcher-sp3 \
        --default-service=projects/SERVICE_PROJECT_3_ID/regions/us-west1/backendServices/tutorial-backend-service-sp3 \
        --region=us-west1 \
        --new-hosts=tutorial-host
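
To review the routing configuration that you just created, including both path matchers and the tutorial-host host rule, you can describe the URL map; a sketch:

gcloud compute url-maps describe tutorial-url-maps \
    --region=us-west1 \
    --project=SERVICE_PROJECT_1_ID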
      

Test the load balancer

To test the load balancer, first create a sample client VM. Then establish an SSH session with the VM and send traffic from this VM to the load balancer.

Create a test VM instance

Clients can be located in either the Shared VPC host project or any connected service project. In this example, you test that the load balancer is working by deploying a client VM in Service Project 2. The client must use the same Shared VPC network from Service Project 1 (the Shared VPC host project) and be in the same region as the load balancer.

All the steps in this section must be performed in Service Project 2.

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. In the Name field, enter client-vm.

  4. Set the Zone to us-west1-b.

  5. Click Advanced options, and then click Networking.

  6. Enter the following Network tag: allow-ssh.

  7. In the Network interfaces section, select Networks shared with me (from host project: SERVICE_PROJECT_1_ID).

  8. Select the lb-frontend-and-backend-subnet subnet from the lb-network network.

  9. Click Create.

gcloud

Create a test VM instance.

gcloud compute instances create client-vm \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --subnet=projects/SERVICE_PROJECT_1_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --zone=us-west1-b \
    --tags=allow-ssh \
    --project=SERVICE_PROJECT_2_ID

Send traffic to the load balancer

Use SSH to connect to the instance that you just created and test that HTTP(S) services on the backends are reachable through the internal Application Load Balancer's forwarding rule IP address and that traffic is being load balanced across the backend instances.

  1. Retrieve the value of the load balancer's IP address:

    gcloud compute addresses list --filter="name=( 'l7-ilb-ip-address')"
    

    You see output similar to the following:

    NAME: l7-ilb-ip-address
    ADDRESS/RANGE: 10.1.2.2
    TYPE: INTERNAL
    PURPOSE: GCE_ENDPOINT
    NETWORK:
    REGION: us-west1
    SUBNET: lb-frontend-and-backend-subnet
    STATUS: IN_USE
    

     Copy the ADDRESS/RANGE value, for example, 10.1.2.2, from the output to use in the next steps.

  2. Connect to the client instance with SSH:

    gcloud compute ssh client-vm \
       --zone=us-west1-b \
       --project=SERVICE_PROJECT_2_ID
    
  3. Verify that the load balancer's IP address is serving its hostname:

    1. Verify that the IP address is serving its hostname in Service Project 2:

      curl 10.1.2.2
      

      You see output similar to the following:

      Page served from: tutorial-sp2-mig-a-10xk
      

    2. Verify that the IP address is serving its hostname in Service Project 3:

      curl -H "Host: tutorial-host" 10.1.2.2
      

      You see output similar to the following:

      Page served from: tutorial-sp3-mig-a-3d5h
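
To see requests being distributed across the backend VMs, you can optionally repeat the requests in a loop from the same SSH session; a minimal sketch that assumes the 10.1.2.2 address from the earlier step:

# Requests without a Host header are served by the MIGs in Service Project 2.
for i in {1..10}; do curl --silent 10.1.2.2; done

# Requests with the tutorial-host Host header are served by the MIGs in Service Project 3.
for i in {1..10}; do curl --silent -H "Host: tutorial-host" 10.1.2.2; done

The hostname in the responses should alternate between instances of the corresponding managed instance groups.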
      

Grant IAM permissions

Provide the appropriate IAM roles and permissions to the App Hub host and service projects.

Console

  1. In the Google Cloud console, go to the project selector page.

    Go to project selector

  2. Select the App Hub host project.

  3. In the Google Cloud console, go to the IAM page.

    Go to IAM

  4. Click Grant access. The Grant access pane opens.

  5. In the New principals field, enter the email address of the individual who will administer App Hub in the App Hub host project.

  6. Click Select a role and in the Filter field, enter App Hub.

  7. Select the App Hub Admin role and click Save.

  8. In each of the App Hub service projects, grant the App Hub Admin role to the same user.

gcloud

  1. To grant the roles to individuals who will use App Hub, repeat the following command by replacing the IAM roles, as required. For more information, see App Hub roles and permissions.

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member='user:HOST_PROJECT_ADMIN' \
        --role='roles/apphub.admin'

    Replace HOST_PROJECT_ADMIN with the user who has the App Hub Admin role in the App Hub host project. This value has the format username@yourdomain, for example, 222larabrown@gmail.com.

  2. Grant the App Hub Admin role in the service project to the individuals who administer App Hub. They must have the App Hub Admin role to add service projects to the host project. You need at least one person with this role for each service project.

    gcloud projects add-iam-policy-binding SERVICE_PROJECT_ID \
       --member='user:HOST_PROJECT_ADMIN' \
       --role='roles/apphub.admin'

     Replace SERVICE_PROJECT_ID with the ID of each service project. Repeat the command for each of the three service projects.
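
A sketch that grants the role across all three service projects in one loop, assuming the same placeholder values as above:

for SERVICE_PROJECT_ID in SERVICE_PROJECT_1_ID SERVICE_PROJECT_2_ID SERVICE_PROJECT_3_ID; do
  gcloud projects add-iam-policy-binding "${SERVICE_PROJECT_ID}" \
      --member='user:HOST_PROJECT_ADMIN' \
      --role='roles/apphub.admin'
done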

Attach the service projects

Service projects are Google Cloud projects that contain infrastructure resources that you can register to an App Hub application. For more information, see Service projects. Attach the service projects on which you deployed the resources to the App Hub host project.

Console

  1. In the Google Cloud console, go to the App Hub Settings page.

    Go to Settings

  2. On the Settings page, click Attach projects.

  3. On the pane that opens, search for projects from the displayed list and select the checkboxes for the projects you want to add as the service projects.

  4. Click Select. The Attached Service Project(s) table displays the selected service projects.

  5. Click Close.

gcloud

  1. Attach service projects 1, 2, and 3 to your App Hub host project.

    gcloud apphub service-projects add SERVICE_PROJECT_1_ID \
      --project=HOST_PROJECT_ID
    
    gcloud apphub service-projects add SERVICE_PROJECT_2_ID \
      --project=HOST_PROJECT_ID
    
    gcloud apphub service-projects add SERVICE_PROJECT_3_ID \
      --project=HOST_PROJECT_ID
    
  2. Confirm that you have attached the App Hub service projects to the App Hub host project:

    gcloud apphub service-projects list --project=HOST_PROJECT_ID
    

After you attach the service project to the App Hub host project, you can view all the Google Cloud resources from the attached service project as discovered App Hub services and workloads. For more information on how to view these discovered services and workloads, see View existing applications, services, and workloads.

Create an application

Create an application to be the container of your services and workloads. When you create an application, you can assign immutable properties such as a scope type or location from which you'd like to register resources and variable attributes such as criticality and environment. You can use the variable attributes to filter the applications. For more information, see Properties and attributes.

In this tutorial, you create a Global application to help you to manage global and regional resources in a single application. If you want to group your resources from a specific region, you can create a Regional application and register these resources. For more information on how to create a regional application, see Set up App Hub.

Console

  1. Make sure that you're in the App Hub host project.
  2. In the Google Cloud console, go to the App Hub Applications page.

    Go to Applications

  3. Click Create application.

  4. On the Create application page, in the Choose application region and name pane, select Global.

  5. In the Application name field, enter tutorial-application. This name is a unique identifier and is immutable after you create the application.

  6. Enter a Display name, Tutorial and click Continue. This is a user-friendly name that you can update. For more information, see Update an existing application.

  7. In the Add attributes pane, from the Criticality list, select High. Criticality indicates how critical an application, service, or workload is to your business operations.

  8. In the Environment field, to indicate the stage of the software lifecycle, select Production.

  9. Click Continue.

  10. In the Add owners pane, add the following details for Developer Owners, Operator Owners, and Business Owners. Note that you must enter the owner's email address if you add a display name.

    1. Enter an owner's display name.
    2. Enter the owner's email address. This value must have the format username@yourdomain, for example, 222larabrown@gmail.com.
  11. Repeat these steps for each developer, operator, and business owner.

  12. Click Create.

The new application is created and listed on the Applications page. Note that only the forwarding rule, URL map, backend services, and managed instance groups (MIGs) become available as discovered resources in the App Hub application. For more information, see concepts and data model.

gcloud

  1. Select the App Hub host project that you created:

    gcloud config set project HOST_PROJECT_ID
  2. Create an application called tutorial-application in the global location and give it a display name, Tutorial. The application name, tutorial-application, is a unique identifier and is immutable after you create the application. The display name, Tutorial, is a user-friendly name that you can update. For more information, see Update an existing application.

    gcloud apphub applications create tutorial-application \
        --display-name='Tutorial' \
        --scope-type=GLOBAL \
        --project=HOST_PROJECT_ID \
        --location=global
    
  3. List the applications in your App Hub host project:

    gcloud apphub applications list \
        --project=HOST_PROJECT_ID \
        --location=global
    

    You see output similar to the following:

    ID                    DISPLAY_NAME  CREATE_TIME
    tutorial-application  Tutorial      2023-10-31T18:33:48
    
  4. Update your application with the criticality-type, environment-type, and owner attributes:

    gcloud apphub applications update tutorial-application \
      --criticality-type='HIGH' \
      --environment-type='PRODUCTION' \
      --developer-owners=display-name=DISPLAY-NAME-DEVELOPER,email=EMAIL-DEVELOPER \
      --operator-owners=display-name=DISPLAY-NAME-OPERATOR,email=EMAIL-OPERATOR \
      --business-owners=display-name=DISPLAY-NAME-BUSINESS,email=EMAIL-BUSINESS \
      --project=HOST_PROJECT_ID \
      --location=global
    

    Replace the following:

    • DISPLAY-NAME-DEVELOPER, DISPLAY-NAME-OPERATOR, and DISPLAY-NAME-BUSINESS: display names of the developer, operator, and business owners, respectively.
    • EMAIL-DEVELOPER, EMAIL-OPERATOR, and EMAIL-BUSINESS: email addresses of the developer, operator, and business owners, respectively. These values must have the format username@yourdomain, for example, 222larabrown@gmail.com.

    Notes:

    • criticality-type: indicates how critical an application, service, or workload is to your business operations.
    • environment-type: indicates the stages of the software lifecycle.
  5. Get details for the application that you created:

    gcloud apphub applications describe tutorial-application \
      --project=HOST_PROJECT_ID \
      --location=global
    

    The command returns information in YAML format, similar to the following:

    attributes:
    businessOwners:
     - displayName: [DISPLAY-NAME-BUSINESS]
      email: [EMAIL-BUSINESS]
    criticality:
      type: HIGH
    developerOwners:
     - displayName: [DISPLAY-NAME-DEVELOPER]
      email: [EMAIL-DEVELOPER]
    environment:
      type: PRODUCTION
    operatorOwners:
     - displayName: [DISPLAY-NAME-OPERATOR]
      email: [EMAIL-OPERATOR]
    createTime: '2023-10-31T18:33:48.199394108Z'
    displayName: Tutorial
    name: projects/HOST_PROJECT_ID/locations/global/applications/tutorial-application
    scope:
       type: GLOBAL
    state: ACTIVE
    uid: 9d991a9d-5d8a-4c0d-b5fd-85e39fb58c73
    updateTime: '2023-10-31T18:33:48.343303819Z'
    

Register services and workloads

Registering services and workloads adds them to an application so that you can monitor those resources.

Console

  1. In the Google Cloud console, go to the App Hub Applications page.

    Go to Applications

  2. Click the name of your application, Tutorial. The Services and workloads tab is displayed with a list of registered services and workloads that are in your App Hub service projects.

  3. Register a service:

    1. On the Services and workloads tab, click Register service/workload.
    2. On the Register service or workload page, in the Select resource pane, click Browse to select the service or workload as a Resource.
    3. In the Select resource pane, choose the Name of the service, tutorial-backend-service-sp2, and click Select.
    4. In the Select resource pane, enter the Name of the resource, tutorial-service-backend-sp2.
    5. Enter a Display name, Backend service (SP2) and click Continue.
    6. In the Add attributes pane, in the Criticality list, to indicate the importance of the application, select High.
    7. In the Environment field, to indicate the stage of the software lifecycle, select Production.
    8. Click Continue.
    9. In the Add owners pane, add the following details as required for Developer Owners, Operator Owners, and Business Owners. Note that you must enter the owner's email address if you add a display name.
      1. Enter an owner's display name.
      2. Enter the owner's email address. This value must have the format username@yourdomain, for example, 222larabrown@gmail.com.
    10. Repeat these steps for each developer, operator, and business owner.
    11. Click Register.

    On the Services and workloads tab, in the Registered services and workloads section, you can see the new service added.

  4. Repeat the previous steps to register the other services as tutorial-service-backend-sp3, tutorial-service-forwarding-rule and tutorial-service-url-map, respectively.
  5. Register a workload by repeating the earlier steps to register a service with the following exceptions:
    1. In the Register service or workload pane, in the Choose service or workload section, select the Name of the workload, tutorial-sp2-mig-a, and click Continue.
    2. In the Select resource pane, enter the Name of the resource, tutorial-workload-sp2-mig-a.
    3. Enter a Display name, Instance group - A (SP2) and click Continue.
  6. Repeat the previous steps to register the remaining workloads as tutorial-workload-sp2-mig-b, tutorial-workload-sp3-mig-a, and tutorial-workload-sp3-mig-b, respectively.

gcloud

  1. Add an individual with App Hub Editor permissions:

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
      --member='user:APP_HUB_EDITOR' \
      --role='roles/apphub.editor'
    

    Replace APP_HUB_EDITOR with the user who has the App Hub Editor role in the App Hub host project. This value has the format username@yourdomain, for example, 222larabrown@gmail.com.

  2. List all discovered services in the App Hub host project. This command returns services that are available to be registered to an application.

    gcloud apphub discovered-services list \
        --project=HOST_PROJECT_ID \
        --location=us-west1
    

    You see output similar to the following:

    ID                             SERVICE_REFERENCE                                                                                                                      SERVICE_PROPERTIES
    BACKEND_SERVICE_SP2_ID      {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_2_NUMBER]/regions/us-west1/backendServices/tutorial-backend-service-sp2'}  {'gcpProject': 'projects/SERVICE_PROJECT_2_ID', 'location': 'us-west1'}
    BACKEND_SERVICE_SP3_ID      {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_3_NUMBER]/regions/us-west1/backendServices/tutorial-backend-service-sp3'}  {'gcpProject': 'projects/SERVICE_PROJECT_3_ID', 'location': 'us-west1'}
    FORWARDING_RULE_SERVICE_ID  {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_1_NUMBER]/regions/us-west1/forwardingRules/l7-ilb-forwarding-rule'}        {'gcpProject': 'projects/SERVICE_PROJECT_1_ID', 'location': 'us-west1'}
    URL_MAP_SERVICE_ID          {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_1_NUMBER]/regions/us-west1/urlMaps/tutorial-url-maps'}                     {'gcpProject': 'projects/SERVICE_PROJECT_1_ID', 'location': 'us-west1'}
    

    Copy the service IDs, for example, BACKEND_SERVICE_SP2_ID from the output to use in the next step.

  3. Register services from the previous step to your application. Copy the service IDs from the output field in the previous step.

    gcloud apphub applications services create tutorial-service-backend-sp2 \
        --discovered-service='projects/HOST_PROJECT_ID/locations/us-west1/discoveredServices/BACKEND_SERVICE_SP2_ID' \
        --display-name='Backend service (SP2)' \
        --criticality-type='HIGH' \
        --environment-type='PRODUCTION' \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global
    
    gcloud apphub applications services create tutorial-service-backend-sp3 \
        --discovered-service='projects/HOST_PROJECT_ID/locations/us-west1/discoveredServices/BACKEND_SERVICE_SP3_ID' \
        --display-name='Backend service (SP3)' \
        --criticality-type='HIGH' \
        --environment-type='PRODUCTION' \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global
    
    gcloud apphub applications services create tutorial-service-forwarding-rule \
        --discovered-service='projects/HOST_PROJECT_ID/locations/us-west1/discoveredServices/FORWARDING_RULE_SERVICE_ID' \
        --display-name='Forwarding rule' \
        --criticality-type='HIGH' \
        --environment-type='PRODUCTION' \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global
    
    gcloud apphub applications services create tutorial-service-url-map \
        --discovered-service='projects/HOST_PROJECT_ID/locations/us-west1/discoveredServices/URL_MAP_SERVICE_ID' \
        --display-name='URL map' \
        --criticality-type='HIGH' \
        --environment-type='PRODUCTION' \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global
    

    Replace the following:

    • BACKEND_SERVICE_SP2_ID: the service ID of the backend service from Service Project 2 that you want to register.
    • BACKEND_SERVICE_SP3_ID: the service ID of the backend service from Service Project 3 that you want to register.
    • FORWARDING_RULE_SERVICE_ID: the service ID of the forwarding rule from Service Project 1 that you want to register.
    • URL_MAP_SERVICE_ID: the service ID of the URL map from Service Project 1 that you want to register.
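
    To spot-check the attributes of a registered service, such as its criticality and environment, you can describe it. A minimal sketch, assuming the describe command is available in your gcloud CLI version:

    # Show the attributes of one registered service.
    gcloud apphub applications services describe tutorial-service-backend-sp2 \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global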
  4. List all registered services in the application:

    gcloud apphub applications services list \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global
    

    You see output similar to the following:

    ID                               DISPLAY_NAME            SERVICE_REFERENCE                                                                                                                        CREATE_TIME
    tutorial-service-backend-sp2     Backend service (SP2)   {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_2_NUMBER]/regions/us-west1/backendServices/tutorial-backend-service-sp2'}    2024-02-13T00:31:45
    tutorial-service-backend-sp3     Backend service (SP3)   {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_3_NUMBER]/regions/us-west1/backendServices/tutorial-backend-service-sp3'}    2024-02-13T00:31:45
    tutorial-service-forwarding-rule Forwarding rule         {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_1_NUMBER]/regions/us-west1/forwardingRules/l7-ilb-forwarding-rule'}          2024-02-13T00:31:45
    tutorial-service-url-map         URL map                 {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_1_NUMBER]/regions/us-west1/urlMaps/tutorial-url-maps'}                       2024-02-13T00:31:45
    
    Registered but detached services are indicated by an empty value in the SERVICE_REFERENCE field. For more information about registration statuses, see the properties and attributes of App Hub.

  5. List all discovered workloads in the App Hub host project. This command returns workloads that are available to be registered to an application.

    gcloud apphub discovered-workloads list \
        --project=HOST_PROJECT_ID \
        --location=us-west1
    

    You see output similar to the following:

    ID                            WORKLOAD_REFERENCE                                                                                                          WORKLOAD_PROPERTIES
    INSTANCE_GROUP_SP3_A_ID    {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_3_NUMBER]/zones/us-west1-a/instanceGroups/tutorial-sp3-mig-a'}  {'gcpProject': 'projects/SERVICE_PROJECT_3_ID', 'location': 'us-west1'}
    INSTANCE_GROUP_SP3_B_ID    {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_3_NUMBER]/zones/us-west1-a/instanceGroups/tutorial-sp3-mig-b'}  {'gcpProject': 'projects/SERVICE_PROJECT_3_ID', 'location': 'us-west1'}
    INSTANCE_GROUP_SP2_A_ID    {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_2_NUMBER]/zones/us-west1-a/instanceGroups/tutorial-sp2-mig-a'}  {'gcpProject': 'projects/SERVICE_PROJECT_2_ID', 'location': 'us-west1'}
    INSTANCE_GROUP_SP2_B_ID    {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_2_NUMBER]/zones/us-west1-a/instanceGroups/tutorial-sp2-mig-b'}  {'gcpProject': 'projects/SERVICE_PROJECT_2_ID', 'location': 'us-west1'}
    
    Copy the workload IDs from the output to use in the next step.
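
    As with the service IDs, you can capture a workload ID in a shell variable instead of copying it manually; the workloadReference.uri filter key and the name.basename() transform are assumptions about the list output:

    # Hypothetical helper: store the discovered-workload ID for one MIG.
    INSTANCE_GROUP_SP2_A_ID=$(gcloud apphub discovered-workloads list \
        --project=HOST_PROJECT_ID \
        --location=us-west1 \
        --filter="workloadReference.uri:tutorial-sp2-mig-a" \
        --format="value(name.basename())")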

  6. Register the workloads from the previous step to your application, using the workload IDs that you copied.

    gcloud apphub applications workloads create tutorial-workload-sp3-mig-a \
        --discovered-workload='projects/HOST_PROJECT_ID/locations/us-west1/discoveredWorkloads/INSTANCE_GROUP_SP3_A_ID' \
        --display-name='Workload instance group (SP3-A)' \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global
    
    gcloud apphub applications workloads create tutorial-workload-sp3-mig-b \
        --discovered-workload='projects/HOST_PROJECT_ID/locations/us-west1/discoveredWorkloads/INSTANCE_GROUP_SP3_B_ID' \
        --display-name='Workload instance group (SP3-B)' \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global
    
    gcloud apphub applications workloads create tutorial-workload-sp2-mig-a \
        --discovered-workload='projects/HOST_PROJECT_ID/locations/us-west1/discoveredWorkloads/INSTANCE_GROUP_SP2_A_ID' \
        --display-name='Workload instance group (SP2-A)' \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global
    
    gcloud apphub applications workloads create tutorial-workload-sp2-mig-b \
        --discovered-workload='projects/HOST_PROJECT_ID/locations/us-west1/discoveredWorkloads/INSTANCE_GROUP_SP2_B_ID' \
        --display-name='Workload instance group (SP2-B)' \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global
    

    Replace the following:

    • INSTANCE_GROUP_SP3_A_ID, INSTANCE_GROUP_SP3_B_ID: the workload IDs of the managed instance groups from Service Project 3 that you want to register.
    • INSTANCE_GROUP_SP2_A_ID, INSTANCE_GROUP_SP2_B_ID: the workload IDs of the managed instance groups from Service Project 2 that you want to register.
  7. List all registered workloads in the application:

    gcloud apphub applications workloads list \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global
    

    You see output similar to the following:

    ID                              DISPLAY_NAME                      WORKLOAD_REFERENCE                                                                                                           CREATE_TIME
    tutorial-workload-sp3-mig-a     Workload instance group (SP3-A)   {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_3_NUMBER]/zones/us-west1-a/instanceGroups/tutorial-sp3-mig-a'}   2024-02-13T00:31:45
    tutorial-workload-sp3-mig-b     Workload instance group (SP3-B)   {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_3_NUMBER]/zones/us-west1-a/instanceGroups/tutorial-sp3-mig-b'}   2024-02-13T00:31:45
    tutorial-workload-sp2-mig-a     Workload instance group (SP2-A)   {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_2_NUMBER]/zones/us-west1-a/instanceGroups/tutorial-sp2-mig-a'}   2024-02-13T00:31:45
    tutorial-workload-sp2-mig-b     Workload instance group (SP2-B)   {'uri': '//compute.googleapis.com/projects/[SERVICE_PROJECT_2_NUMBER]/zones/us-west1-a/instanceGroups/tutorial-sp2-mig-b'}   2024-02-13T00:31:45
    
    Registered but detached workloads are indicated by an empty value in the WORKLOAD_REFERENCE field. For more information about registration statuses, see the properties and attributes of App Hub.

View all services and workloads

You can view details of the services and workloads from the service projects that are attached to the App Hub host project.

  1. In the Google Cloud console, go to the App Hub Services and Workloads page.

    Go to Services and Workloads

    All the services and workloads from the attached App Hub service projects are displayed.

  2. In the Region list, select global. The Workload instance group workloads are displayed with details such as App Hub Type, Criticality, and Registered to.

  3. To filter the services or workloads based on their registration status:

    1. In the Filter field, select filters such as Registration status.
    2. Click Registered. A list of services and workloads registered to the application appears.
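
From the gcloud CLI, you can approximate this kind of filtering with the generic --filter flag on the list commands. The following is a minimal sketch that lists only registered services and workloads that have become detached from their underlying resources; the state field and its DETACHED value are assumptions about the App Hub API, so verify them against your own list output:

    # List registered services whose underlying resource no longer exists.
    gcloud apphub applications services list \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global \
        --filter="state=DETACHED"

    # Do the same for workloads.
    gcloud apphub applications workloads list \
        --application=tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global \
        --filter="state=DETACHED"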

View application metrics

You can view the system metrics for the applications created in your App Hub host project. These metrics correspond to the golden signals (traffic, errors, latency, and saturation) that help you monitor the performance and health of the application.

  1. In the Google Cloud console, go to the App Hub Applications page.

    Go to Applications

  2. Click the name of the application, Tutorial.

    The Services and workloads tab is displayed with the metadata of services and workloads registered to your application.

  3. To view the system metrics of registered services and workloads, click Metrics.
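
You can also confirm the application's metadata, such as its display name, attributes, and scope, from the command line by describing it. A minimal sketch:

    # Describe the global application in the App Hub host project.
    gcloud apphub applications describe tutorial-application \
        --project=HOST_PROJECT_ID \
        --location=global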

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the App Hub resources

Console

  1. In the Google Cloud console, go to the App Hub Applications page.

    Go to Applications

  2. Click the name of the application, Tutorial.

  3. On the Services and workloads tab, from the Registered services and workloads section, click the name of a service.

  4. On the Services and Workloads page, click Unregister.

    An alert notifies you that the service is unregistered.

  5. On the Services and workloads tab, from the Registered services and workloads section, click the name of a workload.

  6. On the Details tab, click Unregister.

    An alert notifies you that the workload is unregistered.

  7. Go to the App Hub Applications page.

    Go to Applications

  8. Click the name of an application.

  9. On the tutorial-application page, click Delete.

  10. In the Google Cloud console, go to the App Hub Settings page.

    Go to Settings

  11. On the Settings page, select the checkbox for the service project that you want to remove from the App Hub host project.

  12. Click Detach projects.

gcloud

  1. List the registered services in the application:

    gcloud apphub applications services list \
      --application=tutorial-application --project=HOST_PROJECT_ID \
      --location=global
    
  2. Unregister the services from the application:

    gcloud apphub applications services delete SERVICE_NAME \
      --application=tutorial-application --project=HOST_PROJECT_ID \
      --location=global
    

    Replace SERVICE_NAME with the ID of a registered service, for example, tutorial-service-backend-sp2, and repeat the command for each service. Unregistered services become discovered services that can be registered to an application again.

  3. List the registered workloads in the application:

    gcloud apphub applications workloads list \
      --application=tutorial-application --project=HOST_PROJECT_ID \
      --location=global
    
  4. Unregister the workloads from the application:

    gcloud apphub applications workloads delete WORKLOAD_NAME \
      --application=tutorial-application --project=HOST_PROJECT_ID \
      --location=global
    

    Replace WORKLOAD_NAME with the ID of a registered workload, for example, tutorial-workload-sp2-mig-a, and repeat the command for each workload. Unregistered workloads become discovered workloads that can be registered to an application again.
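
    If you have many registered services and workloads, you can unregister them all in one pass instead of deleting them one at a time. The following is a minimal bash sketch; the name.basename() format transform is an assumption about the list output, so verify it before running the loops:

    # Unregister every service that is still registered to the application.
    for id in $(gcloud apphub applications services list \
        --application=tutorial-application --project=HOST_PROJECT_ID \
        --location=global --format="value(name.basename())"); do
      gcloud apphub applications services delete "$id" \
          --application=tutorial-application --project=HOST_PROJECT_ID \
          --location=global --quiet
    done

    # Unregister every workload that is still registered to the application.
    for id in $(gcloud apphub applications workloads list \
        --application=tutorial-application --project=HOST_PROJECT_ID \
        --location=global --format="value(name.basename())"); do
      gcloud apphub applications workloads delete "$id" \
          --application=tutorial-application --project=HOST_PROJECT_ID \
          --location=global --quiet
    done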

  5. Delete the application:

    gcloud apphub applications delete tutorial-application \
      --project=HOST_PROJECT_ID \
      --location=global
    
  6. Remove the service projects from the App Hub host project:

    gcloud apphub service-projects remove SERVICE_PROJECT_ID \
      --project=HOST_PROJECT_ID
    

    Replace SERVICE_PROJECT_ID with the project ID of a service project, and run the command once each for service projects 1, 2, and 3.
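
    To avoid repeating the command manually, you can loop over the service project IDs, as in this minimal sketch:

    # Detach each service project from the App Hub host project.
    for sp in SERVICE_PROJECT_1_ID SERVICE_PROJECT_2_ID SERVICE_PROJECT_3_ID; do
      gcloud apphub service-projects remove "$sp" \
          --project=HOST_PROJECT_ID
    done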

Delete the deployment

When you no longer need the resources that you created for this solution, delete them to avoid continued billing.

For more information, see clean up the load balancer setup.

Delete the project

Console

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

gcloud

Delete a Google Cloud project:

gcloud projects delete PROJECT_ID

Replace PROJECT_ID with the ID of the App Hub host project or a service project, and repeat the command for each project that you want to delete.

What's next