Deploy a containerized web server app


This tutorial describes how to upload a container application to a Google Distributed Cloud (GDC) air-gapped environment and run that application on a Kubernetes cluster. A containerized workload runs on a Kubernetes cluster within a project namespace. Clusters are logically separate from projects and from each other to provide different failure domains and isolation guarantees. However, you must attach your cluster to a project so that its containerized workloads can be managed within that project.

One of the largest obstacles to deploying a container app is getting the app's binary into your air-gapped data center. Work with your infrastructure team and administrators to transport the application to your workstation, or run this tutorial directly on your continuous integration and continuous delivery (CI/CD) server.
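
If you need to move the image yourself, one common approach that isn't specific to GDC is to export it as a tarball on a machine with external network access, transfer the tarball through whatever mechanism your organization approves, and load it again inside the air-gapped network. The following is a minimal sketch using the nginx image that this tutorial deploys later:

    # On a workstation with external network access: save the image to a tarball.
    docker pull gcr.io/cloud-marketplace/google/nginx:1.25
    docker save gcr.io/cloud-marketplace/google/nginx:1.25 -o nginx-1.25.tar

    # Inside the air-gapped environment, after transferring the tarball: load the image.
    docker load -i nginx-1.25.tar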

This tutorial uses a sample web server app available from the Google Cloud Artifact Registry.

Objectives

  • Create a managed Harbor registry.
  • Push a container image to the managed Harbor registry.
  • Create a Kubernetes cluster.
  • Deploy the sample container app to the cluster.

Costs

Because GDC is designed to run in an air-gapped data center, billing processes and information are confined to the GDC deployment and are not managed by other Google products.

To generate a cost estimate based on your projected usage, use the pricing calculator.

Use the Projected Cost dashboard to anticipate future SKU costs for your invoices.

To track storage and compute consumption, use the Billing Usage dashboards.

Before you begin

  1. Make sure you have a project to manage your containerized deployments. Create a project if you don't have one.

  2. Set your project namespace as an environment variable:

    export NAMESPACE=PROJECT_NAMESPACE
    
  3. Download and install the gdcloud CLI.

  4. Ask your Organization IAM Admin to grant you the following roles:

    • Namespace Admin role (namespace-admin) for your project namespace. This role is required to deploy container workloads in your project.

    • Harbor Instance Admin role (harbor-instance-admin) for your project namespace. This role is required for read and write access to all Harbor resources. It's also required to delete Harbor instances.

    • Harbor Instance Viewer role (harbor-instance-viewer) for your project namespace. This role is required to view and select a Harbor instance.

    • Harbor Project Creator role (harbor-project-creator) for your project namespace. This role is required to access and manage a Harbor project.

    • User Cluster Admin role (user-cluster-admin). This role is required to create a Kubernetes cluster, and isn't bound to a namespace.

  5. Sign in to the org admin cluster and generate its kubeconfig file with a user identity. Make sure you set the kubeconfig path as an environment variable:

    export ORG_ADMIN_CLUSTER_KUBECONFIG=ORG_ADMIN_CLUSTER_KUBECONFIG_PATH
    

Create managed Harbor registry

GDC air-gapped provides Harbor as a Service, a fully managed service that lets you store and manage container images in a Harbor registry.

To use Harbor as a Service, you must first create a Harbor registry instance and Harbor project.

Create Harbor registry instance

To create a Harbor container registry instance, complete the following steps:

Console

  1. In the navigation menu, select Harbor Container Registry from the CI/CD section.

  2. Click Create Instance.

  3. Enter the name of the instance and accept the Harbor managed Terms of Service.

  4. Click Create Instance.

  5. View the new Harbor instance in the Instances section.

  6. Click the Go to Harbor Instance external link and note the instance URL. For example, the instance URL format resembles harbor-1.org-1.zone1.google.gdc.test.

  7. Set the instance URL as a variable to use later in the tutorial:

    export INSTANCE_URL=INSTANCE_URL
    

    Replace INSTANCE_URL with the URL of the Harbor registry instance.

gdcloud

  1. Create the new Harbor container registry instance:

    gdcloud harbor instances create INSTANCE_NAME \
        --project=PROJECT
    

    Replace the following:

    • INSTANCE_NAME: the name of the Harbor instance.
    • PROJECT: the name of the GDC project.
  2. List the instance's URL:

    gdcloud harbor instances describe INSTANCE_NAME \
        --project=PROJECT
    

    For example, the instance URL format resembles harbor-1.org-1.zone1.google.gdc.test.

  3. Set the instance URL as a variable to use later in the tutorial:

    export INSTANCE_URL=INSTANCE_URL
    

Create Harbor project in registry

You must create a Harbor project within the Harbor registry instance to manage your container images:

Console

  1. Click Create A Harbor Project from the Harbor Container Registry page.

  2. Enter the name of the project.

  3. Click Create.

  4. Set the Harbor project name as a variable to use later in the tutorial:

    export HARBOR_PROJECT=HARBOR_PROJECT
    

gdcloud

  1. Create the new Harbor project:

    gdcloud harbor harbor-projects create HARBOR_PROJECT \
        --project=PROJECT \
        --instance=INSTANCE_NAME
    

    Replace the following:

    • HARBOR_PROJECT: the name of the Harbor project to create.
    • PROJECT: the name of the GDC project.
    • INSTANCE_NAME: the name of the Harbor instance.
  2. Set the Harbor project name as a variable to use later in the tutorial:

    export HARBOR_PROJECT=HARBOR_PROJECT
    

Configure Docker

To use Docker in your Harbor registry, complete the following steps:

  1. Configure Docker to trust Harbor as a Service. For more information, see Configure Docker to trust Harbor root CA.

  2. Configure Docker authentication to Harbor. For more information, see Configure Docker authentication to Harbor registry instances.
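
As a rough sketch of what those two steps typically involve, assuming you have already exported the Harbor root CA certificate to a local file named ca.crt (the exact export steps are covered in the linked pages):

    # Install the Harbor root CA so that Docker trusts the registry instance.
    sudo mkdir -p /etc/docker/certs.d/${INSTANCE_URL}
    sudo cp ca.crt /etc/docker/certs.d/${INSTANCE_URL}/ca.crt

    # Authenticate Docker to the Harbor registry instance.
    docker login ${INSTANCE_URL}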

Create Kubernetes image pull secret

Since you're using a private Harbor project, you must create a Kubernetes image pull secret.

  1. Add a Harbor project robot account. Follow the steps in the Harbor UI to create the robot account and copy the robot secret token: https://goharbor.io/docs/2.8.0/working-with-projects/project-configuration/create-robot-accounts/#add-a-robot-account.

  2. Note the new robot project account name, which has the following syntax:

    <PREFIX><PROJECT_NAME>+<ACCOUNT_NAME>
    

    For example, the robot project account name format resembles harbor@library+artifact-account.

    For more information on finding your robot project account name in Harbor, see Harbor's documentation: https://goharbor.io/docs/2.8.0/working-with-projects/project-configuration/create-robot-accounts/#view-project-robot-accounts.

  3. Sign in to Docker with your Harbor project robot account and secret token:

    docker login ${INSTANCE_URL}
    

    When prompted, insert the robot project account name for the Username and the secret token for the Password.

  4. Set an arbitrary name for the image pull secret:

    export SECRET=SECRET
    
  5. Create the secret that is required for the image pull:

    kubectl create secret docker-registry ${SECRET} \
        --from-file=.dockerconfigjson=DOCKER_CONFIG \
        -n ${NAMESPACE}
    

    Replace DOCKER_CONFIG with the path to the .docker/config.json file. The secret is created in your project namespace so that your container workloads can reference it.
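
    Alternatively, kubectl can build the same kind of secret directly from the registry credentials instead of reading your Docker configuration file. The following sketch assumes the robot project account name and secret token that you created earlier in this section; ROBOT_ACCOUNT_NAME and ROBOT_SECRET_TOKEN are placeholders for those values:

    kubectl create secret docker-registry ${SECRET} \
        --docker-server=${INSTANCE_URL} \
        --docker-username=ROBOT_ACCOUNT_NAME \
        --docker-password=ROBOT_SECRET_TOKEN \
        -n ${NAMESPACE}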

Push container image to managed Harbor registry

For this tutorial, you download the nginx web server image, push it to the managed Harbor registry, and use it to deploy a sample nginx web server app to a Kubernetes cluster. The nginx web server app is available from the public Google Cloud Artifact Registry.

  1. Pull the nginx image from the Google Cloud Artifact Registry to your local workstation using an external network:

    docker pull gcr.io/cloud-marketplace/google/nginx:1.25
    
  2. Set the name of the image. The format of a full image name is the following:

    ${INSTANCE_URL}/${HARBOR_PROJECT}/nginx
    
  3. Tag the local image with the repository name:

    docker tag gcr.io/cloud-marketplace/google/nginx:1.25 ${INSTANCE_URL}/${HARBOR_PROJECT}/nginx:1.25
    
  4. Push the nginx container image to your managed Harbor registry:

    docker push ${INSTANCE_URL}/${HARBOR_PROJECT}/nginx:1.25
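
    To confirm that the push succeeded, you can browse to the repository in the Harbor UI, or pull the image back from the registry:

    docker pull ${INSTANCE_URL}/${HARBOR_PROJECT}/nginx:1.25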
    

Create a Kubernetes cluster

Now that you have the nginx container image stored in the managed Harbor registry and can access it, create a Kubernetes cluster to run the nginx web server.

Console

  1. In the navigation menu, select Kubernetes Engine > Clusters.

  2. Click Create Cluster.

  3. In the Name field, specify a name for the cluster.

  4. Click Attach Project and select a project to attach to your cluster. Then click Save.

  5. Click Create.

  6. Wait for the cluster to be created. When the cluster is available to use, the status READY appears next to the cluster name.

API

  1. Create a Cluster custom resource and save it as a YAML file, such as cluster.yaml:

    apiVersion: cluster.gdc.goog/v1
    kind: Cluster
    metadata:
      name: CLUSTER_NAME
      namespace: platform
    

    Replace the CLUSTER_NAME value with the name of the cluster.

  2. Apply the custom resource to your GDC instance:

    kubectl create -f cluster.yaml --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG}
    
  3. Attach a project to your Kubernetes cluster using the GDC console. You cannot attach a project to the cluster using the API at this time.

For more information on creating a Kubernetes cluster, see Create a user cluster.
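
If you created the cluster with the API, you can also watch its progress from the command line instead of the console. The following command is a sketch that assumes the Cluster custom resource reports its provisioning state in its output; wait until the cluster is reported as ready before continuing:

    kubectl get clusters.cluster.gdc.goog CLUSTER_NAME \
        -n platform --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG}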

Deploy the sample container app

You are now ready to deploy the nginx container image to your Kubernetes cluster.

Kubernetes represents applications as Pod resources, which are scalable units holding one or more containers. The pod is the smallest deployable unit in Kubernetes. Usually, you deploy pods as a set of replicas that can be scaled and distributed together across your cluster. One way to deploy a set of replicas is through a Kubernetes Deployment.

In this section, you create a Kubernetes Deployment to run the nginx container app on your cluster. The Deployment runs two replica pods, and each pod contains a single container: the nginx container image. You also create a Service resource that provides a stable way for clients to send requests to the pods of your Deployment.

Deploy the nginx web server to your Kubernetes cluster:

  1. Sign in to the Kubernetes cluster and generate its kubeconfig file with a user identity. Make sure you set the kubeconfig path as an environment variable:

    export KUBECONFIG=CLUSTER_KUBECONFIG_PATH
    
  2. Create and deploy the Kubernetes Deployment and Service custom resources:

    kubectl --kubeconfig ${KUBECONFIG} -n ${NAMESPACE} \
    create -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: ${INSTANCE_URL}/${HARBOR_PROJECT}/nginx:1.25
            ports:
            - containerPort: 80
          imagePullSecrets:
          - name: ${SECRET}
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      selector:
        app: nginx
      ports:
        - port: 80
          protocol: TCP
      type: LoadBalancer
    EOF
    
  3. Verify the pods were created by the deployment:

    kubectl get pods -l app=nginx -n ${NAMESPACE}
    

    The output is similar to the following:

    NAME                                READY     STATUS    RESTARTS   AGE
    nginx-deployment-1882529037-6p4mt   1/1       Running   0          1h
    nginx-deployment-1882529037-p29za   1/1       Running   0          1h
    
  4. Create a network policy to allow all network traffic to the namespace:

    kubectl --kubeconfig ${KUBECONFIG} -n ${NAMESPACE} \
    create -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-all
    spec:
      ingress:
      - from:
        - ipBlock:
            cidr: 0.0.0.0/0
      podSelector: {}
      policyTypes:
      - Ingress
    EOF
    
  5. Export the IP address for the nginx service:

    export IP=$(kubectl --kubeconfig=${KUBECONFIG} get service nginx-service \
        -n ${NAMESPACE} -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
    
  6. Test the nginx server IP address using curl:

      curl http://$IP
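
    If the Deployment, Service, and network policy are all working, the response is the default nginx welcome page, similar to the following:

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    ...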
    

Clean up

To avoid incurring charges to your GDC account for the resources used in this tutorial, you must delete the resources you created.

Delete the container image

To delete the container image from your GDC air-gapped environment, either delete the Harbor instance that contains the image, or keep the Harbor instance and delete the individual container image.

To delete the container image from the managed Harbor registry, use the GDC console:

  1. In the navigation menu, select Harbor Container Registry from the CI/CD section.

  2. Click the Go to Harbor Instance external link.

  3. Delete the container image using the Harbor UI. For more information, see Delete Harbor registry instances.
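
If you created a Harbor registry instance solely for this tutorial, you can remove the entire instance instead of the individual image. The following command is a sketch that assumes a delete subcommand mirroring the create command used earlier in this tutorial:

    gdcloud harbor instances delete INSTANCE_NAME \
        --project=PROJECT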

Delete the container app

To delete the deployed container app, either delete the GDC project that contains the resources, or keep the GDC project and delete the individual resources.

To delete the individual resources, complete the following steps:

  1. Delete the Service object for your container app:

    kubectl delete service nginx-service -n ${NAMESPACE}
    
  2. Delete the Deployment object for your container app:

    kubectl delete deployment nginx-deployment -n ${NAMESPACE}
    
  3. If you created a test Kubernetes cluster solely for this tutorial, delete it:

    kubectl delete clusters.cluster.gdc.goog/USER_CLUSTER_NAME \
        -n platform --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG}
    

    This deletes the resources that make up the Kubernetes cluster, such as the compute instances, disks, and network resources.

What's next