This tutorial shows you how to access a private cluster in Google Kubernetes Engine (GKE) over the internet by using a bastion host.
You can create GKE private clusters with no client access to the public endpoint. This access option improves the cluster security by preventing all internet access to the control plane. However, disabling access to the public endpoint prevents you from interacting with your cluster remotely, unless you add the IP address of your remote client as an authorized network.
This tutorial shows you how to set up a bastion host, which is a special-purpose host machine designed to withstand attack. The bastion host uses Tinyproxy to forward client traffic to the cluster. You use Identity-Aware Proxy (IAP) to securely access the bastion host from your remote client.
Objectives
- Create a private cluster with no access to the public endpoint.
- Deploy a Compute Engine virtual machine (VM) to act as a bastion host in the cluster subnet.
- Use IAP to connect a remote client to the cluster over the internet.
Costs
In this document, you use the following billable components of Google Cloud: GKE and Compute Engine.
To generate a cost estimate based on your projected usage, use the pricing calculator.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the GKE, Compute Engine, and Identity-Aware Proxy APIs.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
  gcloud init
- Update and install gcloud components:
  gcloud components update
  gcloud components install alpha beta
Create a private cluster
Create a new private cluster with no client access to the public endpoint. Place the cluster in its own subnet. You can do this using the Google Cloud CLI or the Google Cloud console.
gcloud
Run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
--region=COMPUTE_REGION \
--create-subnetwork=name=SUBNET_NAME \
--enable-master-authorized-networks \
--enable-private-nodes \
--enable-private-endpoint
Replace the following:
- CLUSTER_NAME: the name of the new cluster.
- COMPUTE_REGION: the Compute Engine region for the cluster.
- SUBNET_NAME: the name of the new subnetwork in which you want to place the cluster.
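For example, with hypothetical values filled in (a cluster named private-cluster in us-central1 and a subnet named my-subnet), the command might look like this:

```shell
# Create an Autopilot private cluster with no public control-plane endpoint.
# The cluster, region, and subnet names here are example values.
gcloud container clusters create-auto private-cluster \
    --region=us-central1 \
    --create-subnetwork=name=my-subnet \
    --enable-master-authorized-networks \
    --enable-private-nodes \
    --enable-private-endpoint
```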
Console
Create a Virtual Private Cloud subnetwork
Go to the VPC networks page in the Google Cloud console.
Click the default network.
In the Subnets section, click Add subnet.
On the Add a subnet dialog, specify the following:
- Name: A name for the new subnet.
- Region: A region for the subnet. This must be the same as the cluster region.
- IP address range: Specify 10.2.204.0/22 or another range that doesn't conflict with other ranges in the VPC network.
- For Private Google Access, select the On option.
Click Add.
Create a private cluster
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Click Configure for GKE Autopilot.
Specify a Name and Region for the new cluster. The region must be the same as the subnet.
In the Networking section, select the Private cluster option.
Clear the Access control plane using its external IP address checkbox.
From the Node subnet drop-down list, select the subnet you created.
Optionally, configure other settings for the cluster.
Click Create.
You can also use a GKE Standard cluster with the --master-ipv4-cidr flag specified.
Create a bastion host VM
Create a Compute Engine VM within the private cluster internal network to act as a bastion host that can manage the cluster.
gcloud
Create a Compute Engine VM:
gcloud compute instances create INSTANCE_NAME \
--zone=COMPUTE_ZONE \
--machine-type=e2-micro \
--network-interface=no-address,network-tier=PREMIUM,subnet=SUBNET_NAME
Replace the following:
- INSTANCE_NAME: the name of the VM.
- COMPUTE_ZONE: the Compute Engine zone for the VM. Place this in the same region as the cluster.
- SUBNET_NAME: the subnetwork in which you want to place the VM.
Console
Go to the VM instances page in the Google Cloud console.
Click Create instance.
Specify the following:
- Name: the name of your VM.
- Region and Zone: the region and zone of your VM. Use the same region as your cluster.
- Machine type: a machine type. Choose a small machine type, such as e2-micro.
- For Network interfaces, select the same VPC network and subnet as the cluster.
- Optionally, configure other settings for the instance.
Click Create.
Create firewall rule
To allow IAP to connect to your bastion host VM over SSH, create a firewall rule that allows ingress from IAP's source range.
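The exact rule depends on your network setup; the following is a sketch that allows SSH (TCP port 22) from IAP's default source range, 35.235.240.0/20 (the same range noted in the Troubleshooting section). The rule name allow-ingress-from-iap is a hypothetical example:

```shell
# Allow SSH ingress from IAP's TCP forwarding range to VMs in the network.
# The rule name is an example value; adjust --network if you don't use "default".
gcloud compute firewall-rules create allow-ingress-from-iap \
    --network=default \
    --direction=INGRESS \
    --action=allow \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20
```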
Deploy the proxy
With the bastion host and the private cluster configured, you must deploy a proxy daemon in the host to forward traffic to the cluster control plane. For this tutorial, you install Tinyproxy.
Start a session into your VM:
gcloud compute ssh INSTANCE_NAME --tunnel-through-iap --project=PROJECT_ID
Install Tinyproxy:
sudo apt install tinyproxy
Open the Tinyproxy configuration file:
sudo vi /etc/tinyproxy/tinyproxy.conf
In the file, do the following:
- Verify that the port is 8888.
- Search for the Allow section: /Allow 127
- Add the following line to the Allow section: Allow localhost
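If you prefer not to edit the file interactively, the same change can be sketched non-interactively (assuming the Debian package's default configuration at /etc/tinyproxy/tinyproxy.conf with its default Port 8888 line):

```shell
# Confirm the default listening port, then append the Allow rule.
grep '^Port 8888' /etc/tinyproxy/tinyproxy.conf
echo 'Allow localhost' | sudo tee -a /etc/tinyproxy/tinyproxy.conf
```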
Save the file and restart Tinyproxy:
sudo service tinyproxy restart
Exit the session:
exit
Connect to your cluster from the remote client
After configuring Tinyproxy, you must set up the remote client with cluster credentials and specify the proxy. Do the following on the remote client:
Get credentials for the cluster:
gcloud container clusters get-credentials CLUSTER_NAME \
    --region=COMPUTE_REGION \
    --project=PROJECT_ID
Replace the following:
CLUSTER_NAME
: the name of the private cluster.COMPUTE_REGION
: the region of the cluster.PROJECT_ID
: the ID of the Google Cloud project of the cluster.
Tunnel to the bastion host using IAP:
gcloud compute ssh INSTANCE_NAME \
    --tunnel-through-iap \
    --project=PROJECT_ID \
    --zone=COMPUTE_ZONE \
    --ssh-flag="-4 -L8888:localhost:8888 -N -q -f"
Specify the proxy:
export HTTPS_PROXY=localhost:8888
kubectl get ns
The output is a list of namespaces in the private cluster.
Stop listening on the remote client
If you want to revert the change on the remote client at any time, end the listener process on TCP port 8888. The command depends on the client operating system. For example, on Linux:
netstat -lnpt | grep 8888 | awk '{print $7}' | grep -o '[0-9]\+' | sort -u | xargs sudo kill
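On macOS, where netstat doesn't report process IDs, an equivalent cleanup can be sketched with lsof (an assumption; adjust for your environment):

```shell
# Print the PIDs of processes listening on TCP port 8888 and terminate them.
lsof -ti tcp:8888 | xargs kill
```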
Troubleshooting
Firewall restrictions in enterprise networks
If you're on an enterprise network with a strict firewall, you might not be able to complete this tutorial without requesting an exception. If you request an exception, the source IP range for the bastion host is 35.235.240.0/20 by default.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
Delete individual resources
Delete the bastion host that you deployed in this tutorial:
gcloud compute instances delete INSTANCE_NAME \
    --zone=COMPUTE_ZONE
Delete the cluster:
gcloud container clusters delete CLUSTER_NAME \
    --region=COMPUTE_REGION
Delete the subnet:
gcloud compute networks subnets delete SUBNET_NAME \
    --region=COMPUTE_REGION