This page shows how to create an admin cluster and a user cluster.
SSH into your admin workstation
SSH into your admin workstation:
ssh -i ~/.ssh/vsphere_workstation ubuntu@[IP_ADDRESS]
where [IP_ADDRESS] is the IP address of your admin workstation.
Do all of the remaining steps in this topic on your admin workstation.
Logging in
Log in to Google Cloud using your Google Cloud user account credentials. The user account must hold at least the Viewer IAM role:
gcloud auth login
Register gcloud as a Docker credential helper. (Read more about this command):
gcloud auth configure-docker
Configuring static IPs for your admin cluster
To specify the static IP addresses that you want to use for your admin cluster, create a host configuration file named admin-hostconfig.yaml. For this exercise, you need to specify five IP addresses to be used by the admin cluster.
The following is an example of a host configuration file with five hosts:
hostconfig:
  dns: 172.16.255.1
  tod: 192.138.210.214
  otherdns:
  - 8.8.8.8
  - 8.8.4.4
  othertod:
  - ntp.ubuntu.com
  searchdomainsfordns:
  - "my.local.com"
blocks:
- netmask: 255.255.252.0
  gateway: 110.116.232.1
  ips:
  - ip: 172.16.20.10
    hostname: admin-host1
  - ip: 172.16.20.11
    hostname: admin-host2
  - ip: 172.16.20.12
    hostname: admin-host3
  - ip: 172.16.20.13
    hostname: admin-host4
  - ip: 172.16.20.14
    hostname: admin-host5
The ips field is an array of IP addresses and hostnames. These are the IP addresses and hostnames that GKE on-prem will assign to your admin cluster nodes.
In the host configuration file, you also specify the addresses of the DNS servers, time servers, and default gateway that the admin cluster nodes will use.
The searchdomainsfordns field is an array of DNS search domains to use in the cluster. These domains are used as part of a domain search list.
Configuring static IPs for your user cluster
To specify the static IP addresses that you want to use for your user cluster, create a host configuration file named user-hostconfig.yaml.
The following is an example of a host configuration file with three hosts:
hostconfig:
  dns: 172.16.255.1
  tod: 192.138.210.214
  otherdns:
  - 8.8.8.8
  - 8.8.4.4
  othertod:
  - ntp.ubuntu.com
  searchdomainsfordns:
  - "my.local.com"
blocks:
- netmask: 255.255.252.0
  gateway: 110.116.232.1
  ips:
  - ip: 172.16.20.15
    hostname: user-host1
  - ip: 172.16.20.16
    hostname: user-host2
  - ip: 172.16.20.17
    hostname: user-host3
The ips field is an array of IP addresses and hostnames. These are the IP addresses and hostnames that GKE on-prem will assign to your user cluster nodes.
The searchdomainsfordns field is an array of DNS search domains to use in the cluster. These domains are used as part of a domain search list.
Creating a GKE on-prem configuration file
Copy the following YAML to a file named config.yaml.
bundlepath: "/var/lib/gke/bundles/gke-onprem-vsphere-1.2.2-gke.2-full.tgz"
vcenter:
  credentials:
    address: ""
    username: ""
    password: ""
  datacenter: ""
  datastore: ""
  cluster: ""
  network: ""
  resourcepool: ""
  datadisk: ""
  cacertpath: ""
proxy:
  url: ""
  noproxy: ""
admincluster:
  ipblockfilepath: "admin-hostconfig.yaml"
  bigip:
    credentials: &bigip-credentials
      address: ""
      username: ""
      password: ""
    partition: ""
  vips:
    controlplanevip: ""
    ingressvip: ""
  serviceiprange: 10.96.232.0/24
  podiprange: 192.168.0.0/16
usercluster:
  ipblockfilepath: "user-hostconfig.yaml"
  bigip:
    credentials: *bigip-credentials
    partition: ""
  vips:
    controlplanevip: ""
    ingressvip: ""
  clustername: "initial-user-cluster"
  masternode:
    cpus: 4
    memorymb: 8192
    replicas: 1
  workernode:
    cpus: 4
    memorymb: 8192
    replicas: 3
  serviceiprange: 10.96.0.0/12
  podiprange: 192.168.0.0/16
lbmode: Integrated
gkeconnect:
  projectid: ""
  registerserviceaccountkeypath: ""
  agentserviceaccountkeypath: ""
stackdriver:
  projectid: ""
  clusterlocation: ""
  enablevpc: false
  serviceaccountkeypath: ""
gcrkeypath: ""
Modifying the configuration file
Modify config.yaml as described in the following sections:
vcenter.credentials.address
The vcenter.credentials.address field holds the IP address or the hostname of your vCenter server.
Before you fill in the vcenter.credentials.address field, download and inspect the serving certificate of your vCenter server. Enter the following command to download the certificate and save it to a file named vcenter.pem:
true | openssl s_client -connect [VCENTER_IP]:443 -showcerts 2>/dev/null | sed -ne '/-BEGIN/,/-END/p' > vcenter.pem
where [VCENTER_IP] is the IP address of your vCenter Server.
Open the certificate file to see the Subject Common Name and the Subject Alternative Name:
openssl x509 -in vcenter.pem -text -noout
The output shows the Subject Common Name (CN). This might be an IP address, or it might be a hostname. For example:
Subject: ... CN = 203.0.113.100
Subject: ... CN = my-host.my-domain.example
The output might also include one or more DNS names under Subject Alternative Name:
X509v3 Subject Alternative Name: DNS:vcenter.my-domain.example
Choose the Subject Common Name or one of the DNS names under Subject Alternative Name to use as the value of vcenter.credentials.address in your configuration file. For example:
vcenter: credentials: address: "203.0.113.1" ...
vcenter: credentials: address: "my-host.my-domain.example" ...
You must choose a value that appears in the certificate. For example, if the IP address does not appear in the certificate, you cannot use it for vcenter.credentials.address.
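If you want to see only the CN and the Subject Alternative Name entries without scanning the full text output, the following openssl invocation is one convenient way to do it. This is an optional check, and the -ext option assumes OpenSSL 1.1.1 or later:
# Optional: print only the subject (CN) and Subject Alternative Name entries.
# Requires OpenSSL 1.1.1 or later for the -ext option.
openssl x509 -in vcenter.pem -noout -subject -ext subjectAltName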
vcenter.credentials
GKE on-prem needs to know your vCenter Server's username and password. To provide this information, set the username and password values under vcenter.credentials. For example:
vcenter:
  credentials:
    ...
    username: "my-name"
    password: "my-password"
vcenter.datacenter, .datastore, .cluster, .network
GKE on-prem needs some information about the structure of your vSphere environment. Set the values under vcenter to provide this information. For example:
vcenter:
  ...
  datacenter: "MY-DATACENTER"
  datastore: "MY-DATASTORE"
  cluster: "MY-VSPHERE-CLUSTER"
  network: "MY-VIRTUAL-NETWORK"
vcenter.resourcepool
A vSphere resource pool is a logical grouping of vSphere VMs in your vSphere cluster. If you are using a resource pool other than the default, provide its name to vcenter.resourcepool. For example:
vcenter: ... resourcepool: "my-pool"
If you want GKE on-prem to deploy its nodes to the vSphere cluster's default resource pool, provide an empty string to vcenter.resourcepool. For example:
vcenter: ... resourcepool: ""
vcenter.datadisk
GKE on-prem creates a virtual machine disk (VMDK) to hold the Kubernetes object data for the admin cluster. The installer creates the VMDK for you, but you must provide a name for the VMDK in the vcenter.datadisk field. For example:
vcenter: ... datadisk: "my-disk.vmdk"
vSAN datastore: Creating a folder for the VMDK
If you are using a vSAN datastore, you need to put the VMDK in a folder. You must manually create the folder ahead of time. To do so, you could use govc to create a folder:
govc datastore.mkdir -namespace=true my-gke-on-prem-folder
Then set vcenter.datadisk to the path of the VMDK, including the folder. For example:
vcenter:
  ...
  datadisk: "my-gke-on-prem-folder/my-disk.vmdk"
In version 1.1.1, a known issue requires that you provide the folder's universally unique identifier (UUID) path, rather than its file path, to vcenter.datadisk. Copy the UUID from the output of the above govc command. Then, provide the folder's UUID in the vcenter.datadisk field. Do not put a forward slash in front of the UUID. For example:
vcenter:
  ...
  datadisk: "14159b5d-4265-a2ba-386b-246e9690c588/my-disk.vmdk"
This issue has been fixed in versions 1.1.2 and later.
vcenter.cacertpath
When a client, like GKE on-prem, sends a request to vCenter Server, the server must prove its identity to the client by presenting a certificate or a certificate bundle. To verify the certificate or bundle, GKE on-prem must have the root certificate in the chain of trust.
Set vcenter.cacertpath to the path of the root certificate. For example:
vcenter:
  ...
  cacertpath: "/my-cert-folder/the-root.crt"
Your VMware installation has a certificate authority (CA) that issues a certificate to your vCenter server. The root certificate in the chain of trust is a self-signed certificate created by VMware.
By default, vCenter Server uses the VMware CA. If you do not want to use the VMware CA, you can configure VMware to use a different certificate authority.
If your vCenter server uses a certificate issued by the default VMware CA, there are several ways you can get the root certificate:
- Enter this command to download a bundle of certificates:
curl -k "https://[SERVER_ADDRESS]/certs/download.zip" > download.zip
where [SERVER_ADDRESS] is the address of your vCenter server.
- In a browser, enter the address of your vCenter server. In the gray box at the right, click Download trusted root CA certificates.
- Enter this command to get the serving certificate:
true | openssl s_client -connect [SERVER_ADDRESS]:443 -showcerts
In the output, find a URL like this: https://[SERVER_ADDRESS]/afd/vecs/ca. Enter the URL in a browser. This downloads the root certificate.
The downloaded file is named download.zip.
Install the unzip command and unzip the file:
sudo apt-get install unzip
unzip download.zip
If the unzip command doesn't work the first time, enter the command again.
Find the certificate file in certs/lin.
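As an optional check before setting vcenter.cacertpath, you can list the extracted directory to locate the root certificate. The file layout described here reflects typical vCenter certificate bundles and might differ in your environment:
# Optional: list the extracted Linux certificates. The root certificate is
# typically the file with a .0 extension; a .r0 file, if present, is a
# certificate revocation list, not the root certificate.
ls certs/lin
Use the path of the root certificate file as the value of vcenter.cacertpath.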
proxy
If your network is behind a proxy server, set proxy.url to the address of your proxy server.
For proxy.noproxy, provide a list of IP addresses, IP address ranges, hostnames, and domain names. When GKE on-prem sends a request to one of these addresses, hosts, or domains, it will send the request directly. It will not send the request to the proxy server. For example:
proxy:
  url: "https://my-proxy.example.local"
  noproxy: "10.151.222.0/24, my-host.example.local,10.151.2.1"
admincluster.ipblockfilepath
Because you are using static IP addresses, you must have a host configuration file as described in Configuring static IPs. Provide the path to your host configuration file in the admincluster.ipblockfilepath field. For example:
admincluster:
  ipblockfilepath: "/my-config-directory/admin-hostconfig.yaml"
admincluster.bigip.credentials
GKE on-prem needs to know the IP address or hostname, username, and password of your F5 BIG-IP load balancer. Set the values under admincluster.bigip to provide this information. For example:
admincluster:
  ...
  bigip:
    credentials:
      address: "203.0.113.2"
      username: "my-admin-f5-name"
      password: "rJDlm^%7aOzw"
admincluster.bigip.partition
Previously, you created a BIG-IP partition for your admin cluster. Set admincluster.bigip.partition to the name of your partition. For example:
admincluster:
  ...
  bigip:
    partition: "my-admin-f5-partition"
admincluster.vips
Set the value of admincluster.vips.controlplanevip to the IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the admin cluster. Set the value of ingressvip to the IP address you have chosen to configure on the load balancer for the admin cluster's ingress service. For example:
admincluster:
  ...
  vips:
    controlplanevip: 203.0.113.3
    ingressvip: 203.0.113.4
admincluster.serviceiprange and admincluster.podiprange
The admin cluster must have a range of IP addresses to use for Services and a range of IP addresses to use for Pods. These ranges are specified by the admincluster.serviceiprange and admincluster.podiprange fields. These fields are populated when you run gkectl create-config. If you like, you can change the populated values to values of your choice.
The Service and Pod ranges must not overlap. Also, the Service and Pod ranges must not overlap with IP addresses that are used for nodes in any cluster.
Example:
admincluster:
  ...
  serviceiprange: 10.96.232.0/24
  podiprange: 192.168.0.0/16
usercluster.bigip.partition
Previously, you created a BIG-IP partition for your user cluster. Set usercluster.bigip.partition to the name of your partition. For example:
usercluster:
  ...
  bigip:
    partition: "my-user-f5-partition"
  ...
usercluster.vips
Set the value of usercluster.vips.controlplanevip to the IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the user cluster. Set the value of ingressvip to the IP address you have chosen to configure on the load balancer for the user cluster's ingress service. For example:
usercluster:
  ...
  vips:
    controlplanevip: 203.0.113.6
    ingressvip: 203.0.113.7
usercluster.serviceiprange and usercluster.podiprange
The user cluster must have a range of IP addresses to use for Services and a range of IP addresses to use for Pods. These ranges are specified by the usercluster.serviceiprange and usercluster.podiprange fields. These fields are populated when you run gkectl create-config. If you prefer, you can change the populated values to values of your choice.
The Service and Pod ranges must not overlap. Also, the Service and Pod ranges must not overlap with IP addresses that are used for nodes in any cluster.
Example:
usercluster:
  ...
  serviceiprange: 10.96.233.0/24
  podiprange: 172.16.0.0/12
Disabling VMware DRS anti-affinity rules
As of version 1.1.0-gke.6, GKE on-prem automatically creates VMware Distributed Resource Scheduler (DRS) anti-affinity rules for your user cluster's nodes, causing them to be spread across at least three physical hosts in your datacenter. This feature is automatically enabled for new clusters and existing clusters.
This feature requires that your vSphere environment meets the following conditions:
- VMware DRS is enabled. VMware DRS requires vSphere Enterprise Plus license edition. To learn how to enable DRS, see Enabling VMware DRS in a cluster.
- The vSphere user account provided in the vcenter field has the Host.Inventory.EditCluster permission.
- There are at least three physical hosts available.
Recall that if you have a vSphere Standard license, you cannot enable VMware DRS.
If you do not have DRS enabled, or if you do not have at least three hosts to which vSphere VMs can be scheduled, add usercluster.antiaffinitygroups.enabled: false to your configuration file.
For example:
usercluster:
  ...
  antiaffinitygroups:
    enabled: false
gkeconnect
The gkeconnect specification holds information that GKE on-prem needs to set up management of your on-prem clusters from Google Cloud console.
Set gkeconnect.projectid to the project ID of the Google Cloud project where you want to manage your on-prem clusters.
Set the value of gkeconnect.registerserviceaccountkeypath to the path of the JSON key file for your register service account.
Set the value of gkeconnect.agentserviceaccountkeypath to the path of the JSON key file for your connect service account.
Example:
gkeconnect:
  projectid: "my-project"
  registerserviceaccountkeypath: "/my-key-directory/register-key.json"
  agentserviceaccountkeypath: "/my-key-directory/connect-key.json"
stackdriver
The stackdriver specification holds information that GKE on-prem needs to store log entries generated by your on-prem clusters.
Set stackdriver.projectid to the project ID of the Google Cloud project that you want to associate with Google Cloud Observability. Connect exports cluster logs to Stackdriver by way of this project.
Set stackdriver.clusterlocation to a Google Cloud region where you want to store logs. It is a good idea to choose a region that is near your on-prem data center.
Set stackdriver.proxyconfigsecretname to the name of a Kubernetes Secret that you define in the kube-system namespace. This Secret should have a single value defining https_proxy_url. The default Secret stackdriver-proxy-config is immutable and simply serves as an example.
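If you need to create such a Secret yourself, the following is a minimal sketch. The Secret name my-stackdriver-proxy, the proxy URL, and the kubeconfig placeholder are illustrative assumptions, not values required by GKE on-prem:
# Hypothetical example: create a Secret in kube-system whose single key is https_proxy_url.
# The Secret name and the proxy URL shown here are placeholders.
kubectl create secret generic my-stackdriver-proxy \
    --namespace kube-system \
    --from-literal=https_proxy_url="https://my-proxy.example.local" \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]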
Set stackdriver.enablevpc to true if your cluster's network is controlled by a VPC. This ensures that all telemetry flows through Google's restricted IP addresses.
Set stackdriver.serviceaccountkeypath to the path of the JSON key file for your Google Cloud Observability service account.
Example:
stackdriver:
  projectid: "my-project"
  clusterlocation: "us-west1"
  enablevpc: false
  serviceaccountkeypath: "/my-key-directory/stackdriver-key.json"
gcrkeypath
Set the value of gcrkeypath to the path of the JSON key file for your allowlisted service account. For example:
gcrkeypath: "/my-key-directory/whitelisted-key.json"
Validating the configuration file
After you've modified the configuration file, run gkectl check-config to verify that the file is valid and can be used for installation:
gkectl check-config --config config.yaml
If the command returns any FAILURE messages, fix the issues and validate the file again.
If you want to skip the more time-consuming validations, pass the --fast flag.
To skip individual validations, use the --skip-validation-xxx flags. To learn more about the check-config command, see Running preflight checks.
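For example, the following invocation combines the command shown above with the --fast flag to skip the more time-consuming validations:
gkectl check-config --config config.yaml --fast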
Running gkectl prepare
Run gkectl prepare to initialize your vSphere environment:
gkectl prepare --config config.yaml --skip-validation-all
Creating the admin and user clusters
Create the admin cluster and the user cluster:
gkectl create cluster --config config.yaml --skip-validation-all
The gkectl create cluster command creates a file named kubeconfig in the current directory. The GKE on-prem documentation uses the placeholder [ADMIN_CLUSTER_KUBECONFIG] to refer to this file.
To verify that the admin cluster was created, enter the following command:
kubectl get nodes --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]
The output shows the admin cluster nodes.
The gkectl create cluster command creates a file named init-user-cluster-kubeconfig in the current directory. The GKE on-prem documentation uses the placeholder [USER_CLUSTER_KUBECONFIG] to refer to this file.
To verify that the user cluster was created, enter the following command:
kubectl get nodes --kubeconfig [USER_CLUSTER_KUBECONFIG]
The output shows the user cluster nodes. For example:
NAME                        STATUS   ROLES    AGE   VERSION
xxxxxx-1234-ipam-15008527   Ready    <none>   12m   v1.14.7-gke.24
xxxxxx-1234-ipam-1500852a   Ready    <none>   12m   v1.14.7-gke.24
xxxxxx-1234-ipam-15008536   Ready    <none>   12m   v1.14.7-gke.24