Google Distributed Cloud has the following sets of installation prerequisites:
- The prerequisites for the workstation machine running the `bmctl` tool.
- The prerequisites for the node machines that are part of the Google Distributed Cloud deployment.
- The prerequisites for the load balancer machines.
- The prerequisites for the Google Cloud project.
- The prerequisites for your service accounts.
If you use the workstation machine as a cluster node machine, it must meet the prerequisites for both.
Before you begin
During installation, you must provide the following credentials:
- The private SSH keys needed to access cluster node machines.
- If you are not using `root`, the cluster node machine login name.
- The Google Cloud service account keys. Go to Creating and managing service account keys to learn more.
Ensure you have all the necessary credentials before attempting to install Google Distributed Cloud.
Logging into gcloud
- Log in to gcloud as a user with `gcloud auth application-default login`:

      gcloud auth application-default login

  You need to have a Project Owner/Editor role to use the automatic API enablement and service account creation features, described below. You can also add the following IAM roles to the user:
  - Service Account Admin
  - Service Account Key Admin
  - Project IAM Admin
  - Compute Viewer
  - Service Usage Admin

  Alternatively, you can use a service account key:

      export GOOGLE_APPLICATION_CREDENTIALS=JSON_KEY_FILE

  `JSON_KEY_FILE` specifies the path to your service account JSON key file.
- Get your Google Cloud project ID to use with cluster creation:

      export CLOUD_PROJECT_ID=$(gcloud config get-value project)
Workstation prerequisites
The `bmctl` workstation must meet the following prerequisites:
- Operating system is the same supported Linux distribution running on the cluster node machines.
- Docker version 19.03 or later installed.
- Non-root user is a member of the `docker` group (for instructions, go to Manage Docker as a non-root user).
- gcloud installed.
- More than 50 GiB of free disk space.
- Layer 3 connectivity to all cluster node machines.
- Access to all cluster node machines through SSH via private keys with passwordless root access. Access can be either direct or through sudo.
- Access to the control plane VIP.
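The workstation checks above can be spot-checked before running `bmctl`. The following is a minimal sketch, not part of the product; `NODE_IPS` in the trailing comment is a placeholder you would set yourself.

```shell
#!/bin/sh
# Minimal sketch: spot-check workstation prerequisites before running bmctl.

# Docker 19.03 or later installed.
if command -v docker >/dev/null; then
  echo "Docker: $(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 'daemon not reachable')"
else
  echo "Docker: not installed"
fi

# gcloud installed.
command -v gcloud >/dev/null && echo "gcloud: installed" || echo "gcloud: missing"

# More than 50 GiB of free disk space (df -Pk reports KiB).
free_kib=$(df -Pk / | awk 'NR==2 {print $4}')
if [ "$free_kib" -gt $((50 * 1024 * 1024)) ]; then
  echo "Disk: OK (${free_kib} KiB free)"
else
  echo "Disk: need more than 50 GiB free"
fi

# Passwordless SSH to each node; NODE_IPS is a placeholder list you define.
# for ip in $NODE_IPS; do ssh -o BatchMode=yes root@"$ip" true && echo "$ip: SSH OK"; done
```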
Node machine prerequisites
The node machines have the following prerequisites:
- Their operating system is one of the supported Linux distributions.
- The Linux kernel version is 4.17.0 or newer. Ubuntu 18.04 and 18.04.1 are on Linux kernel version 4.15 and therefore incompatible.
- Meet the minimum hardware requirements.
- Internet access.
- Layer 3 connectivity to all other node machines.
- Access to the control plane VIP.
- Properly configured DNS nameservers.
- No duplicate host names.
- One of the following NTP services is enabled and working:
- chrony
- ntp
- ntpdate
- systemd-timesyncd
- A working package manager: apt, dnf, etc.
- On Ubuntu, you must disable Uncomplicated Firewall (UFW). Run `systemctl stop ufw` to disable UFW.
- On Ubuntu, starting with Google Distributed Cloud 1.8.2, you aren't required to disable AppArmor. If you deploy clusters using earlier releases of Google Distributed Cloud, disable AppArmor with the following command:

      systemctl stop apparmor
- If you choose Docker as your container runtime, Docker version 19.03 or later must be installed. If you don't have Docker installed on your node machines, or have an older version installed, Google Distributed Cloud installs Docker 19.03.13 or later when you create clusters.
If you use the default container runtime, containerd, you don't need Docker, and installing Docker can cause issues. For more information, see the known issues.
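The kernel, NTP, and UFW requirements above can be checked on each node. The following is a minimal sketch under the assumption that the node uses systemd; service names vary by distribution.

```shell
#!/bin/sh
# Minimal sketch: check a node against the kernel, NTP, and UFW prerequisites.

# Kernel version 4.17.0 or newer (Ubuntu 18.04/18.04.1 ship 4.15).
kernel=$(uname -r | cut -d- -f1)
major=$(echo "$kernel" | cut -d. -f1)
minor=$(echo "$kernel" | cut -d. -f2)
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 17 ]; }; then
  echo "kernel $kernel: OK"
else
  echo "kernel $kernel: too old, need 4.17.0 or newer"
fi

# At least one NTP service enabled and working.
ntp_ok=no
for svc in chrony chronyd ntp ntpd systemd-timesyncd; do
  if systemctl is-active --quiet "$svc" 2>/dev/null; then
    echo "NTP: $svc is active"
    ntp_ok=yes
  fi
done
[ "$ntp_ok" = yes ] || echo "NTP: no active time service found"

# On Ubuntu, UFW must be disabled.
if systemctl is-active --quiet ufw 2>/dev/null; then
  echo "UFW: active - run 'systemctl stop ufw' before installing"
fi
```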
Prerequisites for disk space depend on the version of clusters you deploy:
Version 1.10.1 and earlier clusters
Whenever you install Google Distributed Cloud release 1.10.1 or earlier, ensure that the file systems backing the following directories have the required capacity and meet the following requirements:
- The directories have at least 128 GiB of free storage capacity and the underlying partitions for the directories have the following capacity:
  - `/`: 20 GiB (21,474,836,480 bytes)
  - `/var/lib/docker` or `/var/lib/containerd`, depending on the container runtime: 30 GiB (32,212,254,720 bytes)
  - `/var/lib/kubelet`: 10 GiB (10,737,418,240 bytes)
  - `/mnt/anthos-system`: 25 GiB (26,843,545,600 bytes)
  - `/var/lib/etcd`: 20 GiB (21,474,836,480 bytes, applicable to control plane nodes only)
- The overall disk utilization is less than 90%.
Version 1.10.2 and later clusters
Starting with release 1.10.2, cluster creation only checks for the required free space for the Google Distributed Cloud system components. This change gives you more control on the space you allocate for application workloads. Whenever you install Google Distributed Cloud release 1.10.2 or later, ensure that the file systems backing the following directories have the required capacity and meet the following requirements:
- `/`: 17 GiB (18,253,611,008 bytes).
- `/var/lib/docker` or `/var/lib/containerd`, depending on the container runtime:
  - 30 GiB (32,212,254,720 bytes) for control plane nodes.
  - 10 GiB (10,737,418,240 bytes) for worker nodes.
- `/var/lib/kubelet`: 500 MiB (524,288,000 bytes).
- `/var/lib/etcd`: 20 GiB (21,474,836,480 bytes, applicable to control plane nodes only).
Regardless of cluster version, the preceding lists of directories can be on the same or different partitions. If they are on the same underlying partition, then the space requirement is the sum of the space required for each individual directory on that partition. For all release versions, the cluster creation process creates the directories, if needed.
- The `/var/lib/etcd` and `/etc/kubernetes` directories are either nonexistent or empty.
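To compare a machine against the capacity figures above, you can report the free space backing each directory. The following is a minimal sketch; for a directory that doesn't exist yet, it falls back to the nearest existing parent, since that partition is where the directory would be created.

```shell
#!/bin/sh
# Minimal sketch: report free space for the directories listed above.
for dir in / /var/lib/docker /var/lib/containerd /var/lib/kubelet \
           /mnt/anthos-system /var/lib/etcd; do
  target=$dir
  # Walk up to the nearest existing parent directory.
  while [ ! -d "$target" ]; do
    target=$(dirname "$target")
  done
  avail_kib=$(df -Pk "$target" | awk 'NR==2 {print $4}')
  echo "$dir: $((avail_kib / 1024 / 1024)) GiB free (filesystem of $target)"
done
```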
In addition to the prerequisites for installing and running Google Distributed Cloud, customers are expected to comply with relevant standards governing their industry or business segment, such as PCI DSS requirements for businesses that process credit cards or Security Technical Implementation Guides (STIGs) for businesses in the defense industry.
Load balancer machines prerequisites
When your deployment doesn't have a specialized load balancer node pool, worker nodes or control plane nodes can form the load balancer node pool. In that case, those machines have these additional prerequisites:
- Machines are in the same Layer 2 subnet.
- All VIPs are in the load balancer nodes subnet and routable from the gateway of the subnet.
- The gateway of the load balancer subnet must listen for gratuitous ARP messages and forward packets to the master load balancer node.
Google Cloud project prerequisites
Before you install Google Distributed Cloud, enable the following services for your associated GCP project:
- `anthos.googleapis.com`
- `anthosaudit.googleapis.com`
- `anthosgke.googleapis.com`
- `cloudresourcemanager.googleapis.com`
- `container.googleapis.com`
- `gkeconnect.googleapis.com`
- `gkehub.googleapis.com`
- `iam.googleapis.com`
- `logging.googleapis.com`
- `monitoring.googleapis.com`
- `opsconfigmonitoring.googleapis.com`
- `serviceusage.googleapis.com`
- `stackdriver.googleapis.com`
You can also use the `bmctl` tool to enable these services.
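If you prefer to enable the services yourself rather than through `bmctl`, a sketch along these lines works with the gcloud CLI; `my-project-id` is a placeholder for your project ID.

```shell
#!/bin/sh
# Hedged sketch: enable the required APIs with gcloud instead of bmctl.
# PROJECT_ID is a placeholder; replace it with your project ID.
PROJECT_ID=${PROJECT_ID:-my-project-id}

SERVICES="anthos.googleapis.com
anthosaudit.googleapis.com
anthosgke.googleapis.com
cloudresourcemanager.googleapis.com
container.googleapis.com
gkeconnect.googleapis.com
gkehub.googleapis.com
iam.googleapis.com
logging.googleapis.com
monitoring.googleapis.com
opsconfigmonitoring.googleapis.com
serviceusage.googleapis.com
stackdriver.googleapis.com"

if command -v gcloud >/dev/null; then
  # gcloud services enable accepts multiple service names in one call.
  gcloud services enable $SERVICES --project="$PROJECT_ID" \
    || echo "enable failed: check authentication and project permissions"
else
  echo "gcloud is not installed; install the Google Cloud CLI first"
fi
```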
Service accounts prerequisites
In production environments, you should create separate service accounts for different purposes. Google Distributed Cloud needs Google Cloud service accounts for the following purposes:
- To access Container Registry (`gcr.io`), no special role is required.
- To register a cluster in a fleet, grant the `roles/gkehub.admin` IAM role to the service account on your Google Cloud project.
- To connect to fleets, grant the `roles/gkehub.connect` IAM role to the service account on your Google Cloud project.
- To send logs and metrics to Google Cloud Observability, grant the following IAM roles to the service account on your Google Cloud project:
  - `roles/logging.logWriter`
  - `roles/monitoring.metricWriter`
  - `roles/stackdriver.resourceMetadata.writer`
  - `roles/monitoring.dashboardEditor`
  - `roles/opsconfigmonitoring.resourceMetadata.writer`
You can also use the `bmctl` tool to create these service accounts.
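As an alternative to `bmctl`, you can create such a service account manually with gcloud. The following sketch creates one account for logging and metrics and grants it the roles listed above; the account name `anthos-logging-monitoring` and project ID are placeholders, not names the product requires.

```shell
#!/bin/sh
# Hedged sketch: create a logging/monitoring service account with gcloud.
# SA_NAME is a hypothetical name; PROJECT_ID is a placeholder.
PROJECT_ID=${PROJECT_ID:-my-project-id}
SA_NAME="anthos-logging-monitoring"
SA_EMAIL="$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com"
ROLES="roles/logging.logWriter
roles/monitoring.metricWriter
roles/stackdriver.resourceMetadata.writer
roles/monitoring.dashboardEditor
roles/opsconfigmonitoring.resourceMetadata.writer"

if command -v gcloud >/dev/null; then
  gcloud iam service-accounts create "$SA_NAME" --project="$PROJECT_ID" \
    || echo "create failed: check authentication and permissions"
  for role in $ROLES; do
    gcloud projects add-iam-policy-binding "$PROJECT_ID" \
        --member="serviceAccount:$SA_EMAIL" --role="$role" \
      || echo "binding $role failed"
  done
  # Download a JSON key for use with GOOGLE_APPLICATION_CREDENTIALS.
  gcloud iam service-accounts keys create "$SA_NAME-key.json" \
      --iam-account="$SA_EMAIL" \
    || echo "key creation failed"
else
  echo "gcloud is not installed; install the Google Cloud CLI first"
fi
```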