Google Distributed Cloud (software only) for bare metal overview

Google Distributed Cloud is our solution that extends Google Cloud's infrastructure and services into your data center. We offer Google Distributed Cloud in both connected and air-gapped configurations that run on Google-provided hardware. We also offer Google Distributed Cloud as a software-only product (formerly known as Google Distributed Cloud Virtual) that runs on your own hardware. Google Distributed Cloud software can be installed on either VMware or bare metal. This guide is for Google Distributed Cloud software that runs directly on your bare metal machines.

Google Distributed Cloud software is based on Google Kubernetes Engine (GKE), with its own Kubernetes package that extends GKE for use in an on-premises environment. With Google Distributed Cloud you can create, manage, and upgrade GKE clusters on your own premises while using Google Cloud features, and deploy and operate containerized applications on your clusters at scale using Google's infrastructure.

Installing Google Distributed Cloud software entitles you to use GKE Enterprise: an enterprise tier for GKE with powerful features for governing, managing, and operating containerized workloads at scale. You can find out more about GKE Enterprise and the features available on bare metal in the GKE Enterprise technical overview.

Why use Google Distributed Cloud on bare metal?

Running Google Distributed Cloud software on bare metal takes advantage of your existing enterprise infrastructure and helps you modernize applications throughout their lifecycle.

Bring your own node

With a software-only installation of Google Distributed Cloud on bare metal, you deploy applications directly on your own hardware infrastructure, which provides the best performance and flexibility. You have direct control over application scale, security, and network latency. You also get the benefits of containerized applications through GKE components, including powerful features from the enterprise tier.

Improved performance and lowered cost

Google Distributed Cloud manages application deployment and health across your existing corporate data centers for more efficient operation. You can also run containers at the edge of the network, so that analytics applications run close to their data with full performance.

Using Google Distributed Cloud on bare metal instead of virtual machines gives you the ability to manage application containers on a wide variety of performance-optimized hardware types, like GPUs and SSDs. Running Google Distributed Cloud on bare metal also allows for direct application access to hardware.

Compatible security

Because you control your node environment, you can optimize your network, hardware, and applications to meet your specific requirements. As a result, you can directly control system security, without having to worry about compatibility with virtual machines and operating systems.

Monitored application deployment

Google Distributed Cloud provides advanced monitoring of the health and performance of your environment. You can more easily adjust the scale of applications while maintaining reliability despite fluctuations in workload and network traffic.

You manage monitoring, logging, and analysis of clusters and workloads through Connect.

Network latency and flexibility

Because you manage your network requirements, your network can be optimized for low latency. This network optimization can be crucial for performance in commercial or finance analytics and other enterprise or network edge applications.

Highly available

Google Distributed Cloud includes support for multiple control plane nodes in a cluster. If one control plane node goes down, you can still administer your environment.
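
In a bare metal cluster configuration file, high availability typically comes down to listing more than one control plane node. The following is a minimal sketch, assuming the `baremetal.cluster.gke.io/v1` schema; the cluster name and node addresses are placeholders:

```yaml
# Sketch: declaring an HA control plane for a bare metal cluster.
# Field names follow the baremetal.cluster.gke.io/v1 schema;
# the cluster name and node addresses below are placeholders.
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: admin-basic
  namespace: cluster-admin-basic
spec:
  type: admin
  controlPlane:
    nodePoolSpec:
      nodes:
      # Three control plane nodes: losing any single node
      # leaves the cluster administrable.
      - address: 10.200.0.2
      - address: 10.200.0.3
      - address: 10.200.0.4
```

An odd number of control plane nodes (commonly three) keeps etcd quorum intact when a single node fails.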

Secure design and control

Your infrastructure security can be customized for your own needs, with minimal connections to outside resources. Most importantly, there is no additional VM complexity when deploying security systems, and you maintain complete OS control when interacting with existing security systems.

Google Distributed Cloud works with lightweight secure connections to Google APIs. You can manage clusters and applications from a central location with Connect and Cloud Monitoring. This centralization also helps keep your deployments running smoothly, and lets Google Cloud Support troubleshoot issues more effectively.

Preflight checks on installation

With Google Distributed Cloud, your GKE clusters run on open source and enterprise Linux systems and on minimal hardware infrastructure, so they adapt flexibly to your environment. Google Distributed Cloud also includes various preflight checks to help ensure successful configurations and installations.

Application deployment and load balancing

Google Distributed Cloud includes Layer 4 and Layer 7 load balancing mechanisms that you configure at cluster creation.
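
Load balancing is set up in the cluster configuration file. The following is a hedged sketch of the relevant section, assuming the bundled mode in which the load balancers run on cluster nodes; all VIPs and address ranges are placeholders:

```yaml
# Sketch: the loadBalancer section of a bare metal cluster config.
# "bundled" mode runs the load balancers on cluster nodes;
# all VIPs and address ranges below are placeholders.
loadBalancer:
  mode: bundled
  vips:
    controlPlaneVIP: 10.200.0.50   # Layer 4 VIP for the Kubernetes API server
    ingressVIP: 10.200.0.51        # VIP for the Layer 7 ingress service
  addressPools:
  - name: pool1
    addresses:
    # The ingress VIP must fall inside an address pool range.
    - 10.200.0.51-10.200.0.70
```

The alternative, manual mode, lets you point the cluster at load balancers that you run and manage yourself.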

Improved etcd reliability

To monitor the size and defragment etcd databases, Google Distributed Cloud control planes include an etcddefrag Pod. The etcddefrag Pod reclaims storage from large etcd databases and recovers etcd when disk space is exceeded.

How it works

Google Distributed Cloud extends Google Kubernetes Engine (GKE) to let you create GKE clusters on your own Linux servers on your own premises. You manage these on-premises clusters in Google Cloud along with regular GKE clusters and clusters in other environments as part of a fleet.

Traditional GKE clusters run in Google Cloud where the Kubernetes control plane and network infrastructure are managed by Google Cloud. Because Google Distributed Cloud clusters run in your data center, we provide some administration and control plane software in addition to the GKE software. The software that runs in your data center is downloaded as part of the installation and upgrade processes.

The following diagram shows the simplified result of a completed installation:

Diagram of an admin cluster and a user cluster
Google Distributed Cloud architecture on bare metal with an admin cluster and a user cluster

Key components

The following components make up a software-only installation of Google Distributed Cloud on bare metal:

  • The admin cluster consists of one or more control plane nodes. Each node is a physical machine running a supported Linux operating system. The standard deployment consists of an admin cluster that manages the lifecycle of one or more user clusters through the Kubernetes Resource Model (KRM). Each node machine in the installation has its own IP address.

  • A user cluster is where the workloads that implement your applications run, like in GKE on Google Cloud. Each user cluster consists of at least one control plane node and one worker node.

  • The admin workstation is typically a separate machine that includes the tools and cluster artifacts, such as configuration files. Cluster creators and developers use these tools and artifacts to manage their on-premises GKE clusters, with appropriate permissions:

    • Running bmctl from the admin workstation lets you create and update clusters and perform some other administrative tasks.

    • Running kubectl from the admin workstation lets you interact with your admin and user clusters, including deploying and managing workloads.

  • The GKE On-Prem API is the Google Cloud-hosted API for cluster lifecycle management. You use the API clients (Google Cloud console, Google Cloud CLI, and Terraform) to create and manage the lifecycle of your on-premises clusters as an alternative to logging into the admin workstation to manage your clusters with the bmctl CLI.

  • The console also provides a web interface for your Google Cloud project, including on-premises clusters. The console displays key metrics about your clusters to help you monitor cluster health.

  • Cluster administrators and developers use kubectl to access cluster control planes through virtual IP addresses (VIPs) specified as part of cluster configuration. Application users and developers use Service and Ingress VIPs to access and expose workloads, respectively.
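
Putting these pieces together, a user cluster definition that bmctl consumes might look like the following abbreviated sketch (schema assumed to be `baremetal.cluster.gke.io/v1`; the version number, node addresses, VIPs, and CIDR ranges are placeholders):

```yaml
# Abbreviated sketch of a user cluster configuration file.
# Version number, node addresses, VIPs, and CIDR ranges are placeholders.
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: user-basic
  namespace: cluster-user-basic
spec:
  type: user
  anthosBareMetalVersion: 1.30.0      # placeholder release
  controlPlane:
    nodePoolSpec:
      nodes:
      - address: 10.200.0.10          # user cluster control plane node
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/20
  loadBalancer:
    mode: bundled
    vips:
      controlPlaneVIP: 10.200.0.60    # VIP that kubectl connects to
      ingressVIP: 10.200.0.61         # VIP for exposing workloads
```

Worker nodes are defined in a separate NodePool resource, omitted here for brevity.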

Connecting to the fleet

All user clusters (and optionally admin clusters) are members of a fleet: a logical grouping of Kubernetes clusters. Fleets let your organization uplevel management from individual clusters to entire groups of clusters, and can help your teams adopt similar best practices to those used at Google. You can view and manage fleet clusters together in the Google Cloud console, and use fleet-enabled GKE Enterprise features to help you manage, govern, and operate your workloads at scale. You can see a complete list of available fleet features for on-premises environments in GKE Enterprise deployment options.

Each fleet cluster's connection to Google Cloud is managed by a Connect Agent, which is deployed as part of the Google Distributed Cloud installation process. You can learn more about how this agent works in the Connect Agent overview.
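
In the cluster configuration, fleet registration generally comes down to naming the fleet host project; the Connect Agent deployed during installation then maintains the connection. A sketch, with a placeholder project ID:

```yaml
# Sketch: fleet registration section of a bare metal cluster config.
# The project ID is a placeholder for your fleet host project.
gkeConnect:
  projectID: my-fleet-host-project
```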

Fleet membership is also used to manage Google Distributed Cloud pricing on bare metal, as described in the next section.

For a deeper discussion of GKE Enterprise features and how they work together, see the GKE Enterprise technical overview.

Pricing

GKE clusters on-premises created using Google Distributed Cloud are billed per vCPU as part of GKE Enterprise. You enable the Enterprise tier by enabling the Anthos API in your Google Cloud project.

For full pricing information, including how to contact sales, see Google Kubernetes Engine pricing.

Versioning

To learn about Google Distributed Cloud versions, see Version Support Policy.

Installing Google Distributed Cloud on bare metal

Because GKE clusters created using Google Distributed Cloud run in your own infrastructure, they're highly configurable. After you select the deployment model that meets your organizational and use case needs, you can choose from a range of supported load balancing modes, IP addressing options, security features, connectivity options, and more. Setting up a GKE cluster on bare metal involves making decisions before and during installation. This documentation set includes guides to help your team decide on the right features and options for you. To ensure that your installation meets the needs of your organization, consult with your networking and application teams.