GKE overview


GKE is a Google-managed implementation of the Kubernetes open source container orchestration platform. Kubernetes was developed by Google, drawing on years of experience operating production workloads at scale on Borg, our in-house cluster management system. With GKE, you can deploy and operate your own containerized applications at scale using Google's infrastructure.

This page provides an overview of Google Kubernetes Engine (GKE), and is intended for platform administrators who are looking for a scalable, automated, managed Kubernetes solution.

Get started with GKE

You can start exploring GKE in minutes. GKE's free tier lets you get started with Kubernetes without incurring costs for cluster management.

  1. Get started in the Google Cloud console.
  2. Try the quickstart to deploy a containerized web application (a command-line sketch follows this list).
  3. Read the Autopilot overview, which has guidance and resources for planning and operating your platform.
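
If you'd rather explore from the command line, the following is a minimal sketch of that flow, assuming the gcloud and kubectl CLIs are installed and a project is already selected; the cluster name, region, and sample image are example values, not required ones.

    # Create an Autopilot cluster (the name and region are example values).
    gcloud container clusters create-auto example-cluster --region=us-central1

    # Fetch credentials so that kubectl can reach the new cluster.
    gcloud container clusters get-credentials example-cluster --region=us-central1

    # Deploy a containerized web application and expose it through a load balancer.
    # Replace the public sample image with your own container image.
    kubectl create deployment hello-web \
        --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
    kubectl expose deployment hello-web --type=LoadBalancer --port=80 --target-port=8080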

When to use GKE

GKE is ideal if you need a platform that lets you configure the infrastructure that runs your containerized apps, such as networking, scaling, hardware, and security. GKE provides the operational power of Kubernetes while managing many of the underlying components, such as the control plane and nodes, for you.

Benefits of GKE

The following list describes some of the benefits of using GKE as your managed Kubernetes platform, grouped by category:

Platform management
  • Fully managed nodes in GKE Autopilot mode, with built-in hardening and best practice configurations applied automatically.
  • Managed upgrade experience with release channels to improve security, reliability, and compliance.
  • Flexible maintenance windows and exclusions that let you configure the upgrade type and scope to meet business needs and architecture constraints.
  • In GKE Standard mode, flexible node upgrade strategies to optimize availability and manage disruptions.
  • Automatic scaling of nodes based on the number of Pods in the cluster, with Autopilot mode or with node auto-provisioning in Standard mode.
  • Node auto-repair to maintain node health and availability.
  • Built-in logging and monitoring.
  • Google Cloud-integrated CI/CD options with Cloud Build and Cloud Deploy.

Improved security posture

Cost optimization
  • In Autopilot mode, pay only for the compute resources that your running Pods request. In GKE Standard mode, you pay for all resources on nodes, regardless of Pod requests.
  • Save costs by running fault-tolerant workloads, such as batch jobs, on Spot Pods (see the sketch after this list).
  • Minimized operational overhead in Autopilot mode, because Google manages both the nodes and the control plane.

Reliability and availability
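
To make the Spot Pods benefit concrete, the following is a minimal sketch of a fault-tolerant batch Job that requests Spot capacity in Autopilot mode through a node selector; the Job name, image, and resource requests are placeholder values.

    # Run a fault-tolerant batch Job on Spot capacity in Autopilot mode
    # (the Job name, image, and resource requests are example values).
    kubectl apply -f - <<EOF
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: example-batch-job
    spec:
      completions: 10
      parallelism: 2
      template:
        spec:
          nodeSelector:
            cloud.google.com/gke-spot: "true"
          restartPolicy: OnFailure
          containers:
          - name: worker
            image: us-docker.pkg.dev/PROJECT_ID/example-repo/batch-worker:v1
            resources:
              requests:
                cpu: "500m"
                memory: "512Mi"
    EOF

Because Spot capacity can be reclaimed at any time, this pattern suits workloads that tolerate interruption and retries.
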
Best practice:

Configure your application's CI/CD pipeline to use Cloud Build, Cloud Deploy, and Artifact Registry. By using managed build and deployment services, you can optimize for security, scale, and simplicity.
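
As one sketch of that pipeline's first stages, the commands below create an Artifact Registry repository and use Cloud Build to build and push an image to it; the repository, region, project ID, and image names are placeholders, and the Cloud Deploy rollout itself isn't shown.

    # Create a Docker repository in Artifact Registry (name and region are examples).
    gcloud artifacts repositories create example-repo \
        --repository-format=docker --location=us-central1

    # Build the container image from the current directory with Cloud Build and
    # push it to the repository (replace PROJECT_ID with your project ID).
    gcloud builds submit \
        --tag=us-central1-docker.pkg.dev/PROJECT_ID/example-repo/example-app:v1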

Use cases for GKE

GKE and Kubernetes are used in a variety of industries, including robotics, healthcare, retail, education, gaming, and financial services.

For case studies by industry and application, refer to Google Cloud customers.

How GKE works

A GKE environment consists of nodes, which are Compute Engine virtual machines (VMs) that are grouped together to form a cluster. You package your apps (also called workloads) into containers and deploy sets of containers as Pods to your nodes. You use the Kubernetes API to interact with your workloads, including administering, scaling, and monitoring them.
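
As a small sketch of this model, the following hypothetical manifest describes a workload as a set of replicated Pods, and the kubectl commands after it interact with the workload through the Kubernetes API; the names and image are example values.

    # Describe a workload as three replicated Pods (names and image are examples).
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: hello-app
            image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
            ports:
            - containerPort: 8080
    EOF

    # Administer, scale, and monitor the workload through the Kubernetes API.
    kubectl get pods -l app=example-app
    kubectl scale deployment example-app --replicas=5
    kubectl top pods -l app=example-app   # requires cluster metrics to be available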

Kubernetes clusters have a set of management nodes called the control plane, which run system components such as the Kubernetes API server. In GKE, Google manages the control plane and system components for you. In Autopilot mode, which is the recommended way to run GKE, Google also manages your worker nodes. Google automatically upgrades component versions for improved stability and security, ensures high availability, and ensures the integrity of data stored in the cluster's persistent storage.

For more information, refer to GKE cluster architecture.

Best practice:

If you run your workloads in VMs, consider using Migrate for GKE to containerize the workloads without having to rewrite apps or modify source code.

Kubernetes versions and features

GKE automatically upgrades your control plane to new Kubernetes versions that add new features and improvements in the open source project. The Kubernetes version selected for auto-upgrades depends on the stable version in the GKE release channel you select when you create the cluster. You can also manually upgrade your control plane to a different Kubernetes version than the version GKE selects for an upgrade. For detailed information on versions and upgrades, refer to the release notes and GKE versioning and upgrades. If you use GKE Standard mode and don't enroll in a release channel, you won't get automatic upgrades.
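
For example, you might check which Kubernetes versions each release channel currently offers in a region, and enroll a new cluster in a channel at creation time; the cluster name and region below are example values.

    # List the default and available Kubernetes versions per release channel
    # in a region (the region is an example value).
    gcloud container get-server-config --region=us-central1

    # Create a cluster enrolled in the Regular release channel so that it
    # receives automatic upgrades from that channel.
    gcloud container clusters create-auto example-cluster \
        --region=us-central1 \
        --release-channel=regular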

GKE includes most beta and stable Kubernetes features. You can use beta APIs in Kubernetes version 1.24 and later.

Best practice:

If you want to try less stable Kubernetes features in the alpha stage, use alpha Standard clusters. Also, don't enable beta APIs in production clusters, or at least carefully consider the implications before doing so.
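
If you do decide to experiment with alpha features, the following sketch creates a Standard alpha cluster with Kubernetes alpha APIs enabled; the cluster name and zone are example values, and such clusters are meant for short-lived testing rather than production.

    # Create a Standard alpha cluster with Kubernetes alpha APIs enabled
    # (the cluster name and zone are example values).
    # Alpha clusters aren't auto-upgraded, so node auto-upgrade and
    # auto-repair are disabled here.
    gcloud container clusters create example-alpha-cluster \
        --zone=us-central1-a \
        --enable-kubernetes-alpha \
        --no-enable-autoupgrade \
        --no-enable-autorepair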

Modes of operation

GKE has the Autopilot and Standard modes of operation, which offer you different levels of flexibility, responsibility, and control.

If you want more information before you choose a mode, refer to Choose a GKE mode of operation.

Best practice:

Use the fully managed Autopilot mode, in which Google Cloud manages your nodes for you and provides a workload-focused, cost-optimized, production-ready experience. Use Standard mode only if you have a specific need to manually manage your node pools and clusters.
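
To make the distinction concrete, the following sketch contrasts cluster creation in the two modes with the gcloud CLI; the cluster names, locations, machine type, and node count are example values.

    # Autopilot mode: Google manages the nodes; you pick a name and region.
    gcloud container clusters create-auto example-autopilot-cluster \
        --region=us-central1

    # Standard mode: you configure and manage node pools yourself, for example
    # the machine type and node count (example values shown).
    gcloud container clusters create example-standard-cluster \
        --zone=us-central1-a \
        --machine-type=e2-standard-4 \
        --num-nodes=3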

What's next