Features
Increased velocity, reduced risk, and lower TCO
Flexible editions
GKE Standard edition provides fully automated cluster life cycle management, pod and cluster autoscaling, cost visibility, and automated infrastructure cost optimization. It includes all the existing benefits of GKE and offers both the Autopilot and Standard operation modes. The new premium GKE Enterprise edition offers all of the above, plus management, governance, security, and configuration for multiple teams and clusters—all with a unified console experience and integrated service mesh.
Serverless Kubernetes experience using Autopilot
Container-native networking and security
Pod and cluster autoscaling
Prebuilt Kubernetes applications and templates
GPU and TPU support
Multi-team management via fleet team scopes
Multi-cluster management via fleets
Backup for GKE
Multi-cloud support with workload portability
Hybrid support
Managed service mesh
Managed GitOps
Identity and access management
Hybrid networking
Security and compliance
Integrated logging and monitoring
Cluster options
Auto scale
Auto upgrade
Auto repair
Resource limits
Container isolation
Stateful application support
Docker image support
OS built for containers
Private container registry
Fast, consistent builds
Built-in dashboard
Spot VMs
Persistent disks support
Local SSD support
Global load balancing
Linux and Windows support
Serverless containers
Usage metering
Release channels
Software supply chain security
Per-second billing
How It Works
Common Uses
Manage multi-cluster infrastructure
Simplify multi-cluster deployments with fleets
Use fleets to simplify how you manage multi-cluster deployments—such as separating production from non-production environments, or separating services across tiers, locations, or teams. Fleets let you group and normalize Kubernetes clusters, making it easier to administer infrastructure and adopt Google best practices.
Learn about fleet management
Find the right partner to manage multi-cluster infrastructure
Securely manage multi-cluster infrastructure and workloads with the help of Enterprise edition launch partners.
Find a GKE partner
Securely run optimized AI workloads
Run optimized AI workloads with platform orchestration
A robust AI/ML platform considers the following layers: (i) infrastructure orchestration that supports GPUs for training and serving workloads at scale, (ii) flexible integration with distributed computing and data processing frameworks, and (iii) support for multiple teams on the same infrastructure to maximize utilization of resources.
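As an illustrative sketch of the infrastructure-orchestration layer, a workload on GKE can request GPUs through the standard Kubernetes resource model. The manifest below is a minimal, hypothetical example; the Pod name, image path, and accelerator type are placeholders, not values from this page:

```yaml
# Hypothetical Pod requesting one NVIDIA GPU on a GKE node pool with accelerators.
apiVersion: v1
kind: Pod
metadata:
  name: training-job                                     # placeholder name
spec:
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-t4    # example accelerator type
  containers:
  - name: trainer
    image: us-docker.pkg.dev/PROJECT_ID/repo/trainer:latest  # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1                                # number of GPUs requested
```

Scheduling on GPU nodes via a node selector plus an `nvidia.com/gpu` resource limit is the standard Kubernetes pattern; features like GPU sharing are configured on the node pool rather than in the Pod spec.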
Learn more about AI/ML orchestration on GKE
GKE GPU sharing helps in the search for neutrinos
Hear from the San Diego Supercomputer Center (SDSC) and the University of Wisconsin-Madison about how GPU sharing in Google Kubernetes Engine is helping them detect neutrinos at the South Pole with the gigaton-scale IceCube Neutrino Observatory.
Read to learn more
Continuous integration and delivery
Create a continuous delivery pipeline
This hands-on lab shows you how to create a continuous delivery pipeline using Google Kubernetes Engine, Google Cloud Source Repositories, Google Cloud Container Builder, and Spinnaker. After you create a sample application, you configure these services to automatically build, test, and deploy it.
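The build-and-push stage of such a pipeline is typically described in a `cloudbuild.yaml` file. The sketch below is a hypothetical minimal example (the image name is a placeholder; `$PROJECT_ID` and `$SHORT_SHA` are standard Cloud Build substitutions), with deployment left to a downstream tool such as Spinnaker:

```yaml
# Hypothetical cloudbuild.yaml: build and push the sample app's container image,
# which a Spinnaker pipeline can then detect and deploy to GKE.
steps:
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/sample-app:$SHORT_SHA', '.']
images:
- 'gcr.io/$PROJECT_ID/sample-app:$SHORT_SHA'   # pushed to the registry on success
```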
Start hands-on lab
Deploying and running applications
Deploy a containerized web application
Create a containerized web app, test it locally, and then deploy to a Google Kubernetes Engine (GKE) cluster—all directly in the Cloud Shell Editor. By the end of this short tutorial, you'll understand how to build, edit, and debug a Kubernetes app.
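A deployment like the one in the tutorial boils down to two Kubernetes manifests: a Deployment that runs the container and a Service that exposes it. This is a minimal, hypothetical sketch (the app name is a placeholder; the image shown is Google's public `hello-app` sample):

```yaml
# Minimal, hypothetical manifests for a containerized web app on GKE.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web                  # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels: {app: hello-web}
  template:
    metadata:
      labels: {app: hello-web}
    spec:
      containers:
      - name: hello-web
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  type: LoadBalancer               # exposes the app via an external load balancer
  selector:
    app: hello-web
  ports:
  - port: 80
    targetPort: 8080
```

Applying both with `kubectl apply -f` against a GKE cluster yields a replicated web app behind an external IP.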
Start tutorial
Find the right partner to deploy and run
Deploy and run on GKE with the help of our trusted partners, including WALT Labs, Zencore, FTG, and more.
Find a GKE partner
Current deploys and runs on GKE
Current, a leading challenger bank based in New York City, now hosts most of its apps in Docker containers, including its business-critical GraphQL API, using GKE to automate cluster deployment and management of containerized apps.
Read how Current deployed apps with GKE
Migrate workloads
Migrating a two-tier application to GKE
Use Migrate to Containers to move and convert workloads directly into containers in GKE. Migrate a two-tiered LAMP stack application, with both app and database VMs, from VMware to GKE.
Migration partners and services
Work with a trusted partner to get Google Kubernetes Engine on-prem and bring Kubernetes' world-class management to private infrastructure. Or tap into migration services from the Google Cloud Marketplace.
Find a migration partner
Pricing
How GKE pricing works
After free credits are used, total cost is based on edition, cluster operation mode, cluster management fees, and applicable ingress fees.

| Service | Description | Price (USD) |
| --- | --- | --- |
| Free tier | The GKE free tier provides $74.40 in monthly credits per billing account, applied to zonal and Autopilot clusters. | Free |
| Kubernetes: Enterprise edition | Includes Standard edition features plus multi-team, multi-cluster, self-service operations, advanced security, service mesh, configuration, and a unified console experience. | $0.0083 per vCPU per hour |
| Kubernetes: Standard edition | Includes fully automated cluster life cycle management, pod and cluster autoscaling, cost visibility, and automated infrastructure cost optimization. | $0.10 per cluster per hour |
| Compute | Autopilot mode: CPU, memory, and compute resources provisioned for your Pods. Standard mode: each instance is billed according to Compute Engine pricing. | Refer to Compute Engine pricing |
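To see how the free tier interacts with the management fee, here is a rough back-of-the-envelope sketch using the figures above, assuming the $0.10 per-cluster-hour fee and an approximate 730-hour month (other charges, such as compute, are out of scope):

```python
# Rough monthly cost sketch for the GKE cluster management fee.
# Figures come from the pricing section above; 730 h/month is an approximation.
MGMT_FEE_PER_CLUSTER_HOUR = 0.10   # USD per cluster per hour
FREE_TIER_CREDIT = 74.40           # USD per billing account per month
HOURS_PER_MONTH = 730              # average hours in a month

def monthly_management_fee(num_clusters: int) -> float:
    """Management fee before credits are applied."""
    return num_clusters * MGMT_FEE_PER_CLUSTER_HOUR * HOURS_PER_MONTH

def fee_after_free_tier(num_clusters: int) -> float:
    """Fee after the monthly free-tier credit, floored at zero."""
    return max(0.0, monthly_management_fee(num_clusters) - FREE_TIER_CREDIT)

# One eligible cluster costs ~$73.00/month in management fees,
# which the $74.40 credit fully offsets.
print(fee_after_free_tier(1))            # 0.0
print(round(fee_after_free_tier(2), 2))  # 71.6
```

This is why a single zonal or Autopilot cluster effectively incurs no management fee: its roughly $73.00 monthly fee is fully covered by the $74.40 credit.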