AI Infrastructure

Scalable, high-performance, and cost-effective infrastructure for every AI workload.

  • AI accelerators for every use case, from high-performance training to low-cost inference

  • Scale faster with GPUs and TPUs on Google Kubernetes Engine or Google Compute Engine

  • Deployable solutions for Vertex AI, Google Kubernetes Engine, and the Cloud HPC Toolkit

Benefits

Optimize performance and cost at scale

With Google Cloud, you can choose from GPUs, TPUs, or CPUs to support a variety of use cases, including high-performance training, low-cost inference, and large-scale data processing.

Deliver results faster with managed infrastructure

Scale faster and more efficiently with managed infrastructure provided by Vertex AI. Set up ML environments quickly, automate orchestration, manage large clusters, and deploy low-latency applications.

Develop with software that’s purpose-built for AI

Improve AI development productivity by leveraging GKE to manage large-scale workloads. Train and serve foundation models with support for autoscaling, workload orchestration, and automatic upgrades.

Key features

Flexible and scalable hardware for any use case

There’s no one-size-fits-all when it comes to AI workloads. That’s why, together with industry hardware partners such as NVIDIA, Intel, AMD, and Arm, we provide customers with the widest range of AI-optimized compute options across TPUs, GPUs, and CPUs for training and serving the most data-intensive models.

Easy to use, manage, and scale

Orchestrating large-scale AI workloads with Cloud TPUs and Cloud GPUs has historically required manual effort to handle failures, logging, monitoring, and other foundational operations. Google Kubernetes Engine (GKE), the most scalable and fully managed Kubernetes service, considerably simplifies the work required to operate TPUs and GPUs, improving AI development productivity for teams orchestrating large-scale workloads on Cloud TPU and Cloud GPU.
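
To make that concrete, here is a minimal sketch using the official Kubernetes Python client to request a single NVIDIA GPU for a training pod on a GKE cluster that has a GPU node pool. The pod name, namespace, and container image are hypothetical placeholders; `nvidia.com/gpu` is the standard Kubernetes resource name for NVIDIA GPUs.

```python
from kubernetes import client, config

# Assumes kubectl credentials already point at a GKE cluster with GPU nodes.
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-job"),  # hypothetical pod name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="us-docker.pkg.dev/my-project/train:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    # Ask the GKE scheduler to place this pod on a node with one GPU.
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

GKE handles node provisioning, health, and scheduling around this spec, which is the manual work the paragraph above refers to.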

For organizations that prefer the simplicity of abstracting away the infrastructure through managed services, Vertex AI supports training with a variety of frameworks and libraries on Cloud TPU and Cloud GPU.
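
As an illustration of that managed path, the sketch below uses the `google-cloud-aiplatform` SDK to submit a custom training job that Vertex AI runs on accelerator-backed machines it provisions and tears down for you. The project, region, bucket, job name, and container URI are placeholders, and the machine and accelerator settings are just one plausible configuration.

```python
from google.cloud import aiplatform

# Placeholder project, region, and staging bucket.
aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-bucket",
)

job = aiplatform.CustomContainerTrainingJob(
    display_name="llm-finetune",  # hypothetical job name
    container_uri="us-docker.pkg.dev/my-project/train:latest",  # placeholder image
)

# Vertex AI provisions the GPU-backed machine, runs the container, and
# tears the infrastructure down when training finishes.
job.run(
    replica_count=1,
    machine_type="n1-standard-8",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
)
```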

Scale your AI models exponentially

Our AI-optimized infrastructure is built to deliver the global scale and performance demanded by Google products like YouTube, Gmail, Google Maps, Google Play, and Android, which serve billions of users. Our AI infrastructure solutions are all underpinned by Google Cloud's Jupiter data center network, which delivers best-in-industry scale-out capability, from foundational services through to high-intensity AI workloads.

Highly flexible and open platform

For decades, we’ve contributed to critical AI projects like TensorFlow and JAX. We co-founded the PyTorch Foundation and recently announced a new industry consortium, the OpenXLA project. Additionally, Google is the leading CNCF open source contributor, with a 20+ year history of OSS contributions such as TFX, MLIR, OpenXLA, Kubeflow, and Kubernetes, as well as sponsorship of OSS projects critical to the data science community, like Project Jupyter and NumFOCUS.

Furthermore, our AI infrastructure services integrate with the most popular AI frameworks, such as TensorFlow, PyTorch, and MXNet, allowing customers to continue using whichever framework they prefer rather than being constrained to a specific framework or hardware architecture.
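
For example, model code stays framework-native when you move between compute options. A small sketch, assuming a TensorFlow installation on an instance with an attached NVIDIA GPU:

```python
import tensorflow as tf

# List the accelerators TensorFlow can see on this machine.
print(tf.config.list_physical_devices("GPU"))

# Place an ordinary matmul on the first GPU; the model code itself
# does not change when you switch hardware.
with tf.device("/GPU:0"):
    x = tf.random.normal([1024, 1024])
    y = tf.matmul(x, x)
print(y.shape)
```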

Customers

Customers leveraging Google Cloud's AI infrastructure

As AI opens the door for innovation across industries, companies are choosing Google Cloud to take advantage of our open, flexible, and performant infrastructure.

Documentation

Google Cloud Basics
AI Infrastructure Tools on GKE

Run optimized AI/ML workloads with Google Kubernetes Engine (GKE) platform orchestration capabilities.

Google Cloud Basics
Deep Learning VM Images

Deep Learning VM Images are optimized for data science and machine learning tasks. They come with key ML frameworks and tools pre-installed, and work with GPUs.
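
As a quick sanity check after connecting to a Deep Learning VM instance, you can confirm the preinstalled framework sees the attached GPU. A minimal sketch, assuming a PyTorch image with an NVIDIA GPU attached:

```python
import torch

print(torch.__version__)              # framework comes preinstalled on the image
print(torch.cuda.is_available())      # True when the VM has an attached GPU
print(torch.cuda.get_device_name(0))  # name of the attached NVIDIA accelerator
```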

Google Cloud Basics
Deep Learning Containers

Deep Learning Containers are performance-optimized, consistent environments to help you prototype and implement workflows quickly on CPUs or GPUs.

Tutorial
How are Tensor Processing Units optimized for AI/ML?

Learn about the computational requirements of machine learning, and how TPUs were purpose-built to handle the task.
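
To see the kind of work TPUs are optimized for, the short JAX sketch below (assuming it runs on a Cloud TPU VM with JAX's TPU runtime installed) lists the TPU chips and compiles a large matrix multiply, the dense linear algebra at the heart of ML workloads, through XLA:

```python
import jax
import jax.numpy as jnp

# On a Cloud TPU VM this prints a list of TpuDevice entries.
print(jax.devices())

# bfloat16 is the TPU's native matmul precision.
x = jnp.ones((4096, 4096), dtype=jnp.bfloat16)

# jax.jit compiles the computation with XLA and runs it on the TPU.
y = jax.jit(lambda a: a @ a)(x)
print(y.shape)
```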

Google Cloud Basics
TPU System Architecture

TPUs are Google's custom-developed ASICs used to accelerate machine learning workloads. Learn about the underlying system architecture of TPUs from the ground up.

Pricing

Pricing for AI Infrastructure is based on the product selected. You can get started with Google's AI infrastructure for free with Colab or Google Cloud's free tier.

Cloud TPU
For information on TPU pricing for single-device TPU types and TPU Pod types, refer to TPU pricing.

Cloud GPU
For information about GPU pricing for the different GPU types and regions that are available, refer to GPU pricing.