AI/ML orchestration on Cloud Run documentation

Cloud Run is a fully managed platform that lets you run your containerized applications, including AI/ML workloads, directly on Google's scalable infrastructure. It handles operating, configuring, and scaling the underlying infrastructure for you, so you can focus on writing your code. For AI/ML workloads, Cloud Run provides the following:

  • Hardware accelerators: access and manage GPUs for inference at scale.
  • Framework support: integrate with the model serving frameworks you already know and trust, such as Hugging Face Text Generation Inference (TGI) and vLLM.
  • Managed platform: get all the benefits of a managed platform to automate, scale, and secure your entire AI/ML lifecycle while maintaining flexibility.

Explore our tutorials and best practices to see how Cloud Run can optimize your AI/ML workloads.


Explore self-paced training, use cases, reference architectures, and code samples with examples of how to use and connect Google Cloud services.
Use cases

Use NVIDIA L4 GPUs on Cloud Run for real-time AI inference, with fast cold starts and scale-to-zero benefits for large language models (LLMs).
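
As a minimal sketch of what calling such a service can look like, the following assumes the Cloud Run service runs vLLM's OpenAI-compatible server and allows unauthenticated requests; the service URL and model name are placeholders.

```python
import requests

# Placeholder URL of a GPU-backed Cloud Run service running vLLM's
# OpenAI-compatible server (an assumption; substitute your own service URL).
SERVICE_URL = "https://llm-service-xxxxxxxxxx-uc.a.run.app"


def generate(prompt: str) -> str:
    """Send a chat completion request to the vLLM endpoint on Cloud Run."""
    resp = requests.post(
        f"{SERVICE_URL}/v1/chat/completions",
        json={
            "model": "google/gemma-2-9b-it",  # illustrative; use the model the service loaded
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
        },
        timeout=300,  # the first request may include a cold start while a GPU instance spins up
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(generate("Why does scale to zero matter for GPU serving?"))
```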


Learn how to use Cloud Run for production-ready AI applications. This guide describes use cases such as traffic splitting for A/B testing prompts, RAG (Retrieval-Augmented Generation) patterns, and connectivity to vector stores.
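
The RAG pattern itself is compact enough to sketch. The following assumes Vertex AI Gemini for generation; retrieve_documents is a hypothetical stand-in for whatever vector store the service queries, and the project, region, and model names are placeholders.

```python
import os

import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders: project ID from an environment variable, illustrative region and model.
vertexai.init(project=os.environ["GOOGLE_CLOUD_PROJECT"], location="us-central1")
model = GenerativeModel("gemini-1.5-flash")


def retrieve_documents(query: str) -> list[str]:
    """Hypothetical retrieval step: query your vector store
    (for example, pgvector on Cloud SQL or Vertex AI Vector Search)."""
    return ["<retrieved chunk 1>", "<retrieved chunk 2>"]


def answer(query: str) -> str:
    """Ground the prompt with retrieved context before generation (the RAG pattern)."""
    context = "\n\n".join(retrieve_documents(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return model.generate_content(prompt).text
```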


Deploy from Google AI Studio to Cloud Run with one click, and use the Cloud Run MCP (Model Context Protocol) server to let AI agents in IDEs or agent SDKs deploy apps for you.


Integrate NVIDIA L4 GPUs with Cloud Run for cost-efficient LLM serving. This guide emphasizes scale-to-zero and provides deployment steps for models like Gemma 2 with Ollama.
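
A minimal client sketch, assuming the service exposes the standard Ollama REST API and requires authenticated invocations; the service URL and model tag are placeholders.

```python
import requests
from google.auth.transport.requests import Request
from google.oauth2 import id_token

# Placeholder URL of the Ollama-on-Cloud-Run service (an assumption).
SERVICE_URL = "https://ollama-gemma-xxxxxxxxxx-uc.a.run.app"


def ask_gemma(prompt: str) -> str:
    """Call the Ollama generate endpoint with an identity token for an authenticated service."""
    token = id_token.fetch_id_token(Request(), SERVICE_URL)
    resp = requests.post(
        f"{SERVICE_URL}/api/generate",
        headers={"Authorization": f"Bearer {token}"},
        json={"model": "gemma2:9b", "prompt": prompt, "stream": False},
        timeout=300,  # allow for a cold start when the service has scaled to zero
    )
    resp.raise_for_status()
    return resp.json()["response"]
```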


Decouple large model files from the container image using Cloud Storage FUSE. Decoupling improves build times, simplifies updates, and creates a more scalable serving architecture.
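
A minimal sketch of the serving side of this pattern, assuming the bucket that holds the model files is mounted into the container through a Cloud Run volume backed by Cloud Storage FUSE; the mount path, model directory, and the use of Hugging Face Transformers are all assumptions.

```python
import os

from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: the path where the Cloud Storage volume is mounted in the container.
MODEL_DIR = os.environ.get("MODEL_DIR", "/mnt/models/gemma-2-9b-it")

# Loading weights from the mounted bucket keeps them out of the container image,
# so builds stay fast and updating the model is just uploading new objects to the bucket.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)
```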


Use the Cog framework, which is optimized for ML serving, to simplify packaging and deploying containers to Cloud Run.
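
A minimal Cog predictor sketch (predict.py); Cog packages a class like this behind an HTTP prediction endpoint in a container that you can deploy to Cloud Run. The model-loading helper is hypothetical.

```python
# predict.py: a minimal Cog predictor. Cog builds this into a container that serves
# an HTTP prediction endpoint, which can then be deployed as a Cloud Run service.
from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self):
        """Load the model once per instance, not once per request."""
        self.model = load_my_model()  # hypothetical helper; replace with your framework's loader

    def predict(self, prompt: str = Input(description="Text prompt")) -> str:
        """Handle a single prediction request."""
        return self.model.generate(prompt)
```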


Use Cloud Run for lightweight ML inference and build a cost-effective monitoring stack by using native GCP services like Cloud Logging and BigQuery.
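
A minimal sketch of the logging side, assuming the service writes one JSON object per request to stdout so Cloud Logging captures it as a structured entry; a log sink can then route those entries to a BigQuery table for analysis. The field names are placeholders.

```python
import json
import sys
import time


def log_prediction(model_name: str, latency_ms: float, status: str) -> None:
    """Emit one structured log line per inference request. Cloud Run forwards stdout to
    Cloud Logging, which parses JSON lines into structured entries (severity and message
    are recognized as special fields)."""
    entry = {
        "severity": "INFO",
        "message": "prediction",
        "model": model_name,  # placeholder field names for the BigQuery sink
        "latency_ms": latency_ms,
        "status": status,
        "timestamp": time.time(),
    }
    print(json.dumps(entry), file=sys.stdout, flush=True)


log_prediction("gemma-2-9b", latency_ms=182.4, status="ok")
```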


Deploy a simple Flask application that calls the Vertex AI Generative AI API onto a scalable Cloud Run service.
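
A minimal sketch of such a service, assuming the project ID is supplied through an environment variable and the Cloud Run service account has permission to call Vertex AI; the route and model name are illustrative.

```python
import os

from flask import Flask, jsonify, request
import vertexai
from vertexai.generative_models import GenerativeModel

app = Flask(__name__)

# Placeholders: project ID from an environment variable, illustrative region and model.
vertexai.init(project=os.environ["GOOGLE_CLOUD_PROJECT"], location="us-central1")
model = GenerativeModel("gemini-1.5-flash")


@app.route("/generate", methods=["POST"])
def generate():
    prompt = request.get_json()["prompt"]
    return jsonify({"text": model.generate_content(prompt).text})


if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```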


Use the Gemma Python code from AI Studio and deploy it directly to a Cloud Run service, using Secret Manager for secure API key handling.
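
A minimal sketch of the key-handling step, assuming the API key sits in a Secret Manager secret named gemini-api-key that the Cloud Run service account can read; the secret name and model name are placeholders, and the model call follows the shape of the code AI Studio generates.

```python
import os

import google.generativeai as genai
from google.cloud import secretmanager


def get_api_key() -> str:
    """Read the API key from Secret Manager instead of hard-coding it in the source or image."""
    client = secretmanager.SecretManagerServiceClient()
    name = (
        f"projects/{os.environ['GOOGLE_CLOUD_PROJECT']}"
        "/secrets/gemini-api-key/versions/latest"  # placeholder secret name
    )
    return client.access_secret_version(name=name).payload.data.decode("utf-8")


genai.configure(api_key=get_api_key())
model = genai.GenerativeModel("gemma-2-9b-it")  # illustrative; use the model your AI Studio snippet names
print(model.generate_content("Hello from Cloud Run").text)
```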

