Host AI applications on Cloud Run
---------------------------------

AI applications use AI models to operate or perform a specific task. For example, an AI application can use an AI model to summarize documents, or be a chat interface that uses a vector database to retrieve more context.

Cloud Run is one of the [application hosting infrastructures](/docs/generative-ai/choose-models-infra-for-ai) that provides a fully managed environment for your AI application workloads. Cloud Run integrates with AI models such as the [Gemini API](/vertex-ai/generative-ai/docs/model-reference/inference), [Vertex AI endpoints](/vertex-ai/docs/general/deployment), or models hosted on [a GPU-enabled Cloud Run service](/run/docs/configuring/services/gpu). Cloud Run also integrates with [Cloud SQL for PostgreSQL](/sql/docs/postgres/connect-run) and [AlloyDB for PostgreSQL](/alloydb/docs/quickstart/integrate-cloud-run), two databases that offer the `pgvector` extension for Retrieval-Augmented Generation (RAG).
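As a minimal sketch of such an application, the following Flask service, deployable to Cloud Run, summarizes a posted document with the Gemini API through the `google-genai` SDK. The route, model name, and environment variable names are illustrative assumptions rather than part of this page.

```python
import os

from flask import Flask, request
from google import genai  # pip install google-genai flask

app = Flask(__name__)

# With vertexai=True, the client uses Application Default Credentials, so on
# Cloud Run the built-in service identity authenticates the calls (no API key).
client = genai.Client(
    vertexai=True,
    project=os.environ["GOOGLE_CLOUD_PROJECT"],  # assumed env var
    location=os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1"),
)


@app.post("/summarize")  # assumed route
def summarize():
    document = request.get_data(as_text=True)
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # assumed model name
        contents=f"Summarize this document in three sentences:\n\n{document}",
    )
    return {"summary": response.text}


if __name__ == "__main__":
    # Cloud Run injects PORT; default to 8080 for local runs.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```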
Host AI Agents on Cloud Run
---------------------------

AI agents combine the intelligence of advanced AI models with access to tools to take actions on behalf of the user and under the user's control.

You can implement AI agents as Cloud Run services to orchestrate a set of asynchronous tasks and provide information to users through multiple request-response interactions.
### AI agent on Cloud Run architecture
A typical AI agent architecture deployed on Cloud Run can involve several components from Google Cloud and outside of Google Cloud:
1. **Serving and Orchestration:** A Cloud Run service acts as a scalable API endpoint and can handle multiple concurrent users through automatic, on-demand, rapid scaling of instances. This service runs the core agent logic, often using an AI orchestration framework like [LangGraph](https://www.langchain.com/langgraph) or the [Agent Development Kit (ADK)](https://google.github.io/adk-docs/). This layer coordinates calls to the other components. Cloud Run supports [streaming HTTP responses](/run/docs/triggering/https-request#streaming) back to the user, as well as [WebSockets](/run/docs/triggering/websockets). Cloud Run's built-in [service identity](/run/docs/securing/service-identity) provides secure, automatic credentials for calling Google Cloud APIs without managing API keys. (A minimal sketch of this orchestration layer is shown after this list.)
2. **AI Models:** The orchestration layer calls models for reasoning capabilities. These can be:

    - The [Gemini API](/vertex-ai/generative-ai/docs/model-reference/inference)
    - Custom models or other foundation models deployed on [Vertex AI endpoints](/vertex-ai/docs/general/deployment)
    - Your own fine-tuned models served from a separate [GPU-enabled Cloud Run service](/run/docs/configuring/services/gpu)

3. **Memory:** Agents often need memory to retain context and learn from past interactions.

    - **Short-term memory** can be implemented by [connecting Cloud Run to Memorystore for Redis](/memorystore/docs/redis/connect-redis-instance-cloud-run).
    - **Long-term memory** for storing the conversation history or remembering the user's preferences can be implemented by connecting Cloud Run to [Firestore](/firestore/docs), a scalable, serverless NoSQL database. (A Firestore sketch follows this list.)
4. **Databases and Retrieval:** For Retrieval-Augmented Generation (RAG) or fetching structured data:

    - Query specific entity information or perform similarity searches over embeddings by connecting Cloud Run to vector databases like [Cloud SQL for PostgreSQL](/sql/docs/postgres/connect-run) or [AlloyDB for PostgreSQL](/alloydb/docs/quickstart/integrate-cloud-run) with the `pgvector` extension. (A similarity-search sketch follows this list.)
5. **Tools:** The orchestrator uses tools to perform specific tasks that models are not suited for, or to interact with external services, APIs, or websites. This can include:

    - Basic utilities: Precise math calculations, time conversions, or other similar utilities can run in the orchestrating Cloud Run service.
    - API calling: Make calls to other internal or third-party APIs (read or write access).
    - Image or chart generation: Use image generation models or run chart libraries to quickly and effectively create visual content.
    - Browser and OS automation: Run a headless or full graphical operating system within container instances so the agent can browse the web, extract information from websites, or perform actions using clicks and keyboard input. The Cloud Run service returns the screen's pixels. Use libraries like [Puppeteer](https://pptr.dev/) to control the browser.
    - Code execution: Cloud Run provides a [secure environment with multi-layered sandboxing](/run/docs/securing/security#compute-security) and can be configured as a code execution service with minimal or no [IAM permissions](/run/docs/securing/service-identity). A [Cloud Run job](/run/docs/create-jobs) can execute code asynchronously, and a [Cloud Run service](/run/docs/deploying) with a [concurrency of 1](/run/docs/configuring/concurrency) can be used for synchronous execution.
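To make the serving and orchestration layer concrete, here is a minimal sketch of the core agent logic, assuming the Agent Development Kit (ADK). The agent name, model name, instruction, and tool are illustrative; a production agent would also wire in the memory, retrieval, and tool components described above.

```python
from datetime import datetime, timezone

from google.adk.agents import Agent  # pip install google-adk


def current_utc_time() -> dict:
    """Basic utility tool: return the current UTC time so answers are precise."""
    return {"status": "success", "utc_time": datetime.now(timezone.utc).isoformat()}


# The agent pairs a reasoning model with the tools it is allowed to call;
# the framework decides when a tool is actually invoked.
root_agent = Agent(
    name="cloud_run_assistant",  # assumed agent name
    model="gemini-2.0-flash",    # assumed model; any supported model works
    description="Assistant that answers questions and uses utility tools.",
    instruction="Answer the user's questions. Use tools when exact values are needed.",
    tools=[current_utc_time],
)
```

Packaged in a container and exposed over HTTP (for example with the FastAPI wrapper described in the ADK deployment documentation), the agent deploys like any other Cloud Run service, and the built-in service identity supplies the credentials for its model calls.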
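Long-term memory can be as simple as reading and writing conversation turns in Firestore. The sketch below assumes illustrative collection and field names.

```python
from google.cloud import firestore  # pip install google-cloud-firestore

# On Cloud Run, the client authenticates with the service identity automatically.
db = firestore.Client()


def remember_turn(session_id: str, role: str, text: str) -> None:
    """Append one conversation turn to the session's history."""
    db.collection("sessions").document(session_id).collection("turns").add(
        {"role": role, "text": text, "created_at": firestore.SERVER_TIMESTAMP}
    )


def recall_history(session_id: str, limit: int = 20) -> list[dict]:
    """Load the most recent turns to rebuild the agent's context."""
    query = (
        db.collection("sessions")
        .document(session_id)
        .collection("turns")
        .order_by("created_at", direction=firestore.Query.DESCENDING)
        .limit(limit)
    )
    return [doc.to_dict() for doc in query.stream()]
```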
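For retrieval, the sketch below runs a `pgvector` similarity search against Cloud SQL for PostgreSQL over the Unix socket that Cloud Run mounts for an attached instance. The table, columns, and environment variable names are illustrative assumptions, and the embeddings are presumed to be stored already.

```python
import os

import psycopg  # pip install "psycopg[binary]"


def find_similar_chunks(query_embedding: list[float], top_k: int = 5) -> list[tuple]:
    """Return the stored chunks whose embeddings are closest to the query embedding."""
    # Cloud Run exposes an attached Cloud SQL instance on a Unix socket under /cloudsql.
    conninfo = (
        f"host=/cloudsql/{os.environ['INSTANCE_CONNECTION_NAME']} "
        f"dbname={os.environ['DB_NAME']} "
        f"user={os.environ['DB_USER']} password={os.environ['DB_PASS']}"
    )
    # pgvector accepts a bracketed list as a vector literal, e.g. '[0.1,0.2,...]'.
    vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    with psycopg.connect(conninfo) as conn, conn.cursor() as cur:
        # `<=>` is pgvector's cosine-distance operator.
        cur.execute(
            "SELECT id, content FROM chunks ORDER BY embedding <=> %s::vector LIMIT %s",
            (vector_literal, top_k),
        )
        return cur.fetchall()
```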
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-08-21 UTC."],[],[],null,["# Host AI apps and agents on Cloud Run\n\nThis page highlights some use cases for using Cloud Run as a\nhosting platform for the following AI use cases:\n\n- [AI applications](#ai-apps)\n- [AI agents](#ai-agents)\n\nHost AI applications on Cloud Run\n---------------------------------\n\nAI applications use AI models to operate or perform a specific task.\nFor example, an AI application can use an AI model to summarize documents, or be a chat interface that uses a vector database to retrieve more context.\n\nCloud Run is one of the [application hosting infrastructures](/docs/generative-ai/choose-models-infra-for-ai) that provides a fully managed environment for your AI application workloads.\nCloud Run integrates with AI models such as [Gemini API](/vertex-ai/generative-ai/docs/model-reference/inference), [Vertex AI endpoints](/vertex-ai/docs/general/deployment), or models hosted on [a GPU-enabled Cloud Run service](/run/docs/configuring/services/gpu).\nCloud Run also integrates with [Cloud SQL for PostgreSQL](/sql/docs/postgres/connect-run) and [AlloyDB for PostgreSQL](/alloydb/docs/quickstart/integrate-cloud-run), which are two databases offering the `pgvector` extension for Retrieval-Augmented Generation (RAG).\n\nHost AI Agents on Cloud Run\n---------------------------\n\nAI agents combine the intelligence of advanced AI models, with access to tools,\nto take actions on behalf of the user and under the user's control.\n\nYou can implement AI agents as Cloud Run services to orchestrate a set\nof asynchronous tasks and provide information to users, through involving multiple\nrequest-response interactions.\n\n### AI agent on Cloud Run architecture\n\nA typical AI agent architecture deployed on Cloud Run can involve\nseveral components from Google Cloud and outside of Google Cloud:\n\n1. **Serving and Orchestration:** A Cloud Run service acts as a scalable API endpoint, and can handle multiple concurrent users through automatic, on-demand, rapid scaling of instances. This service runs the core agent logic, often using an AI orchestration framework like [LangGraph](https://www.langchain.com/langgraph) or [Agent Development Kit (ADK)](https://google.github.io/adk-docs/). This layer coordinates calls to other components. Cloud Run supports [streaming HTTP responses](/run/docs/triggering/https-request#streaming) back to the user using [WebSockets](/run/docs/triggering/websockets). Cloud Run's built-in [service identity](/run/docs/securing/service-identity) provides secure and automatic credentials for calling Google Cloud APIs without managing API keys.\n\n2. **AI Models:** The orchestration layer calls models for reasoning capabilities. These can be:\n\n - The [Gemini API](/vertex-ai/generative-ai/docs/model-reference/inference)\n - Custom models or other foundation models deployed on [Vertex AI endpoints](/vertex-ai/docs/general/deployment)\n - Your own fine-tuned models served from a separate [GPU-enabled-Cloud Run service](/run/docs/configuring/services/gpu)\n3. 
**Memory:** Agents often need memory to retain context and learn from past interactions.\n\n - **Short-term memory** can be implemented by [connecting Cloud Run to Memorystore for Redis](/memorystore/docs/redis/connect-redis-instance-cloud-run).\n - **Long-term memory** for storing the conversational history or remembering the user's preferences can be implemented by connecting Cloud Run to [Firestore](/firestore/docs), a scalable, serverless NoSQL database.\n4. **Databases and Retrieval:** For Retrieval-Augmented Generation (RAG) or fetching structured data:\n\n - Query specific entity information or perform similarity searches over embeddings by connecting Cloud Run to vector databases like [Cloud SQL for PostgreSQL](/sql/docs/postgres/connect-run) or [AlloyDB for PostgreSQL](/alloydb/docs/quickstart/integrate-cloud-run) with the `pgvector` extension.\n5. **Tools:** The orchestrator uses tools to perform specific tasks that models are not suited for or to interact with external services, APIs, or websites. This can include:\n\n - Basic utilities: Precise math calculations, time conversions, or other similar utilities can run in the orchestrating Cloud Run service.\n - API calling: Make calls to other internal or third-party APIs (read or write access).\n - Image or chart generation: Use image generation models or run chart libraries to quickly and effectively create visual content.\n - Browser and OS automation: Run a headless or a full graphical Operating System within container instances to allow the agent to browse the web, extract information from websites, or perform actions using clicks and keyboard input. The Cloud Run service returns pixels of screens. Use libraries like [Puppeteer](https://pptr.dev/) to control the browser.\n - Code execution: Cloud Run provides a [secure environment with multi-layered sandboxing](/run/docs/securing/security#compute-security) and can be configured to the code execution service with minimal or no [IAM permissions](/run/docs/securing/service-identity). A [Cloud Run job](/run/docs/create-jobs) can be used to execute code asynchronously and a [Cloud Run service](/run/docs/deploying) with a [concurrency of 1](/run/docs/configuring/concurrency) can be used for synchronous execution.\n\nWhat's next\n-----------\n\n- Watch [Build AI agents on Cloud Run](https://www.youtube.com/watch?v=GwL8e5Z1tl4).\n- Try the [codelab](https://codelabs.developers.google.com/codelabs/build-and-deploy-a-langchain-app-on-cloud-run) for learning how to build and deploy a LangChain app to Cloud Run.\n- Learn how to [deploy Agent Development Kit (ADK) to Cloud Run](https://google.github.io/adk-docs/deploy/cloud-run/).\n- Find ready-to-use agent samples in [Agent Development Kit (ADK) samples](https://github.com/google/adk-samples).\n- [Host Model Context Protocol (MCP) servers on Cloud Run](/run/docs/host-mcp-servers)."]]