# Run LLM inference on Cloud Run GPUs with Hugging Face TGI
The following example shows how to run a backend service that uses the [Hugging Face Text Generation Inference (TGI) toolkit](https://huggingface.co/docs/text-generation-inference), a toolkit for deploying and serving Large Language Models (LLMs), to serve Llama 3.
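A minimal deployment sketch follows. The service name (`tgi-llama`), region, and resource sizes are illustrative assumptions rather than values from this page, and `TGI_DLC_IMAGE` stands in for the TGI Deep Learning Container image URI, which the example linked below provides. Llama 3 is a gated model on Hugging Face, so an access token is passed in via `HF_TOKEN`.

```sh
# Sketch only: deploy the Hugging Face TGI container to Cloud Run with one
# NVIDIA L4 GPU attached. TGI_DLC_IMAGE and HF_TOKEN are placeholders you
# must set; the service name, region, and sizes are illustrative assumptions.
gcloud run deploy tgi-llama \
  --image="${TGI_DLC_IMAGE}" \
  --region=us-central1 \
  --port=8080 \
  --cpu=8 \
  --memory=32Gi \
  --gpu=1 \
  --gpu-type=nvidia-l4 \
  --max-instances=1 \
  --no-cpu-throttling \
  --set-env-vars=MODEL_ID=meta-llama/Meta-Llama-3.1-8B-Instruct,HF_TOKEN="${HF_TOKEN}"
```

`MODEL_ID` is the environment variable TGI reads to select which model to serve, and GPU workloads on Cloud Run need CPU always allocated, hence the `--no-cpu-throttling` flag in this sketch.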
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-08-21 UTC."],[],[],null,["# Run LLM inference on Cloud Run GPUs with Hugging Face TGI\n\nThe following example shows how to run a backend service that runs the [Hugging Face Text Generation Inference (TGI) toolkit](https://huggingface.co/docs/text-generation-inference), which is a toolkit for deploying and serving Large Language Models (LLMs), using Llama 3.\n\nSee the entire example at [Deploy Llama 3.1 8B with TGI DLC on Cloud Run](https://huggingface.co/docs/google-cloud/examples/cloud-run-tgi-deployment)."]]