Prebuilt containers for inference and explanation

Vertex AI provides Docker container images that you run as prebuilt containers for serving inferences and explanations from trained model artifacts. These containers, which are organized by machine learning (ML) framework and framework version, provide HTTP inference servers that you can use to serve inferences with minimal configuration. In many cases, using a prebuilt container is simpler than creating your own custom container for inference.
This document lists the prebuilt containers for inference and explanation, and it describes how to use them with model artifacts that you created using Vertex AI's custom training functionality or with model artifacts that you created outside of Vertex AI.
Support policy and schedule

Vertex AI supports each framework version based on a schedule to minimize security vulnerabilities. Review the Support policy schedule to understand the implications of the end-of-support and end-of-availability dates.
Image container yang tersedia
Setiap image container berikut tersedia di beberapa repositori Artifact Registry, yang menyimpan data di berbagai lokasi. Anda bisa menggunakan salah satu
URI untuk image saat melakukan pelatihan kustom; masing-masing menyediakan image
container yang sama. Jika Anda menggunakan konsol Google Cloud untuk membuat resource
Model,
konsol Google Cloud akan memilih URI yang paling cocok dengan lokasi tempat
Anda menggunakan Vertex AI untuk mengurangi
latensi.
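For example, here is a minimal sketch of referencing one of these image URIs with the Vertex AI SDK for Python when uploading model artifacts as a Model. The project ID, bucket path, and PREBUILT_IMAGE_URI value are illustrative placeholders, not values from this page:

```python
# Minimal sketch (placeholder values): upload model artifacts as a Vertex AI
# Model that is served by a prebuilt container. Substitute PREBUILT_IMAGE_URI
# with a URI from the tables below; any of the regional URIs for a given
# image provides the same container.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="my-model",
    # Cloud Storage directory that contains the exported model artifacts.
    artifact_uri="gs://my-bucket/models/my-model/",
    # A prebuilt container image URI from the tables in this document.
    serving_container_image_uri="PREBUILT_IMAGE_URI",
)
```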
TensorFlow

Available TensorFlow container images: the table lists each ML framework version and its supported accelerators (and CUDA version, if applicable).

Optimized TensorFlow runtime

The following container images use the optimized TensorFlow runtime. For more information, see Use the optimized TensorFlow runtime.

Available optimized TensorFlow runtime container images.

PyTorch

Available PyTorch container images.

scikit-learn

Available scikit-learn container images.

XGBoost

Available XGBoost container images.
Use a prebuilt container

You can specify a prebuilt container for inference when you create a custom TrainingPipeline resource that uploads a Model, or when you import model artifacts as a Model.

To use one of these prebuilt containers, you must save your model as one or more model artifacts that comply with the requirements of the prebuilt container. For more information, see Export model artifacts for inference.
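As an illustration, a TensorFlow model is typically exported in the SavedModel format to a Cloud Storage directory that the TensorFlow prebuilt container can load. A minimal sketch, assuming a small Keras model and a placeholder bucket path:

```python
# Minimal sketch, assuming a trained Keras model: export it in TensorFlow
# SavedModel format, the artifact layout that the TensorFlow prebuilt
# containers expect. The Cloud Storage path is a placeholder; writing
# directly to gs:// requires TensorFlow's Cloud Storage filesystem support.
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(3, activation="softmax")(inputs)
model = tf.keras.Model(inputs, outputs)

# Writes saved_model.pb plus the variables/ directory into the target
# directory, which you then pass as the artifact URI when uploading a Model.
tf.saved_model.save(model, "gs://my-bucket/models/my-model/")
```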
The following notebooks demonstrate how to use a prebuilt container to serve inferences.
Notebooks

The notebook table lists what you want to do (for example, train and serve a TensorFlow model using a prebuilt container) alongside a link to the corresponding notebook.
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-08-18 UTC."],[],[],null,["# Prebuilt containers for inference and explanation\n\nVertex AI provides Docker container images that you run as *prebuilt\ncontainers* for serving inferences and [explanations](/vertex-ai/docs/explainable-ai/overview) from trained model\nartifacts. These containers, which are organized by machine learning (ML)\nframework and framework version, provide [HTTP inference\nservers](/vertex-ai/docs/predictions/custom-container-requirements#server) that you can use to\nserve inferences with minimal configuration. In many cases, using a prebuilt\ncontainer is simpler than [creating your own custom container for\ninference](/vertex-ai/docs/predictions/use-custom-container).\n\nThis document lists the prebuilt containers for inferences and explanations,\nand it describes how to use them with model artifacts that you [created using\nVertex AI's custom training\nfunctionality](/vertex-ai/docs/training/code-requirements) or model artifacts that you\ncreated outside of Vertex AI.\n\nSupport policy and schedule\n---------------------------\n\nVertex AI supports each framework version based on a schedule to\nminimize security vulnerabilities. Review the\n[Support policy schedule](/vertex-ai/docs/framework-support-policy#support_policy_schedule) to understand the implications of\nthe end-of-support and end-of-availability dates.\n\nAvailable container images\n--------------------------\n\nEach of the following container images is available in several\nArtifact Registry repositories, which [store data in various\nlocations](/artifact-registry/docs/repo-locations). You can use any of\nthe URIs for an image when you perform custom training; each provides the same\ncontainer image. If you use the Google Cloud console to create a\n[`Model`](/vertex-ai/docs/reference/rest/v1/projects.locations.models) resource,\nthe Google Cloud console selects the URI that best matches the [location where\nyou are using Vertex AI](/vertex-ai/docs/general/locations) in order to reduce\nlatency.\n| **Note:** Using image names without the `latest` tag isn't supported. You must use an image with the `latest` tag.\n\n### TensorFlow\n\n#### Available TensorFlow container images (Click to expand)\n\n### Optimized TensorFlow runtime\n\nThe following container images use the optimized TensorFlow runtime. For\nmore information, see [Use the optimized TensorFlow runtime](/vertex-ai/docs/predictions/optimized-tensorflow-runtime). 
\n\n#### Available optimized TensorFlow runtime container images (Click to expand)\n\n\u003cbr /\u003e\n\n### PyTorch\n\n#### Available PyTorch container images (Click to expand)\n\n### scikit-learn\n\n#### Available scikit-learn container images (Click to expand)\n\n### XGBoost\n\n#### Available XGBoost container images (Click to expand)\n\nUse a prebuilt container\n------------------------\n\nYou can specify a prebuilt container for inference when you\n[create a custom `TrainingPipeline` resource that uploads a `Model`](/vertex-ai/docs/training/create-training-pipeline#custom-job-model-upload) or when\nyou [import model artifacts as a `Model`](/vertex-ai/docs/model-registry/import-model).\n\nTo use one of these prebuilt containers, you must save your model as one or\nmore *model artifacts* that comply with the requirements of the prebuilt\ncontainer. For more information, see\n[Export model artifacts for inference](/vertex-ai/docs/training/exporting-model-artifacts).\n\nThe following notebooks demonstrate how to use a prebuilt container to serve\ninferences.\n\nNotebooks\n---------\n\n| To learn more,\n| run the \"Serving PyTorch image models with prebuilt containers on Vertex AI\" notebook in one of the following\n| environments:\n|\n| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/prediction/pytorch_image_classification_with_prebuilt_serving_containers.ipynb)\n|\n|\n| \\|\n|\n| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fprediction%2Fpytorch_image_classification_with_prebuilt_serving_containers.ipynb)\n|\n|\n| \\|\n|\n| [Open\n| in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fprediction%2Fpytorch_image_classification_with_prebuilt_serving_containers.ipynb)\n|\n|\n| \\|\n|\n| [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/prediction/pytorch_image_classification_with_prebuilt_serving_containers.ipynb)\n\nWhat's next\n-----------\n\n- Learn how to [deploy a model to an endpoint to serve\n inferences](/vertex-ai/docs/predictions/deploy-model-api)."]]
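As a sketch of that next step with the Vertex AI SDK for Python (the model resource name, machine type, and instance values are illustrative placeholders):

```python
# Minimal sketch (placeholder values): deploy an uploaded Model to an
# Endpoint, where the prebuilt container's HTTP inference server handles
# online inference requests.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("MODEL_RESOURCE_NAME")  # e.g. returned by Model.upload()

# Deploying provisions serving nodes that run the prebuilt container image.
endpoint = model.deploy(machine_type="n1-standard-4")

# Instances must match the input signature of the exported model artifacts.
response = endpoint.predict(instances=[[1.0, 2.0, 3.0, 4.0]])
print(response.predictions)
```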