Introduction to Vertex AI
Vertex AI is a machine learning (ML) platform that lets you train
and deploy ML models and AI applications, and customize large language models
(LLMs) for use in your AI-powered applications. Vertex AI combines data engineering, data science, and ML engineering workflows, enabling your teams to collaborate using a common toolset and scale your applications using the benefits of Google Cloud.
Vertex AI provides several options for model training
and deployment:

AutoML lets you train tabular, image, or video data without writing code
or preparing data splits. These models can be deployed for online inference
or queried directly for batch inference.

Custom training gives you complete control over the training process,
including using your preferred ML framework, writing your own training code,
and choosing hyperparameter tuning options.

Model Garden
lets you discover, test, customize, and deploy
Vertex AI and select open-source models and assets.

Generative AI gives you access to Google's large generative AI models
for multiple modalities (text, code, images, speech). You can tune
Google's LLMs to meet your needs, and then deploy them
for use in your AI-powered applications.
After you deploy your models, use Vertex AI's end-to-end MLOps tools to
automate and scale projects throughout the ML lifecycle.
These MLOps tools run on fully managed infrastructure that you can customize
based on your performance and budget needs.
You can use the Vertex AI SDK for Python to run the entire machine
learning workflow in Vertex AI Workbench, a Jupyter
notebook-based development environment. You can collaborate with a team
to develop your model in Colab Enterprise,
a version of Colaboratory that is integrated with
Vertex AI. Other available interfaces
include the Google Cloud console, the Google Cloud CLI command line tool, client
libraries, and Terraform (limited support).
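As an illustration of the SDK-driven workflow, here is a minimal sketch that uses the google-cloud-aiplatform package (the Vertex AI SDK for Python) to upload a trained model, deploy it to an endpoint, and request an online inference. The project, region, artifact URI, and serving container image below are placeholders, and the calls require an authenticated Google Cloud project, so the SDK import is kept inside the function.

```python
def upload_deploy_and_predict(project: str, location: str, artifact_uri: str):
    """Sketch of a minimal Vertex AI serving workflow.

    Requires the google-cloud-aiplatform package and an authenticated
    Google Cloud project; all identifiers below are placeholders.
    """
    # Imported inside the function so the sketch can be read without
    # the SDK installed.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=location)

    # Upload a trained model artifact with a prebuilt serving container
    # (container image shown is illustrative; pick one matching your framework).
    model = aiplatform.Model.upload(
        display_name="example-model",
        artifact_uri=artifact_uri,  # e.g. "gs://my-bucket/model/"
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
    )

    # Deploy the model to an endpoint for real-time (online) inference.
    endpoint = model.deploy(machine_type="n1-standard-2")

    # Query the endpoint; the instance format depends on your model.
    return endpoint.predict(instances=[[1.0, 2.0, 3.0]])
```

For batch inference, the same `Model` object exposes a `batch_predict` method instead, which reads inputs from Cloud Storage or BigQuery without requiring an endpoint.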
Vertex AI and the machine learning (ML) workflow

This section provides an overview of the machine learning workflow and how you
can use Vertex AI to build and deploy your models.
Data preparation: After extracting and cleaning your dataset, perform
exploratory data analysis (EDA) to understand the data schema and
characteristics that are expected by the ML model. Apply data transformations
and feature engineering to the model, and split the data into training,
validation, and test sets.
Explore and visualize data using Vertex AI Workbench notebooks. Vertex AI Workbench integrates with Cloud Storage and
BigQuery to help you access and process your data faster.
For large datasets, use Dataproc Serverless Spark from a
Vertex AI Workbench notebook to run Spark workloads without having to
manage your own Dataproc clusters.
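The train/validation/test split mentioned above can be done in many ways (Vertex AI can also manage splits for you); as a minimal self-contained illustration, a deterministic shuffled split in plain Python looks like this:

```python
import random

def split_dataset(rows, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle rows deterministically and split them into
    training, validation, and test sets by fraction."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # seeded for reproducibility
    n_train = int(len(rows) * train_frac)
    n_val = int(len(rows) * val_frac)
    train = rows[:n_train]
    val = rows[n_train:n_train + n_val]
    test = rows[n_train + n_val:]  # remainder becomes the test set
    return train, val, test

train, val, test = split_dataset(range(100))  # 80/10/10 split
```

In practice you would split actual feature rows (and often stratify by label), but the bookkeeping is the same.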
Model training: Choose a training method to train a model and tune it for
performance.
To train a model without writing code, see the AutoML
overview. AutoML supports tabular, image, and
video data.
To write your own training code and train custom models using your preferred
ML framework, see the Custom training overview.

Optimize hyperparameters for custom-trained models using custom tuning
jobs.
Vertex AI Vizier tunes hyperparameters for you in complex machine
learning (ML) models.
Use Vertex AI Experiments to train your model using
different ML techniques and compare the results.
Register your trained models in the
Vertex AI Model Registry for versioning and hand-off to
production. Vertex AI Model Registry integrates with validation and
deployment features such as model evaluation and endpoints.
Model evaluation and iteration: Evaluate your trained model, make
adjustments to your data based on evaluation metrics, and iterate on your
model.
Use model evaluation metrics, such as precision and recall, to
evaluate and compare the performance of your models. Create evaluations
through Vertex AI Model Registry, or include evaluations in your
Vertex AI Pipelines workflow.
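Precision and recall are simple ratios over a classifier's labeled outcomes; Vertex AI computes them for you in a model evaluation, but as a self-contained illustration of what the metrics measure:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for binary classification outputs."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)  # true positives
    fp = sum(1 for t, p in pairs if t != positive and p == positive)  # false positives
    fn = sum(1 for t, p in pairs if t == positive and p != positive)  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of actual positives, how many were found
    return precision, recall

p, r = precision_recall([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])  # both 2/3 here
```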
Model serving: Deploy your model to production and get online inferences or query it directly for batch inferences.
Deploy your custom-trained model using prebuilt or
custom containers to get real-time online
inferences (sometimes called HTTP inference).
Get asynchronous batch inferences, which don't require
deployment to endpoints.
The optimized TensorFlow runtime lets you serve TensorFlow
models at a lower cost and with lower latency than open-source-based
prebuilt TensorFlow Serving containers.
For online serving cases with tabular models, use
Vertex AI Feature Store to serve features from a
central repository and monitor feature health.
Vertex Explainable AI helps you understand how each feature contributes to
model inference (feature attribution) and find mislabeled data from the
training dataset (example-based explanation).
Deploy and get online inferences for models trained with
BigQuery ML.
Model monitoring: Monitor the performance of your deployed model. Use
incoming inference data to retrain your model for improved performance.
Vertex AI Model Monitoring monitors models for training-serving skew and
inference drift, and sends you alerts when the incoming
inference data skews too far from the training baseline.
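The core idea behind skew detection can be illustrated with a simple statistical comparison. Vertex AI Model Monitoring uses more robust distribution-distance measures, but as a hypothetical sketch (not the service's actual algorithm), flagging a feature whose serving mean drifts too far from its training baseline, measured in training standard deviations, looks like this:

```python
import statistics

def feature_skew(training_values, serving_values, threshold=2.0):
    """Illustrative skew heuristic: compare the serving-time mean of a
    feature against its training baseline, in units of the training
    standard deviation. Not the algorithm Vertex AI actually uses."""
    baseline_mean = statistics.mean(training_values)
    baseline_std = statistics.stdev(training_values)
    drift = abs(statistics.mean(serving_values) - baseline_mean) / baseline_std
    return drift, drift > threshold  # (drift score, alert flag)

# Serving data has shifted well above the training baseline -> alert.
drift, alert = feature_skew([10, 11, 9, 10, 10, 12], [16, 17, 15, 16])
```

A production monitor would compare full distributions (and handle categorical features), but the alert-on-threshold pattern is the same.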
What's next

Learn about Vertex AI's MLOps features.

Learn about interfaces that you can use to interact with Vertex AI.

Last updated 2025-08-18 UTC.