# Tabular Workflow for End-to-End AutoML

This document provides an overview of the End-to-End AutoML [pipeline and components](#components). To learn how to train a model with End-to-End AutoML, see [Train a model with End-to-End AutoML](/vertex-ai/docs/tabular-data/tabular-workflows/e2e-automl-train).

Tabular Workflow for End-to-End AutoML is a complete AutoML pipeline for classification and regression tasks. It is similar to the [AutoML API](/vertex-ai/docs/tabular-data/classification-regression/overview), but it lets you choose what to control and what to automate. Instead of having controls for the *whole* pipeline, you have controls for *every step* in the pipeline. These pipeline controls include:
- Data splitting
- Feature engineering
- Architecture search
- Model training
- Model ensembling
- Model distillation
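To make the idea of per-step controls concrete, here is a minimal sketch of how they might be expressed as pipeline parameters when launching the workflow. The parameter names below are illustrative placeholders, not the workflow's documented API:

```python
# Sketch only: these parameter names are hypothetical stand-ins for the
# per-step controls listed above, not the workflow's actual parameters.
pipeline_controls = {
    # Data splitting: fractions should sum to 1.0
    "train_fraction": 0.8,
    "eval_fraction": 0.1,
    "test_fraction": 0.1,
    # Architecture search: restrict or skip the search to cut training time
    "skip_architecture_search": False,
    # Model ensembling: a smaller ensemble means a smaller final model
    "ensemble_size": 5,
    # Model distillation: distill the ensemble into one smaller model
    "run_distillation": False,
}
```

When you launch a pipeline on Vertex AI Pipelines, values like these are passed as the `parameter_values` argument of a `PipelineJob` in the Vertex AI SDK.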
Benefits
--------

The following are some of the benefits of Tabular Workflow for End-to-End AutoML:
- Supports **large datasets** that are multiple TB in size and have up to 1,000 columns.
- Lets you **improve stability and lower training time** by limiting the search space of architecture types or by skipping architecture search.
- Lets you **improve training speed** by manually selecting the hardware used for architecture search and training.
- Lets you **reduce model size and improve latency** with distillation or by changing the ensemble size.
- Each AutoML component can be inspected in a powerful pipeline graph interface that lets you see the transformed data tables, the evaluated model architectures, and many more details.
- Each AutoML component gets extended flexibility and transparency, such as the ability to customize parameters and hardware, view process status and logs, and more.
End-to-End AutoML on Vertex AI Pipelines
----------------------------------------
Tabular Workflow for End-to-End AutoML is a managed instance of Vertex AI Pipelines.

[Vertex AI Pipelines](/vertex-ai/docs/pipelines/introduction) is a serverless service that runs Kubeflow pipelines. You can use pipelines to automate and monitor your machine learning and data preparation tasks. Each step in a pipeline performs part of the pipeline's workflow. For example, a pipeline can include steps to split data, transform data types, and train a model. Because steps are instances of pipeline components, steps have inputs, outputs, and a container image. Step inputs can be set from the pipeline's inputs, or they can depend on the output of other steps within the pipeline. These dependencies define the pipeline's workflow as a directed acyclic graph.
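As a small illustration of that idea (the step names come from the example in this paragraph, not from the workflow's real step graph), the dependencies between steps form a directed acyclic graph that fixes a valid execution order:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Map each step to the set of steps whose outputs it depends on.
deps = {
    "split-data": set(),
    "transform-types": {"split-data"},   # needs the split data
    "train-model": {"transform-types"},  # needs the transformed data
}

# A topological order of the graph is a valid execution order for the pipeline.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['split-data', 'transform-types', 'train-model']
```

A pipeline runner walks this graph, starting each step as soon as all of its predecessors have produced their outputs.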
Overview of pipeline and components
-----------------------------------

The following diagram shows the modeling pipeline for Tabular Workflow for End-to-End AutoML:
The pipeline components are:

1. **feature-transform-engine**: Performs feature engineering. See [Feature Transform Engine](/vertex-ai/docs/tabular-data/tabular-workflows/feature-engineering) for details.
2. **split-materialized-data**: Splits the materialized data into a training set, an evaluation set, and a test set.

   Input:
   - Materialized data `materialized_data`.

   Output:
   - Materialized training split `materialized_train_split`.
   - Materialized evaluation split `materialized_eval_split`.
   - Materialized test split `materialized_test_split`.
3. **merge-materialized-splits**: Merges the materialized evaluation split and the materialized train split.
4. **automl-tabular-stage-1-tuner**: Performs model architecture search and tunes hyperparameters.

   - An architecture is defined by a set of hyperparameters.
   - Hyperparameters include the model type and the model parameters.
   - The model types considered are neural networks and boosted trees.
   - The system trains a model for each architecture considered.
5. **automl-tabular-cv-trainer**: Cross-validates architectures by training models on different folds of the input data.

   - The architectures considered are those that gave the best results in the previous step.
   - The system selects approximately the ten best architectures. The precise number is defined by the training budget.
6. **automl-tabular-ensemble**: Ensembles the best architectures to produce a final model. The following diagram illustrates K-fold cross-validation with bagging:
7. **condition-is-distill**: **Optional**. Creates a smaller version of the ensemble model. A smaller model reduces latency and cost for inference.
8. **automl-tabular-infra-validator**: Validates whether the trained model is a valid model.
9. **model-upload**: Uploads the model.
10. **condition-is-evaluation**: **Optional**. Uses the test set to calculate evaluation metrics.

What's next
-----------

- [Train a model using End-to-End AutoML](/vertex-ai/docs/tabular-data/tabular-workflows/e2e-automl-train)

*Last updated 2025-08-18 UTC.*