# Pricing for Tabular Workflows

When you train a model using a Tabular Workflow, you are charged based on the cost of the infrastructure and the dependent services. When you make inferences with this model, you are charged based on the cost of the infrastructure.

The cost of the infrastructure depends on the following factors:

- The number of machines you use. You can set associated parameters during model training, batch inference, or online inference.
- The type of machines you use. You can set this parameter during model training, batch inference, or online inference.
- The length of time the machines are in use.
  - If you train a model or make batch inferences, this is a measure of the total processing time of the operation.
  - If you make online inferences, this is a measure of the time that your model is deployed to an endpoint.
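Putting these factors together, the infrastructure cost for a single operation is essentially machines × per-machine hourly rate × hours. Below is a minimal sketch of that arithmetic; the hourly rate used is a hypothetical placeholder, not an actual Vertex AI price.

```python
def infrastructure_cost(machine_count: int, hourly_rate: float, hours: float) -> float:
    """Infrastructure cost = number of machines x per-machine hourly rate x hours in use.

    The hourly rate depends on the machine type you select for model
    training, batch inference, or online inference. The rate passed in
    here is illustrative only.
    """
    return machine_count * hourly_rate * hours


# Example: a training job on 2 machines at a hypothetical $0.19/hour,
# running for 20 hours of total processing time.
cost = infrastructure_cost(machine_count=2, hourly_rate=0.19, hours=20)
print(f"${cost:.2f}")
```

For online inference, `hours` would instead be the time the model stays deployed to the endpoint, which accrues whether or not requests are being served.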
Tabular Workflows runs multiple dependent services in your project on your behalf: [Dataflow](https://cloud.google.com/dataflow), [BigQuery](https://cloud.google.com/bigquery), [Cloud Storage](https://cloud.google.com/storage), [Vertex AI Pipelines](/vertex-ai/docs/pipelines/introduction), and [Vertex AI Training](https://cloud.google.com/vertex-ai#section-9). These services charge you directly.

Examples of training cost calculation
-------------------------------------

**Example 1: 110MB dataset in CSV format, trained for one hour with default hardware configuration.**

The cost breakdown for the default workflow with Architecture Search and Training is as follows:

Optionally, you can enable model distillation to reduce the resulting model size. The cost breakdown is as follows:

**Example 2: 1.84TB dataset in BigQuery, trained for 20 hours with hardware override.**

The hardware configuration for this example is as follows:

The cost breakdown for the default workflow with Architecture Search and Training is as follows:

Last updated 2025-09-02 UTC.
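Each of the cost breakdowns above itemizes the charges from the dependent services, which bill your project directly. As a generic illustration of how such a breakdown sums to a total training cost, here is a short sketch; the service names come from this page, but every dollar amount is made up for the example.

```python
# Hypothetical per-service charges (USD) for one Tabular Workflow training
# run. The services are the dependent services listed above; the amounts
# are placeholders, not real Vertex AI prices.
service_costs = {
    "Vertex AI Training": 5.25,
    "Vertex AI Pipelines": 0.30,
    "Dataflow": 1.10,
    "BigQuery": 0.45,
    "Cloud Storage": 0.05,
}

# The total training cost is simply the sum of the per-service charges.
total = sum(service_costs.values())
print(f"Total training cost: ${total:.2f}")
```

When estimating your own costs, replace the placeholder amounts with figures from the pricing pages of each dependent service, since those services charge you directly.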