Learn how to request Google Cloud machine resources in Vertex AI Pipelines
You can run your Python component on Vertex AI Pipelines by using the Google Cloud-specific machine resources offered by Vertex AI custom training.
Create a custom training job from a component using Vertex AI Pipelines
The following sample shows how to use the create_custom_training_job_from_component method to transform a Python component into a custom training job with user-defined Google Cloud machine resources, and then run the compiled pipeline on Vertex AI Pipelines (see the compile-and-submit sketch after the parameter list):
import kfp
from kfp import dsl
from google_cloud_pipeline_components.v1.custom_job import create_custom_training_job_from_component

# Create a Python component
@dsl.component
def my_python_component():
    import time
    time.sleep(1)

# Convert the above component into a custom training job
custom_training_job = create_custom_training_job_from_component(
    my_python_component,
    display_name='DISPLAY_NAME',
    machine_type='MACHINE_TYPE',
    accelerator_type='ACCELERATOR_TYPE',
    accelerator_count='ACCELERATOR_COUNT',
    boot_disk_type='BOOT_DISK_TYPE',
    boot_disk_size_gb='BOOT_DISK_SIZE',
    network='NETWORK',
    reserved_ip_ranges='RESERVED_IP_RANGES',
    nfs_mounts='NFS_MOUNTS',
    persistent_resource_id='PERSISTENT_RESOURCE_ID',
)

# Define a pipeline that runs the custom training job
@dsl.pipeline(
    name="resource-spec-request",
    description="A simple pipeline that requests a Google Cloud machine resource",
    pipeline_root='PIPELINE_ROOT',
)
def pipeline():
    training_job_task = custom_training_job(
        project='PROJECT_ID',
        location='LOCATION',
    ).set_display_name('training-job-task')
Replace the following:
DISPLAY_NAME: The name of the custom job. If you don't specify a name, the component name is used by default.
MACHINE_TYPE: The machine type for running the custom job, for example e2-standard-4. For more information about machine types, see Machine types. If you specify a TPU as the accelerator_type, set this to cloud-tpu. For more information, see the machine_type parameter reference.
ACCELERATOR_TYPE: The type of accelerator attached to the machine. For more information about the available GPUs and how to configure them, see GPUs. For more information about the available TPU types and how to configure them, see TPUs. For more information, see the accelerator_type parameter reference.
ACCELERATOR_COUNT: The number of accelerators attached to the machine that runs the custom job. If you specify an accelerator type, the accelerator count defaults to 1.
BOOT_DISK_TYPE: The type of the boot disk. For more information, see the boot_disk_type parameter reference.
BOOT_DISK_SIZE: The size of the boot disk in GB. For more information, see the boot_disk_size_gb parameter reference.
NETWORK: If the custom job is peered to a Compute Engine network that has private services access configured, specify the full name of the network. For more information, see the network parameter reference.
RESERVED_IP_RANGES: A list of names for the reserved IP ranges under the VPC network used to deploy the custom job. For more information, see the reserved_ip_ranges parameter reference.
NFS_MOUNTS: A list of NFS mount resources in JSON dict format (see the sketch after this list). For more information, see the nfs_mounts parameter reference.
PERSISTENT_RESOURCE_ID (Preview): The ID of the persistent resource to run the pipeline on. If you specify a persistent resource, the pipeline runs on the existing machines associated with the persistent resource instead of on-demand, short-lived machine resources. Note that the network and CMEK configuration for the pipeline must match the configuration specified for the persistent resource. For more information about persistent resources and how to create them, see Create a persistent resource.
PIPELINE_ROOT: Specify a Cloud Storage URI that your pipeline's service account can access. The artifacts of your pipeline runs are stored within the pipeline root.
PROJECT_ID: The Google Cloud project that this pipeline runs in.
LOCATION: The location or region that this pipeline runs in.
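For illustration, here is what an nfs_mounts value might look like. This is a minimal sketch, not taken from the original sample; the key names (server, path, mountPoint) are assumptions based on the Vertex AI NfsMount API fields, so verify them against the nfs_mounts parameter reference before relying on them:

# Hypothetical nfs_mounts argument: a list of JSON-style dicts, one per NFS share.
# Key names are assumptions; check the nfs_mounts parameter reference.
nfs_mounts = [
    {
        "server": "10.0.0.2",        # example NFS server address
        "path": "/exported/share",   # example exported path on the server
        "mountPoint": "data",        # assumed destination under the job's NFS mount root
    },
]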
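After you replace the placeholders, compile the pipeline and submit it to Vertex AI Pipelines. The following is a minimal sketch rather than part of the original sample: it assumes the kfp compiler and the google-cloud-aiplatform SDK are installed, and the file name pipeline.yaml is arbitrary.

from kfp import compiler
from google.cloud import aiplatform

# Compile the pipeline defined above into a pipeline spec file.
compiler.Compiler().compile(
    pipeline_func=pipeline,
    package_path='pipeline.yaml',  # arbitrary output file name
)

# Submit the compiled pipeline to Vertex AI Pipelines.
aiplatform.init(project='PROJECT_ID', location='LOCATION')
run = aiplatform.PipelineJob(
    display_name='resource-spec-request',
    template_path='pipeline.yaml',
    pipeline_root='PIPELINE_ROOT',
)
run.submit()

When the training-job-task step runs, the custom training job requests the machine resources that you specified in create_custom_training_job_from_component.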
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-02 UTC."],[],[],null,["# Learn how to request Google Cloud machine resources in Vertex AI Pipelines\n\nYou can run your Python component on Vertex AI Pipelines by using Google Cloud-specific machine resources offered by Vertex AI custom training.\n\nYou can use the [`create_custom_training_job_from_component`](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component) method from the [Google Cloud Pipeline Components](/vertex-ai/docs/pipelines/gcpc-list) to transform a Python component into a Vertex AI custom training job. [Learn how to create a custom job](/vertex-ai/docs/training/create-custom-job).\n\nCreate a custom training job from a component using Vertex AI Pipelines\n-----------------------------------------------------------------------\n\nThe following sample shows how to use the [`create_custom_training_job_from_component`](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component) method to transform a Python component into a custom training job with user-defined Google Cloud machine resources, and then run the compiled pipeline on Vertex AI Pipelines: \n\n\n import kfp\n from kfp import dsl\n from google_cloud_pipeline_components.v1.custom_job import create_custom_training_job_from_component\n\n # Create a Python component\n @dsl.component\n def my_python_component():\n import time\n time.sleep(1)\n\n # Convert the above component into a custom training job\n custom_training_job = create_custom_training_job_from_component(\n my_python_component,\n display_name = '\u003cvar translate=\"no\"\u003eDISPLAY_NAME\u003c/var\u003e',\n machine_type = '\u003cvar translate=\"no\"\u003eMACHINE_TYPE\u003c/var\u003e',\n accelerator_type='\u003cvar translate=\"no\"\u003eACCELERATOR_TYPE\u003c/var\u003e',\n accelerator_count='\u003cvar translate=\"no\"\u003eACCELERATOR_COUNT\u003c/var\u003e',\n boot_disk_type: '\u003cvar translate=\"no\"\u003eBOOT_DISK_TYPE\u003c/var\u003e',\n boot_disk_size_gb: '\u003cvar translate=\"no\"\u003eBOOT_DISK_SIZE\u003c/var\u003e',\n network: '\u003cvar translate=\"no\"\u003eNETWORK\u003c/var\u003e',\n reserved_ip_ranges: '\u003cvar translate=\"no\"\u003eRESERVED_IP_RANGES\u003c/var\u003e',\n nfs_mounts: '\u003cvar translate=\"no\"\u003eNFS_MOUNTS\u003c/var\u003e'\n persistent_resource_id: '\u003cvar translate=\"no\"\u003ePERSISTENT_RESOURCE_ID\u003c/var\u003e'\n )\n\n # Define a pipeline that runs the custom training job\n @dsl.pipeline(\n name=\"resource-spec-request\",\n description=\"A simple pipeline that requests a Google Cloud machine resource\",\n pipeline_root='\u003cvar translate=\"no\"\u003ePIPELINE_ROOT\u003c/var\u003e',\n )\n def pipeline():\n training_job_task = custom_training_job(\n project='\u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e',\n location='\u003cvar 
translate=\"no\"\u003eLOCATION\u003c/var\u003e',\n ).set_display_name('training-job-task')\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eDISPLAY_NAME\u003c/var\u003e: The name of the custom job. If you don't specify the name, the component name is used, by default.\n\n- \u003cvar translate=\"no\"\u003eMACHINE_TYPE\u003c/var\u003e: The type of the machine for running the custom job---for example, `e2-standard-4`. For more information about machine types, see [Machine types](/vertex-ai/docs/training/configure-compute#machine-types). If you specified a TPU as the `accelerator_type`, set this to `cloud-tpu`.\n For more information, see the [`machine_type` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.machine_type).\n\n- \u003cvar translate=\"no\"\u003eACCELERATOR_TYPE\u003c/var\u003e: The type of accelerator attached to the machine. For more information about the available GPUs and how to configure them, see [GPUs](/vertex-ai/docs/training/configure-compute#specifying_gpus). For more information about the available TPU types and how to configure them, see [TPUs](/vertex-ai/docs/training/configure-compute#tpu).\n For more information, see the [`accelerator_type` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.accelerator_type).\n\n- \u003cvar translate=\"no\"\u003eACCELERATOR_COUNT\u003c/var\u003e: The number of accelerators attached to the machine running the custom job. If you specify the accelerator type, the accelerator count is set to `1`, by default.\n\n- \u003cvar translate=\"no\"\u003eBOOT_DISK_TYPE\u003c/var\u003e: The type of boot disk.\n For more information, see the [`boot_disk_type` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.boot_disk_type).\n\n- \u003cvar translate=\"no\"\u003eBOOT_DISK_SIZE\u003c/var\u003e: The size of the boot disk in GB.\n For more information, see the [`boot_disk_size_gb` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.boot_disk_size_gb).\n\n- \u003cvar translate=\"no\"\u003eNETWORK\u003c/var\u003e: If the custom job is peered to a Compute Engine\n network that has private services access configured, specify the full name of the network. 
For more information, see the [`network` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.network).\n\n- \u003cvar translate=\"no\"\u003eRESERVED_IP_RANGES\u003c/var\u003e: A list of names for the reserved IP ranges\n under the VPC network used to deploy the custom job.\n For more information, see the [`reserved_ip_ranges` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.reserved_ip_ranges).\n\n- \u003cvar translate=\"no\"\u003eNFS_MOUNTS\u003c/var\u003e: A list of NFS mount resources in JSON dict format.\n For more information, see the [`nfs_mounts` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.nfs_mounts).\n\n- \u003cvar translate=\"no\"\u003ePERSISTENT_RESOURCE_ID\u003c/var\u003e (preview): The ID of the persistent\n resource to run the pipeline. If you specify\n a persistent resource, the pipeline runs on existing machines\n associated to the persistent resource, instead of on-demand and short-lived\n machine resources. Note that the network and CMEK configuration for the\n pipeline must match the configuration specified for the persistent resource.\n For more information about persistent resources and how to create them, see\n [Create a persistent resource](/vertex-ai/docs/training/persistent-resource-create#create-persistent-resource-gcloud).\n\n- \u003cvar translate=\"no\"\u003ePIPELINE_ROOT\u003c/var\u003e: Specify a Cloud Storage URI that your pipelines service account can access. The artifacts of your pipeline runs are stored within the pipeline root.\n\n- \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e: The Google Cloud project that this pipeline runs in.\n\n- \u003cvar translate=\"no\"\u003eLOCATION\u003c/var\u003e: The location or region that this pipeline runs in.\n\nAPI Reference\n-------------\n\nFor a complete list of arguments supported by the [`create_custom_training_job_from_component`](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component) method, see the [Google Cloud Pipeline Components SDK Reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/index.html)."]]