# Efficient PyTorch training with cloud data

Vertex AI Neural Architecture Search places no requirements on how you design your trainer, so you can choose any training framework to build it.

For PyTorch training with large amounts of data, the best practice is to use the distributed training paradigm and to read data from Cloud Storage.
See the blog post
[Efficient PyTorch training with Vertex AI](https://cloud.google.com/blog/products/ai-machine-learning/efficient-pytorch-training-with-vertex-ai) for methods to improve training
performance. With data on Cloud Storage, you can see an overall 6x performance improvement by using `WebDataset` and choosing the `DistributedDataParallel` or `FullyShardedDataParallel` distributed training strategy. Training performance with data on Cloud Storage is similar to training performance with data on a local disk.
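
To make the pattern concrete, here is a minimal sketch (not taken from the blog post or the prebuilt example) of streaming `WebDataset` shards from Cloud Storage while training under `DistributedDataParallel`. The bucket path, shard naming, sample keys (`jpg`, `cls`), model, and hyperparameters are illustrative assumptions; adapt them to your own data layout.

```python
# Minimal sketch, assuming an image-classification dataset packed into
# WebDataset .tar shards on Cloud Storage. Paths and keys are placeholders.
import os

import torch
import torch.distributed as dist
import torch.nn as nn
import torchvision
import torchvision.transforms as T
import webdataset as wds
from torch.nn.parallel import DistributedDataParallel as DDP

# "pipe:gsutil cat" streams each .tar shard directly from Cloud Storage,
# so no data is staged on local disk. The gs:// path is a placeholder.
SHARDS = "pipe:gsutil cat gs://YOUR_BUCKET/shards/train-{000000..000999}.tar"


def make_loader(batch_size):
    transform = T.Compose([T.RandomResizedCrop(224), T.ToTensor()])
    dataset = (
        # split_by_node gives each rank a distinct subset of shards.
        wds.WebDataset(SHARDS, nodesplitter=wds.split_by_node)
        .shuffle(1000)                    # buffer-based sample shuffle
        .decode("pil")                    # decode image bytes to PIL images
        .to_tuple("jpg", "cls")           # (image, label) by key in the .tar
        .map_tuple(transform, lambda y: y)
        .batched(batch_size)
    )
    # Batching happens inside the pipeline, so the loader gets batch_size=None.
    return wds.WebLoader(dataset, batch_size=None, num_workers=4)


def main():
    # torchrun / Vertex AI set RANK, WORLD_SIZE, and LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torchvision.models.resnet50(num_classes=1000).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # or wrap with FSDP instead
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in make_loader(batch_size=64):
        images = images.cuda(local_rank, non_blocking=True)
        labels = torch.as_tensor(labels).cuda(local_rank, non_blocking=True)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The same data pipeline works if you swap the `DistributedDataParallel` wrapper for `FullyShardedDataParallel`; in either case, launch one process per GPU (for example with `torchrun`), which sets the `RANK`, `WORLD_SIZE`, and `LOCAL_RANK` variables the sketch reads.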
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-08-18 UTC."],[],[],null,["# Efficient PyTorch training with cloud data\n\nVertex AI Neural Architecture Search has no requirements describing how to\ndesign your trainers. Therefore, choose any training frameworks to build the trainer.\n\nFor PyTorch training with large amounts of data, the best practice is to use the distributed training\nparadigm and to read data from Cloud Storage.\nCheck out the blog post\n[Efficient PyTorch training with Vertex AI](https://cloud.google.com/blog/products/ai-machine-learning/efficient-pytorch-training-with-vertex-ai) for methods to improve the training\nperformance. You can see an overall 6x performance improvement with data on\nCloud Storage using `WebDataset` and choosing `DistributedDataParallel` or\n`FullyShardedDataParallel` distributed training strategies. The training\nperformance using data on Cloud Storage is similar to the training performance using data on\na local disk.\n\nThe prebuilt\n[MNasNet classification example](https://github.com/google/vertex-ai-nas/blob/main/pytorch/classification/cloud_search_main.py)\nhas incorporated these methods into its training pipeline."]]