Frequently Asked Questions - Cloud TPU
======================================

This document contains a list of frequently asked questions about Cloud TPUs. It
is broken up into sections:

1. Framework independent FAQs - questions about using Cloud TPUs regardless of which ML framework you are using.
2. JAX FAQs - questions about using Cloud TPUs with JAX.
3. PyTorch FAQs - questions about using Cloud TPUs with PyTorch.

Framework independent FAQs
--------------------------

### How do I check which process is using the TPU on a Cloud TPU VM?

Run `tpu-info` on the Cloud TPU VM to print the process ID and other
information about the process using the TPU. See [supported metrics](/tpu/docs/tpu-monitoring-library#list-all-supported-metric-names) for the metrics
and their corresponding definitions.
    tpu-info

The output from `tpu-info` is similar to the following:

    TPU Chips
    ┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
    ┃ Chip        ┃ Type        ┃ Devices ┃ PID    ┃
    ┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
    │ /dev/accel0 │ TPU v4 chip │ 1       │ 130007 │
    │ /dev/accel1 │ TPU v4 chip │ 1       │ 130007 │
    │ /dev/accel2 │ TPU v4 chip │ 1       │ 130007 │
    │ /dev/accel3 │ TPU v4 chip │ 1       │ 130007 │
    └─────────────┴─────────────┴─────────┴────────┘

    TPU Runtime Utilization
    ┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
    ┃ Device ┃ Memory usage         ┃ Duty cycle ┃
    ┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
    │ 0      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    │ 1      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    │ 2      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    │ 3      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    └────────┴──────────────────────┴────────────┘

    TensorCore Utilization
    ┏━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
    ┃ Chip ID ┃ TensorCore Utilization ┃
    ┡━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
    │ 0       │ 0.00%                  │
    │ 1       │ 0.00%                  │
    │ 3       │ 0.00%                  │
    │ 2       │ 0.00%                  │
    └─────────┴────────────────────────┘

    Buffer Transfer Latency
    ┏━━━━━━━━━━━━━┳━━━━━┳━━━━━┳━━━━━┳━━━━━━┓
    ┃ Buffer Size ┃ P50 ┃ P90 ┃ P95 ┃ P999 ┃
    ┡━━━━━━━━━━━━━╇━━━━━╇━━━━━╇━━━━━╇━━━━━━┩
    │ 8MB+        │ 0us │ 0us │ 0us │ 0us  │
    └─────────────┴─────┴─────┴─────┴──────┘

### How do I add a persistent disk volume to a Cloud TPU VM?

For more information, see [Add a persistent disk to a TPU VM](/tpu/docs/attach-durable-block-storage).

### What storage options are supported or recommended for training with TPU VM?

For more information, see [Cloud TPU storage options](/tpu/docs/storage-options).

JAX FAQs
--------

### How do I know if the TPU is being used by my program?

There are a few ways to check that JAX is using the TPU:

1. Use the `jax.devices()` function. For example:

       assert jax.devices()[0].platform == 'tpu'

2. Profile your program and verify the profile contains TPU operations. For more
   information, see [Profiling JAX programs](https://github.com/google/jax/blob/main/docs/profiling.md).

For more information, see the [JAX FAQ](https://jax.readthedocs.io/en/latest/faq.html).

PyTorch FAQs
------------

### How do I know if the TPU is being used by my program?

You can run the following Python commands:

    >>> import torch_xla.core.xla_model as xm
    >>> xm.get_xla_supported_devices(devkind="TPU")

and verify that you can see TPU devices in the output.

Last updated 2025-08-18 UTC.
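Both framework checks above follow the same pattern: list the accelerator devices the framework can see and inspect their platform type. A minimal sketch in JAX of wrapping this check in a helper (the `tpu_available` function is an illustration, not part of the JAX API; on a machine without a TPU, `jax.devices()` reports the default backend, such as CPU):

```python
import jax

def tpu_available() -> bool:
    """Return True if any device visible to JAX is a TPU."""
    # jax.devices() lists the devices of the default backend;
    # on a Cloud TPU VM each entry's .platform is 'tpu'.
    return any(d.platform == "tpu" for d in jax.devices())

# Report what JAX actually sees (also works on CPU-only machines).
for d in jax.devices():
    print(d.id, d.platform, d.device_kind)
print("TPU available:", tpu_available())
```

Running this on a Cloud TPU VM should print one line per TPU core with platform `tpu`; anywhere else it tells you which backend JAX fell back to.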