# Frequently Asked Questions - Cloud TPU

This document contains a list of frequently asked questions about Cloud TPUs. It is divided into sections:

1. Framework independent FAQs - questions about using Cloud TPUs regardless of which ML framework you are using.
2. JAX FAQs - questions about using Cloud TPUs with JAX.
3. PyTorch FAQs - questions about using Cloud TPUs with PyTorch.

## Framework independent FAQs

### How do I check which process is using the TPU on a Cloud TPU VM?

Run `tpu-info` on the Cloud TPU VM to print the process ID and other information about the process using the TPU. See [supported metrics](/tpu/docs/tpu-monitoring-library#list-all-supported-metric-names) for the metrics and their corresponding definitions.

    tpu-info

The output from `tpu-info` is similar to the following:

    TPU Chips
    ┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
    ┃ Chip        ┃ Type        ┃ Devices ┃ PID    ┃
    ┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
    │ /dev/accel0 │ TPU v4 chip │ 1       │ 130007 │
    │ /dev/accel1 │ TPU v4 chip │ 1       │ 130007 │
    │ /dev/accel2 │ TPU v4 chip │ 1       │ 130007 │
    │ /dev/accel3 │ TPU v4 chip │ 1       │ 130007 │
    └─────────────┴─────────────┴─────────┴────────┘

    TPU Runtime Utilization
    ┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
    ┃ Device ┃ Memory usage         ┃ Duty cycle ┃
    ┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
    │ 0      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    │ 1      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    │ 2      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    │ 3      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    └────────┴──────────────────────┴────────────┘

    TensorCore Utilization
    ┏━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
    ┃ Chip ID ┃ TensorCore Utilization ┃
    ┡━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
    │ 0       │ 0.00%                  │
    │ 1       │ 0.00%                  │
    │ 2       │ 0.00%                  │
    │ 3       │ 0.00%                  │
    └─────────┴────────────────────────┘

    Buffer Transfer Latency
    ┏━━━━━━━━━━━━━┳━━━━━┳━━━━━┳━━━━━┳━━━━━━┓
    ┃ Buffer Size ┃ P50 ┃ P90 ┃ P95 ┃ P999 ┃
    ┡━━━━━━━━━━━━━╇━━━━━╇━━━━━╇━━━━━╇━━━━━━┩
    │ 8MB+        │ 0us │ 0us │ 0us │ 0us  │
    └─────────────┴─────┴─────┴─────┴──────┘

### How do I add a persistent disk volume to a Cloud TPU VM?

For more information, see [Add a persistent disk to a TPU VM](/tpu/docs/attach-durable-block-storage).

### What storage options are supported or recommended for training with TPU VM?

For more information, see [Cloud TPU storage options](/tpu/docs/storage-options).

## JAX FAQs

### How do I know if the TPU is being used by my program?

There are a few ways to double-check that JAX is using the TPU:

1. Use the `jax.devices()` function. For example:

        assert jax.devices()[0].platform == 'tpu'

2. Profile your program and verify that the profile contains TPU operations. For more information, see [Profiling JAX programs](https://github.com/google/jax/blob/main/docs/profiling.md). Both checks are combined in the sketch after this list.
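
For illustration, here is a minimal sketch that combines both checks. It assumes JAX is installed with TPU support on the Cloud TPU VM; the trace directory `/tmp/jax-trace` and the matrix size are arbitrary choices, not part of this FAQ:

    # Minimal sketch, assuming JAX with TPU support on a Cloud TPU VM.
    import jax

    # Check 1: the runtime should report TPU devices.
    devices = jax.devices()
    assert devices[0].platform == 'tpu', f'unexpected platform: {devices[0].platform}'
    print(f'{len(devices)} TPU device(s) visible')

    # Check 2: capture a profile and inspect it (for example, in TensorBoard)
    # for TPU operations. The log directory is an arbitrary choice.
    with jax.profiler.trace('/tmp/jax-trace'):
        x = jax.numpy.ones((1024, 1024))
        (x @ x).block_until_ready()

If the assertion fails, JAX is falling back to another backend (typically CPU).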

For more information, see the [JAX FAQ](https://jax.readthedocs.io/en/latest/faq.html).

## PyTorch FAQs

### How do I know if the TPU is being used by my program?

You can run the following Python commands:

    >>> import torch_xla.core.xla_model as xm
    >>> xm.get_xla_supported_devices(devkind="TPU")

and verify that TPU devices are listed in the output.
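
For example, here is a minimal sketch of this check inside a script. It assumes `torch_xla` is installed on the Cloud TPU VM; the tensor shape and messages are illustrative only:

    # Minimal sketch, assuming torch_xla is installed on a Cloud TPU VM.
    import torch
    import torch_xla.core.xla_model as xm

    devices = xm.get_xla_supported_devices(devkind='TPU')
    if devices:
        # Place a small tensor on the first TPU device to confirm it is usable.
        device = xm.xla_device()
        x = torch.randn(2, 2, device=device)
        print(f'TPU devices visible: {devices}; tensor placed on {x.device}')
    else:
        print('No TPU devices found; check that this is a Cloud TPU VM.')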