TPU software versions
=====================

*Last updated 2025-08-11 (UTC).*

This document provides guidance on selecting the appropriate TPU software version when you create Cloud TPUs.

When you create TPU resources, you specify the software version, also called the runtime version, which refers to the software environment that is pre-installed on your TPU VM. This includes the Ubuntu operating system, Docker, and other software required to run your code on TPUs.

If you're using the Google Cloud CLI, you specify the TPU software version using the `--version` or `--runtime-version` parameter. If you're using the Google Cloud console, you select a TPU software version from the **TPU software version** list.

PyTorch and JAX
---------------

Use the following common TPU software versions for PyTorch and JAX, then install the framework you want to use.

For more information about installation and getting started with PyTorch or JAX, see [Run a calculation on a Cloud TPU VM using PyTorch](/tpu/docs/run-calculation-pytorch) and [Run a calculation on a Cloud TPU VM using JAX](/tpu/docs/run-calculation-jax).

TensorFlow
----------

TPU software versions for TensorFlow follow a specific naming convention:

`tpu-vm-tf-x.y.z[-{pod}][-{device_api}]`

- `x.y.z`: The major, minor, and patch versions of TensorFlow.
- `-pod` (optional): Indicates that you're using a multi-host TPU slice.
- `-{device_api}` (optional): Specifies the device API, for example, `-pjrt` if you're using the [PJRT API](/tpu/docs/runtimes#pjrt-support).

See the following sections for more
information on how to specify a TPU software version.

There are TPU software versions specific to each version of TensorFlow. The following table shows the supported TensorFlow versions and associated libtpu versions:

For more information about TensorFlow patch versions, see [Supported TensorFlow patch versions](/tpu/docs/supported-patches).

### TPU v6e, v5p, and v5e

TPU v6e, v5e, and v5p support TensorFlow 2.15.0 and newer. You specify the TPU software version using the form `tpu-vm-tf-x.y.z-{pod}-pjrt`, where `x` is the major TensorFlow version, `y` is the minor version, and `z` is the patch version. Add `pod` after the TensorFlow version if you are using a multi-host TPU. For example, if you are using TensorFlow 2.16.0 on a multi-host TPU, use the `tpu-vm-tf-2.16.0-pod-pjrt` TPU software version. For other versions of TensorFlow, replace `2.16.0` with the major, minor, and patch versions of TensorFlow you are using. If you are using a single-host TPU, omit `pod`.

### TPU v4

If you're using TPU v4 and TensorFlow 2.10.1 or newer, follow the instructions for [TPU v2 and v3](#tf-v2-v3). If you're using TensorFlow 2.10.0 or earlier, use a v4-specific TPU software version:

### TPU v2 and v3

If you are using TPU v2 or v3, use the TPU software version that matches the version of TensorFlow you are using. For example, if you are using TensorFlow 2.14.1, use the `tpu-vm-tf-2.14.1` TPU software version. For other versions of TensorFlow, replace `2.14.1` with the TensorFlow version you are using. If you are using a multi-host TPU, append `pod` to the end of the TPU software version, for example, `tpu-vm-tf-2.14.1-pod`.

Beginning with TensorFlow 2.15.0, you must also specify a device API as part of the software version name. For example, if you are using TensorFlow 2.16.1 with the PJRT API, use the TPU software version `tpu-vm-tf-2.16.1-pjrt`.
If you are using the stream executor API with the same version of TensorFlow, use the `tpu-vm-tf-2.16.1-se` TPU software version. TensorFlow versions older than 2.15.0 support only stream executor.

### TensorFlow PJRT support

Beginning with TensorFlow 2.15.0, you can use the PJRT interface for TensorFlow on TPU. PJRT features automatic device memory defragmentation and simplifies the integration of hardware with frameworks. For more information about PJRT, see [PJRT: Simplifying ML Hardware and Framework Integration](https://opensource.googleblog.com/2023/05/pjrt-simplifying-ml-hardware-and-framework-integration.html).

> **Important:** TPU v5e and later TPU generations support only PJRT. Not all features of TPU v2, v3, and v4 have been migrated to the PJRT API. The following table describes which features are supported on PJRT or stream executor.

What's next
-----------

- See [TPU architecture](/tpu/docs/system-architecture-tpu-vm) to learn more about TPU architecture.
- See [When to use TPUs](/tpu/docs/tpus#when_to_use_tpus) to learn about the types of models that are well suited to Cloud TPU.
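To make the naming convention concrete, the following is a minimal sketch that composes a TensorFlow runtime version string from its parts. The variable names (`TF_VERSION`, `POD`, `DEVICE_API`, `RUNTIME_VERSION`) are illustrative placeholders, not gcloud flags or documented identifiers:

```shell
# Compose a TensorFlow TPU software version string of the form
# tpu-vm-tf-x.y.z[-pod][-{device_api}], per the convention above.
TF_VERSION="2.16.1"   # major.minor.patch of TensorFlow
POD="-pod"            # set to "" for a single-host TPU
DEVICE_API="-pjrt"    # or "-se" for stream executor (TF 2.15.0+)

RUNTIME_VERSION="tpu-vm-tf-${TF_VERSION}${POD}${DEVICE_API}"
echo "${RUNTIME_VERSION}"   # tpu-vm-tf-2.16.1-pod-pjrt
```

You would then pass a string like this to the `--version` parameter when creating the TPU with the Google Cloud CLI, or select it from the **TPU software version** list in the Google Cloud console.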