TPU software versions

This document provides guidance on selecting the appropriate TPU software version when you create Cloud TPUs.

When you create TPU resources, you specify the software version, also called the runtime version, which refers to the software environment that is pre-installed on your TPU VM. This includes the Ubuntu operating system, Docker, and other software required to run your code on TPUs.

If you're using the Google Cloud CLI, you specify the TPU software version using the --version or --runtime-version parameter. If you're using the Google Cloud console, you select a TPU software version from the TPU software version list.
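For example, with the gcloud CLI the version is passed at creation time. This is a sketch; the TPU name, zone, and accelerator type below are placeholder values to adapt to your setup:

```shell
# Create a TPU VM, specifying the runtime with --version.
# my-tpu, the zone, and the accelerator type are placeholders.
gcloud compute tpus tpu-vm create my-tpu \
  --zone=us-central1-a \
  --accelerator-type=v2-8 \
  --version=tpu-ubuntu2204-base
```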

PyTorch and JAX

Use the following common TPU software versions for PyTorch and JAX, then install the framework you want to use.

TPU version     TPU software version
Trillium (v6e)  v2-alpha-tpuv6e
v5p             v2-alpha-tpuv5
v5e             v2-alpha-tpuv5-lite
v4 and older    tpu-ubuntu2204-base

For more information about installation and getting started with PyTorch or JAX, see Run a calculation on a Cloud TPU VM using PyTorch and Run a calculation on a Cloud TPU VM using JAX.
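As an illustrative sketch, creating a v6e TPU and then installing a framework on it might look like the following. The resource name, zone, and accelerator type are assumptions, and the install command shown is for JAX; substitute the PyTorch packages if that is your framework:

```shell
# Create a Trillium (v6e) TPU VM with the common software version.
gcloud compute tpus tpu-vm create my-v6e-tpu \
  --zone=us-east5-b \
  --accelerator-type=v6e-8 \
  --version=v2-alpha-tpuv6e

# Connect to the TPU VM and install the framework, for example JAX.
gcloud compute tpus tpu-vm ssh my-v6e-tpu \
  --zone=us-east5-b \
  --command='pip install -U "jax[tpu]"'
```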

TensorFlow

The name of the TPU software version is composed of:

  • tpu-vm-tf
  • The version of TensorFlow
  • -pod (if you're using a multi-host TPU slice)
  • -pjrt (if you're using the PJRT API)

See the following sections for more information on how to specify a TPU software version.

There are TPU software versions specific to each version of TensorFlow. The following table shows the supported TensorFlow versions and associated libtpu versions:

TensorFlow version  libtpu.so version
2.18.0              1.12.0
2.17.1              1.11.1
2.17.0              1.11.0
2.16.2              1.10.1
2.16.1              1.10.1
2.15.1              1.9.0
2.15.0              1.9.0
2.14.1              1.8.1
2.14.0              1.8.0
2.13.1              1.7.1
2.13.0              1.7.0
2.12.1              1.6.1
2.12.0              1.6.0
2.11.1              1.5.1
2.11.0              1.5.0
2.10.1              1.4.1
2.10.0              1.4.0
2.9.3               1.3.2
2.9.1               1.3.0
2.8.3               1.2.3
2.8.0               1.2.0
2.7.3               1.1.2

For more information about TensorFlow patch versions, see Supported TensorFlow patch versions.

TPU v6e, v5p, and v5e

TPU v6e, v5e, and v5p support TensorFlow 2.15.0 and newer. You specify the TPU software version using the form tpu-vm-tf-x.y.z[-pod]-pjrt, where x is the major TensorFlow version, y is the minor version, and z is the patch version. Include pod after the TensorFlow version if you are using a multi-host TPU slice; omit it for a single-host TPU. For example, if you are using TensorFlow 2.16.0 on a multi-host TPU slice, use the tpu-vm-tf-2.16.0-pod-pjrt TPU software version. For other versions of TensorFlow, replace 2.16.0 with the TensorFlow version you are using.
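The naming rule above can be sketched as a few lines of shell; the variable names are illustrative, and only the resulting string matters:

```shell
# Assemble a TPU software version name for TF 2.15.0+ on v6e/v5p/v5e.
TF_VERSION="2.16.0"   # x.y.z: TensorFlow major.minor.patch
POD="-pod"            # set to "" for a single-host TPU
SOFTWARE_VERSION="tpu-vm-tf-${TF_VERSION}${POD}-pjrt"
echo "$SOFTWARE_VERSION"   # tpu-vm-tf-2.16.0-pod-pjrt
```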

TPU v4

If you're using TPU v4 and TensorFlow 2.10.1 or newer, follow the instructions for TPU v2 and v3. If you're using TensorFlow 2.10.0 or earlier, use a v4-specific TPU software version:

TensorFlow version  TPU software version (single-host / multi-host)
2.10.0              tpu-vm-tf-2.10.0-v4 / tpu-vm-tf-2.10.0-pod-v4
2.9.3               tpu-vm-tf-2.9.3-v4 / tpu-vm-tf-2.9.3-pod-v4
2.9.2               tpu-vm-tf-2.9.2-v4 / tpu-vm-tf-2.9.2-pod-v4
2.9.1               tpu-vm-tf-2.9.1-v4 / tpu-vm-tf-2.9.1-pod-v4
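As a sketch, creating a multi-host v4 slice with TensorFlow 2.10.0 might look like the following; the resource name, zone, and accelerator type are placeholders, and only the --version value comes from the table above:

```shell
# Hypothetical name, zone, and topology; the version is v4-specific
# because this example uses TensorFlow 2.10.0 (older than 2.10.1).
gcloud compute tpus tpu-vm create my-v4-slice \
  --zone=us-central2-b \
  --accelerator-type=v4-32 \
  --version=tpu-vm-tf-2.10.0-pod-v4
```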

TPU v2 and v3

If you are using TPU v2 or v3, use the TPU software version that matches the version of TensorFlow you are using. For example, if you are using TensorFlow 2.14.1, use the tpu-vm-tf-2.14.1 TPU software version. For other versions of TensorFlow, replace 2.14.1 with the TensorFlow version you are using. If you are using a multi-host TPU, append -pod to the end of the TPU software version, for example tpu-vm-tf-2.14.1-pod.

Beginning with TensorFlow 2.15.0, you must also specify a device API as part of the software version name. For example, if you are using TensorFlow 2.16.1 with the PJRT API, use the TPU software version tpu-vm-tf-2.16.1-pjrt. If you are using the stream executor API with the same version of TensorFlow, use the tpu-vm-tf-2.16.1-se TPU software version. TensorFlow versions older than 2.15.0 only support stream executor.
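The device-API rule can be sketched in shell as follows. This is a simplified helper under the assumption of a single-host TPU (no -pod); the version comparison uses sort -V, which is available in GNU coreutils:

```shell
TF_VERSION="2.16.1"
DEVICE_API="pjrt"   # or "se" for the stream executor API
# TensorFlow 2.15.0 and newer require a device-API suffix;
# older versions only support stream executor and take no suffix.
if printf '2.15.0\n%s\n' "$TF_VERSION" | sort -C -V; then
  SOFTWARE_VERSION="tpu-vm-tf-${TF_VERSION}-${DEVICE_API}"
else
  SOFTWARE_VERSION="tpu-vm-tf-${TF_VERSION}"
fi
echo "$SOFTWARE_VERSION"   # tpu-vm-tf-2.16.1-pjrt
```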

TensorFlow PJRT support

Beginning with TensorFlow 2.15.0, you can use the PJRT interface for TensorFlow on TPU. PJRT features automatic device memory defragmentation and simplifies the integration of hardware with frameworks. For more information about PJRT, see PJRT: Simplifying ML Hardware and Framework Integration.

Accelerator  Feature                                         PJRT support  Stream executor support
TPU v2 - v4  Dense compute (no TPU embedding API)            Yes           Yes
TPU v2 - v4  Dense compute API + TPU embedding API           No            Yes
TPU v2 - v4  tf.summary/tf.print with soft device placement  No            Yes
TPU v5e      Dense compute (no TPU embedding API)            Yes           No
TPU v5e      TPU embedding API                               N/A           No
TPU v5p      Dense compute (no TPU embedding API)            Yes           No
TPU v5p      TPU embedding API                               Yes           No

What's next