GPU machine types

To use GPUs on Google Cloud, you can either deploy an accelerator-optimized VM that has attached GPUs, or attach GPUs to an N1 general-purpose VM. The following GPU machine types are supported for running your artificial intelligence (AI), machine learning (ML), and high-performance computing (HPC) workloads on the AI Hypercomputer platform.
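Both deployment paths map to a single `gcloud compute instances create` invocation. The following sketch assembles the command string for each path; the VM names and zone are placeholder assumptions, and only the core flags are shown:

```python
def gcloud_create(name, machine_type, zone, accelerator=None):
    """Build a gcloud command string for a GPU VM (sketch, not a full flag list)."""
    parts = [
        "gcloud compute instances create", name,
        f"--machine-type={machine_type}",
        f"--zone={zone}",
        "--maintenance-policy=TERMINATE",  # GPU VMs can't live-migrate
    ]
    if accelerator:
        # Only the N1 path needs --accelerator; accelerator-optimized
        # machine types (A3, A4, ...) bundle their GPUs.
        gpu_type, count = accelerator
        parts.append(f"--accelerator=type={gpu_type},count={count}")
    return " ".join(parts)

# Accelerator-optimized path: GPUs are implied by the machine type.
print(gcloud_create("demo-a3", "a3-highgpu-8g", "us-central1-a"))

# N1 path: GPUs attached explicitly to a general-purpose VM.
print(gcloud_create("demo-n1", "n1-standard-8", "us-central1-a",
                    accelerator=("nvidia-tesla-t4", 1)))
```

A real invocation also needs image, disk, and networking flags appropriate to the workload; see the `gcloud compute instances create` reference for the full set.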

A4 series

The A4 machine series is available in the following configurations. For more information about this machine series, see A4 accelerator-optimized machine series.

A4

These machine types have NVIDIA B200 GPUs (nvidia-b200) attached and are ideal for foundation model training and serving.

| Machine type | GPU count | GPU memory* (GB HBM3e) | vCPU count | VM memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
| a4-highgpu-8g | 8 | 1,440 | 224 | 3,968 | 12,000 | 10 | 3,600 |

*GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the VM's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.
A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
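Dividing the table's totals by the GPU count gives the per-GPU share of each resource, which is a useful sanity check when sizing per-GPU workers. A quick sketch, with values copied from the table above:

```python
# a4-highgpu-8g totals from the table above.
specs = {"gpus": 8, "gpu_memory_gb": 1440, "vcpus": 224, "vm_memory_gb": 3968}

# Per-GPU share of each resource.
per_gpu = {k: v / specs["gpus"] for k, v in specs.items() if k != "gpus"}
print(per_gpu)  # {'gpu_memory_gb': 180.0, 'vcpus': 28.0, 'vm_memory_gb': 496.0}
```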

A3 series

The A3 machine series is available in the following configurations. For more information about this machine series, see A3 accelerator-optimized machine series.

A3 Ultra

These machine types have NVIDIA H200 GPUs (nvidia-h200-141gb) attached and are ideal for foundation model training and serving.

| Machine type | GPU count | GPU memory* (GB HBM3e) | vCPU count | VM memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
| a3-ultragpu-8g | 8 | 1,128 | 224 | 2,952 | 12,000 | 10 | 3,600 |

A3 Mega

These machine types have NVIDIA H100 80GB GPUs (nvidia-h100-mega-80gb) attached and are ideal for large model training and multi-host inference.

| Machine type | GPU count | GPU memory* (GB HBM3) | vCPU count | VM memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
| a3-megagpu-8g | 8 | 640 | 208 | 1,872 | 6,000 | 9 | 1,800 |

A3 High

These machine types have NVIDIA H100 80GB GPUs (nvidia-h100-80gb) attached and are well suited for both large model inference and model fine-tuning.

| Machine type | GPU count | GPU memory* (GB HBM3) | vCPU count | VM memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
| a3-highgpu-1g | 1 | 80 | 26 | 234 | 750 | 1 | 25 |
| a3-highgpu-2g | 2 | 160 | 52 | 468 | 1,500 | 1 | 50 |
| a3-highgpu-4g | 4 | 320 | 104 | 936 | 3,000 | 1 | 100 |
| a3-highgpu-8g | 8 | 640 | 208 | 1,872 | 6,000 | 5 | 1,000 |

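In the A3 High table, GPU memory, vCPUs, VM memory, and Local SSD all scale linearly with GPU count, while NIC count and network bandwidth do not. A small sketch that reproduces the rows from the 1-GPU baseline:

```python
# Per-GPU building block, taken from the a3-highgpu-1g row above.
BASE = {"gpu_memory_gb": 80, "vcpus": 26, "vm_memory_gb": 234, "local_ssd_gib": 750}

def a3_high_specs(gpu_count):
    """Scale the 1-GPU baseline linearly (NIC count and bandwidth excluded:
    they jump from 1 NIC / 25-100 Gbps to 5 NICs / 1,000 Gbps at 8 GPUs)."""
    return {k: v * gpu_count for k, v in BASE.items()}

# Reproduces the a3-highgpu-8g row: 640 GB HBM3, 208 vCPUs,
# 1,872 GB VM memory, 6,000 GiB Local SSD.
print(a3_high_specs(8))
```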
A3 Edge

These machine types have NVIDIA H100 80GB GPUs (nvidia-h100-80gb) attached, are designed specifically for serving, and are available in a limited set of regions.

| Machine type | GPU count | GPU memory* (GB HBM3) | vCPU count | VM memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
| a3-edgegpu-8g | 8 | 640 | 208 | 1,872 | 6,000 | 5 | 800 for asia-south1 and northamerica-northeast2; 400 for all other A3 Edge regions |

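Because A3 Edge is the only configuration here whose bandwidth limit varies by region, it can be handy to encode that rule directly. A minimal sketch of the lookup (it assumes the region string you pass is already a valid A3 Edge region, since the table doesn't enumerate them):

```python
# Regions with the higher a3-edgegpu-8g bandwidth cap, per the table above.
HIGH_BANDWIDTH_REGIONS = {"asia-south1", "northamerica-northeast2"}

def a3_edge_max_bandwidth_gbps(region):
    """Maximum network bandwidth for a3-edgegpu-8g in a given A3 Edge region."""
    return 800 if region in HIGH_BANDWIDTH_REGIONS else 400

print(a3_edge_max_bandwidth_gbps("asia-south1"))  # 800
```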
What's next?