This document describes the accelerator-optimized machine family, which provides you with virtual machine (VM) instances that have pre-attached NVIDIA GPUs. These instances are designed specifically for artificial intelligence (AI), machine learning (ML), high performance computing (HPC), and graphics-intensive applications.
The accelerator-optimized machine family is available in the following machine series: A4X, A4, A3, A2, and G2. Each machine type within a series has a specific model and number of NVIDIA GPUs attached. You can also attach some GPU models to N1 general-purpose machine types.
Recommended machine series by workload type
The following section provides the recommended machine series based on your GPU workloads.
| Workload type | Recommended machine series |
|---|---|
| Pre-training models | A4X, A4, A3 Ultra, A3 Mega, A3 High, and A2. To identify the best fit, see Recommendations for pre-training models in the AI Hypercomputer documentation. |
| Fine-tuning models | A4X, A4, A3 Ultra, A3 Mega, A3 High, and A2. To identify the best fit, see Recommendations for fine-tuning models in the AI Hypercomputer documentation. |
| Serving inference | A4X, A4, A3 Ultra, A3 Mega, A3 High, A3 Edge, and A2. To identify the best fit, see Recommendations for serving inference in the AI Hypercomputer documentation. |
| Graphics-intensive workloads | G2 and N1+T4 |
| High performance computing | Any accelerator-optimized machine series works well. The best fit depends on the amount of computation that must be offloaded to the GPU. For more information, see Recommendations for HPC in the AI Hypercomputer documentation. |
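If you automate machine-type selection, the table above can be captured as a simple lookup. The following is an illustrative sketch using hypothetical names, not part of any Google Cloud SDK:

```python
# Illustrative sketch: map workload types to the recommended machine
# series from the table above. Not an official Google Cloud API.
RECOMMENDED_SERIES = {
    "pre-training": ["A4X", "A4", "A3 Ultra", "A3 Mega", "A3 High", "A2"],
    "fine-tuning": ["A4X", "A4", "A3 Ultra", "A3 Mega", "A3 High", "A2"],
    "inference": ["A4X", "A4", "A3 Ultra", "A3 Mega", "A3 High", "A3 Edge", "A2"],
    "graphics": ["G2", "N1+T4"],
    # For HPC, any accelerator-optimized series works; the best fit
    # depends on how much computation is offloaded to the GPU.
}

def recommended_series(workload: str) -> list[str]:
    """Return the recommended machine series for a workload type."""
    try:
        return RECOMMENDED_SERIES[workload]
    except KeyError:
        raise ValueError(f"unknown workload type: {workload!r}")
```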
Pricing and provisioning options
Google Cloud bills accelerator-optimized machine types for their attached GPUs, predefined vCPU, memory, and bundled Local SSD (if applicable). Discounts for accelerator-optimized instances vary based on the provisioning option you use, as the following table summarizes. For more pricing information for accelerator-optimized instances, see the Accelerator-optimized machine type family section on the VM instance pricing page.
| | On-demand (default) | Spot VMs | Flex-start (Preview) | Reservations |
|---|---|---|---|---|
| Supported accelerator-optimized machine types | A3 Mega, A3 High with 8 GPUs, A3 Edge, and all A2 and G2 machine types | All A and G series machine types except A4X | All A and G series machine types except A4X | Support varies based on the type of reservation. |
| Discounts | You can receive committed use discounts (CUDs) for some resources by purchasing resource-based commitments. However, GPUs and Local SSD disks that you use with the on-demand provisioning option are ineligible for CUDs. To receive CUDs for GPUs and Local SSD disks, use the reservations provisioning option instead. | Spot VMs automatically receive discounts through Spot VMs pricing. | Instances provisioned by using the flex-start provisioning model automatically receive discounts through Dynamic Workload Scheduler pricing. | You can receive CUDs for your accelerator-optimized machine type resources by purchasing resource-based commitments. Commitments for GPUs and Local SSD disks require attached reservations for those resources. |
The A4X machine series
The A4X machine series runs on an exascale platform based on the NVIDIA GB200 NVL72 rack-scale architecture and has up to 140 vCPUs and 884 GB of memory. This machine series is optimized for compute-intensive, memory-intensive, and network-bound ML training and HPC workloads. The A4X machine series is available in a single machine type.
VM instances created by using the A4X machine type provide the following features:
GPU acceleration with NVIDIA GB200 Superchips: A4X instances have NVIDIA GB200 Superchips automatically attached. These Superchips have NVIDIA B200 GPUs and offer 180 GB memory per GPU. A4X has two sockets with NVIDIA Grace™ CPUs with Arm® Neoverse™ V2 cores. These CPUs are connected to four B200 GPUs with fast chip-to-chip (NVLink-C2C) communication.
NVIDIA Grace CPU Platform: A4X instances use the NVIDIA Grace CPU platform. For more details about the platform, see CPU platforms.
Industry-leading NVLink scalability: Multi-node NVLink scales up to 72 GPUs in a single domain. NVIDIA B200 GPUs provide GPU NVLink bandwidth of 1,800 GBps, bidirectionally per GPU. With all-to-all NVLink topology across the 72-GPU domain, the aggregate NVLink bandwidth is up to 130 TB/s.
Enhanced networking with RoCE: For A4X instances, RDMA over Converged Ethernet (RoCE) increases network performance by combining NVIDIA ConnectX-7 (CX-7) network interface cards (NICs) with Google's datacenter-wide four-way rail-aligned network. By leveraging RoCE, A4X instances achieve much higher throughput between instances in a cluster when compared to A4 instances.
The CX-7 NICs, physically isolated on a four-way rail-aligned network topology, allow A4X instances to scale out in groups of 72 GPUs, up to thousands of GPUs in a single non-blocking cluster.
Increased network speeds: Offers up to 4x networking speeds when compared to instances created by using the A3 machine types.
Virtualization optimizations for data transfers and recovery: the Peripheral Component Interconnect Express (PCIe) topology of A4X instances provides more accurate locality information that workloads can use to optimize data transfers.
The GPUs also expose Function Level Reset (FLR) for graceful recovery from failures and atomic operations support for concurrency improvements in certain scenarios.
Local SSD and Hyperdisk support: 12,000 GiB of Local SSD is automatically added to A4X instances. Local SSD can be used for fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks.
For applications that require higher storage performance, you can attach up to 512 TiB of Hyperdisk to A4X instances.
Dense allocation and topology aware scheduling support: When you provision A4X instances through Cluster Director, you can request blocks of densely allocated capacity. Your host machines are allocated physically close to each other, provisioned as blocks of resources, and are interconnected with a dynamic ML network fabric to minimize network hops and optimize for the lowest latency. Additionally, Cluster Director provides topology information at node and cluster level that can be used for job placement.
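As a back-of-the-envelope check, the 130 TB/s aggregate figure quoted above follows from the per-GPU NVLink bandwidth multiplied across the 72-GPU NVLink domain:

```python
# Back-of-the-envelope check of the A4X NVLink figures quoted above.
per_gpu_nvlink_gbps = 1800  # bidirectional NVLink bandwidth per B200 GPU (GBps)
domain_gpus = 72            # GPUs in one NVL72 NVLink domain

aggregate_tbps = per_gpu_nvlink_gbps * domain_gpus / 1000
print(aggregate_tbps)  # 129.6, quoted as "up to 130 TB/s"
```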
A4X machine type
A4X accelerator-optimized machine types use NVIDIA GB200 Grace Blackwell Superchips (`nvidia-gb200`) and are ideal for foundation model training and serving.
A4X is an exascale platform based on NVIDIA GB200 NVL72. Each machine has two sockets with NVIDIA Grace CPUs with Arm Neoverse V2 cores. These CPUs are connected to four NVIDIA B200 Blackwell GPUs with fast chip-to-chip (NVLink-C2C) communication.
Attached NVIDIA GB200 Grace Blackwell Superchips

| Machine type | vCPU count* | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)† | GPU count | GPU memory‡ (GB HBM3e) |
|---|---|---|---|---|---|---|---|
| `a4x-highgpu-4g` | 140 | 884 | 12,000 | 6 | 2,000 | 4 | 720 |
*A vCPU is implemented as a single hardware hyper-thread on one of
the available CPU platforms.
†Maximum egress bandwidth cannot exceed the number given. Actual
egress bandwidth depends on the destination IP address and other factors.
For more information about network bandwidth,
see Network bandwidth.
‡GPU memory is the memory on a GPU device that can be used for
temporary storage of data. It is separate from the instance's memory and is
specifically designed to handle the higher bandwidth demands of your
graphics-intensive workloads.
A4X limitations
- You can only request capacity by using the supported provisioning options for an A4X machine type.
- You don't receive sustained use discounts and flexible committed use discounts for instances that use an A4X machine type.
- You can only use an A4X machine type in certain regions and zones.
- You can't use Persistent Disk (regional or zonal) on an instance that uses an A4X machine type.
- The A4X machine type is only available on the NVIDIA Grace platform.
- You can't change the machine type of an existing instance to an A4X machine type, and you can't change the machine type of an A4X instance after you create it. You can only create new A4X instances.
- You can't run Windows operating systems on an A4X machine type.
- A4X instances don't support the following:
Supported disk types for A4X instances
A4X instances can use the following block storage types:
- Hyperdisk Balanced (`hyperdisk-balanced`): this is the only disk type that is supported for the boot disk
- Hyperdisk Extreme (`hyperdisk-extreme`)
- Local SSD: automatically added to instances that are created by using the A4X machine type
Maximum number of disks per instance*

| Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD |
|---|---|---|---|---|---|---|
| `a4x-highgpu-4g` | 128 | 128 | N/A | N/A | 8 | 32 |
*Hyperdisk usage is charged separately from
machine type pricing. For disk pricing, see
Hyperdisk pricing.
Disk and capacity limits
You can use a mixture of different Hyperdisk types with a VM, but the maximum total disk capacity (in TiB) across all disk types can't exceed:
- For machine types with less than 32 vCPUs: 257 TiB for all Hyperdisk
- For machine types with 32 or more vCPUs: 512 TiB for all Hyperdisk
For details about the capacity limits, see Hyperdisk size and attachment limits.
The A4 machine series
The A4 machine series offers up to 224 vCPUs and 3,968 GB of memory. A4 instances provide up to 3x the performance of previous GPU instance types for most GPU-accelerated workloads. A4 is recommended for ML training workloads, especially at large scales, for example, hundreds or thousands of GPUs. The A4 machine series is available in a single machine type.
VM instances created by using the A4 machine type provide the following features:
GPU acceleration with NVIDIA B200 GPUs: NVIDIA B200 GPUs are automatically attached to A4 instances and offer 180 GB of GPU memory per GPU.
5th Generation Intel Xeon Scalable Processor (Emerald Rapids): offers up to 4.0 GHz sustained single-core max turbo frequency. For more information about this processor, see CPU platform.
Industry-leading NVLink scalability: NVIDIA B200 GPUs provide GPU NVLink bandwidth of 1,800 GBps, bidirectionally per GPU. With all-to-all NVLink topology between 8 GPUs in a system, the aggregate NVLink bandwidth is up to 14.4 TBps.
Enhanced networking with RoCE: RDMA over Converged Ethernet (RoCE) increases the network performance by combining NVIDIA ConnectX-7 network interface cards (NICs) with Google's datacenter-wide four-way rail-aligned network. By leveraging RDMA over Converged Ethernet (RoCE), A4 instances achieve much higher throughput between instances in a cluster compared to most A3 instances, except those running on the A3 Ultra machine type.
Increased network speeds: Offers up to 4x networking speeds when compared to the previous generation A2 instances.
For more information about networking, see Network bandwidths and GPUs.
Virtualization optimizations for data transfers and recovery: the Peripheral Component Interconnect Express (PCIe) topology of A4 instances provides more accurate locality information that workloads can use to optimize data transfers.
The GPUs also expose Function Level Reset (FLR) for graceful recovery from failures and atomic operations support for concurrency improvements in certain scenarios.
Local SSD and Hyperdisk support: 12,000 GiB of Local SSD is automatically added to A4 instances. Local SSD can be used for fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks.
For applications that require higher storage performance, you can also attach up to 512 TiB of Hyperdisk to A4 instances.
Dense allocation and topology-aware scheduling support: When you provision A4 instances through Cluster Director, you can request blocks of densely allocated capacity. Your host machines are allocated physically close to each other, provisioned as blocks of resources, and interconnected with a dynamic ML network fabric to minimize network hops and optimize for the lowest latency. Additionally, you can get topology information at the node and cluster level that can be used for job placement.
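As a quick arithmetic check of the figures above, the per-instance GPU memory and aggregate NVLink bandwidth follow from the per-GPU numbers (integer GBps values keep the arithmetic exact):

```python
# Quick check of the A4 per-instance figures quoted above.
gpu_count = 8
gpu_memory_gb = 180         # GPU memory per B200 GPU (GB)
nvlink_per_gpu_gbps = 1800  # bidirectional NVLink bandwidth per GPU (GBps)

print(gpu_count * gpu_memory_gb)        # 1440 GB, matching the A4 table
print(gpu_count * nvlink_per_gpu_gbps)  # 14400 GBps, i.e. 14.4 TBps aggregate
```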
A4 machine type
A4 accelerator-optimized machine types have NVIDIA B200 Blackwell GPUs (`nvidia-b200`) attached and are ideal for foundation model training and serving.
Attached NVIDIA Blackwell GPUs

| Machine type | vCPU count* | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)† | GPU count | GPU memory‡ (GB HBM3e) |
|---|---|---|---|---|---|---|---|
| `a4-highgpu-8g` | 224 | 3,968 | 12,000 | 10 | 3,600 | 8 | 1,440 |
*A vCPU is implemented as a single hardware hyper-thread on one of
the available CPU platforms.
†Maximum egress bandwidth cannot exceed the number given. Actual
egress bandwidth depends on the destination IP address and other factors.
For more information about network bandwidth, see
Network bandwidth.
‡GPU memory is the memory on a GPU device that can be used for
temporary storage of data. It is separate from the instance's memory and is
specifically designed to handle the higher bandwidth demands of your
graphics-intensive workloads.
A4 limitations
- You can only request capacity by using the supported provisioning options for an A4 machine type.
- You don't receive sustained use discounts and flexible committed use discounts for instances that use an A4 machine type.
- You can only use an A4 machine type in certain regions and zones.
- You can't use Persistent Disk (regional or zonal) on an instance that uses an A4 machine type.
- The A4 machine type is only available on the Emerald Rapids CPU platform.
- You can't change the machine type of an existing instance to an A4 machine type, and you can't change the machine type of an A4 instance after you create it. You can only create new A4 instances.
- A4 machine types don't support sole-tenancy.
- You can't run Windows operating systems on an A4 machine type.
Supported disk types for A4 instances
A4 instances can use the following block storage types:
- Hyperdisk Balanced (`hyperdisk-balanced`): this is the only disk type that is supported for the boot disk
- Hyperdisk Extreme (`hyperdisk-extreme`)
- Local SSD: automatically added to instances that are created by using the A4 machine type
Maximum number of disks per instance*

| Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD |
|---|---|---|---|---|---|---|
| `a4-highgpu-8g` | 128 | 128 | N/A | N/A | 8 | 32 |
*Hyperdisk usage is charged separately from
machine type pricing. For disk pricing, see
Hyperdisk pricing.
Disk and capacity limits
If supported by the machine type, you can use a mixture of Hyperdisk and Persistent Disk volumes on a VM, but the following restrictions apply:
- The combined number of both Hyperdisk and Persistent Disk volumes can't exceed 128 per VM.
- The maximum total disk capacity (in TiB) across all disk types can't exceed:
  - For machine types with less than 32 vCPUs:
    - 257 TiB for all Hyperdisk or all Persistent Disk
    - 257 TiB for a mixture of Hyperdisk and Persistent Disk
  - For machine types with 32 or more vCPUs:
    - 512 TiB for all Hyperdisk
    - 512 TiB for a mixture of Hyperdisk and Persistent Disk
    - 257 TiB for all Persistent Disk
For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.
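The capacity rules above can be expressed as a small validation helper, for example to sanity-check a planned disk layout before provisioning. This is an illustrative sketch of the documented limits, not an official client-library function:

```python
# Illustrative sketch of the documented per-VM disk limits for machine
# types that support both Hyperdisk and Persistent Disk. Not an official API.
def check_disk_config(vcpus: int, hyperdisk_tib: float, pd_tib: float,
                      volume_count: int) -> list[str]:
    """Return a list of limit violations for a proposed disk configuration."""
    errors = []
    if volume_count > 128:
        errors.append("more than 128 Hyperdisk and Persistent Disk volumes")
    # Total capacity cap depends on the vCPU count of the machine type.
    cap = 257 if vcpus < 32 else 512
    if hyperdisk_tib + pd_tib > cap:
        errors.append(f"total disk capacity exceeds {cap} TiB")
    # With 32 or more vCPUs, Persistent Disk alone is still capped at 257 TiB.
    if vcpus >= 32 and pd_tib > 257:
        errors.append("Persistent Disk capacity exceeds 257 TiB")
    return errors
```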
The A3 machine series
The A3 machine series has up to 224 vCPUs and 2,944 GB of memory. This machine series is optimized for compute-intensive, memory-intensive, and network-bound ML training and HPC workloads. The A3 machine series is available in the A3 Ultra, A3 Mega, A3 High, and A3 Edge machine types.
VM instances created by using the A3 machine types provide the following features:
| Feature | A3 Ultra | A3 Mega, High, and Edge |
|---|---|---|
| GPU acceleration | NVIDIA H200 SXM GPUs attached, which offer 141 GB of GPU memory per GPU and provide larger and faster memory for supporting large language models and HPC workloads. | NVIDIA H100 SXM GPUs attached, which offer 80 GB of GPU memory per GPU and are ideal for large transformer-based language models, databases, and HPC. |
| Intel Xeon Scalable Processors | 5th Generation Intel Xeon Scalable processor (Emerald Rapids) with up to 4.0 GHz sustained single-core max turbo frequency. For more information about this processor, see CPU platform. | 4th Generation Intel Xeon Scalable processor (Sapphire Rapids) with up to 3.3 GHz sustained single-core max turbo frequency. For more information about this processor, see CPU platform. |
| Industry-leading NVLink scalability | NVIDIA H200 GPUs provide peak GPU NVLink bandwidth of 900 GB/s, unidirectionally. With all-to-all NVLink topology between 8 GPUs in a system, the aggregate NVLink bandwidth is up to 7.2 TB/s. | NVIDIA H100 GPUs provide peak GPU NVLink bandwidth of 450 GB/s, unidirectionally. With all-to-all NVLink topology between 8 GPUs in a system, the aggregate NVLink bandwidth is up to 7.2 TB/s. |
| Enhanced networking | For the a3-ultragpu-8g machine type, RDMA over Converged Ethernet (RoCE) increases network performance by combining NVIDIA ConnectX-7 network interface cards (NICs) with Google's datacenter-wide four-way rail-aligned network. By leveraging RoCE, the a3-ultragpu-8g machine type achieves much higher throughput between instances in a cluster when compared to other A3 machine types. | |
| Improved networking speeds | Offers up to 4x networking speeds when compared to the previous generation A2 machine series. For more information about networking, see Network bandwidths and GPUs. | Offers up to 2.5x networking speeds when compared to the previous generation A2 machine series. For more information about networking, see Network bandwidths and GPUs. |
| Virtualization optimizations | The Peripheral Component Interconnect Express (PCIe) topology of A3 instances provides more accurate locality information that workloads can use to optimize data transfers. The GPUs also expose Function Level Reset (FLR) for graceful recovery from failures and atomic operations support for concurrency improvements in certain scenarios. | |
| Local SSD, Persistent Disk, and Hyperdisk support | Local SSD can be used for fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks. You can also attach up to 512 TiB of Persistent Disk and Hyperdisk to machine types in these series for applications that require higher storage performance. For select machine types, up to 257 TiB of Persistent Disk is also supported. | |
| Compact placement policy support | Provides you with more control over the physical placement of your instances within data centers. This enables lower latency and higher bandwidth for instances that are located within a single availability zone. For more information, see About compact placement policies. | |
A3 Ultra machine type
A3 Ultra machine types have NVIDIA H200 SXM GPUs (`nvidia-h200-141gb`) attached and provide the highest network performance in the A3 series. A3 Ultra machine types are ideal for foundation model training and serving.
Attached NVIDIA H200 GPUs

| Machine type | vCPU count* | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)† | GPU count | GPU memory‡ (GB HBM3e) |
|---|---|---|---|---|---|---|---|
| `a3-ultragpu-8g` | 224 | 2,952 | 12,000 | 10 | 3,600 | 8 | 1,128 |
*A vCPU is implemented as a single hardware hyper-thread on one of
the available CPU platforms.
†Maximum egress bandwidth cannot exceed the number given. Actual
egress bandwidth depends on the destination IP address and other factors.
For more information about network bandwidth,
see Network bandwidth.
‡GPU memory is the memory on a GPU device that can be used for
temporary storage of data. It is separate from the instance's memory and is
specifically designed to handle the higher bandwidth demands of your
graphics-intensive workloads.
A3 Ultra limitations
- You can only request capacity by using the supported provisioning options for an A3 Ultra machine type.
- You don't receive sustained use discounts and flexible committed use discounts for instances that use an A3 Ultra machine type.
- You can only use an A3 Ultra machine type in certain regions and zones.
- You can't use Persistent Disk (regional or zonal) on an instance that uses an A3 Ultra machine type.
- The A3 Ultra machine type is only available on the Emerald Rapids CPU platform.
- You can't change the machine type of an existing instance to an A3 Ultra machine type, and you can't change the machine type of an A3 Ultra instance after you create it. You can only create new A3 Ultra instances.
- A3 Ultra machine types don't support sole-tenancy.
- You can't run Windows operating systems on an A3 Ultra machine type.
A3 Mega machine type
A3 Mega machine types have NVIDIA H100 SXM GPUs attached and are ideal for large model training and multi-host inference.

Attached NVIDIA H100 GPUs

| Machine type | vCPU count* | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)† | GPU count | GPU memory‡ (GB HBM3) |
|---|---|---|---|---|---|---|---|
| `a3-megagpu-8g` | 208 | 1,872 | 6,000 | 9 | 1,800 | 8 | 640 |
*A vCPU is implemented as a single hardware hyper-thread on one of
the available CPU platforms.
†Maximum egress bandwidth cannot exceed the number given. Actual
egress bandwidth depends on the destination IP address and other factors.
For more information about network bandwidth,
see Network bandwidth.
‡GPU memory is the memory on a GPU device that can be used for
temporary storage of data. It is separate from the instance's memory and is
specifically designed to handle the higher bandwidth demands of your
graphics-intensive workloads.
A3 Mega limitations
- You can only request capacity by using the supported provisioning options for an A3 Mega machine type.
- You don't receive sustained use discounts and flexible committed use discounts for instances that use an A3 Mega machine type.
- You can only use an A3 Mega machine type in certain regions and zones.
- You can't use regional Persistent Disk on an instance that uses an A3 Mega machine type.
- The A3 Mega machine type is only available on the Sapphire Rapids CPU platform.
- You can't change the machine type of an existing instance to an A3 Mega machine type, and you can't change the machine type of an A3 Mega instance after you create it. You can only create new A3 Mega instances.
- A3 Mega machine types don't support sole-tenancy.
- You can't run Windows operating systems on an A3 Mega machine type.
A3 High machine type
A3 High machine types have NVIDIA H100 SXM GPUs attached and are well-suited for both large model inference and model fine-tuning.

Attached NVIDIA H100 GPUs

| Machine type | vCPU count* | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)† | GPU count | GPU memory‡ (GB HBM3) |
|---|---|---|---|---|---|---|---|
| `a3-highgpu-1g` | 26 | 234 | 750 | 1 | 25 | 1 | 80 |
| `a3-highgpu-2g` | 52 | 468 | 1,500 | 1 | 50 | 2 | 160 |
| `a3-highgpu-4g` | 104 | 936 | 3,000 | 1 | 100 | 4 | 320 |
| `a3-highgpu-8g` | 208 | 1,872 | 6,000 | 5 | 1,000 | 8 | 640 |
*A vCPU is implemented as a single hardware hyper-thread on one of
the available CPU platforms.
†Maximum egress bandwidth cannot exceed the number given. Actual
egress bandwidth depends on the destination IP address and other factors.
For more information about network bandwidth,
see Network bandwidth.
‡GPU memory is the memory on a GPU device that can be used for
temporary storage of data. It is separate from the instance's memory and is
specifically designed to handle the higher bandwidth demands of your
graphics-intensive workloads.
A3 High limitations
- You can only request capacity by using the supported provisioning options for an A3 High machine type.
- You don't receive sustained use discounts and flexible committed use discounts for instances that use an A3 High machine type.
- You can only use an A3 High machine type in certain regions and zones.
- You can't use regional Persistent Disk on an instance that uses an A3 High machine type.
- The A3 High machine type is only available on the Sapphire Rapids CPU platform.
- You can't change the machine type of an existing instance to an A3 High machine type, and you can't change the machine type of an A3 High instance after you create it. You can only create new A3 High instances.
- A3 High machine types don't support sole-tenancy.
- You can't run Windows operating systems on an A3 High machine type.
- For `a3-highgpu-1g`, `a3-highgpu-2g`, and `a3-highgpu-4g` machine types, the following limitations apply:
  - You must create instances by using Spot VMs or a feature that uses the Dynamic Workload Scheduler (DWS), such as resize requests in a MIG. For detailed instructions on either of these options, review the following:
    - To create Spot VMs, set the provisioning model to `SPOT` when you Create an accelerator-optimized VM.
    - To create a resize request in a MIG, which uses DWS, see Create a MIG with GPU VMs.
  - You can't create reservations.
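As a concrete sketch of the Spot option above: when calling the Compute Engine `instances.insert` REST API, the Spot provisioning model is selected in the `scheduling` block of the request body. The instance name, zone, and image below are placeholders; adjust them for your project:

```python
import json

# Sketch of an instances.insert request body that selects the Spot
# provisioning model for an a3-highgpu-1g instance. Name, zone, and
# image are placeholders, not recommendations.
body = {
    "name": "my-a3-spot-instance",
    "machineType": "zones/us-central1-a/machineTypes/a3-highgpu-1g",
    "scheduling": {
        "provisioningModel": "SPOT",
        # Spot VMs must specify what happens on preemption.
        "instanceTerminationAction": "DELETE",
    },
    "disks": [{
        "boot": True,
        "initializeParams": {
            "diskType": "zones/us-central1-a/diskTypes/pd-ssd",
            "sourceImage": "projects/debian-cloud/global/images/family/debian-12",
        },
    }],
}
print(json.dumps(body["scheduling"], sort_keys=True))
```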
A3 Edge machine type
A3 Edge machine types have NVIDIA H100 SXM GPUs attached, are designed specifically for serving, and are available in a limited set of regions.

Attached NVIDIA H100 GPUs

| Machine type | vCPU count* | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)† | GPU count | GPU memory‡ (GB HBM3) |
|---|---|---|---|---|---|---|---|
| `a3-edgegpu-8g` | 208 | 1,872 | 6,000 | 5 | | 8 | 640 |
*A vCPU is implemented as a single hardware hyper-thread on one of
the available CPU platforms.
†Maximum egress bandwidth cannot exceed the number given. Actual
egress bandwidth depends on the destination IP address and other factors.
For more information about network bandwidth,
see Network bandwidth.
‡GPU memory is the memory on a GPU device that can be used for
temporary storage of data. It is separate from the instance's memory and is
specifically designed to handle the higher bandwidth demands of your
graphics-intensive workloads.
A3 Edge limitations
- You can only request capacity by using the supported provisioning options for an A3 Edge machine type.
- You don't receive sustained use discounts and flexible committed use discounts for instances that use an A3 Edge machine type.
- You can only use an A3 Edge machine type in certain regions and zones.
- You can't use regional Persistent Disk on an instance that uses an A3 Edge machine type.
- The A3 Edge machine type is only available on the Sapphire Rapids CPU platform.
- You can't change the machine type of an existing instance to an A3 Edge machine type, and you can't change the machine type of an A3 Edge instance after you create it. You can only create new A3 Edge instances.
- A3 Edge machine types don't support sole-tenancy.
- You can't run Windows operating systems on an A3 Edge machine type.
Supported disk types for A3 instances
A3 Ultra
A3 Ultra instances can use the following block storage types:
- Hyperdisk Balanced (`hyperdisk-balanced`): this is the only disk type that is supported for the boot disk
- Hyperdisk Balanced High Availability (`hyperdisk-balanced-high-availability`)
- Hyperdisk Extreme (`hyperdisk-extreme`)
- Local SSD: automatically added to instances that are created by using any of the A3 machine types
Maximum number of disks per instance*

| Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD disks |
|---|---|---|---|---|---|---|---|
| `a3-ultragpu-8g` | 128 | 128 | 128 | N/A | N/A | 8 | 32 |
*Hyperdisk usage is charged separately from machine type pricing. For disk pricing, see Hyperdisk pricing.
A3 Mega
A3 Mega instances can use the following block storage types:
- Balanced Persistent Disk (`pd-balanced`)
- SSD (performance) Persistent Disk (`pd-ssd`)
- Hyperdisk Balanced (`hyperdisk-balanced`)
- Hyperdisk Balanced High Availability (`hyperdisk-balanced-high-availability`)
- Hyperdisk ML (`hyperdisk-ml`)
- Hyperdisk Extreme (`hyperdisk-extreme`)
- Hyperdisk Throughput (`hyperdisk-throughput`)
- Local SSD: automatically added to instances that are created by using any of the A3 machine types
Maximum number of disks per instance*

| Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD disks |
|---|---|---|---|---|---|---|---|
| `a3-megagpu-8g` | 128 | 32 | 32 | 64 | 64 | 8 | 16 |
*Hyperdisk and Persistent Disk usage are charged separately from
machine type pricing. For disk pricing, see
Persistent Disk and Hyperdisk pricing.
A3 High
A3 High instances can use the following block storage types:
- Balanced Persistent Disk (`pd-balanced`)
- SSD (performance) Persistent Disk (`pd-ssd`)
- Hyperdisk Balanced (`hyperdisk-balanced`)
- Hyperdisk Balanced High Availability (`hyperdisk-balanced-high-availability`)
- Hyperdisk ML (`hyperdisk-ml`)
- Hyperdisk Extreme (`hyperdisk-extreme`)
- Hyperdisk Throughput (`hyperdisk-throughput`)
- Local SSD: automatically added to instances that are created by using any of the A3 machine types
Maximum number of disks per instance*

| Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD disks |
|---|---|---|---|---|---|---|---|
| `a3-highgpu-1g` | 128 | 32 | 32 | 64 | 64 | N/A | 2 |
| `a3-highgpu-2g` | 128 | 32 | 32 | 64 | 64 | N/A | 4 |
| `a3-highgpu-4g` | 128 | 32 | 32 | 64 | 64 | 8 | 8 |
| `a3-highgpu-8g` | 128 | 32 | 32 | 64 | 64 | 8 | 16 |
*Hyperdisk and Persistent Disk usage are charged separately from
machine type pricing. For disk pricing, see
Persistent Disk and Hyperdisk pricing.
A3 Edge
A3 Edge instances can use the following block storage types:
- Balanced Persistent Disk (`pd-balanced`)
- SSD (performance) Persistent Disk (`pd-ssd`)
- Hyperdisk Balanced (`hyperdisk-balanced`)
- Hyperdisk Balanced High Availability (`hyperdisk-balanced-high-availability`)
- Hyperdisk ML (`hyperdisk-ml`)
- Hyperdisk Extreme (`hyperdisk-extreme`)
- Hyperdisk Throughput (`hyperdisk-throughput`)
- Local SSD: automatically added to instances that are created by using any of the A3 machine types
Maximum number of disks per instance*

| Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD |
|---|---|---|---|---|---|---|---|
| `a3-edgegpu-8g` | 128 | 32 | 32 | 64 | 64 | 8 | 16 |
*Hyperdisk and Persistent Disk usage are charged separately from
machine type pricing. For disk pricing, see
Persistent Disk and Hyperdisk pricing.
Disk and capacity limits
If supported by the machine type, you can use a mixture of Hyperdisk and Persistent Disk volumes on a VM, but the following restrictions apply:
- The combined number of both Hyperdisk and Persistent Disk volumes can't exceed 128 per VM.
- The maximum total disk capacity (in TiB) across all disk types can't exceed:
  - For machine types with less than 32 vCPUs:
    - 257 TiB for all Hyperdisk or all Persistent Disk
    - 257 TiB for a mixture of Hyperdisk and Persistent Disk
  - For machine types with 32 or more vCPUs:
    - 512 TiB for all Hyperdisk
    - 512 TiB for a mixture of Hyperdisk and Persistent Disk
    - 257 TiB for all Persistent Disk
For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.
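These limits can be folded into a small pre-flight check. The following is an illustrative sketch of the rules as stated above, not an official validation API:

```python
# Sketch of the disk attachment limits described above (not an official API).
MAX_DISKS_PER_VM = 128

def max_total_capacity_tib(vcpus: int, has_hyperdisk: bool) -> int:
    """Maximum total disk capacity in TiB for a VM, per the limits above."""
    if vcpus < 32:
        # Fewer than 32 vCPUs: 257 TiB regardless of the disk mix.
        return 257
    # 32 or more vCPUs: 512 TiB if any Hyperdisk is attached,
    # 257 TiB for Persistent Disk only.
    return 512 if has_hyperdisk else 257

def validate(disk_count: int, total_tib: int, vcpus: int,
             has_hyperdisk: bool) -> bool:
    """Check a proposed disk configuration against both limits."""
    return (disk_count <= MAX_DISKS_PER_VM
            and total_tib <= max_total_capacity_tib(vcpus, has_hyperdisk))

print(validate(disk_count=64, total_tib=300, vcpus=96, has_hyperdisk=True))   # True
print(validate(disk_count=64, total_tib=300, vcpus=96, has_hyperdisk=False))  # False
```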
The A2 machine series
The A2 machine series is available in A2 Standard and A2 Ultra machine types. These machine types have 12 to 96 vCPUs and up to 1,360 GB of memory.
VM instances created by using the A2 machine types provide the following features:
- GPU acceleration: each A2 instance has NVIDIA A100 GPUs, available in both A100 40GB and A100 80GB options.
- Industry-leading NVLink scale: peak GPU-to-GPU NVLink bandwidth of 600 GBps. For example, systems with 16 GPUs have an aggregate NVLink bandwidth of up to 9.6 TBps. These 16 GPUs can be used as a single high-performance accelerator with a unified memory space to deliver up to 10 petaFLOPS of compute power and up to 20 petaFLOPS of inference compute power for artificial intelligence, deep learning, and machine learning workloads.
- Improved computing speeds: the attached NVIDIA A100 GPUs offer up to 10x faster computing speeds than the previous-generation NVIDIA V100 GPUs.
- High performance network bandwidth: with the A2 machine series, you can get up to 100 Gbps of network bandwidth.
- Local SSD, Persistent Disk, and Hyperdisk support: for fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks, the A2 machine types support Local SSD as follows:
  - For the A2 Standard machine types, you can add up to 3,000 GiB of Local SSD when you create an instance.
  - For the A2 Ultra machine types, Local SSD is automatically attached when you create an instance.
  For applications that require higher storage performance, you can also attach up to 257 TiB of Persistent Disk and 512 TiB of Hyperdisk volumes to A2 instances.
- Compact placement policy support: provides more control over the physical placement of your instances within data centers, enabling lower latency and higher bandwidth for instances located within a single zone. For more information, see Reduce latency by using compact placement policies.
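As a sanity check of the NVLink figures above, the aggregate bandwidth is simply the per-GPU peak multiplied by the GPU count:

```python
# Arithmetic check of the aggregate NVLink bandwidth figure quoted above.
per_gpu_nvlink_gbps = 600  # peak GPU-to-GPU NVLink bandwidth (GBps)
gpu_count = 16

aggregate_tbps = per_gpu_nvlink_gbps * gpu_count / 1000  # GBps -> TBps
print(aggregate_tbps)  # 9.6
```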
The following machine types are available for the A2 machine series.
A2 Ultra machine types
These machine types have a fixed number of A100 80GB GPUs. Local SSD is automatically attached to instances created by using the A2 Ultra machine types.
Attached NVIDIA A100 80GB GPUs

| Machine type | vCPU count* | Instance memory (GB) | Attached Local SSD (GiB) | Maximum network bandwidth (Gbps)† | GPU count | GPU memory‡ (GB HBM2e) |
|---|---|---|---|---|---|---|
| `a2-ultragpu-1g` | 12 | 170 | 375 | 24 | 1 | 80 |
| `a2-ultragpu-2g` | 24 | 340 | 750 | 32 | 2 | 160 |
| `a2-ultragpu-4g` | 48 | 680 | 1,500 | 50 | 4 | 320 |
| `a2-ultragpu-8g` | 96 | 1,360 | 3,000 | 100 | 8 | 640 |
*A vCPU is implemented as a single hardware hyper-thread on one of
the available CPU platforms.
†Maximum egress bandwidth cannot exceed the number given. Actual
egress bandwidth depends on the destination IP address and other factors.
For more information about network bandwidth,
see Network bandwidth.
‡GPU memory is the memory on a GPU device that can be used for
temporary storage of data. It is separate from the instance's memory and is
specifically designed to handle the higher bandwidth demands of your
graphics-intensive workloads.
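The A2 Ultra shapes scale linearly: each additional A100 80GB GPU brings 12 vCPUs and 170 GB of instance memory. A quick sketch encoding the table above (values copied from the table; the dictionary itself is illustrative, not an API):

```python
# Illustrative encoding of the A2 Ultra machine shapes from the table above.
A2_ULTRA = {  # machine_type: (vcpus, memory_gb, gpu_count)
    "a2-ultragpu-1g": (12, 170, 1),
    "a2-ultragpu-2g": (24, 340, 2),
    "a2-ultragpu-4g": (48, 680, 4),
    "a2-ultragpu-8g": (96, 1360, 8),
}
GPU_MEMORY_GB = 80  # per NVIDIA A100 80GB GPU

for machine_type, (vcpus, memory_gb, gpus) in A2_ULTRA.items():
    # Each shape pairs 12 vCPUs and 170 GB of memory with every GPU.
    assert vcpus == 12 * gpus and memory_gb == 170 * gpus
    print(machine_type, gpus * GPU_MEMORY_GB)  # total GPU memory in GB
```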
A2 Ultra limitations
- You can only request capacity by using the supported provisioning options for an A2 Ultra machine type.
- You don't receive sustained use discounts and flexible committed use discounts for instances that use an A2 Ultra machine type.
- You can only use an A2 Ultra machine type in certain regions and zones.
- The A2 Ultra machine type is only available on the Cascade Lake platform.
- If your instance uses an A2 Ultra machine type, you can't change the machine type. If you need to use a different A2 Ultra machine type, or any other machine type, you must create a new instance.
- You can't change any other machine type to an A2 Ultra machine type. If you need an instance that uses an A2 Ultra machine type, you must create a new instance.
- You can't do a quick format of the attached Local SSDs on Windows instances that use A2 Ultra machine types. To format these Local SSDs, you must do a full format by using the diskpart utility and specifying `format fs=ntfs label=tmpfs`.
A2 Standard machine types
These machine types have a fixed number of A100 40GB GPUs. You can also add Local SSD disks when creating an A2 Standard instance. For the number of disks you can attach, see Machine types that require you to choose a number of Local SSD disks.
Attached NVIDIA A100 40GB GPUs

| Machine type | vCPU count* | Instance memory (GB) | Local SSD supported | Maximum network bandwidth (Gbps)† | GPU count | GPU memory‡ (GB HBM2) |
|---|---|---|---|---|---|---|
| `a2-highgpu-1g` | 12 | 85 | Yes | 24 | 1 | 40 |
| `a2-highgpu-2g` | 24 | 170 | Yes | 32 | 2 | 80 |
| `a2-highgpu-4g` | 48 | 340 | Yes | 50 | 4 | 160 |
| `a2-highgpu-8g` | 96 | 680 | Yes | 100 | 8 | 320 |
| `a2-megagpu-16g` | 96 | 1,360 | Yes | 100 | 16 | 640 |
*A vCPU is implemented as a single hardware hyper-thread on one of
the available CPU platforms.
†Maximum egress bandwidth cannot exceed the number given. Actual
egress bandwidth depends on the destination IP address and other factors.
For more information about network bandwidth,
see Network bandwidth.
‡GPU memory is the memory on a GPU device that can be used for
temporary storage of data. It is separate from the instance's memory and is
specifically designed to handle the higher bandwidth demands of your
graphics-intensive workloads.
A2 Standard limitations
- You can only request capacity by using the supported provisioning options for an A2 Standard machine type.
- You don't receive sustained use discounts and flexible committed use discounts for instances that use an A2 Standard machine type.
- You can only use an A2 Standard machine type in certain regions and zones.
- The A2 Standard machine type is only available on the Cascade Lake platform.
- If your instance uses an A2 Standard machine type, you can only switch from one A2 Standard machine type to another A2 Standard machine type. You can't change to any other machine type. For more information, see Modify accelerator-optimized instances.
- You can't use the Windows operating system with the `a2-megagpu-16g` machine type. When using Windows operating systems, choose a different A2 Standard machine type.
- You can't do a quick format of the attached Local SSDs on Windows instances that use A2 Standard machine types. To format these Local SSDs, you must do a full format by using the diskpart utility and specifying `format fs=ntfs label=tmpfs`.
- A2 Standard machine types don't support sole-tenancy.
Supported disk types for A2 instances
A2 instances can use the following block storage types:
- Hyperdisk ML (`hyperdisk-ml`)
- Balanced Persistent Disk (`pd-balanced`)
- SSD (performance) Persistent Disk (`pd-ssd`)
- Standard Persistent Disk (`pd-standard`)
- Local SSD, which is automatically attached to instances created by using the A2 Ultra machine types
If supported by the machine type, you can use a mixture of Hyperdisk and Persistent Disk volumes on a VM, but the following restrictions apply:
- The combined number of Hyperdisk and Persistent Disk volumes can't exceed 128 per VM.
- The maximum total disk capacity (in TiB) across all disk types can't exceed:
  - For machine types with fewer than 32 vCPUs:
    - 257 TiB for all Hyperdisk or all Persistent Disk
    - 257 TiB for a mixture of Hyperdisk and Persistent Disk
  - For machine types with 32 or more vCPUs:
    - 512 TiB for all Hyperdisk
    - 512 TiB for a mixture of Hyperdisk and Persistent Disk
    - 257 TiB for all Persistent Disk
For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.
The G2 machine series
The G2 machine series is available in a single standard machine type with multiple configurations that have 4 to 96 vCPUs and up to 432 GB of memory. This machine series is optimized for inference and graphics workloads.
Instances created by using the G2 machine types provide the following features:
- GPU acceleration: each G2 machine type has NVIDIA L4 GPUs.
- Improved inference rates: the G2 machine type supports the FP8 (8-bit floating point) data type, which speeds up ML inference rates and reduces memory requirements.
- Next-generation graphics performance: NVIDIA L4 GPUs provide up to a 3x improvement in graphics performance by using third-generation RT cores and NVIDIA DLSS 3 (Deep Learning Super Sampling) technology.
- High performance network bandwidth: with the G2 machine types, you can get up to 100 Gbps of network bandwidth.
- Local SSD, Persistent Disk, and Hyperdisk support: you can add up to 3,000 GiB of Local SSD to G2 instances for fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks. For applications that require more persistent storage, you can also attach Hyperdisk and Persistent Disk volumes to G2 instances. The maximum storage capacity depends on the number of vCPUs the instance has. For details, see Supported disk types for G2 instances.
- Compact placement policy support: provides more control over the physical placement of your instances within data centers, enabling lower latency and higher bandwidth for instances located within a single zone. For more information, see Reduce latency by using compact placement policies.
G2 machine types
G2 accelerator-optimized machine types have NVIDIA L4 GPUs attached and are ideal for cost-optimized inference, graphics-intensive, and high performance computing workloads.
Each G2 machine type also has a default memory and a custom memory range. The custom memory range defines the amount of memory that you can allocate to your instance for each machine type. You can also add Local SSD disks when creating a G2 instance. For the number of disks you can attach, see Machine types that require you to choose a number of Local SSD disks.
Attached NVIDIA L4 GPUs

| Machine type | vCPU count* | Default instance memory (GB) | Custom instance memory range (GB) | Max Local SSD supported (GiB) | Maximum network bandwidth (Gbps)† | GPU count | GPU memory‡ (GB GDDR6) |
|---|---|---|---|---|---|---|---|
| `g2-standard-4` | 4 | 16 | 16 to 32 | 375 | 10 | 1 | 24 |
| `g2-standard-8` | 8 | 32 | 32 to 54 | 375 | 16 | 1 | 24 |
| `g2-standard-12` | 12 | 48 | 48 to 54 | 375 | 16 | 1 | 24 |
| `g2-standard-16` | 16 | 64 | 54 to 64 | 375 | 32 | 1 | 24 |
| `g2-standard-24` | 24 | 96 | 96 to 108 | 750 | 32 | 2 | 48 |
| `g2-standard-32` | 32 | 128 | 96 to 128 | 375 | 32 | 1 | 24 |
| `g2-standard-48` | 48 | 192 | 192 to 216 | 1,500 | 50 | 4 | 96 |
| `g2-standard-96` | 96 | 384 | 384 to 432 | 3,000 | 100 | 8 | 192 |
*A vCPU is implemented as a single hardware hyper-thread on one of
the available CPU platforms.
†Maximum egress bandwidth cannot exceed the number given. Actual
egress bandwidth depends on the destination IP address and other factors.
For more information about network bandwidth,
see Network bandwidth.
‡GPU memory is the memory on a GPU device that can be used for
temporary storage of data. It is separate from the instance's memory and is
specifically designed to handle the higher bandwidth demands of your
graphics-intensive workloads.
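A requested custom memory value can be checked against the per-machine-type ranges before instance creation. The following is an illustrative sketch with the ranges copied from the table above; the dictionary and helper are not an official API:

```python
# Illustrative check of a requested custom memory value against the
# G2 custom memory ranges listed in the table above (not an official API).
G2_CUSTOM_MEMORY_GB = {  # machine_type: (min_gb, max_gb)
    "g2-standard-4": (16, 32),
    "g2-standard-8": (32, 54),
    "g2-standard-12": (48, 54),
    "g2-standard-16": (54, 64),
    "g2-standard-24": (96, 108),
    "g2-standard-32": (96, 128),
    "g2-standard-48": (192, 216),
    "g2-standard-96": (384, 432),
}

def is_valid_custom_memory(machine_type: str, memory_gb: int) -> bool:
    """Return True if memory_gb falls within the custom range (inclusive)."""
    low, high = G2_CUSTOM_MEMORY_GB[machine_type]
    return low <= memory_gb <= high

print(is_valid_custom_memory("g2-standard-8", 48))  # True
print(is_valid_custom_memory("g2-standard-8", 64))  # False: above the 54 GB maximum
```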
G2 limitations
- You can only request capacity by using the supported provisioning options for a G2 machine type.
- You don't receive sustained use discounts and flexible committed use discounts for instances that use a G2 machine type.
- You can only use a G2 machine type in certain regions and zones.
- The G2 machine type is only available on the Cascade Lake platform.
- Standard Persistent Disk (`pd-standard`) isn't supported on instances that use the G2 machine type. For supported disk types, see Supported disk types for G2 instances.
- You can't create Multi-Instance GPUs on an instance that uses a G2 machine type.
- If you need to change the machine type of a G2 instance, review Modify accelerator-optimized instances.
- You can't use Deep Learning VM Images as boot disks for instances that use the G2 machine type.
- The current default driver for Container-Optimized OS doesn't support L4 GPUs running on G2 machine types. Also, Container-Optimized OS supports only a select set of drivers. If you want to use Container-Optimized OS on G2 machine types, review the following notes:
  - Use a Container-Optimized OS version that supports the minimum recommended NVIDIA driver version `525.60.13` or later. For more information, review the Container-Optimized OS release notes.
  - When you install the driver, specify the latest available version that works for the L4 GPUs. For example, `sudo cos-extensions install gpu -- -version=525.60.13`.
- You must use the Google Cloud CLI or REST to create G2 instances for the following scenarios:
  - You want to specify custom memory values.
  - You want to customize the number of visible CPU cores.
Supported disk types for G2 instances
G2 instances can use the following block storage types:
- Balanced Persistent Disk (`pd-balanced`)
- SSD (performance) Persistent Disk (`pd-ssd`)
- Hyperdisk ML (`hyperdisk-ml`)
- Hyperdisk Throughput (`hyperdisk-throughput`)
- Local SSD
If supported by the machine type, you can use a mixture of Hyperdisk and Persistent Disk volumes on a VM, but the following restrictions apply:
- The combined number of Hyperdisk and Persistent Disk volumes can't exceed 128 per VM.
- The maximum total disk capacity (in TiB) across all disk types can't exceed:
  - For machine types with fewer than 32 vCPUs:
    - 257 TiB for all Hyperdisk or all Persistent Disk
    - 257 TiB for a mixture of Hyperdisk and Persistent Disk
  - For machine types with 32 or more vCPUs:
    - 512 TiB for all Hyperdisk
    - 512 TiB for a mixture of Hyperdisk and Persistent Disk
    - 257 TiB for all Persistent Disk
For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.