This document describes the machine families, machine series, and machine types that you can choose from to create a virtual machine (VM) instance or bare metal instance with the resources that you need. When you create a compute instance, you select a machine type from a machine family that determines the resources available to that instance.
There are several machine families you can choose from. Each machine family is
further organized into machine series and predefined machine types within each
series. For example, within the N2 machine series in the general-purpose
machine family, you can select the n2-standard-4
machine type.
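For example, the following sketch shows one way to create a VM with the n2-standard-4 machine type by using the google-cloud-compute Python client library. This is an illustrative sketch only; the project ID, zone, and instance name are placeholders that you replace with your own values.

```python
# Illustrative sketch: create a VM with a predefined machine type.
# Assumes the google-cloud-compute client library is installed and
# credentials are configured. Names and values are placeholders.
from google.cloud import compute_v1


def create_instance(project_id: str, zone: str, instance_name: str,
                    machine_type: str = "n2-standard-4") -> compute_v1.Instance:
    """Creates a VM with the given machine type and a Debian boot disk."""
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=10,
        ),
    )
    network_interface = compute_v1.NetworkInterface(network="global/networks/default")
    instance = compute_v1.Instance(
        name=instance_name,
        # The machine type is referenced by a zonal URL, for example
        # zones/us-central1-a/machineTypes/n2-standard-4.
        machine_type=f"zones/{zone}/machineTypes/{machine_type}",
        disks=[boot_disk],
        network_interfaces=[network_interface],
    )
    client = compute_v1.InstancesClient()
    operation = client.insert(project=project_id, zone=zone, instance_resource=instance)
    operation.result()  # Wait for the create operation to finish.
    return client.get(project=project_id, zone=zone, instance=instance_name)
```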
All machine series support Spot VMs (and preemptible VMs), with the exception of the M2, M3, and X4 machine series, and C3 bare metal machine types.
- General-purpose: best price-performance ratio for a variety of workloads.
- Storage-optimized: best for workloads that are low in core usage and high in storage density.
- Compute-optimized: highest performance per core on Compute Engine and optimized for compute-intensive workloads.
- Memory-optimized: ideal for memory-intensive workloads, offering more memory per core than other machine families, with up to 32 TB of memory.
- Accelerator-optimized: ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning (ML) and high performance computing (HPC). This family is the best option for workloads that require GPUs.
Compute Engine terminology
This documentation uses the following terms:
Machine family: A curated set of processor and hardware configurations optimized for specific workloads.
Machine series: Machine families are further classified by series, generation, and processor type.
- Each series focuses on a different aspect of computing power or performance. For example, the E series offers efficient VMs at a low cost, while the C series offers better performance.
- The generation is denoted by an ascending number. For example, the N1 series within the general-purpose machine family is the older version of the N2 series. A higher generation or series number usually indicates newer underlying CPU platforms or technologies. For example, the M3 series, which runs on Intel Xeon Scalable Processor 3rd Generation (Ice Lake), is a newer generation than the M2 series, which runs on Intel Xeon Scalable Processor 2nd Generation (Cascade Lake).
Generation | Intel | AMD | Arm |
---|---|---|---|
4th generation machine series | N4, C4, X4 | N/A | C4A |
3rd generation machine series | C3, H3, Z3, M3, A3 | C3D | N/A |
2nd generation machine series | N2, E2, C2, M2, A2, G2 | N2D, C2D, T2D, E2 | T2A |
- Machine type: Every machine series offers at least one machine type. Each machine type provides a set of resources for your compute instance, such as vCPUs, memory, disks, and GPUs. If a predefined machine type does not meet your needs, you can also create a custom machine type for some machine series.
The following sections describe the different machine types.
Predefined machine types
Predefined machine types come with a non-configurable amount of memory and vCPUs. Predefined machine types use a variety of vCPU-to-memory ratios:
- highcpu: 1 to 3 GB of memory per vCPU; typically 2 GB of memory per vCPU.
- standard: 3 to 7 GB of memory per vCPU; typically 4 GB of memory per vCPU.
- highmem: 7 to 14 GB of memory per vCPU; typically 8 GB of memory per vCPU.
- megamem: 14 to 19 GB of memory per vCPU.
- hypermem: 19 to 24 GB of memory per vCPU; typically 21 GB of memory per vCPU.
- ultramem: 24 to 31 GB of memory per vCPU.
For example, a c3-standard-22 machine type has 22 vCPUs, and as a standard machine type, it also has 88 GB of memory.
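To see the vCPU count and memory of the predefined machine types available in a zone, you can list them programmatically. The following sketch assumes the google-cloud-compute Python client library; the project ID, zone, and name prefix are placeholders.

```python
# Illustrative sketch: list predefined machine types in a zone and print
# their vCPU-to-memory ratios. Project and zone are placeholders.
from google.cloud import compute_v1


def print_machine_type_ratios(project_id: str, zone: str, prefix: str = "c3-") -> None:
    client = compute_v1.MachineTypesClient()
    for mt in client.list(project=project_id, zone=zone):
        if not mt.name.startswith(prefix):
            continue
        memory_gb = mt.memory_mb / 1024  # The API reports memory in MB.
        print(f"{mt.name}: {mt.guest_cpus} vCPUs, {memory_gb:.1f} GB "
              f"({memory_gb / mt.guest_cpus:.2f} GB per vCPU)")


# Example: print_machine_type_ratios("my-project", "us-central1-a")
# For c3-standard-22, this would show 22 vCPUs and 88 GB (4 GB per vCPU).
```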
Local SSD machine types
Local SSD machine types are a special predefined machine type. The machine type name ends in -lssd. When you create a compute instance using one of these machine types, Local SSD disks are automatically attached to the instance.
These machine types are available with the C3 and C3D machine series. Other machine series also support Local SSD disks but don't use a -lssd machine type. For more information about which machine types you can use with Local SSD disks, see Choose a valid number of Local SSD disks.
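For machine series that support Local SSD but don't use -lssd machine types, you attach Local SSD disks explicitly when you create the instance. The following sketch shows one way to define such a disk with the google-cloud-compute Python client library; the zone is a placeholder, and the series must support Local SSD (for example, N2).

```python
# Illustrative sketch: define a Local SSD disk to attach at instance
# creation time for a series that doesn't use -lssd machine types.
from google.cloud import compute_v1


def local_ssd_disk(zone: str) -> compute_v1.AttachedDisk:
    return compute_v1.AttachedDisk(
        # Local SSD is scratch (ephemeral) storage, not a persistent disk.
        type_=compute_v1.AttachedDisk.Type.SCRATCH.name,
        auto_delete=True,
        interface="NVME",
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            disk_type=f"zones/{zone}/diskTypes/local-ssd",
        ),
    )
```

Append the returned disk to the instance's disks list, alongside the boot disk, before calling InstancesClient.insert().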
Bare metal machine types
Bare metal machine types are a special predefined machine type. The machine type name ends in -metal. When you create a compute instance using one of these machine types, there is no hypervisor installed on the instance. You can attach Hyperdisk storage to a bare metal instance, just as you would with a VM instance. Bare metal instances can be used in VPC networks and subnetworks in the same way as VM instances.
These machine types are available with the C3 and X4 machine series.
Custom machine types
If none of the predefined machine types match your workload needs, you can create a VM instance with a custom machine type for the N and E machine series in the general-purpose machine family.
Custom machine types cost slightly more to use compared to an equivalent predefined machine type. Also, there are limitations in the amount of memory and vCPUs that you can select for a custom machine type. The on-demand prices for custom machine types include a 5% premium over the on-demand and commitment prices for predefined machine types.
With extended memory, available only with custom machine types, you can specify an amount of memory for the custom machine type with no vCPU-based limitation. Instead of using the default memory size based on the number of vCPUs specified, you can specify an amount of extended memory, up to the limit of the machine series.
For more information, see Create a VM with a custom machine type.
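A custom machine type is specified by name rather than selected from a list. The following sketch illustrates the general naming pattern (series, vCPU count, and memory in MB, with an -ext suffix for extended memory); it is illustrative only, and the exact limits and supported values depend on the machine series.

```python
# Illustrative sketch: build the machine type URI for a custom machine type.
# The values shown are examples, not recommendations.
def custom_machine_type_uri(zone: str, series: str, vcpus: int,
                            memory_mb: int, extended: bool = False) -> str:
    name = f"{series}-custom-{vcpus}-{memory_mb}"
    if extended:
        name += "-ext"  # Request extended memory beyond the default per-vCPU limit.
    return f"zones/{zone}/machineTypes/{name}"


# For example, an N2 custom machine type with 4 vCPUs and 8 GB of memory:
# custom_machine_type_uri("us-central1-a", "n2", 4, 8192)
#   -> "zones/us-central1-a/machineTypes/n2-custom-4-8192"
```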
Shared-core machine types
The E2 and N1 series contain shared-core machine types. These machine types timeshare a physical core, which can be a cost-effective method for running small, non-resource-intensive apps.
- E2: offers 2 vCPUs for short periods of bursting.
- N1: offers f1-micro and g1-small shared-core machine types, which have up to 1 vCPU available for short periods of bursting.
Machine family and series recommendations
The following tables provide recommendations for different workloads.
General-purpose workloads | |||
---|---|---|---|
N4, N2, N2D, N1 | C4A, C4, C3, C3D | E2 | Tau T2D, Tau T2A |
Balanced price/performance across a wide range of machine types | Consistently high performance for a variety of workloads | Day-to-day computing at a lower cost | Best per-core performance/cost for scale-out workloads |
Optimized workloads | | | |
---|---|---|---|
Storage-optimized | Compute-optimized | Memory-optimized | Accelerator-optimized |
Z3 | H3, C2, C2D | X4, M3, M2, M1 | A3, A2, G2 |
Highest block storage to compute ratios for storage-intensive workloads | Ultra high performance for compute-intensive workloads | Highest memory to compute ratios for memory-intensive workloads | Optimized for accelerated high performance computing workloads |
After you create a compute instance, you can use rightsizing recommendations to optimize resource utilization based on your workload. For more information, see Applying machine type recommendations for VMs.
General-purpose machine family guide
The general-purpose machine family offers several machine series with the best price-performance ratio for a variety of workloads.
Compute Engine offers general-purpose machine series that run on either x86 or Arm architecture.
x86
- The C4 machine series is available on the Intel Emerald Rapids CPU platform and powered by Titanium. C4 machine types are optimized to deliver consistently high performance and scale up to 192 vCPUs with 1.5 TB of DDR5 memory. C4 is available in highcpu (2 GB per vCPU), standard (3.75 GB per vCPU), and highmem (7.75 GB per vCPU) configurations.
- The N4 machine series is available on the Intel Emerald Rapids CPU platform and powered by Titanium. N4 machine types are optimized for flexibility and cost with both predefined and custom shapes and can scale up to 80 vCPUs with 640 GB of DDR5 memory. N4 is available in highcpu (2 GB per vCPU), standard (4 GB per vCPU), and highmem (8 GB per vCPU) configurations.
- The N2 machine series has up to 128 vCPUs, 8 GB of memory per vCPU, and is available on the Intel Ice Lake and Intel Cascade Lake CPU platforms.
- The N2D machine series has up to 224 vCPUs, 8 GB of memory per vCPU, and is available on second generation AMD EPYC Rome and third generation AMD EPYC Milan platforms.
- The C3 machine series offers up to 176 vCPUs and 2, 4, or 8 GB of memory per vCPU on the Intel Sapphire Rapids CPU platform and Titanium. C3 instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
- The C3D machine series offers up to 360 vCPUs and 2, 4, or 8 GB of memory per vCPU on the AMD EPYC Genoa CPU platform and Titanium. C3D instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
- The E2 machine series has up to 32 virtual cores (vCPUs) with up to 128 GB of memory with a maximum of 8 GB per vCPU, and the lowest cost of all machine series. The E2 machine series has a predefined CPU platform, running either an Intel processor or the second generation AMD EPYC™ Rome processor. The processor is selected for you when you create the instance. This machine series provides a variety of compute resources for the lowest price on Compute Engine, especially when paired with committed-use discounts.
- The Tau T2D machine series provides an optimized feature set for scaling out. Each VM instance can have up to 60 vCPUs, 4 GB of memory per vCPU, and is available on third generation AMD EPYC Milan processors. The Tau T2D machine series doesn't use cluster-threading, so a vCPU is equivalent to an entire core.
- The N1 machine series VMs can have up to 96 vCPUs, up to 6.5 GB of memory per vCPU, and are available on Intel Sandy Bridge, Ivy Bridge, Haswell, Broadwell, and Skylake CPU platforms.
Arm
The C4A machine series is the second machine series in Google Cloud to run on Arm processors and the first to run on Google Axion CPU, which supports Arm V9 architecture. C4A VMs are powered by the Titanium IPU with network offloads; this improves VM performance by reducing on-host processing.
C4A VMs can have up to 72 vCPUs with up to 8 GB of memory per vCPU and support a single UMA domain. C4A VMs don't use simultaneous multithreading (SMT), so a vCPU in a C4A instance is equivalent to an entire core.
The Tau T2A machine series is the first machine series in Google Cloud to run on Arm processors. Tau T2A machines are optimized to deliver compelling price for performance. Each VM can have up to 48 vCPUs with 4 GB of memory per vCPU. The Tau T2A machine series runs on a 64 core Ampere Altra processor with an Arm instruction set and an all-core frequency of 3 GHz. Tau T2A machine types support a single NUMA node and a vCPU is equivalent to an entire core.
Storage-optimized machine family guide
The storage-optimized machine family is ideal for horizontal, scale-out databases, log analytics, data warehouse offerings, and other database workloads. This family offers high-density, high-performance Local SSD.
- Z3 instances can have up to 176 vCPUs, 1,408 GB of memory, and 36 TiB of Local SSD. Z3 runs on the Intel Xeon Scalable processor (code name Sapphire Rapids) with DDR5 memory and Titanium offload processors. Z3 brings together the latest compute, networking, and storage innovations into one platform. Z3 instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
Compute-optimized machine family guide
The compute-optimized machine family is optimized for running compute-bound applications by providing the highest performance per core.
- H3 instances offer 88 vCPUs and 352 GB of DDR5 memory. H3 instances run on the Intel Sapphire Rapids CPU platform and Titanium offload processors. H3 instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance. H3 delivers performance improvements for a wide variety of HPC workloads such as molecular dynamics, computational geoscience, financial risk analysis, weather modeling, frontend and backend EDA, and computational fluid dynamics.
- C2 instances offer up to 60 vCPUs, 4 GB of memory per vCPU, and are available on the Intel Cascade Lake CPU platform.
- C2D instances offer up to 112 vCPUs, up to 8 GB of memory per vCPU, and are available on the third generation AMD EPYC Milan platform.
Memory-optimized machine family guide
The memory-optimized machine family has machine series that are ideal for OLAP and OLTP SAP workloads, genomic modeling, electronic design automation, and your most memory-intensive HPC workloads. This family offers more memory per core than any other machine family, with up to 32 TB of memory.
- X4 bare metal instances offer up to 1,920 vCPUs, with 17 GB of memory per vCPU. X4 has machine types with 16, 24, and 32 TB of memory, and is available on the Intel Sapphire Rapids CPU platform.
- M3 instances offer up to 128 vCPUs, with up to 30.5 GB of memory per vCPU, and are available on the Intel Ice Lake CPU platform.
- M2 instances are available as 6 TB, 9 TB, and 12 TB machine types, and are available on the Intel Cascade Lake CPU platform.
- M1 instances offer up to 160 vCPUs, 14.9 GB to 24 GB of memory per vCPU, and are available on the Intel Skylake and Broadwell CPU platforms.
Accelerator-optimized machine family guide
The accelerator-optimized machine family is ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning (ML) and high performance computing (HPC). This family is the optimal choice for workloads that require GPUs.
- A3 instances offer up to 208 vCPUs with 9 GB of memory per vCPU. Each A3 machine type has either 1, 2, 4, or 8 NVIDIA H100 GPUs attached. A3 instances have a maximum network bandwidth of up to 1,800 Gbps and are available on the Intel Sapphire Rapids CPU platform.
- A2 instances offer 12 to 96 vCPUs, and up to 1,360 GB of memory. Each A2 machine type has either 1, 2, 4, 8, or 16 NVIDIA A100 GPUs attached. A2 instances have a maximum network bandwidth of up to 100 Gbps and are available on the Intel Cascade Lake CPU platform.
- G2 instances offer 4 to 96 vCPUs and up to 432 GB of memory. Each G2 machine type has either 1, 2, 4, or 8 NVIDIA L4 GPUs attached. G2 instances have a maximum network bandwidth of up to 100 Gbps and are available on the Intel Cascade Lake CPU platform.
Machine series comparison
Use the following table to compare each machine family and determine which one is appropriate for your workload. If, after reviewing this section, you are still unsure which family is best for your workload, start with the general-purpose machine family. For details about all supported processors, see CPU platforms.
To learn how your selection affects the performance of disk volumes attached to your compute instances, see:
- Persistent Disk: Disk performance by machine type and vCPU count
- Google Cloud Hyperdisk: Hyperdisk performance limits
The following table compares the characteristics of the different machine series, from C4A to G2.
Machine series | C4A | C4 | C3 | C3D | N4 | N2 | N2D | N1 | Tau T2D | Tau T2A | E2 | Z3 | H3 | C2 | C2D | X4 | M3 | M2 | M1 | N1+GPU | A3 | A2 | G2 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Workload type | General purpose | General purpose | General purpose | General purpose | General purpose | General purpose | General purpose | General purpose | General purpose | General purpose | Cost optimized | Storage optimized | Compute optimized | Compute optimized | Compute optimized | Memory optimized | Memory optimized | Memory optimized | Memory optimized | Accelerator optimized | Accelerator optimized | Accelerator optimized | Accelerator optimized |
CPU platforms | Google Axion | Intel Emerald Rapids | Intel Sapphire Rapids | AMD EPYC Genoa | Intel Emerald Rapids | Intel Cascade Lake and Ice Lake | AMD EPYC Rome and EPYC Milan | Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge | AMD EPYC Milan | Ampere Altra | Intel Skylake, Broadwell, and Haswell, AMD EPYC Rome and EPYC Milan | Intel Sapphire Rapids | Intel Sapphire Rapids | Intel Cascade Lake | AMD EPYC Milan | Intel Sapphire Rapids | Intel Ice Lake | Intel Cascade Lake | Intel Skylake and Broadwell | Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge | Intel Sapphire Rapids | Intel Cascade Lake | Intel Cascade Lake |
Architecture | Arm | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | Arm | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 |
vCPUs | 1 to 72 | 2 to 192 | 4 to 176 | 4 to 360 | 2 to 80 | 2 to 128 | 2 to 224 | 1 to 96 | 1 to 60 | 1 to 48 | 0.25 to 32 | 88 or 176 | 88 | 4 to 60 | 2 to 112 | 960 to 1,920 | 32 to 128 | 208 to 416 | 40 to 160 | 1 to 96 | 208 | 12 to 96 | 4 to 96 |
vCPU definition | Core | Thread | Thread | Thread | Thread | Thread | Thread | Thread | Core | Core | Thread | Thread | Core | Thread | Thread | Thread | Thread | Thread | Thread | Thread | Thread | Thread | Thread |
Memory | 2 to 576 GB | 2 to 1,488 GB | 8 to 1,408 GB | 8 to 2,880 GB | 2 to 640 GB | 2 to 864 GB | 2 to 896 GB | 1.8 to 624 GB | 4 to 240 GB | 4 to 192 GB | 1 to 128 GB | 704 or 1,408 GB | 352 GB | 16 to 240 GB | 4 to 896 GB | 16,384 to 32,768 GB | 976 to 3,904 GB | 5,888 to 11,776 GB | 961 to 3,844 GB | 3.75 to 624 GB | 1,872 GB | 85 to 1,360 GB | 16 to 432 GB |
Instance type | VM | VM | VM and bare metal | VM | VM | VM | VM | VM | VM | VM | VM | VM | VM | VM | VM | Bare metal | VM | VM | VM | VM | VM | VM | VM |
Disk interface type | NVMe | NVMe | NVMe | NVMe | NVMe | SCSI and NVMe | SCSI and NVMe | SCSI and NVMe | SCSI and NVMe | NVMe | SCSI | NVMe | NVMe | SCSI and NVMe | SCSI and NVMe | NVMe | NVMe | SCSI | SCSI and NVMe | SCSI and NVMe | NVMe | SCSI and NVMe | NVMe |
Maximum Local SSD | 0 | 0 | 12 TiB | 12 TiB | 0 | 9 TiB | 9 TiB | 9 TiB | 0 | 0 | 0 | 36 TiB | 0 | 3 TiB | 3 TiB | 0 | 3 TiB | 0 | 3 TiB | 9 TiB | 6 TiB | 3 TiB | 3 TiB |
Network interface types | gVNIC | gVNIC | gVNIC and IDPF | gVNIC | gVNIC | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | gVNIC | gVNIC and VirtIO-Net | gVNIC | gVNIC | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | IDPF | gVNIC | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | gVNIC | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net |
Network bandwidth | 10 to 50 Gbps | 10 to 100 Gbps | 23 to 100 Gbps | 20 to 100 Gbps | 10 to 50 Gbps | 10 to 32 Gbps | 10 to 32 Gbps | 2 to 32 Gbps | 10 to 32 Gbps | 10 to 32 Gbps | 1 to 16 Gbps | 23 to 100 Gbps | up to 200 Gbps | 10 to 32 Gbps | 10 to 32 Gbps | up to 100 Gbps | up to 32 Gbps | up to 32 Gbps | up to 32 Gbps | 2 to 32 Gbps | up to 1,800 Gbps | 24 to 100 Gbps | 10 to 100 Gbps |
High-bandwidth networking | 50 to 100 Gbps | 50 to 200 Gbps | 50 to 200 Gbps | 50 to 200 Gbps | — | 50 to 100 Gbps | 50 to 100 Gbps | — | — | — | — | 50 to 200 Gbps | — | 50 to 100 Gbps | 50 to 100 Gbps | — | 50 to 100 Gbps | — | — | 50 to 100 Gbps | up to 1,800 Gbps | 50 to 100 Gbps | 50 to 100 Gbps |
Maximum GPUs | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 | 8 | 16 | 8 |
Committed use discounts | Resource-based CUDs and Flexible CUDs | Resource-based CUDs and Flexible CUDs | Resource-based CUDs and Flexible CUDs | Resource-based CUDs and Flexible CUDs | Resource-based CUDs and Flexible CUDs | Resource-based CUDs and Flexible CUDs | Resource-based CUDs and Flexible CUDs | Resource-based CUDs and Flexible CUDs | Resource-based CUDs | — | Resource-based CUDs and Flexible CUDs | Resource-based CUDs and Flexible CUDs | Resource-based CUDs | Resource-based CUDs and Flexible CUDs | Resource-based CUDs and Flexible CUDs | Resource-based CUDs | Resource-based CUDs | Resource-based CUDs | Resource-based CUDs | Resource-based CUDs | Resource-based CUDs | Resource-based CUDs | Resource-based CUDs |
GPUs and compute instances
GPUs are used to accelerate workloads, and are supported for N1, A3, A2, and G2 VM instances. For VMs that use N1 machine types, you can attach GPUs to the VM during or after VM creation. For VMs that use A3, A2, or G2 machine types, the GPUs are automatically attached when you create the VM. GPUs can't be used with any other machine series.
VM instances with lower numbers of GPUs are limited to a maximum number of vCPUs. In general, a higher number of GPUs lets you create instances with a higher number of vCPUs and memory. For more information, see GPUs on Compute Engine.
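For example, the following sketch shows one way to request a GPU for an N1 instance at creation time with the google-cloud-compute Python client library. It is illustrative only; the zone and accelerator type (nvidia-tesla-t4) are placeholders, and GPU availability varies by zone.

```python
# Illustrative sketch: GPU settings for an N1 instance at creation time.
from google.cloud import compute_v1


def gpu_settings(zone: str) -> tuple[list[compute_v1.AcceleratorConfig],
                                     compute_v1.Scheduling]:
    accelerator = compute_v1.AcceleratorConfig(
        accelerator_type=f"zones/{zone}/acceleratorTypes/nvidia-tesla-t4",
        accelerator_count=1,
    )
    # GPU instances can't live-migrate, so they must terminate on host maintenance.
    scheduling = compute_v1.Scheduling(on_host_maintenance="TERMINATE")
    return [accelerator], scheduling


# Assign the returned values to instance.guest_accelerators and
# instance.scheduling before calling InstancesClient.insert().
```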
What's next
- Learn how to create and start a VM
- Learn how to create a VM with a custom machine type.
- Complete the Quickstart using a Linux VM
- Complete the Quickstart using a Windows VM
- Learn more about attaching block storage to your VMs