If your workloads need high-performance, low-latency temporary storage, consider using Local solid-state drive (Local SSD) disks when you create your compute instance. Local SSD disks provide always-encrypted, temporary solid-state storage for Compute Engine. To learn about the other disks available in Compute Engine, see Choose a disk type.
Local SSD disks are ideal when you need storage for any of the following use cases:
- Caches or storage for transient data
- Scratch processing space for high performance computing or data analytics
- Temporary data storage, such as the `tempdb` system database for Microsoft SQL Server
Local SSD disks offer superior I/O operations per second (IOPS) and very low latency compared to the persistent storage provided by Google Cloud Hyperdisk and Persistent Disk. This low latency is possible because Local SSD disks are physically attached to the server that hosts your instance. For the same reason, Local SSD disks can provide only temporary storage.
Because Local SSD is suitable only for temporary storage, you must store data that isn't temporary or ephemeral on a Hyperdisk or Persistent Disk volume.
To use Local SSD disks with a compute instance, add Local SSD disks when you create the instance. You can't add Local SSD disks to an instance after you create it.
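For example, with the Google Cloud CLI you attach a Local SSD disk by adding the `--local-ssd` flag when you create the instance. A minimal sketch, where the instance name, zone, and machine type are placeholders to adjust for your project:

```sh
# Create an instance with one NVMe Local SSD disk attached.
# Each --local-ssd flag adds one disk; disks can't be added later.
gcloud compute instances create example-instance \
    --zone=us-central1-a \
    --machine-type=n2-standard-8 \
    --local-ssd=interface=NVME
```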
Types of local SSD disks
Local SSD disks come in two types:
Titanium SSD: Titanium SSD is a custom-designed local SSD that uses Titanium I/O offload processing and offers enhanced SSD security, performance, and management. Titanium SSD offers higher storage IOPS and throughput, and lower latency, than previous-generation Local SSD. The storage-optimized Z3 machine series and the general-purpose C4A machine series offer local SSD storage using Titanium SSD.
Titanium SSD disks are directly attached to the compute instances inside their host server.
Local SSD: Local SSD is the original local SSD feature for Google Cloud. Each Local SSD disk attached to an instance provides 375 GiB of capacity. These disks provide higher performance than Hyperdisk or Persistent Disk. You can use either the NVMe or SCSI interface to attach Local SSD disks.
Local SSD disks are directly attached to the instances inside their host server.
Unless Titanium SSD is specifically mentioned, the term "Local SSD" applies to both Local SSD and Titanium SSD when describing features of local SSD disks.
Performance
Local SSD performance depends on several factors, including the number of attached Local SSD disks, the selected disk interface (NVMe or SCSI), and the instance's machine type. The available performance increases as you attach more Local SSD disks to your instance.
Local SSD performance by number of attached disks
The following tables list the maximum IOPS and throughput for NVMe- and SCSI-attached Local SSD disks. The metrics are listed by the total capacity of Local SSD disks attached to the instance.
NVMe Local SSD performance
| # of attached Local SSD disks | Total storage space (GiB) | Capacity per disk (GiB) | Read IOPS | Write IOPS | Read throughput (MiBps) | Write throughput (MiBps) |
|---|---|---|---|---|---|---|
| 1 | 375 | 375 | 170,000 | 90,000 | 660 | 350 |
| 2 | 750 | 375 | 340,000 | 180,000 | 1,320 | 700 |
| 3 | 1,125 | 375 | 510,000 | 270,000 | 1,980 | 1,050 |
| 4 | 1,500 | 375 | 680,000 | 360,000 | 2,650 | 1,400 |
| 5 | 1,875 | 375 | 680,000 | 360,000 | 2,650 | 1,400 |
| 6 | 2,250 | 375 | 680,000 | 360,000 | 2,650 | 1,400 |
| 7 | 2,625 | 375 | 680,000 | 360,000 | 2,650 | 1,400 |
| 8 | 3,000 | 375 | 680,000 | 360,000 | 2,650 | 1,400 |
| 16 | 6,000 | 375 | 1,600,000 | 800,000 | 6,240 | 3,120 |
| 24 | 9,000 | 375 | 2,400,000 | 1,200,000 | 9,360 | 4,680 |
| 32 | 12,000 | 375 | 3,200,000 | 1,600,000 | 12,480 | 6,240 |
| **Z3 instances using Titanium SSD** | | | | | | |
| 12 | 36,000 | 3,000 | 6,000,000 | 6,000,000 | 36,000 | 30,000 |
SCSI Local SSD performance
| # of attached Local SSD disks | Storage space (GiB) | Read IOPS | Write IOPS | Read throughput (MiBps) | Write throughput (MiBps) |
|---|---|---|---|---|---|
| 1 | 375 | 100,000 | 70,000 | 390 | 270 |
| 2 | 750 | 200,000 | 140,000 | 780 | 550 |
| 3 | 1,125 | 300,000 | 210,000 | 1,170 | 820 |
| 4 | 1,500 | 400,000 | 280,000 | 1,560 | 1,090 |
| 5 | 1,875 | 400,000 | 280,000 | 1,560 | 1,090 |
| 6 | 2,250 | 400,000 | 280,000 | 1,560 | 1,090 |
| 7 | 2,625 | 400,000 | 280,000 | 1,560 | 1,090 |
| 8 | 3,000 | 400,000 | 280,000 | 1,560 | 1,090 |
| 16 | 6,000 | 900,000 | 800,000 | 6,240 | 3,120 |
| 24 | 9,000 | 900,000 | 800,000 | 9,360 | 4,680 |
Configure your instance to maximize performance
To reach the stated performance levels, you must configure your compute instance as follows:
- Attach the Local SSD disks with the NVMe interface. Disks attached with the SCSI interface have lower performance.
- Some machine types also require a minimum number of vCPUs to reach these maximums.
- If your instance uses a custom Linux image, the image must use version 4.14.68 or later of the Linux kernel. If you use the public images provided by Compute Engine, you don't have to take any further action.
For additional instance and disk configuration settings that can improve Local SSD performance, see Optimizing local SSD performance.
For more information about selecting a disk interface, see Choose a disk interface.
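To check the IOPS your configuration actually delivers, you can run a read benchmark against the raw Local SSD device, for example with the fio tool (assumed to be installed; the device path is an example, so confirm yours under /dev/disk/by-id/). Random reads don't modify data, but avoid write tests on a disk that holds data you need:

```sh
# Measure 4K random-read IOPS on a Local SSD device for 60 seconds.
sudo fio --name=local-ssd-read-iops \
    --filename=/dev/disk/by-id/google-local-nvme-ssd-0 \
    --ioengine=libaio --direct=1 --rw=randread \
    --bs=4K --iodepth=256 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
```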
Local SSD data persistence
Compute Engine preserves the data on Local SSD disks in certain scenarios; in other cases, Compute Engine doesn't guarantee Local SSD data persistence.
The following information describes these scenarios and applies to each Local SSD disk attached to an instance.
Scenarios where Compute Engine persists Local SSD data
Data on Local SSD disks persists only through the following events:
- If you reboot the guest operating system.
- If you configure your instance for live migration and the instance goes through a host maintenance event.
- If you opt to preserve the Local SSD data when you stop or suspend the instance. This feature is in Preview.
Scenarios where Compute Engine might not persist Local SSD data
Data on Local SSD disks might be lost if a host error occurs on the instance and Compute Engine can't reconnect the instance to the Local SSD disk within a specified time.
You can control how much time, if any, Compute Engine spends attempting to recover the data by setting the Local SSD recovery timeout. If Compute Engine can't reconnect to the disk before the timeout expires, the instance is restarted. When the instance restarts, the Local SSD data is unrecoverable, and Compute Engine attaches a blank Local SSD disk to the restarted instance.
The Local SSD recovery timeout is part of an instance's host maintenance policy. For more information, see Local SSD recovery timeout.
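As a sketch, assuming the gcloud CLI's `--local-ssd-recovery-timeout` flag described in the host maintenance policy documentation (the value is a number of hours), you might set the timeout when creating the instance:

```sh
# Restart with blank Local SSD disks if recovery takes longer than 3 hours.
gcloud compute instances create example-instance \
    --zone=us-central1-a \
    --machine-type=n2-standard-8 \
    --local-ssd=interface=NVME \
    --local-ssd-recovery-timeout=3
```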
Scenarios where Compute Engine does not persist Local SSD data
Data on Local SSD disks does not persist through the following events:
- If you shut down the guest operating system and force the instance to stop.
- If you create a Spot VM or preemptible VM and the VM goes through the preemption process.
- If you configure the instance to stop on host maintenance events and the instance goes through a host maintenance event.
- If you misconfigure the Local SSD so that it becomes unreachable.
- If you disable project billing, causing the instance to stop.
If Compute Engine can't recover an instance's Local SSD data, it restarts the instance with a blank Local SSD disk attached in place of each previously attached Local SSD disk.
Machine series support
You can use Local SSD disks with the following machine series.
| Machine series | Local SSD support |
|---|---|
| C4A | Yes |
| C4 | — |
| C3 | Yes |
| C3D | Yes |
| N4 | — |
| N2 | Yes |
| N2D | Yes |
| N1 | Yes |
| T2D | — |
| T2A | — |
| E2 | — |
| Z3 | Yes |
| H3 | — |
| C2 | Yes |
| C2D | Yes |
| X4 | — |
| M3 | Yes |
| M2 | — |
| M1 | Yes |
| N1+GPU | Yes |
| A3 | Yes |
| A2 | Yes |
| G2 | Yes |
However, the number of Local SSD disks that you can attach depends on the machine type. For more information, see Choose a valid number of Local SSD disks.
Limitations
Local SSD has the following limitations:
- You can't use Local SSD disks with instances that use shared-core machine types.
- You can't attach Local SSD disks to instances that use C4, E2, H3, M2, N4, Tau T2D, Tau T2A, or X4 machine types.
- You can't use customer-supplied encryption keys or customer-managed encryption keys with Local SSD disks. Compute Engine automatically encrypts your data when it's written to Local SSD storage.
- You can't back up Local SSD disks with snapshots, clones, machine images, or images. Store important data on Hyperdisk or Persistent Disk volumes.
Local SSD encryption
Compute Engine automatically encrypts your data when it is written to Local SSD storage space. You can't use customer-supplied encryption keys with Local SSD disks.
Local SSD data backup
Since you can't back up Local SSD data with disk images, standard snapshots, or disk clones, Google recommends that you always store valuable data on a durable storage option.
If you need to preserve the data on a Local SSD disk, attach a Persistent Disk or Google Cloud Hyperdisk volume to the instance. After you mount the Persistent Disk or Hyperdisk, copy the data from the Local SSD disk to the newly attached disk.
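A minimal sketch of that copy on Linux, assuming the Local SSD is already mounted at /mnt/localssd and the durable disk appears under the hypothetical symlink google-example-disk:

```sh
# Format and mount the newly attached durable disk
# (formatting destroys any existing data on that disk).
sudo mkfs.ext4 -F /dev/disk/by-id/google-example-disk
sudo mkdir -p /mnt/durable
sudo mount /dev/disk/by-id/google-example-disk /mnt/durable

# Copy the Local SSD contents, preserving permissions and timestamps.
sudo rsync -a /mnt/localssd/ /mnt/durable/
```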
Choose a disk interface
To achieve the highest Local SSD performance, you must attach your disks to the instance with the NVMe interface. Performance is lower if you use the SCSI interface.
The disk interface you choose also depends on the machine type and OS that your instance uses. Some of the available machine types in Compute Engine allow you to choose between NVMe and SCSI interfaces, while others support either only NVMe or only SCSI. Similarly, some of the public OS images provided by Compute Engine might support both NVMe and SCSI, or only one of the two.
Disk interface support by machine type and OS image
The following pages provide more information about available machine types and supported public images, as well as performance details.
- Supported interfaces by machine type: see Machine series comparison. In the Choose VM properties to compare list, select Disk interface type.
- OS images: for a list of which public OS images provided by Compute Engine support SCSI or NVMe, see the Interfaces tab for each table in the operating system details documentation.
Considerations for NVMe for custom images
If your instance uses a custom Linux image, you must use version 4.14.68 or later of the Linux kernel for optimal NVMe performance.
Considerations for SCSI for custom images
If you have an existing setup that requires using a SCSI interface, consider using multi-queue SCSI to achieve better performance over the standard SCSI interface.
If you are using a custom image that you imported, see Enable multi-queue SCSI.
Choose a valid number of Local SSD disks
Most machine types available on Compute Engine support Local SSD disks. Some machine types always include a fixed number of Local SSD disks by default, while others let you choose how many disks to attach. You can add Local SSD disks only when you create the instance; you can't add them after the instance is created.
For instances created using a storage-optimized Z3 machine type, each attached Titanium SSD disk has 3,000 GiB of capacity. For all other machine series, each Local SSD disk that you attach has 375 GiB of capacity.
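For example, a Z3 instance with 12 attached Titanium SSD disks has 12 × 3,000 GiB = 36,000 GiB of Local SSD capacity, matching the Z3 row in the NVMe performance table.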
Machine types that automatically attach Local SSD disks
The following table lists the machine types that include Local SSD disks by default, and shows how many disks are attached when you create the instance.
| Machine type | Number of Local SSD disks automatically attached per instance |
|---|---|
| **C3 machine types** (only the `-lssd` variants support Local SSD) | |
| `c3-standard-4-lssd` | 1 |
| `c3-standard-8-lssd` | 2 |
| `c3-standard-22-lssd` | 4 |
| `c3-standard-44-lssd` | 8 |
| `c3-standard-88-lssd` | 16 |
| `c3-standard-176-lssd` | 32 |
| **C3D machine types** (only the `-lssd` variants support Local SSD) | |
| `c3d-standard-8-lssd` | 1 |
| `c3d-standard-16-lssd` | 1 |
| `c3d-standard-30-lssd` | 2 |
| `c3d-standard-60-lssd` | 4 |
| `c3d-standard-90-lssd` | 8 |
| `c3d-standard-180-lssd` | 16 |
| `c3d-standard-360-lssd` | 32 |
| `c3d-highmem-8-lssd` | 1 |
| `c3d-highmem-16-lssd` | 1 |
| `c3d-highmem-30-lssd` | 2 |
| `c3d-highmem-60-lssd` | 4 |
| `c3d-highmem-90-lssd` | 8 |
| `c3d-highmem-180-lssd` | 16 |
| `c3d-highmem-360-lssd` | 32 |
| **A3 Mega machine types** | |
| `a3-megagpu-8g` | 16 |
| **A3 High machine types** | |
| `a3-highgpu-1g` | 2 |
| `a3-highgpu-2g` | 4 |
| `a3-highgpu-4g` | 8 |
| `a3-highgpu-8g` | 16 |
| **A3 Edge machine types** | |
| `a3-edgegpu-8g` | 16 |
| **A2 Ultra machine types** | |
| `a2-ultragpu-1g` | 1 |
| `a2-ultragpu-2g` | 2 |
| `a2-ultragpu-4g` | 4 |
| `a2-ultragpu-8g` | 8 |
| **Z3 instances using Titanium SSD** (each disk is 3 TiB in size) | |
| `z3-highmem-88` | 12 |
| `z3-highmem-176` | 12 |
Machine types that require you to choose a number of Local SSD disks
The machine types listed in the following table don't automatically attach Local SSD disks to a newly created instance. Because you can't add Local SSD disks to an instance after you create it, use the information in this section to determine how many Local SSD disks to attach when you create an instance.
| Machine type | Number of Local SSD disks allowed per instance |
|---|---|
| **N1 machine types** | |
| All N1 machine types | 1 to 8, 16, or 24 |
| **N2 machine types** | |
| Machine types with 2 to 10 vCPUs, inclusive | 1, 2, 4, 8, 16, or 24 |
| Machine types with 12 to 20 vCPUs, inclusive | 2, 4, 8, 16, or 24 |
| Machine types with 22 to 40 vCPUs, inclusive | 4, 8, 16, or 24 |
| Machine types with 42 to 80 vCPUs, inclusive | 8, 16, or 24 |
| Machine types with 82 to 128 vCPUs, inclusive | 16 or 24 |
| **N2D machine types** | |
| Machine types with 2 to 16 vCPUs, inclusive | 1, 2, 4, 8, 16, or 24 |
| Machine types with 32 or 48 vCPUs | 2, 4, 8, 16, or 24 |
| Machine types with 64 or 80 vCPUs | 4, 8, 16, or 24 |
| Machine types with 96 to 224 vCPUs, inclusive | 8, 16, or 24 |
| **C2 machine types** | |
| Machine types with 4 or 8 vCPUs | 1, 2, 4, or 8 |
| Machine types with 16 vCPUs | 2, 4, or 8 |
| Machine types with 30 vCPUs | 4 or 8 |
| Machine types with 60 vCPUs | 8 |
| **C2D machine types** | |
| Machine types with 2 to 16 vCPUs, inclusive | 1, 2, 4, or 8 |
| Machine types with 32 vCPUs | 2, 4, or 8 |
| Machine types with 56 vCPUs | 4 or 8 |
| Machine types with 112 vCPUs | 8 |
| **A2 Standard machine types** | |
| `a2-highgpu-1g` | 1, 2, 4, or 8 |
| `a2-highgpu-2g` | 2, 4, or 8 |
| `a2-highgpu-4g` | 4 or 8 |
| `a2-highgpu-8g` or `a2-megagpu-16g` | 8 |
| **G2 machine types** | |
| `g2-standard-4` | 1 |
| `g2-standard-8` | 1 |
| `g2-standard-12` | 1 |
| `g2-standard-16` | 1 |
| `g2-standard-24` | 2 |
| `g2-standard-32` | 1 |
| `g2-standard-48` | 4 |
| `g2-standard-96` | 8 |
| **M1 machine types** | |
| `m1-ultramem-40` | Not available |
| `m1-ultramem-80` | Not available |
| `m1-megamem-96` | 1 to 8 |
| `m1-ultramem-160` | Not available |
| **M3 machine types** | |
| `m3-ultramem-32` | 4 or 8 |
| `m3-megamem-64` | 4 or 8 |
| `m3-ultramem-64` | 4 or 8 |
| `m3-megamem-128` | 8 |
| `m3-ultramem-128` | 8 |
| **E2, C3-metal, M2, N4, Tau T2D, Tau T2A, and X4 machine types** | These machine types don't support Local SSD disks. |
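To attach one of the allowed counts, repeat the `--local-ssd` flag once per disk when you create the instance. A hedged sketch attaching four disks to an N2 instance with 32 vCPUs (a valid count per the table above; the instance name and zone are placeholders):

```sh
# Attach four 375 GiB NVMe Local SSD disks at creation time.
gcloud compute instances create example-n2-instance \
    --zone=us-central1-a \
    --machine-type=n2-standard-32 \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME
```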
Pricing
For each Local SSD disk you create, you are billed for the total capacity of the disk for the lifetime of the instance that it is attached to.
For detailed information about Local SSD pricing and available discounts, see Local SSD pricing.
Local SSD disks and Spot VM instances
If you start a Spot VM or preemptible VM with a Local SSD disk, Compute Engine charges discounted spot prices for the Local SSD usage. Local SSD disks that are attached to Spot VMs or preemptible VMs work like normal Local SSD disks, retain the same data persistence characteristics, and remain attached for the life of the VM.
Compute Engine doesn't charge you for Local SSD disk usage on a Spot VM or preemptible VM if the VM is preempted within a minute after it starts running.
Reserving Local SSD disks with committed use discounts
To reserve Local SSD resources in a specific zone, see Reservations of Compute Engine zonal resources.
To receive committed use discounts for Local SSD disks in a specific zone, you must purchase resource-based commitments for the Local SSD resources and also attach reservations that specify matching Local SSD resources to your commitments. For more information, see Attach reservations to resource-based commitments.
Use Local SSD disks with an instance
To use a Local SSD disk with a compute instance, you must complete the following steps:
- Add Local SSD disks when you create an instance.
- Format and mount the Local SSD disks that you added to your instance, as shown in the sketch after this list.
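A minimal sketch of the format-and-mount step on Linux, assuming one NVMe Local SSD disk whose stable symlink is google-local-nvme-ssd-0 (check /dev/disk/by-id/ for the actual name on your instance):

```sh
# Format the Local SSD disk with ext4 (destroys any data on the disk).
sudo mkfs.ext4 -F /dev/disk/by-id/google-local-nvme-ssd-0

# Create a mount point and mount the disk.
sudo mkdir -p /mnt/disks/local-ssd
sudo mount /dev/disk/by-id/google-local-nvme-ssd-0 /mnt/disks/local-ssd

# Allow all users to write to the disk, as is typical for scratch space.
sudo chmod a+w /mnt/disks/local-ssd
```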
Device naming on Linux instances
The Linux device names for the disks attached to your instance depend on the interface that you choose when creating the disks. When you use the `lsblk` operating system command to view your disk devices, it displays the prefix `nvme` for disks attached with the NVMe interface, and the prefix `sd` for disks attached with the SCSI interface.

The ordering of the disk numbers or NVMe controllers is not predictable or consistent across instance restarts. On the first boot, a disk might be `nvme0n1` (or `sda` for SCSI). On the second boot, the device name for the same disk might be `nvme2n1` or `nvme0n3` (or `sdc` for SCSI).

When accessing attached disks, use the symbolic links created in `/dev/disk/by-id/` instead. These names persist across reboots. For more information about symlinks, see Symbolic links for disks attached to an instance.

For more information about device names, see Device naming on Linux instances.
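For example, to identify your Local SSD devices by their stable names, compare the kernel device names with the symlinks (the exact names vary; Local SSD symlinks typically start with google-local-):

```sh
# Show kernel device names, then the stable symlinks that point at them.
lsblk
ls -l /dev/disk/by-id/
```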
Stop an instance with Local SSD
When you stop or suspend an instance, Compute Engine discards the data of any Local SSD disks attached to the instance by default.
If you want to preserve the data of the Local SSD disks attached to the instance, you must stop or suspend the instance by using the gcloud CLI and including the `--discard-local-ssd=false` flag. This begins a managed migration of Local SSD data to persistent storage, and you're charged for the additional storage utilization until you restart the instance. You might have to remount the Local SSD disk into the file system after restarting the instance.
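A sketch of both commands with the preservation flag (the instance name and zone are placeholders):

```sh
# Stop the instance and migrate Local SSD data to persistent storage.
gcloud compute instances stop example-instance \
    --zone=us-central1-a \
    --discard-local-ssd=false

# Or suspend instead of stopping, also preserving Local SSD data.
gcloud compute instances suspend example-instance \
    --zone=us-central1-a \
    --discard-local-ssd=false
```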
Restrictions
- `--discard-local-ssd=false` is in public preview only and isn't covered under the GA terms for Compute Engine.
- Compute Engine supports using `--discard-local-ssd=false` only on instances with at most 32 Local SSD disks attached.
- You can't preserve Local SSD data if you stop or suspend an instance from the Google Cloud console. You must use the Google Cloud CLI, the Cloud Client Libraries, or the Compute Engine API.
- Saving the Local SSD data is a slow process. Copying the Local SSD data begins only after the `suspend` or `stop` request is received.
- When using Spot VMs or preemptible VMs, preemption can happen at any time and might interrupt a suspend or resume attempt. In this case, the VM is `STOPPED` (preempted), not `SUSPENDED`, and no Local SSD data is retained in persistent storage when the VM resumes or restarts.
Remove Local SSD disks
To remove or delete Local SSD disks, you delete the instance that they are attached to. Before you delete an instance with Local SSD disks, make sure that you migrate any critical data from the Local SSD disks to a Persistent Disk or Hyperdisk volume, or to another instance.