About Persistent Disk


By default, each Compute Engine instance has a single boot disk that contains the operating system. When your apps require additional storage space, one possible solution is to attach additional Persistent Disk or Hyperdisk volumes to your instance.

Persistent Disk volumes aren't attached to the physical machine hosting the instance. Instead, they are attached to the instance as network block devices. However, your VM instance can access Persistent Disk volumes like physical disks in a desktop or a server. When you read from or write to a Persistent Disk volume, data is transmitted over the network.

The data on each persistent disk is distributed across several physical disks. Compute Engine manages the physical disks and the data distribution for you to ensure redundancy and optimal performance.

Persistent Disk volumes are located independently of your VM instances, so you can detach or move the volumes to keep your data even after you delete your instances. Persistent Disk performance scales automatically with size, so you can resize your existing Persistent Disk volumes or add more Persistent Disk volumes to a VM to meet your performance and storage space requirements.

Add a non-boot disk to your instance when you need reliable and affordable storage with consistent performance characteristics.


257 TiB maximum capacity

Persistent Disk volumes can be up to 64 TiB in size. You can add up to 127 secondary, non-boot zonal Persistent Disk volumes to a VM instance. However, the combined total capacity of all Persistent Disk volumes attached to a single VM can't exceed 257 TiB.
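Creating and attaching a secondary volume is a two-step operation with the gcloud CLI. The instance, disk, and zone names below are placeholders; substitute your own:

```shell
# Create a 500 GiB zonal Persistent Disk (names and zone are illustrative).
gcloud compute disks create my-data-disk \
    --size=500GB \
    --type=pd-ssd \
    --zone=us-central1-a

# Attach it to a running VM as a secondary, non-boot disk.
gcloud compute instances attach-disk my-instance \
    --disk=my-data-disk \
    --zone=us-central1-a
```

After attaching the disk, format and mount it inside the guest OS before use.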

You can create single logical volumes of up to 257 TiB using logical volume management inside your VM. For information about how to ensure maximum performance with large volumes, see Logical volume size.
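As a sketch of that approach, the standard Linux LVM tools can combine several attached volumes into one logical volume. The device names below are hypothetical; the actual names depend on the storage interface and attach order:

```shell
# Combine three attached Persistent Disk volumes into one logical volume.
# Device names (/dev/sdb, /dev/sdc, /dev/sdd) are illustrative.
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd
sudo vgcreate data-vg /dev/sdb /dev/sdc /dev/sdd

# Allocate all free space in the volume group to a single logical volume.
sudo lvcreate --extents 100%FREE --name data-lv data-vg

# Create a filesystem on the new logical volume.
sudo mkfs.ext4 /dev/data-vg/data-lv
```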

Storage interface types

The storage interface is chosen automatically for you when you create your instance or add Persistent Disk volumes to a VM. Tau T2A and third generation VMs (such as M3) use the NVMe interface for Persistent Disk.

Confidential VM instances also use NVMe Persistent Disk. All other Compute Engine machine series use the SCSI disk interface for Persistent Disk.

Most public images include both NVMe and SCSI drivers, along with a kernel whose optimized drivers allow your VM to achieve the best performance using NVMe. Your imported Linux images achieve the best performance with NVMe if they include kernel version 4.14.68 or later.
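On a Linux VM, you can check which interface your disks are using from the device names and loaded kernel modules:

```shell
# NVMe Persistent Disk devices appear as /dev/nvme0n1, /dev/nvme0n2, ...
# SCSI devices appear as /dev/sda, /dev/sdb, ...
ls -l /dev/disk/by-id/

# Check whether the NVMe driver is loaded.
lsmod | grep nvme
```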

To determine if an operating system version supports NVMe, see the operating system details page.

Performance

Block storage resources have different performance characteristics. Consider your storage size and performance requirements to help you determine the correct block storage type for your instances. Persistent Disk performance is predictable and scales linearly with provisioned capacity until the limits for an instance's provisioned vCPUs are reached. For more information about Persistent Disk performance limits, see Performance limits for Persistent Disk.
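Because performance scales linearly, provisioned throughput and IOPS are a simple multiple of capacity. As an illustration only (the per-GiB rate below is assumed for the example; check the current Persistent Disk performance documentation for actual rates per disk type):

```shell
# If a disk type provisions 30 read IOPS per GiB (illustrative rate),
# a 500 GiB volume provides 500 * 30 provisioned read IOPS.
SIZE_GIB=500
IOPS_PER_GIB=30
PROVISIONED_IOPS=$((SIZE_GIB * IOPS_PER_GIB))
echo "${PROVISIONED_IOPS}"
```

Scaling stops once the per-instance limits tied to the VM's vCPU count are reached, so adding capacity beyond that point does not add performance.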

Persistent Disk and Colossus

Persistent Disk is designed to run in tandem with Colossus, Google's distributed file system. Persistent Disk drivers automatically encrypt data on the VM before it's transmitted from the VM onto the network. Then, Colossus persists the data. When Colossus reads the data, the driver decrypts the incoming data.


Persistent Disk volumes use Colossus as the storage backend.

Having disks as a service is useful in a number of cases, for example:

  • Resizing disks becomes easier: you can increase a disk's size without stopping the instance first.
  • Attaching and detaching disks becomes easier when disks and VMs don't have to share the same lifecycle or be co-located. It's possible to stop a VM and use its Persistent Disk boot disk to boot another VM.
  • High availability features like replication become easier because the disk driver can hide replication details and provide automatic write-time replication.
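As a sketch of the first point, an online resize is a single gcloud command followed by a filesystem grow inside the guest. The disk name, zone, and device path are placeholders:

```shell
# Grow the disk while the VM keeps running (names are illustrative).
gcloud compute disks resize my-data-disk \
    --size=1TB \
    --zone=us-central1-a

# Inside the VM, expand an ext4 filesystem that sits directly on the disk;
# no unmount is required.
sudo resize2fs /dev/sdb
```

If the filesystem sits on a partition rather than on the whole disk, grow the partition first (for example with growpart) before running resize2fs.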

What's next