To add disks to your VMs, choose one of the block storage options Compute Engine offers. Each of the following storage options has unique price and performance characteristics:
- Google Cloud Hyperdisk volumes are network storage for Compute Engine, with configurable performance and volumes that can be dynamically resized. They offer substantially higher performance, flexibility and efficiency compared to Persistent Disk. Hyperdisk Balanced High Availability (Preview) can synchronously replicate data between disks located in two zones, providing protection if a zone becomes unavailable.
- Hyperdisk Storage Pools enable you to purchase Hyperdisk capacity and performance in aggregate and then create disks for your VMs from this pool of storage.
- Persistent Disk volumes provide high-performance and redundant network storage. Each Persistent Disk volume is striped across hundreds of physical disks.
  - By default, VMs use zonal Persistent Disk and store your data on volumes located within a single zone, such as us-west1-c.
  - You can also create regional Persistent Disk volumes, which synchronously replicate data between disks located in two zones and provide protection if a zone becomes unavailable.
- Local SSD disks are physical drives attached directly to the same server as your VM. They can offer better performance, but are ephemeral.
For cost comparisons, see disk pricing. If you are not sure which option to use, the most common solution is to add a Balanced Persistent Disk volume to VMs that use earlier generation machine series, or a Hyperdisk volume to compute instances that use the latest machine series.
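For example, the following is a minimal sketch of creating a Balanced Persistent Disk and attaching it to an existing VM with the gcloud CLI. The disk name, VM name, zone, and size are placeholders.

```sh
# Create a 100 GiB Balanced Persistent Disk in the VM's zone
gcloud compute disks create data-disk-1 \
    --type=pd-balanced \
    --size=100GB \
    --zone=us-central1-a

# Attach the new disk to an existing VM in the same zone
gcloud compute instances attach-disk my-vm \
    --disk=data-disk-1 \
    --zone=us-central1-a
```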
In addition to block storage, Compute Engine offers file and object storage options. To review and compare the storage options, see Review the storage options.
Introduction
By default, each Compute Engine VM has a single boot disk that contains the operating system. The boot disk data is typically stored on a Persistent Disk or Hyperdisk Balanced volume. When your applications require additional storage space, you can provision one or more of the following storage volumes to your VM.
To learn more about each storage option, review the following table:
Feature | Balanced Persistent Disk | SSD Persistent Disk | Standard Persistent Disk | Extreme Persistent Disk | Hyperdisk Balanced | Hyperdisk ML | Hyperdisk Extreme | Hyperdisk Throughput | Local SSDs |
---|---|---|---|---|---|---|---|---|---|
Storage type | Cost-effective and reliable block storage | Fast and reliable block storage | Efficient and reliable block storage | Highest performance Persistent Disk block storage option with customizable IOPS | High performance for demanding workloads with a lower cost | Highest throughput storage optimized for machine learning workloads. | Fastest block storage option with customizable IOPS | Cost-effective and throughput-oriented block storage with customizable throughput | High performance local block storage |
Minimum capacity per disk | Zonal: 10 GiB; Regional: 10 GiB | Zonal: 10 GiB; Regional: 10 GiB | Zonal: 10 GiB; Regional: 200 GiB | 500 GiB | Zonal and regional: 4 GiB | 4 GiB | 64 GiB | 2 TiB | 375 GiB, 3 TiB with Z3 |
Maximum capacity per disk | 64 TiB | 64 TiB | 64 TiB | 64 TiB | 64 TiB | 64 TiB | 64 TiB | 32 TiB | 375 GiB, 3 TiB with Z3 |
Capacity increment | 1 GiB | 1 GiB | 1 GiB | 1 GiB | 1 GiB | 1 GiB | 1 GiB | 1 GiB | Depends on the machine type† |
Maximum capacity per VM | 257 TiB* | 257 TiB* | 257 TiB* | 257 TiB* | 512 TiB* | 512 TiB* | 512 TiB* | 512 TiB* | 36 TiB |
Scope of access | Zone | Zone | Zone | Zone | Zone | Zone | Zone | Zone | Instance |
Data redundancy | Zonal and multi-zonal | Zonal and multi-zonal | Zonal and multi-zonal | Zonal | Zonal and multi-zonal | Zonal | Zonal | Zonal | None |
Encryption at rest | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Custom encryption keys | Yes | Yes | Yes | Yes | Yes‡ | Yes | Yes | Yes | No |
In addition to the storage options that Google Cloud provides, you can deploy alternative storage solutions on your VMs.
- Create a file server or distributed file system on Compute Engine to use as a network file system with NFSv3 and SMB3 capabilities.
- Mount a RAM disk within the VM memory to create a block storage volume with high throughput and low latency.
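For example, a RAM disk can be mounted inside the VM with tmpfs. This is a minimal sketch; the mount point and size are placeholders, and the contents are lost when the VM stops or reboots.

```sh
# Create a mount point and mount a 4 GiB tmpfs RAM disk
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk
```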
Block storage resources have different performance characteristics. Consider your storage size and performance requirements when determining the correct block storage type for your VMs.
For information about performance limits, see Block storage performance.
Persistent Disk
Persistent Disk volumes are durable network storage devices that your virtual machine (VM) instances can access like physical disks in a desktop or a server. The data on each Persistent Disk volume is distributed across several physical disks. Compute Engine manages the physical disks and the data distribution for you to ensure redundancy and optimal performance.
Persistent Disk volumes are located independently from your VM, so you can detach or move Persistent Disk volumes to keep your data even after you delete your VMs. Persistent Disk performance scales automatically with size, so you can resize your existing Persistent Disk volumes or add more Persistent Disk volumes to a VM to meet your performance and storage space requirements.
Persistent Disk types
When you configure a persistent disk, you can select one of the following disk types:
- Balanced persistent disks (`pd-balanced`)
  - An alternative to performance (pd-ssd) persistent disks.
  - Balance of performance and cost. For most VM shapes, except very large ones, these disks have the same maximum IOPS as SSD persistent disks and lower IOPS per GiB. This disk type offers performance levels suitable for most general-purpose applications at a price point between that of standard and performance (pd-ssd) persistent disks.
  - Backed by solid-state drives (SSD).
- Performance (SSD) persistent disks (`pd-ssd`)
  - Suitable for enterprise applications and high-performance databases that require lower latency and more IOPS than standard persistent disks provide.
  - Backed by solid-state drives (SSD).
- Standard persistent disks (`pd-standard`)
  - Suitable for large data processing workloads that primarily use sequential I/Os.
  - Backed by standard hard disk drives (HDD).
- Extreme persistent disks (`pd-extreme`)
  - Offer consistently high performance for both random access workloads and bulk throughput.
  - Designed for high-end database workloads.
  - Allow you to provision the target IOPS.
  - Backed by solid-state drives (SSD).
  - Available with a limited number of machine types.
If you create a disk in the Google Cloud console, the default disk type is `pd-balanced`. If you create a disk using the gcloud CLI or the Compute Engine API, the default disk type is `pd-standard`.
For information about machine type support, see the Machine series comparison.
Durability of Persistent Disk
Disk durability represents the probability of data loss, by design, for a typical disk in a typical year, using a set of assumptions about hardware failures, the likelihood of catastrophic events, isolation practices and engineering processes in Google data centers, and the internal encodings used by each disk type. Persistent Disk data loss events are extremely rare and have historically been the result of coordinated hardware failures, software bugs, or a combination of the two. Google also takes many steps to mitigate the industry-wide risk of silent data corruption. Human error by a Google Cloud customer, such as when a customer accidentally deletes a disk, is outside the scope of Persistent Disk durability.
There is a very small risk of data loss occurring with a regional persistent disk due to its internal data encodings and replication. Regional persistent disks provide twice as many replicas as zonal Persistent Disk, with their replicas distributed between two zones in the same region, so they provide high availability and can be used for disaster recovery if an entire data center is lost and cannot be recovered (although that has never happened). The additional replicas in a second zone can be accessed immediately if a primary zone becomes unavailable during a long outage.
Note that durability is in the aggregate for each disk type, and does not represent a financially-backed service level agreement (SLA).
The table below shows durability for each disk type's design. 99.999% durability means that with 1,000 disks, you would likely go a hundred years without losing a single one.
Zonal standard Persistent Disk | Zonal balanced Persistent Disk | Zonal SSD Persistent Disk | Zonal extreme Persistent Disk | Regional standard Persistent Disk | Regional balanced Persistent Disk | Regional SSD Persistent Disk |
---|---|---|---|---|---|---|
Better than 99.99% | Better than 99.999% | Better than 99.999% | Better than 99.9999% | Better than 99.999% | Better than 99.9999% | Better than 99.9999% |
Zonal Persistent Disk
Ease of use
Compute Engine handles most disk management tasks for you so that you do not need to deal with partitioning, redundant disk arrays, or subvolume management. Generally, you don't need to create larger logical volumes, but you can extend your secondary attached Persistent Disk capacity to 257 TiB per VM and apply these practices to your Persistent Disk volumes if you want. You can save time and get the best performance if you format your Persistent Disk volumes with a single file system and no partition tables.
If you need to separate your data into multiple unique volumes, create additional disks rather than dividing your existing disks into multiple partitions.
When you require additional space on your Persistent Disk volumes, resize your disks rather than repartitioning and reformatting.
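For example, the following sketch formats a newly attached disk with a single ext4 file system and no partition table, mounts it, and later grows the file system after the disk is resized. The device name /dev/sdb and the mount point are assumptions; check `lsblk` for the actual device name on your VM.

```sh
# Format the whole device with a single ext4 file system (no partition table)
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb

# Mount the disk
sudo mkdir -p /mnt/disks/data
sudo mount -o discard,defaults /dev/sdb /mnt/disks/data

# After increasing the disk size in Compute Engine, grow the file system online
sudo resize2fs /dev/sdb
```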
Performance
Persistent Disk performance is predictable and scales linearly with provisioned capacity until the limits for a VM's provisioned vCPUs are reached. For more information about performance scaling limits and optimization, see Configure disks to meet performance requirements.
Standard Persistent Disk volumes are efficient and economical for handling sequential read/write operations, but they aren't optimized to handle high rates of random input/output operations per second (IOPS). If your apps require high rates of random IOPS, use SSD or extreme Persistent Disk. SSD Persistent Disk is designed for single-digit millisecond latencies. Observed latency is application specific.
Compute Engine optimizes performance and scaling on Persistent Disk volumes automatically. You don't need to stripe multiple disks together or pre-warm disks to get the best performance. When you need more disk space or better performance, resize your disks and possibly add more vCPUs to add more storage space, throughput, and IOPS. Persistent Disk performance is based on the total Persistent Disk capacity attached to a VM and the number of vCPUs that the VM has.
For boot devices, you can reduce costs by using a standard Persistent Disk. Small, 10 GiB Persistent Disk volumes can work for basic boot and package management use cases. However, to ensure consistent performance for more general use of the boot device, use a balanced Persistent Disk as your boot disk.
Each Persistent Disk write operation contributes to the cumulative network egress traffic for your VM. This means that Persistent Disk write operations are capped by the network egress cap for your VM.
Reliability
Persistent Disk has built-in redundancy to protect your data against equipment failure and to ensure data availability through datacenter maintenance events. Checksums are calculated for all Persistent Disk operations, so we can ensure that what you read is what you wrote.
Additionally, you can create snapshots of Persistent Disk to protect against data loss due to user error. Snapshots are incremental, and take only minutes to create even if you snapshot disks that are attached to running VMs.
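For example, a snapshot of an attached disk can be created with a single gcloud command; the disk name, zone, and snapshot name below are placeholders.

```sh
# Create an incremental snapshot of a disk while it stays attached to a running VM
gcloud compute disks snapshot data-disk-1 \
    --zone=us-central1-a \
    --snapshot-names=data-disk-1-snapshot
```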
Multi-writer mode
You can attach an SSD Persistent Disk in multi-writer mode to up to two N2 VMs simultaneously so that both VMs can read and write to the disk.
Persistent Disk in multi-writer mode provides a shared block storage capability and presents an infrastructural foundation for building highly-available shared file systems and databases. These specialized file systems and databases should be designed to work with shared block storage and handle cache coherence between VMs by using tools such as SCSI Persistent Reservations.
However, Persistent Disk with multi-writer mode should generally not be used directly and you should be aware that many file systems such as EXT4, XFS, and NTFS are not designed to be used with shared block storage. For more information about the best practices when sharing Persistent Disk between VMs, see Best practices.
If you require a fully managed file storage, you can mount a Filestore file share on your Compute Engine VMs.
To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk and specify the `--multi-writer` flag in the gcloud CLI or the `multiWriter` property in the Compute Engine API. For more information, see Share Persistent Disk volumes between VMs.
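A minimal sketch of creating an SSD Persistent Disk in multi-writer mode with the gcloud CLI; the names and zone are placeholders, and depending on your gcloud CLI version the flag might be available only in the beta track.

```sh
# Create a 100 GiB SSD Persistent Disk that up to two N2 VMs can write to simultaneously
# (use "gcloud beta compute disks create" if the flag isn't available in your CLI version)
gcloud compute disks create shared-disk-1 \
    --type=pd-ssd \
    --size=100GB \
    --multi-writer \
    --zone=us-central1-a
```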
Persistent Disk encryption
Compute Engine automatically encrypts your data before it travels outside of your VM to the Persistent Disk storage space. Each Persistent Disk remains encrypted either with system-defined keys or with customer-supplied keys. Google distributes Persistent Disk data across multiple physical disks in a manner that users do not control.
When you delete a Persistent Disk volume, Google discards the cipher keys, rendering the data irretrievable. This process is irreversible.
If you want to control the encryption keys that are used to encrypt your data, create your disks with your own encryption keys.
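For example, a disk can be created with a customer-managed key from Cloud KMS, or with a customer-supplied key file. The key path and file name below are placeholders.

```sh
# Create a disk encrypted with a customer-managed Cloud KMS key
gcloud compute disks create encrypted-disk-1 \
    --type=pd-balanced \
    --size=100GB \
    --zone=us-central1-a \
    --kms-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key

# Alternatively, supply your own key material from a local key file:
# gcloud compute disks create encrypted-disk-2 ... --csek-key-file=key-file.json
```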
Restrictions
- You cannot attach a Persistent Disk volume to a VM in another project.
- You can attach a balanced Persistent Disk to a maximum of 10 VMs in read-only mode.
- For custom machine types or predefined machine types with a minimum of 1 vCPU, you can attach up to 128 Persistent Disk volumes.
- Each Persistent Disk volume can be up to 64 TiB in size, so there is no need to manage arrays of disks to create large logical volumes. Each VM can attach only a limited amount of total Persistent Disk space and a limited number of individual Persistent Disk volumes. Predefined machine types and custom machine types have the same Persistent Disk limits.
- Most VMs can have up to 128 Persistent Disk volumes and up to 257 TiB of total disk space attached. Total disk space for a VM includes the size of the boot disk.
- Shared-core machine types are limited to 16 Persistent Disk volumes and 3 TiB of total Persistent Disk space.
- Creating logical volumes larger than 64 TiB might require special consideration. For more information about larger logical volume performance, see logical volume size.
Regional Persistent Disk
Regional Persistent Disk volumes have storage qualities that are similar to zonal Persistent Disk. However, regional Persistent Disk volumes provide durable storage and replication of data between two zones in the same region.
About synchronous disk replication
When you create a new Persistent Disk, you can either create the disk in one zone, or replicate it across two zones within the same region.
For example, if you create one disk in a zone, such as in us-west1-a, you have one copy of the disk. This is referred to as a zonal disk. You can increase the disk's availability by storing another copy of the disk in a different zone within the region, such as in us-west1-b.
Persistent Disk volumes replicated across two zones in the same region are called regional Persistent Disk. You can also use Hyperdisk Balanced High Availability for cross-zonal synchronous replication of Google Cloud Hyperdisk.
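For example, a regional Persistent Disk replicated across two zones can be created as follows; the disk name, region, zones, and size are placeholders.

```sh
# Create a 200 GiB balanced Persistent Disk replicated across two zones in us-west1
gcloud compute disks create regional-disk-1 \
    --type=pd-balanced \
    --size=200GB \
    --region=us-west1 \
    --replica-zones=us-west1-a,us-west1-b
```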
It is unlikely for a region to fail altogether, but zonal failures can happen. Replicating within the region to different zones, as shown in the following image, helps with availability and reduces disk latency. If both replication zones fail, it is considered a region-wide failure.
Figure: A disk replicated across two zones in the same region.
In the replicated scenario, the data is available in the local zone (us-west1-a), which is the zone the virtual machine (VM) is running in. Then, the data is replicated to another zone (us-west1-b). One of the zones must be the same zone that the VM is running in.
If a zonal outage occurs, you can usually fail over your workload running on Regional Persistent Disk to another zone. To learn more, see Regional Persistent Disk failover.
Design considerations for Regional Persistent Disk
If you are designing robust systems or high availability services on Compute Engine, use Regional Persistent Disk combined with other best practices such as backing up your data using snapshots. Regional Persistent Disk volumes are also designed to work with regional managed instance groups.
Performance
Regional Persistent Disk volumes are designed for workloads that require a lower Recovery Point Objective (RPO) and Recovery Time Objective (RTO) compared to using Persistent Disk snapshots.
Regional Persistent Disk volumes are an option when write performance is less critical than data redundancy across multiple zones.
Like zonal Persistent Disk, Regional Persistent Disk can achieve greater IOPS and throughput performance on VMs with a greater number of vCPUs. For more information about this and other limitations, see Configure disks to meet performance requirements.
When you need more disk space or better performance, you can resize your regional disks to add more storage space, throughput, and IOPS.
Reliability
Compute Engine replicates data of your regional Persistent Disk to the zones you selected when you created your disks. The data of each replica is spread across multiple physical machines within the zone to ensure redundancy.
Similar to zonal Persistent Disk, you can create snapshots of Persistent Disk to protect against data loss due to user error. Snapshots are incremental, and take only minutes to create even if you snapshot disks that are attached to running VMs.
Limitations
- You can attach regional Persistent Disk only to VMs that use E2, N1, N2, and N2D machine types.
- You can attach Hyperdisk Balanced High Availability only to supported machine types.
- You cannot create a regional Persistent Disk from an image, or from a disk that was created from an image.
- When using read-only mode, you can attach a regional balanced Persistent Disk to a maximum of 10 VM instances.
- The minimum size of a regional standard Persistent Disk is 200 GiB.
- You can only increase the size of a regional Persistent Disk or Hyperdisk Balanced High Availability volume; you can't decrease its size.
- Regional Persistent Disk and Hyperdisk Balanced High Availability volumes have different performance characteristics than their corresponding zonal disks. For more information, see Block storage performance.
- You can't use a Hyperdisk Balanced High Availability volume that's in multi-writer mode as a boot disk.
- If you create a replicated disk by cloning a zonal disk, then the two zonal replicas aren't fully in sync at the time of creation. After creation, you can use the regional disk clone within 3 minutes, on average. However, you might need to wait for tens of minutes before the disk reaches a fully replicated state and the recovery point objective (RPO) is close to zero. Learn how to check if your replicated disk is fully replicated.
Google Cloud Hyperdisk
Google Cloud Hyperdisk is Google's next generation block storage. By offloading and dynamically scaling out storage processing, it decouples storage performance from the VM type and size. Hyperdisk offers substantially higher performance, flexibility and efficiency when compared to Persistent Disk.
Hyperdisk Balanced
Hyperdisk Balanced for Compute Engine is a good fit for a wide range of use cases such as line of business (LOB) applications, web applications, and medium-tier databases not requiring the performance of Hyperdisk Extreme. You can also use Hyperdisk Balanced for applications where multiple VMs in the same zone simultaneously require write access to the same disk.
Hyperdisk Balanced volumes let you dynamically tune the capacity, IOPS, and throughput for your workloads.
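A minimal sketch of creating a Hyperdisk Balanced volume with explicitly provisioned IOPS and throughput; the name, zone, and values are placeholders, and the allowed ranges depend on the disk size and machine type.

```sh
# Create a Hyperdisk Balanced volume with provisioned performance
gcloud compute disks create hdb-disk-1 \
    --type=hyperdisk-balanced \
    --size=500GB \
    --provisioned-iops=5000 \
    --provisioned-throughput=250 \
    --zone=us-central1-a
```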
Hyperdisk ML
Workloads using accelerators to train or serve machine learning models should use Hyperdisk ML. Hyperdisk ML volumes offer the fastest customizable throughput and are best for models larger than 20 GiB. Hyperdisk ML also supports simultaneous read access to the same volume from multiple VMs.
You can dynamically tune the capacity and throughput of a Hyperdisk ML volume.
Hyperdisk Extreme
Hyperdisk Extreme offers the fastest block storage available. It is suitable for high-end workloads that need the highest throughput and IOPS.
Hyperdisk Extreme volumes let you dynamically tune the capacity and IOPS for your workloads.
Hyperdisk Throughput
Hyperdisk Throughput is a good fit for scale-out analytics including Hadoop and Kafka, data drives for cost sensitive apps, and cold storage.
Hyperdisk Throughput volumes let you dynamically tune the capacity and throughput for your workloads. You can change the provisioned throughput level without downtime or interruption to your workloads.
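For example, the provisioned throughput of an existing Hyperdisk Throughput volume might be changed in place like this; the disk name, zone, and value are placeholders, and the allowed throughput depends on the disk's size.

```sh
# Raise the provisioned throughput of an existing Hyperdisk Throughput volume
gcloud compute disks update ht-disk-1 \
    --zone=us-central1-a \
    --provisioned-throughput=180
```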
Hyperdisk Balanced High Availability (Preview)
Hyperdisk Balanced High Availability enables synchronous replication on third generation or later machine series. Hyperdisk Balanced High Availability enables data resilience with RPO=0 replication across two zones, similar to Regional Persistent Disk.
Hyperdisk Balanced High Availability volumes let you dynamically tune the capacity, IOPS, and throughput for your workloads. You can change the provisioned performance and capacity levels without downtime or interruption to your workloads. Use Hyperdisk Balanced High Availability when different VMs in the same region simultaneously require write access to the same disk.
Hyperdisk volumes are created and managed like Persistent Disk, with the additional ability to set the provisioned IOPS or throughput level and change that value at any time. There is no direct migration path from Persistent Disk to Hyperdisk. Instead, you can create a snapshot and restore the snapshot to a new Hyperdisk volume.
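For example, an existing Persistent Disk can be moved to Hyperdisk by snapshotting it and restoring the snapshot to a new Hyperdisk volume; all names below are placeholders.

```sh
# Snapshot the existing Persistent Disk
gcloud compute disks snapshot pd-disk-1 \
    --zone=us-central1-a \
    --snapshot-names=pd-disk-1-snap

# Restore the snapshot to a new Hyperdisk Balanced volume
gcloud compute disks create hdb-from-pd \
    --source-snapshot=pd-disk-1-snap \
    --type=hyperdisk-balanced \
    --zone=us-central1-a
```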
For more information about Hyperdisk, see About Hyperdisk.
Durability of Hyperdisk
Disk durability represents the probability of data loss, by design, for a typical disk in a typical year. Durability is calculated using a set of assumptions about hardware failures, such as:
- The likelihood of catastrophic events
- Isolation practices
- Engineering processes in Google data centers
- The internal encodings used by each disk type
Hyperdisk data loss events are extremely rare. Google also takes many steps to mitigate the industry-wide risk of silent data corruption.
Human error by a Google Cloud customer, such as when a customer accidentally deletes a disk, is outside the scope of Hyperdisk durability.
The table below shows durability for each disk type's design. 99.999% durability means that with 1,000 disks, you would likely go a hundred years without losing a single one.
Hyperdisk Balanced | Hyperdisk Extreme | Hyperdisk ML | Hyperdisk Throughput |
---|---|---|---|
Better than 99.999% | Better than 99.9999% | Better than 99.999% | Better than 99.999% |
Hyperdisk encryption
Compute Engine automatically encrypts your data upon writing to a Hyperdisk volume. You can also customize the encryption with Customer-managed encryption keys.
Hyperdisk Balanced High Availability
Hyperdisk Balanced High Availability disks provide durable storage and replication of data between two zones in the same region. Hyperdisk Balanced High Availability volumes have storage limits that are similar to non-replicated Hyperdisk Balanced disks.
If you are designing robust systems or high availability services on Compute Engine, use Hyperdisk Balanced High Availability disks combined with other best practices such as backing up your data using snapshots. Hyperdisk Balanced High Availability disks are also designed to work with regional managed instance groups.
In the unlikely event of a zonal outage, you can usually fail over your workload running on Hyperdisk Balanced High Availability disks to another zone by using the `--force-attach` flag. The `--force-attach` flag lets you attach the Hyperdisk Balanced High Availability disk to a standby instance even if the disk can't be detached from the original compute instance due to its unavailability. To learn more, see Synchronously replicated disk failover.
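A sketch of the failover attach, assuming a standby VM in the healthy zone and placeholder names; because the disk is replicated (regional) rather than zonal, the attach also specifies the regional disk scope.

```sh
# Force-attach the replicated disk to a standby VM in the surviving zone
gcloud compute instances attach-disk standby-vm \
    --disk=ha-disk-1 \
    --disk-scope=regional \
    --zone=us-central1-b \
    --force-attach
```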
Performance
Hyperdisk Balanced High Availability disks are designed for workloads that require a lower Recovery Point Objective (RPO) and Recovery Time Objective (RTO) compared to using Hyperdisk snapshots for recovery.
Hyperdisk Balanced High Availability disks are an option when write performance is less critical than data redundancy across multiple zones.
Hyperdisk Balanced High Availability disks have customizable IOPS and throughput performance. For more information about the performance and limitations of Hyperdisk Balanced High Availability, see About Hyperdisk.
When you need more disk space or better performance, you can modify the Hyperdisk Balanced High Availability disks to add more storage space, throughput, and IOPS.
Reliability
Compute Engine replicates data of your Hyperdisk Balanced High Availability disks to the zones you specified when you created the disks. The data of each replica is spread across multiple physical machines within the zone to ensure redundancy.
Similar to Hyperdisk, you can create snapshots of Hyperdisk Balanced High Availability disks to protect against data loss due to user error. Snapshots are incremental, and take only minutes to create even if you snapshot disks that are attached to running VMs.
Sharing Hyperdisk volumes between VMs
For certain Hyperdisk volumes, you can enable concurrent access to the volume from multiple VMs by enabling disk sharing. Disk sharing is useful for a variety of use cases, such as building highly available applications or large machine learning workloads where multiple VMs need access to the same model or training data.
For more information, see Share a disk between VMs.
Hyperdisk Storage Pools
Hyperdisk Storage Pools make it easier to lower your block storage Total Cost of Ownership (TCO) and simplify block storage management. With Hyperdisk Storage Pools, you can share a pool of capacity and performance across a maximum of 1,000 disks in a single project. Because storage pools offer thin-provisioning and data reduction, you can achieve higher efficiency. Storage pools simplify migrating your on-premises SAN to the cloud, and also make it easier to provide your workloads with the capacity and performance that they need.
You create a storage pool with the estimated capacity and performance for all workloads in a project in a specific zone. You then create disks in this storage pool and attach the disks to existing VMs. You can also create a disk in the storage pool as part of creating a new VM. Each storage pool contains one type of disk, such as Hyperdisk Throughput. There are two types of Hyperdisk Storage Pools:
- Hyperdisk Balanced Storage Pool: for general purpose workloads that are best served by Hyperdisk Balanced disks
- Hyperdisk Throughput Storage Pool: for streaming, cold data, and analytics workloads that are best served by Hyperdisk Throughput disks
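As a rough sketch, a Hyperdisk Balanced storage pool and a disk inside it might be created as follows. The flag names and values shown are assumptions about the gcloud CLI's storage pool support, so check `gcloud compute storage-pools create --help` for the exact syntax.

```sh
# Create a zonal Hyperdisk Balanced storage pool with Advanced provisioning
# (flag names are assumptions; verify against the gcloud CLI reference)
gcloud compute storage-pools create pool-1 \
    --zone=us-central1-a \
    --storage-pool-type=hyperdisk-balanced \
    --provisioned-capacity=10TB \
    --provisioned-iops=10000 \
    --provisioned-throughput=1024 \
    --capacity-provisioning-type=advanced \
    --performance-provisioning-type=advanced

# Create a disk inside the pool and attach it to an existing VM
gcloud compute disks create pool-disk-1 \
    --zone=us-central1-a \
    --type=hyperdisk-balanced \
    --size=1TB \
    --storage-pool=pool-1

gcloud compute instances attach-disk my-vm \
    --disk=pool-disk-1 \
    --zone=us-central1-a
```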
Capacity provisioning options
Hyperdisk Storage Pool capacity can be provisioned in one of two ways:
- Standard capacity provisioning
- With Standard capacity provisioning, you create disks in the storage pool until the total size of all disks reaches the storage pool's provisioned capacity. The disks in a storage pool with Standard capacity provisioning consume capacity similarly to non-pool disks, where capacity is consumed when you create the disks.
- Advanced capacity provisioning
Advanced capacity provisioning lets you share a pool of thin provisioned and data-reduced storage capacity across all disks in a storage pool. You are billed for the storage pool provisioned capacity.
You can provision up to 500% of the storage pool provisioned capacity to disks in an Advanced capacity storage pool. Only the amount of data written to a disk in the storage pool consumes storage pool capacity. Automatic data reduction can reduce consumption of storage pool capacity even further.
If the capacity utilization of an Advanced capacity storage pool reaches 80% of the provisioned capacity, Compute Engine attempts to automatically add capacity to the storage pool to prevent errors related to insufficient capacity.
Example
Assume that you have a storage pool with 10 TiB of provisioned capacity.
With Standard capacity provisioning:
- You can provision up to 10 TiB of aggregate Hyperdisk capacity when creating disks in the storage pool. You are charged for the 10 TiB of storage pool provisioned capacity.
- If you create a single disk in the storage pool that is 5 TiB in size and write 2 TiB to the disk, the used capacity of the storage pool is 5 TiB.
With Advanced capacity provisioning:
- You can provision up to 50 TiB of aggregate Hyperdisk capacity when creating disks in the storage pool. You are charged for the 10 TiB of storage pool provisioned capacity.
- If you create a single disk in the storage pool that is 5 TiB in size, write 3 TiB of data to the disk, and data reduction reduces the amount of data written to 2 TiB, the used capacity of the storage pool is 2 TiB.
Performance provisioning options
Hyperdisk Storage Pool performance can be provisioned in one of two ways:
- Standard performance provisioning
Standard performance is the best option for the following types of workloads:
- Workloads that can't succeed if performance is limited by storage pool resources
- Workloads where the disks in the storage pool are likely to have correlated performance spikes, for example, data disks for databases that are at peak utilization every morning.
Standard performance storage pools don't benefit from thin-provisioning and won't materially lower your performance TCO. With Standard performance provisioning, you create disks in the storage pool until the total provisioned IOPS or throughput of all disks reaches the storage pool's provisioned amount. The disks in a storage pool with Standard performance provisioning consume IOPS and throughput similarly to non-pool disks, where you provision the amount of IOPS and throughput when you create the disks. You are billed for the total IOPS and throughput provisioned for the storage pool.
In a Hyperdisk Balanced Storage Pool with Standard performance, the first 3,000 IOPS and 140 MiBps of throughput of each disk in the storage pool (the baseline) don't consume storage pool resources. When you create disks in the storage pool, any IOPS and throughput in excess of the baseline values consume IOPS and throughput from the storage pool.
Disks created in a Standard performance storage pool don't share performance resources with the rest of the storage pool. The aggregate amount of performance of all disks in the storage pool can't exceed the total provisioned IOPS or throughput of the storage pool.
- Advanced performance provisioning
Storage pools with Advanced performance provisioning leverage thin-provisioning to increase your performance efficiency and lower your block storage performance TCO. Advanced performance provisioning lets you share a pool of provisioned performance across all disks in a storage pool. The storage pool dynamically allocates performance resources as the disks in the storage pool read and write data. Only the amount of IOPS and throughput used by a disk in the storage pool consumes storage pool IOPS and throughput. Because Advanced performance storage pools are thinly provisioned, you can allocate more IOPS or throughput to the disks in the storage pool than you have provisioned for the storage pool—up to 500% of the IOPS or throughput provisioned for the storage pool. Similar to Standard performance, you are billed for the storage pool provisioned IOPS and throughput.
In a Hyperdisk Balanced Storage Pool with Advanced performance provisioning, the disks don't have baseline performance. Every read and write operation of a Hyperdisk Balanced disk in the storage pool consumes pool resources.
When the aggregate performance utilization of all the disks in the storage pool reaches the total amount of performance provisioned for the storage pool, the disks can face performance contention. As a result, Advanced performance provisioning is best suited for workloads that don't have highly correlated peak usage times. If your workloads all peak at the same time, the Advanced performance storage pool can reach the performance limits of the storage pool, resulting in contention for performance resources.
If contention for performance resources is detected in an Advanced performance storage pool for any disks in the pool, then the auto-grow feature attempts to automatically increase the IOPS or throughput available to disks in the storage pool to prevent performance issues.
Example
Assume that you have a Hyperdisk Balanced Storage Pool with 100,000 provisioned IOPS.
With Standard performance provisioning:
- You can provision up to 100,000 aggregate IOPS when creating Hyperdisk Balanced disks in the storage pool.
- You are charged for the 100,000 IOPS of Hyperdisk Balanced Storage Pool provisioned performance.
Like disks created outside of a storage pool, Hyperdisk Balanced disks in Standard performance storage pools are automatically provisioned with up to 3,000 baseline IOPS and 140 MiB/s of baseline throughput. This baseline performance isn't counted against the provisioned performance for the storage pool. Only when you add disks to the storage pool with provisioned performance that's above the baseline does it count against the provisioned performance for the storage pool, for example:
- A disk provisioned with 3,000 IOPS uses 0 pool IOPS and the pool still has 100,000 provisioned IOPS available for other disks.
- A disk provisioned with 13,000 IOPS uses 10,000 pool IOPS and the pool has 90,000 provisioned IOPS remaining that you can allocate to other disks in the storage pool.
With Advanced performance provisioning:
- You can provision up to 500,000 IOPS of aggregate Hyperdisk performance when creating disks in the storage pool.
- You are charged for 100,000 IOPS provisioned by the storage pool.
- If you create a single disk (`Disk1`) in the storage pool that has 5,000 IOPS, you don't consume any IOPS from the storage pool provisioned IOPS. However, the amount of IOPS that you can provision to new disks created in the storage pool is now 495,000.
- If `Disk1` starts to read and write data, and if it uses its maximum of 5,000 IOPS in a given minute, then 5,000 IOPS is consumed from the storage pool provisioned IOPS. Any other disks that you created in the same storage pool can use an aggregated maximum of 95,000 IOPS in that same minute without running into contention.
Changing Hyperdisk Storage Pool provisioned capacity and performance
You can increase or decrease the provisioned capacity, IOPS, and throughput for your storage pool as your workloads scale. With an Advanced capacity or Advanced performance storage pool, any additional capacity or performance is available to all existing and new disks in the storage pool. Additionally, Compute Engine attempts to automatically modify the storage pool as follows:
- Advanced capacity: When the storage pool reaches 80% of the storage pool provisioned capacity used, Compute Engine attempts to automatically add more capacity to the storage pool.
- Advanced performance: If the storage pool experiences extended contention due to overutilization, Compute Engine attempts to increase the IOPS or throughput of the storage pool.
Additional information about Hyperdisk Storage Pool
For information about using Hyperdisk Storage Pools, use the following links:
- About storage pools
- Create storage pools
- Add disks to VMs using a storage pool
- Manage storage pools
- Review storage pool metrics
Local SSD disks
Local SSD disks are physically attached to the server that hosts your VM. Local SSD disks have higher throughput and lower latency than standard Persistent Disk or SSD Persistent Disk. The data that you store on a Local SSD disk persists only until the VM is stopped or deleted. You can attach multiple Local SSD disks to your VM, depending on the number of vCPUs.
The size of each Local SSD disk is fixed at 375 GiB, except for Z3 VMs, which use Local SSD disks of 3 TiB in size. For more storage, add multiple Local SSD disks to your VM when creating the VM. The maximum number of Local SSD disks you can attach to a VM depends on the machine type and the number of vCPUs in use.
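For example, a VM can be created with two Local SSD disks attached at creation time; the machine type, image, zone, and names below are placeholders.

```sh
# Create a VM with two 375 GiB Local SSD disks attached over NVMe
gcloud compute instances create lssd-vm \
    --machine-type=n2-standard-8 \
    --zone=us-central1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --local-ssd=interface=NVME \
    --local-ssd=interface=NVME
```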
Data persistence on Local SSD disks
Review Local SSD data persistence to learn what events preserve your Local SSD data and what events can cause your Local SSD data to be unrecoverable.
Local SSDs and machine types
You can attach Local SSD disks to most machine types available on Compute Engine, as shown in the Machine series comparison table. However, there are constraints around how many Local SSD disks you can attach based on each machine type. For more information, see Choose a valid number of Local SSD disks.
Capacity limits with Local SSD disks
The maximum Local SSD disk capacity you can have for a VM is:
Machine type | Local SSD disk size | Number of disks | Max capacity |
---|---|---|---|
Z3 | 3 TiB | 12 | 36 TiB |
c3d-standard-360-lssd | 375 GiB | 32 | 12 TiB |
c3d-highmem-360-lssd | 375 GiB | 32 | 12 TiB |
c3-standard-176-lssd | 375 GiB | 32 | 12 TiB |
N1, N2, and N2D | 375 GiB | 24 | 9 TiB |
N1, N2, and N2D | 375 GiB | 16 | 6 TiB |
A3 | 375 GiB | 16 | 6 TiB |
C2, C2D, A2 Standard, M1, and M3 | 375 GiB | 8 | 3 TiB |
A2 Ultra | 375 GiB | 8 | 3 TiB |
Limitations of Local SSD disks
Local SSD has the following limitations:
- To reach the maximum IOPS limits, use a VM with 32 or more vCPUs.
- VMs with shared-core machine types can't attach Local SSD disks.
- You can't attach Local SSD disks to N4, H3, M2, E2, and Tau T2A machine types.
- You can't use customer-supplied encryption keys with Local SSD disks. Compute Engine automatically encrypts your data when it is written to Local SSD storage space.
Performance
Local SSD disks offer very high IOPS and low latency. Unlike Persistent Disk, you must manage the striping on Local SSD disks yourself.
Local SSD performance depends on several factors. For more information, see Local SSD performance and Optimizing Local SSD performance.
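For example, multiple Local SSD disks are commonly combined into a single RAID 0 array inside the VM. This is a sketch using mdadm, assuming two NVMe Local SSD devices at the paths shown; verify the device names with `lsblk` and install mdadm if it isn't present.

```sh
# Install mdadm if needed (Debian/Ubuntu images)
sudo apt-get update && sudo apt-get install -y mdadm

# Combine two Local SSD devices into one RAID 0 array for higher aggregate performance
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 \
    /dev/disk/by-id/google-local-nvme-ssd-0 \
    /dev/disk/by-id/google-local-nvme-ssd-1

# Format and mount the array
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/disks/local-ssd
sudo mount /dev/md0 /mnt/disks/local-ssd
```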
Cloud Storage buckets
Cloud Storage buckets are the most flexible, scalable, and durable storage option for your VMs. If your apps don't require the lower latency of Hyperdisk, Persistent Disks, and Local SSDs, you can store your data in a Cloud Storage bucket.
Connect your VM to a Cloud Storage bucket when latency and throughput aren't a priority and when you must share data easily between multiple VMs or zones.
Properties of Cloud Storage buckets
Review the following sections to understand the behavior and characteristics of Cloud Storage buckets.
Performance
The performance of Cloud Storage buckets depends on the storage class that you select and the location of the bucket relative to your VM.
Using the Cloud Storage Standard storage class in the same location as your VM gives performance that is comparable to Hyperdisk or Persistent Disks but with higher latency and less consistent throughput characteristics. Using the Standard storage class in a dual-region stores your data redundantly across two regions. For optimal performance when using a dual-region, your VMs should be located in one of the regions that is part of the dual-region.
Nearline storage, Coldline storage, and Archive storage classes are primarily for long-term data archival. Unlike the Standard storage class, these classes have minimum storage durations and incur data retrieval costs. Consequently, they are best for long-term storage of data that is accessed infrequently.
Reliability
All Cloud Storage buckets have built-in redundancy to protect your data against equipment failure and to ensure data availability through datacenter maintenance events. Checksums are calculated for all Cloud Storage operations to help ensure that what you read is what you wrote.
Flexibility
Unlike Hyperdisk or Persistent Disks, Cloud Storage buckets aren't restricted to the zone where your VM is located. Additionally, you can read and write data to a bucket from multiple VMs simultaneously. For example, you can configure VMs in multiple zones to read and write data in the same bucket rather than replicate the data to a Hyperdisk or Persistent Disks volume in multiple zones.
Cloud Storage encryption
Compute Engine automatically encrypts your data before it travels outside of your VM to Cloud Storage buckets. You don't need to encrypt files on your VMs before you write them to a bucket.
Just like Persistent Disk volumes, you can encrypt buckets with your own encryption keys.
Writing and reading data from Cloud Storage buckets
Write and read files from Cloud Storage buckets by using the `gcloud storage` command-line tool or a Cloud Storage client library.
gcloud storage
By default, the `gcloud storage` command-line tool is installed on most VMs that use public images. If your VM doesn't have the `gcloud storage` command-line tool, you can install it.
Connect to your Linux VMs or Connect to your Windows VMs using SSH or another connection method.
- In the Google Cloud console, go to the VM instances page.
- In the list of virtual machine instances, click SSH in the row of the instance that you want to connect to.
If you have never used `gcloud storage` on this VM before, use the gcloud CLI to set up credentials by running `gcloud init`. Alternatively, if your VM is configured to use a service account with a Cloud Storage scope, you can skip this step.
Use the `gcloud storage` tool to create a bucket, write data to buckets, and read data from those buckets. To write or read data from a specific bucket, you must have access to the bucket. You can read data from any bucket that is publicly accessible. Optionally, you can also stream data to Cloud Storage.
Client library
If you configured your VM to use a service account with a Cloud Storage scope, you can use the Cloud Storage API to write and read data from Cloud Storage buckets.
- In the Google Cloud console, go to the VM instances page.
- In the list of virtual machine instances, click SSH in the row of the instance that you want to connect to.
Install and configure a client library for your preferred language.
If necessary, follow the code samples to create a Cloud Storage bucket from the VM.
Follow the code samples to write and read data, and include code in your app that writes or reads a file from a Cloud Storage bucket.
What's next
- Add a Hyperdisk volume to your VM.
- Add a Persistent Disk volume to your VM.
- Add a synchronously replicated disk to your VM.
- Create a VM with Local SSD disks.
- Create a file server or distributed file system.
- Review the quotas for disks.
- Mount a RAM disk on your VM.