Manage auto-tiering

This page provides details on managing auto-tiering.

About auto-tiering

Auto-tiering reduces the overall storage costs of volume usage. NetApp Volumes automatically moves infrequently used data to a slower, cost-efficient storage tier.

You can enable auto-tiering on a per-volume basis. When auto-tiering is enabled for a volume, NetApp Volumes identifies infrequently used data and moves that cold data from the primary hot tier to a cheaper but slower cold tier. Your active data stays on the hot tier.

As a user, you create a volume sized to hold all of your data. The placement of data on the hot or cold tier is managed automatically by the volume and is transparent to applications and users accessing the volume over NFS or SMB. You always see the full dataset.

NetApp Volumes determines whether to move cold data to the hot tier based on the access pattern. Reading the cold data with sequential reads, such as those associated with data copy, file-based backups, indexing, and antivirus scans, leaves the data on the cold tier. Reading the cold data with random reads moves the data back to the hot tier. This data stays in the hot tier until it cools off again.

Note that regularly reading data in non-sequential ways might prevent it from getting cold; antivirus full scans or file-based full backups might have this effect, depending on their data access patterns.

Auto-tiering can be enabled for Premium and Extreme pools during storage pool creation or on existing pools. Once auto-tiering is enabled at the pool level, you must explicitly enable it for each volume. After auto-tiering is enabled on the pool or a volume, it can't be disabled.

The auto-tiering feature for the Flex service level is available only for custom-performance Flex zonal pools. To use auto-tiering, you must enable it during pool creation; it can't be enabled afterward. For pools with auto-tiering enabled, it's automatically enabled for all volumes and can't be disabled.

You can configure how long data remains hot before it is considered cold by using a volume-specific Cooling threshold parameter. The cooling threshold can be set between 2 and 183 days for the Flex, Premium, and Extreme service levels. The default cooling threshold is 31 days. Data that has cooled beyond the cooling threshold moves to the cold tier once a day.

Auto-tiering considerations

The following considerations apply when you use auto-tiering:

  • Auto-tiering is in Preview for the Flex service level and enabling it on the pool automatically enables it for all the volumes in the pool. You can pause auto-tiering for any individual volumes in the pool that shouldn't use it.

    For Premium and Extreme service levels, auto-tiering is generally available (GA). In pools where auto-tiering is enabled, enabling it for individual volumes is optional.

  • When you use volume replication, the auto-tiering capability is controlled independently for the source and destination volume.

  • On storage pools with auto-tiering enabled, cold block tracking occurs even on volumes that don't have auto-tiering enabled. If you later enable auto-tiering on these volumes, the already-cold data immediately becomes eligible for tiering and might move to the cold tier the next day.

Performance and hot tier sizing

As a volume administrator, you must manage the auto-tiering parameters to achieve your capacity, performance, and cost objectives. This sizing workflow depends on the service level of the storage pool you want to auto-tier.

Premium or Extreme type volumes

The performance of an auto-tiered volume depends on the sizes of the hot and cold tiers. Each GiB of hot tier capacity adds 64 KiBps (Premium service level) or 128 KiBps (Extreme service level) of throughput capability to the volume, while each GiB of cold tier capacity adds only 2 KiBps, with increased access latency. If you need more performance, add capacity to the volume; empty space in the volume counts as hot tier capacity.
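
The per-GiB numbers above translate into a simple throughput estimate. The following sketch uses hypothetical tier sizes; only the per-GiB rates come from this page:

```shell
# Estimate the throughput of an auto-tiered Premium volume.
# Tier sizes below are hypothetical examples.
hot_gib=1024                 # hot tier capacity in GiB (empty space counts as hot)
cold_gib=4096                # data on the cold tier in GiB
premium_kibps_per_gib=64     # Premium: 64 KiBps per GiB of hot tier
cold_kibps_per_gib=2         # cold tier: 2 KiBps per GiB

throughput_kibps=$(( hot_gib * premium_kibps_per_gib + cold_gib * cold_kibps_per_gib ))
echo "Estimated volume throughput: ${throughput_kibps} KiBps"
```

For an Extreme volume, swap in 128 KiBps per GiB of hot tier capacity.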

The hot tier consistently stores all inode metadata required for file and folder listing, as well as timestamp information.

Flex type volumes

Auto-tiering is available in Preview only for custom-performance Flex zonal pools. You can enable it during pool creation only. All volumes in the pool share the performance capability set at the pool level.

When you create a storage pool as an administrator, you must specify the hot tier size as an additional parameter. The minimum hot tier size is 1 TiB. Ensure that the hot tier has sufficient capacity to store your active data. If the hot tier is too small and data is written faster than it cools and moves to the cold tier, the hot tier can fill up, causing your applications to receive out-of-space write errors. Proper sizing and continuous monitoring are therefore essential.

You can monitor the hot tier size manually or enable hot tier auto-increase on the pool. When enabled, the hot tier automatically grows by 10% when it fills up, up to the storage pool's total size. Note that a pool's hot tier size can't be decreased, so avoid workloads that migrate large cold datasets into the pool; otherwise, your hot tier keeps growing and reduces your auto-tiering cost savings.
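
The auto-increase behavior can be sketched as follows: grow by 10% when full, capped at the pool's total size, and never shrink. All sizes below are hypothetical:

```shell
# Sketch of hot tier auto-increase: +10% when full, capped at pool size.
# Sizes in GiB are hypothetical examples.
pool_size=10240
hot_tier=2048

grow_hot_tier() {
  local grown=$(( hot_tier + hot_tier / 10 ))   # +10% (integer arithmetic)
  if [ "$grown" -gt "$pool_size" ]; then
    grown=$pool_size                            # capped at the pool's total size
  fi
  hot_tier=$grown                               # never decreases
}

grow_hot_tier
echo "Hot tier after one auto-increase: ${hot_tier} GiB"
```

Because the hot tier can only grow, repeated auto-increases converge on the pool's total size, at which point the tiering cost savings are gone.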

To prevent the hot tier from filling up with large cold data writes, especially during data migration, you can enable a hot tier bypass flag that directs writes straight to the cold tier. Once the migration is finished, disable the flag to resume normal operation, where writes go to the hot tier first. Active data that bypassed the hot tier during migration moves back to the hot tier upon the first random access.

The hot tier consistently stores all inode metadata required for file and folder listing, as well as timestamp information.

Choose the right cooling threshold

The data block access frequency is monitored in 4 KiB intervals. Each day without access to a block makes it colder, and its temperature is measured in days. When a block's temperature reaches the cooling threshold value configured on the volume, the block is eligible to be moved to the cold tier. Only eligible blocks are moved to the cold tier daily. If the block is randomly accessed, the temperature is set to 0 days and the cooling cycle starts again.
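This cooling cycle can be sketched as a minimal simulation: each day without access raises a block's temperature by one day, a random read resets it to 0, and at the cooling threshold the block becomes eligible for the cold tier. The threshold value below is a hypothetical example:

```shell
# Minimal simulation of the per-block cooling model described above.
cooling_threshold=5   # hypothetical; the real range is 2 to 183 days
temperature=0         # a block's temperature, measured in days

age_one_day() { temperature=$(( temperature + 1 )); }
random_read() { temperature=0; }
is_eligible() { [ "$temperature" -ge "$cooling_threshold" ]; }

age_one_day; age_one_day; age_one_day   # 3 days without access
random_read                             # a random read restarts the cooling cycle
age_one_day; age_one_day; age_one_day; age_one_day; age_one_day
is_eligible && echo "Block is eligible for the cold tier at ${temperature} days"
```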

The cooling threshold is defined at the volume level and can be set between 2 and 183 days for Flex, Premium, and Extreme volumes.

Data on the cold tier is priced lower than data on the hot tier. Using a shorter cooling threshold moves data to the cold tier sooner, which can reduce the overall costs. Although setting the lowest cooling threshold appears beneficial, setting it too short might yield limited cost savings.

Data on the cold tier is slower to access than data on the hot tier. Using a cooling threshold that is too short can make access to your data slower. The overall volume performance depends on the hot tier capabilities.

Moving the data to and from the cold tier has data transfer costs. If you choose a short cooling threshold, data can move more frequently between the hot and cold tiers, which reduces the volume performance and increases the overall cost.

In short, a cooling threshold that is too short can reduce performance and increase costs. The default cooling threshold value is 31 days.

Auto-tiering pricing

Your billing is based on the allocated hot tier and the utilized cold tier, each charged at its respective rate. For more information about pricing, see Storage pool pricing.

Storage pool billing for the cold tier is based on the total cold tier size of all volumes within that pool, charged at the cold tier rate.

The hot tier pricing depends on the type of service level:

  • For the Premium and Extreme service levels: the hot tier size is calculated as the total pool capacity minus the cold tier size and is charged at the pool's hot tier pricing. If the pool's hot tier size is smaller than 2 TiB, then a minimum of 2 TiB is charged at the hot tier rate, and the remaining capacity is charged at the cold tier rate.

  • For the Flex service level: pricing for auto-tiering isn't available in Preview.
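
The Premium and Extreme billing split above can be sketched as a quick calculation. The pool and cold tier sizes are hypothetical; the 2 TiB (2,048 GiB) minimum comes from this page:

```shell
# Sketch of the Premium/Extreme hot tier billing rule: if the derived hot
# tier is under 2 TiB, 2 TiB is billed at the hot tier rate and the rest
# at the cold tier rate. Sizes in GiB; values are hypothetical examples.
pool_size=10240
cold_size=9216       # total cold tier data across the pool's volumes

hot_billed=$(( pool_size - cold_size ))
min_hot=2048         # 2 TiB minimum billed at the hot tier rate
if [ "$hot_billed" -lt "$min_hot" ]; then
  hot_billed=$min_hot
fi
cold_billed=$(( pool_size - hot_billed ))
echo "Billed at hot tier rate: ${hot_billed} GiB; at cold tier rate: ${cold_billed} GiB"
```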

Additionally, there is a charge for network traffic associated with moving data to or from the cold tier. For more information about pricing, see Storage pool pricing.

Enable auto-tiering

For volumes in storage pools with auto-tiering enabled, an Auto-tiering tab is displayed on the volume details page. The tab indicates whether tiering is enabled or disabled for the volume. On this tab, you can look up statistics, manage auto-tiering settings, and pause or resume tiering.

Look up tiering statistics

NetApp Volumes provides auto-tiering statistics at both the volume and storage pool levels.

Volume level statistics

You can view the tiering statistics of a volume using Google Cloud console, Google Cloud CLI, or Cloud Monitoring.

Use the following instructions to look up tiering statistics of a volume using the Google Cloud console or Google Cloud CLI.

Console

  1. Go to the NetApp Volumes page in the Google Cloud console.

    Go to NetApp Volumes

  2. Click the name of the volume.

  3. If tiering is enabled on the volume, an Auto-tiering tab is displayed. Select the tab.

  4. Review the Current hot tier data and Current cold tier data fields.

gcloud

Look up tiering statistics of a volume:

  gcloud netapp volumes describe VOLUME_NAME \
   --project=PROJECT_ID \
   --location=LOCATION \
   --format="table(capacityGib,coldTierSizeGib)"

Replace the following information:

  • VOLUME_NAME: the name of the volume.

  • PROJECT_ID: the name of the project the volume is in.

  • LOCATION: the location of the volume.

The size of data in the cold tier is reported in coldTierSizeGib. For the Premium and Extreme service levels, the hot tier size is capacityGib minus coldTierSizeGib.
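
Given the describe output, the hot tier size follows from simple arithmetic. The field values below are hypothetical examples:

```shell
# Derive the hot tier size of a Premium or Extreme volume from the
# describe output. In practice you could read the two fields with, e.g.:
#   gcloud netapp volumes describe VOLUME_NAME ... \
#     --format="value(capacityGib,coldTierSizeGib)"
capacity_gib=5120          # hypothetical capacityGib value
cold_tier_size_gib=3584    # hypothetical coldTierSizeGib value

hot_tier_gib=$(( capacity_gib - cold_tier_size_gib ))
echo "Hot tier size: ${hot_tier_gib} GiB"
```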

For more information on additional optional flags, see Google Cloud SDK documentation on volumes.

Cloud Monitoring

Cloud Monitoring receives auto-tiering statistics for Premium and Extreme volumes, which you can visualize using Metrics Explorer or dashboards. The available metrics include the following:

  • /volume/auto_tiering/tiered_bytes: specifies the number of bytes stored in the hot tier and the cold tier. You can filter this metric using the tier label to view data for the hot tier or the cold tier.

  • /volume/auto_tiering/cold_tier_read_byte_count: the number of bytes read from the cold tier.

  • /volume/auto_tiering/cold_tier_write_byte_count: the number of bytes written to the cold tier.

For more information, see Cloud Monitoring.

Storage pool level statistics

Cloud Monitoring receives auto-tiering statistics for Premium and Extreme pools, which you can visualize using Metrics Explorer or dashboards. The available metrics include the following:

  • /storage_pool/auto_tiering/tiered_bytes: specifies the number of bytes stored in the hot tier and the cold tier. You can filter this metric using the tier label to view data for the hot tier or the cold tier.

  • /storage_pool/auto_tiering/cold_tier_read_byte_count: the number of bytes read from the cold tier.

  • /storage_pool/auto_tiering/cold_tier_write_byte_count: the number of bytes written to the cold tier.

For more information, see Cloud Monitoring.

Edit auto-tiering parameters

Use the following instructions to edit the tiering parameters of a volume using the Google Cloud console or Google Cloud CLI.

Console

  1. Go to the NetApp Volumes page in the Google Cloud console.

    Go to NetApp Volumes

  2. Click the name of the volume.

  3. If tiering is enabled on the volume, an Auto-tiering tab is shown. Select the tab.

    • If tiering is enabled, click Pause auto-tiering to pause it.

    • If tiering is paused, click Resume auto-tiering to resume it.

  4. To change the cooling threshold, click Edit cooling threshold and enter a new value. Click Save.

gcloud

Edit the auto-tiering parameters using the following command:

  gcloud netapp volumes update VOLUME_NAME \
   --project=PROJECT_ID \
   --location=LOCATION \
   --tiering-policy=tier-action=ENABLED_OR_DISABLED,cooling-threshold-days=DAYS

Replace the following information:

  • VOLUME_NAME: the name of the volume.

  • PROJECT_ID: the name of the project the volume is in.

  • LOCATION: the location of the volume.

  • ENABLED_OR_DISABLED: the tier action for the volume, either ENABLED or DISABLED.

  • DAYS: the cooling threshold in days, between 2 and 183.
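
For example, a hypothetical invocation that enables tiering on a volume named vol1 in project my-project with a 14-day cooling threshold might look like the following (all names and values are placeholders):

```shell
# Hypothetical example invocation of the update command shown above.
# Replace vol1, my-project, and us-central1 with your own values.
gcloud netapp volumes update vol1 \
  --project=my-project \
  --location=us-central1 \
  --tiering-policy=tier-action=ENABLED,cooling-threshold-days=14
```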

For more information on additional optional flags, see Google Cloud SDK documentation on volumes.

What's next