# Choose between SSD and HDD storage
When you create a Bigtable instance, you choose whether its clusters
store data on solid-state drives (SSD) or hard disk drives (HDD):

- SSD storage is the most efficient and cost-effective choice for most use
  cases.
- HDD storage is sometimes appropriate for large datasets that are not
  latency-sensitive or are infrequently accessed.
Regardless of which type of storage you choose, your data is stored on a
distributed, replicated file system that spans many physical drives.
The guidelines on this page can help you choose between SSD and HDD.
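For reference, the storage type is set per instance at creation time. A minimal sketch with the gcloud CLI follows; the instance, cluster, and zone values are illustrative placeholders, and flag names should be verified against the current gcloud reference:

```shell
# Create a Bigtable instance whose single cluster uses HDD storage.
# "my-instance", "my-cluster", and the zone are placeholders.
gcloud bigtable instances create my-instance \
  --display-name="My instance" \
  --cluster-config=id=my-cluster,zone=us-central1-b,nodes=3 \
  --cluster-storage-type=HDD
```

Omitting `--cluster-storage-type` creates an SSD instance, which is the default.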
## When in doubt, choose SSD storage
There are several reasons why it's usually best to use SSD storage for your
Bigtable cluster:

- **SSD is significantly faster and has more predictable performance than
  HDD.** In a Bigtable cluster, SSD storage delivers significantly lower
  latencies for both reads and writes than HDD storage.
- **HDD throughput is much more limited than SSD throughput.** In a cluster
  that uses HDD storage, it's possible to reach the maximum throughput before
  CPU usage reaches 100%, a situation you can monitor using the disk load
  metric. To increase throughput, you must add more nodes, but the cost of the
  additional nodes might exceed your savings from using HDD storage. SSD
  storage does not have this limitation, because it offers much more
  throughput per node. Generally, a cluster that uses SSD storage reaches
  maximum throughput only when it is using all available CPU and memory.
- **Individual row reads on HDD are very slow.** Because of disk seek time,
  HDD storage supports only 5% of the rows read per second of SSD storage.
  Large multi-row scans, however, are not as adversely affected.
- **The cost savings from HDD are minimal, relative to the cost of the nodes
  in your Bigtable cluster, unless you're storing large amounts of data.**
  For this reason, as a rule of thumb, you shouldn't consider using HDD
  storage unless you're storing at least 10 TB of data and your workload is
  not latency-sensitive.
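To make the throughput trade-off concrete, here is a back-of-the-envelope sketch in Python. The per-node read rates are illustrative assumptions, not published Bigtable figures; only the roughly 5% HDD-to-SSD row-read ratio comes from the guidance above:

```python
import math

def nodes_needed(target_rows_per_sec: float, rows_per_sec_per_node: float) -> int:
    """Minimum node count to sustain a target rate of individual row reads."""
    return math.ceil(target_rows_per_sec / rows_per_sec_per_node)

# Hypothetical figures for illustration only.
SSD_READS_PER_NODE = 10_000                      # assumed point reads/sec per SSD node
HDD_READS_PER_NODE = SSD_READS_PER_NODE * 0.05   # HDD: ~5% of the SSD row-read rate

target = 25_000  # desired point reads/sec
ssd_nodes = nodes_needed(target, SSD_READS_PER_NODE)  # 3 nodes
hdd_nodes = nodes_needed(target, HDD_READS_PER_NODE)  # 50 nodes
```

Under these assumed numbers, the HDD cluster needs many times more nodes to serve the same point-read load, which is how the node cost can wipe out the storage savings.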
One potential drawback of SSD storage is that it requires more nodes in your
clusters based on the amount of data that you store. In
practice, though, you might need those extra nodes so that your clusters can
keep up with incoming traffic, not only to support the amount of data that
you're storing.
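As a rough sketch of why SSD can require more nodes for the same dataset, the per-node storage limits below reflect the documented Bigtable quotas at the time of writing (about 5 TB per SSD node and 16 TB per HDD node); verify them against the current quotas page before relying on them:

```python
import math

# Per-node storage limits (TB); documented quotas at the time of writing.
SSD_TB_PER_NODE = 5
HDD_TB_PER_NODE = 16

def min_nodes_for_storage(data_tb: float, tb_per_node: float) -> int:
    """Minimum node count required just to hold the data."""
    return math.ceil(data_tb / tb_per_node)

# Storing 40 TB:
ssd_min = min_nodes_for_storage(40, SSD_TB_PER_NODE)  # 8 nodes on SSD
hdd_min = min_nodes_for_storage(40, HDD_TB_PER_NODE)  # 3 nodes on HDD
```

As the page notes, though, a cluster serving real traffic often needs more nodes than this storage floor anyway, which narrows the gap in practice.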
## Use cases for HDD storage
HDD storage is suitable for use cases that meet all of the following criteria:

- You expect to store at least 10 TB of data.
- You will not use the data to back a user-facing or latency-sensitive
  application.
- You don't plan to enable 2x node scaling.
- Your workload falls into one of the following categories:

  - Batch workloads with scans and writes, and no more than occasional random
    reads of a small number of rows or point reads.
  - Data archival, where you write large amounts of data and rarely read that
    data.
For example, if you plan to store extensive historical data for a large number
of remote-sensing devices and then use the data to generate daily reports, the
cost savings for HDD storage might justify the performance tradeoff. On the
other hand, if you plan to use the data to display a real-time dashboard, it
probably *would not* make sense to use HDD storage: reads would be much more
frequent in this case, and reads that are not scans are much slower with HDD
storage.
## Switching between SSD and HDD storage
When you create a Bigtable instance, your choice of
SSD or HDD storage for the instance is permanent. You cannot use the
Google Cloud console to change the type of storage that is used for the instance.
If you want to change the storage type that a table is stored on, use the
backups feature:
1. Create or plan to use an instance that uses the storage type you want.
2. Create a backup of the table.
3. Restore from the backup to a new table in the other instance.
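The steps above can be sketched with the gcloud CLI. All IDs below are placeholders, and the flag names should be checked against the current gcloud reference before use:

```shell
# Step 2: back up the table from the source (e.g. HDD) instance.
gcloud bigtable backups create my-backup \
  --instance=hdd-instance \
  --cluster=hdd-cluster \
  --table=my-table \
  --retention-period=7d

# Step 3: restore the backup to a new table in the destination (e.g. SSD)
# instance. The --source flag takes the backup's full resource name.
gcloud bigtable instances tables restore \
  --source=projects/my-project/instances/hdd-instance/clusters/hdd-cluster/backups/my-backup \
  --destination=my-table \
  --destination-instance=ssd-instance
```

Once the restore completes and you have verified the new table, you can point your application at the destination instance and delete the old one.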
## What's next

Create an instance with SSD or HDD storage.