Size and capacity
Size and capacity directly affect the performance, reliability, and
cost-effectiveness of your AlloyDB Omni instance. When you
migrate an existing database, the CPU and memory resources required are similar
to those of the source database system.
Plan to start with a deployment using matching CPU, RAM, and disk resources,
and use the source system configuration as the AlloyDB Omni
baseline configuration. You might be able to reduce your resource
consumption after you perform sufficient testing of your
AlloyDB Omni instance.
Sizing an AlloyDB Omni environment includes the following steps:
Define your workload.
Data volume: Estimate the total amount of data you'll store in
AlloyDB Omni. Consider both current data and projected
growth over time.
Transaction rate: Determine the expected number of transactions per
second (TPS), including reads, writes, updates, and deletes.
Concurrency: Estimate the number of concurrent users or connections
accessing the database.
Performance requirements: Define your acceptable response times for
different types of queries and operations.
Ensure that your hardware meets your sizing requirements.
CPU: AlloyDB Omni benefits from systems with multiple
CPU cores and, depending on the workload, scales linearly. However,
open source PostgreSQL generally doesn't benefit from more than
16 vCPUs. Take the following into consideration:
The number of cores based on workload concurrency and computation
needs.
Any gains that might result from a change in CPU generation or
platform.
Memory: Allocate sufficient RAM for AlloyDB Omni's shared
buffers for caching data and working memory for query processing. The
exact requirement depends on the workload. Start with 8 GB of RAM per
vCPU.
Storage
Type: Based on your needs, choose local NVMe storage for
performance or SAN storage for scalability and data sharing.
Capacity: Ensure enough storage for your data volume, indexes,
Write-Ahead Log (WAL), backups, and future growth.
IOPS: Estimate the required input/output operations per second
(IOPS) based on your workload's read and write patterns. When
running AlloyDB Omni in a public cloud, consider
the performance characteristics of your storage type to
understand if you need to increase storage capacity to meet a
specific IOPS target.
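The sizing steps above can be sketched as a back-of-the-envelope calculation. The 8 GB of RAM per vCPU figure comes from the guidance above; the 1.5x capacity headroom factor (for indexes, WAL, and backups) and the I/Os-per-transaction figure are illustrative assumptions that you should replace with measurements from your source system.

```python
import math

def sizing_estimate(vcpus: int, data_gb: float, annual_growth: float,
                    years: int, peak_tps: float, io_per_txn: float):
    """Rough first-pass sizing for an AlloyDB Omni deployment."""
    # Starting point from the guidance above: 8 GB of RAM per vCPU.
    ram_gb = 8 * vcpus
    # Project data growth, then add headroom for indexes, WAL, and backups.
    # The 1.5x headroom factor is an assumption; tune it for your workload.
    capacity_gb = math.ceil(data_gb * (1 + annual_growth) ** years * 1.5)
    # IOPS target: peak transactions per second times I/Os per transaction.
    # io_per_txn is workload-specific; measure it on the source system.
    iops_target = peak_tps * io_per_txn
    return ram_gb, capacity_gb, iops_target

# Example: 16 vCPUs, 500 GB today, 20% annual growth over 3 years,
# 1,000 TPS peak with roughly 4 I/Os per transaction.
print(sizing_estimate(16, 500, 0.20, 3, 1000, 4))
```

Treat the result as a baseline to validate with load testing, not a final answer; real workloads rarely scale this uniformly.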
Prerequisites for running AlloyDB Omni
Before you run AlloyDB Omni, make sure that you meet the
following hardware and software requirements.
Hardware requirements
We recommend that you use a dedicated solid-state drive (SSD) storage
device for storing your data. If you use a physical device for this
purpose, we recommend that you attach it directly to the host machine.
Tip: To install AlloyDB Omni on a cloud platform, we recommend that you use
instances that maintain the ratio of 8 GB of RAM per vCPU.
Note: AlloyDB Omni is compiled to run directly on Linux systems. Running
AlloyDB Omni in a Docker container on macOS uses Docker's compatibility
layer, which results in reduced performance compared to running directly
on Linux.
Software requirements
Linux kernel version 6.1+, or any Linux kernel version
older than 6.1 that has backported support for the MADV_COLLAPSE
and MADV_POPULATE_WRITE directives
cgroups v2 enabled
Docker Engine 25.0.0+ or Podman 5.0.0+
macOS: Docker Desktop 4.20+ (4.30+ recommended)
AlloyDB Omni assumes that SELinux, when present, is
configured on the host to permit the container to run, including access to
the file system, or that SELinux is set to permissive mode.
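A quick way to sanity-check the Linux requirements above is to inspect the kernel release and the cgroup hierarchy. This is a minimal sketch, not an official preflight tool: the 6.1 threshold mirrors the kernel guidance above (a backported kernel would need a separate check), and probing for cgroup.controllers at the cgroup root is the conventional way to detect a cgroups v2 unified hierarchy.

```python
import os
import platform
import re

MIN_KERNEL = (6, 1)  # matches the kernel version guidance above

def kernel_at_least(release: str, minimum: tuple = MIN_KERNEL) -> bool:
    """Parse a kernel release string such as '6.1.0-13-amd64' and compare."""
    m = re.match(r"(\d+)\.(\d+)", release)
    return bool(m) and (int(m.group(1)), int(m.group(2))) >= minimum

def cgroup_v2_enabled() -> bool:
    # On a cgroups v2 unified hierarchy, the root exposes cgroup.controllers.
    return os.path.exists("/sys/fs/cgroup/cgroup.controllers")

if __name__ == "__main__":
    print("kernel 6.1+ :", kernel_at_least(platform.release()))
    print("cgroups v2  :", cgroup_v2_enabled())
```

You would still verify the container runtime version separately (for example, with `docker --version`), since that isn't visible from the kernel.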
Supported storage types
AlloyDB Omni supports file systems on block storage volumes
in database instances. For smaller development or trial systems, use the local
file system of the host where the container is running. For enterprise
workloads, use storage that is reserved for AlloyDB Omni
instances. Depending on the demands set by your database workload, configure
your storage devices either in a singleton configuration with one disk device
for each container, or in a consolidated configuration where multiple containers
read and write from the same disk device.
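If you follow the dedicated-storage guidance above, you can verify that a container's data directory actually sits on its own block device rather than on the host's root filesystem. This is a minimal sketch using the device ID that stat reports; the mount point in the comment is hypothetical.

```python
import os

def on_dedicated_device(data_dir: str, reference: str = "/") -> bool:
    """Return True if data_dir is backed by a different device than reference.

    os.stat().st_dev identifies the device backing a path, so two paths on
    the same filesystem report the same value.
    """
    return os.stat(data_dir).st_dev != os.stat(reference).st_dev

# Example for a singleton configuration (hypothetical mount point):
#   on_dedicated_device("/mnt/alloydb-data")  # True if it's a dedicated disk
print(on_dedicated_device("/"))  # "/" compared with itself is never dedicated
```

In a consolidated configuration, the same check applied to each container's data directory should report the one shared device for all of them.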
Local NVMe or SAN storage
Both local Non-Volatile Memory Express (NVMe) storage and Storage Area Network
(SAN) storage offer distinct advantages. Choosing the right solution depends on
your specific workload requirements, budget, and future scalability needs.
To determine the best storage option, consider the following:
To prioritize absolute performance, choose local NVMe.
If you need large-scale, shared storage, choose SAN.
If you need to balance performance and sharing, consider SAN with NVMe over
Fabrics for faster access.
Local NVMe storage
NVMe is a high-performance protocol designed for solid-state drives (SSDs). For
applications that need fast data access, local NVMe storage offers the following
benefits:
NVMe SSDs connect directly to the Peripheral Component Interconnect express
(PCIe) bus to deliver fast read and write speeds.
Local NVMe storage provides the lowest latency.
Local NVMe storage provides the highest throughput.
Scaling local NVMe storage requires adding more drives to individual servers,
which can lead to fragmented storage pools and added management complexity.
Local NVMe storage also isn't designed for data sharing between multiple
servers. Because the storage is local to each server, administrators must
protect against disk failures by using hardware or software
Redundant Array of Inexpensive Disks (RAID).
Otherwise, the failure of a single NVMe device results in data loss.
SAN storage
SAN is a dedicated storage network that connects multiple servers to a shared
pool of storage devices, often SSDs or centralized NVMe storage. While SAN isn't
as fast as local NVMe, modern SANs, especially those using NVMe over Fabrics,
still deliver excellent performance for most enterprise workloads.
SANs are highly scalable. To add storage capacity or
performance, add new storage arrays or upgrade existing ones. SANs also
provide redundancy at the storage layer, which protects against storage
media failures.
SANs excel at data sharing. For enterprise environments that require high
availability, multiple servers can access and share data stored on the SAN.
In the event of a server failure, you can present SAN storage to another
server in the data center, allowing for faster recovery.
What's next
Install AlloyDB Omni: /alloydb/omni/current/docs/quickstart

Last updated 2025-08-28 UTC.