**Linux**

- Linux kernel version 6.1+, or any Linux kernel version later than 5.3 that supports the MADV_COLLAPSE and MADV_POPULATE_WRITE directives
- cgroupsv2 enabled
- Docker Engine 25.0.0+ or Podman 5.0.0+

**macOS**

- Docker Desktop 4.20+
- Docker Desktop 4.30+
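The Linux items above lend themselves to a quick preflight check. The following is a minimal, unofficial sketch (the function names are ours; it covers only the kernel-version and cgroupsv2 items, and omits the MADV_* capability probe and the Docker Engine/Podman version checks):

```python
# Sketch: preflight check for the host requirements listed above.
# Thresholds mirror the requirements list; this is not an official
# AlloyDB Omni installer check.
import platform
from pathlib import Path

def kernel_supported(release: str) -> bool:
    """True if the kernel release string satisfies 6.1+.

    Kernels later than 5.3 may also qualify if they support the
    MADV_COLLAPSE and MADV_POPULATE_WRITE directives; that capability
    probe is not performed here.
    """
    major, minor = (int(x) for x in release.split(".")[:2])
    return (major, minor) >= (6, 1)

def cgroup_v2_enabled() -> bool:
    # With cgroups v2, the unified hierarchy exposes this file.
    return Path("/sys/fs/cgroup/cgroup.controllers").exists()

if __name__ == "__main__":
    print("kernel:", platform.release(), kernel_supported(platform.release()))
    print("cgroup v2:", cgroup_v2_enabled())
```

On a host that fails either check, upgrade the kernel or enable the cgroups v2 unified hierarchy before installing the container runtime.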
# Plan your AlloyDB Omni installation on a VM

Last updated 2025-08-22 UTC.

This document describes how to prepare for running AlloyDB Omni in any Linux environment that supports container runtimes.

For an overview of AlloyDB Omni, see [AlloyDB Omni overview](/alloydb/omni/15.12.0/docs/overview).

Size and capacity
-----------------

Size and capacity directly affect the performance, reliability, and cost-effectiveness of your AlloyDB Omni instance. When you migrate an existing database, the CPU and memory resources required are similar to the requirements of the source database system.

Plan to start with a deployment using matching CPU, RAM, and disk resources, and use the source system configuration as the AlloyDB Omni baseline configuration. You might be able to reduce your resource consumption after you perform sufficient testing of your AlloyDB Omni instance.

Sizing an AlloyDB Omni environment includes the following steps:

1. Define your workload.

    - Data volume: Estimate the total amount of data you'll store in AlloyDB Omni. Consider both current data and projected growth over time.
    - Transaction rate: Determine the expected number of transactions per second (TPS), including reads, writes, updates, and deletes.
    - Concurrency: Estimate the number of concurrent users or connections accessing the database.
    - Performance requirements: Define your acceptable response times for different types of queries and operations.

2. Ensure that your hardware supports sizing requirements.

    - CPU: AlloyDB Omni benefits from systems with multiple CPU cores, and AlloyDB Omni scales linearly, depending on the workload. However, open source PostgreSQL generally doesn't benefit from more than 16 vCPUs. Take the following into consideration:

        - The number of cores based on workload concurrency and computation needs.
        - Any gains that might be present due to a change in CPU generation or platform.

    - Memory: Allocate sufficient RAM for AlloyDB Omni's shared buffers for caching data and working memory for query processing. The exact requirement depends on the workload. Start with 8 GB of RAM per vCPU.

    - Storage:

        - Type: Based on your needs, choose between local NVMe storage for performance or SAN storage for scalability and data sharing.
        - Capacity: Ensure enough storage for your data volume, indexes, Write-Ahead Log (WAL), backups, and future growth.
        - IOPS: Estimate the required input/output operations per second (IOPS) based on your workload's read and write patterns. When running AlloyDB Omni in a public cloud, consider the performance characteristics of your storage type to understand if you need to increase storage capacity to meet a specific IOPS target.

Prerequisites for running AlloyDB Omni
--------------------------------------

Before you run AlloyDB Omni, make sure that you meet the following hardware and software requirements.

### Hardware requirements

1. We recommend that you use a dedicated solid-state drive (SSD) storage device for storing your data. If you use a physical device for this purpose, we recommend that you attach it directly to the host machine.

| **Tip:** To install AlloyDB Omni on a cloud platform, we recommend that you use instances that maintain the ratio of 8 GB of RAM per vCPU.
| **Note:** AlloyDB Omni is compiled to run directly on Linux systems. Running AlloyDB Omni in a Docker container on macOS uses Docker's compatibility layer, which results in reduced performance compared to running directly on Linux.

### Software requirements

1. AlloyDB Omni assumes that SELinux, when present, is configured on the host to permit the container to run, including access to the file system (or SELinux is set to permissive mode).

Supported storage types
-----------------------

AlloyDB Omni supports file systems on block storage volumes in database instances. For smaller development or trial systems, use the local file system of the host where the container is running. For enterprise workloads, use storage that is reserved for AlloyDB Omni instances. Depending on the demands set by your database workload, configure your storage devices either in a singleton configuration with one disk device for each container, or in a consolidated configuration where multiple containers read and write from the same disk device.

### Local NVMe or SAN storage

Both local Non-Volatile Memory Express (NVMe) storage and Storage Area Network (SAN) storage offer distinct advantages. Choosing the right solution depends on your specific workload requirements, budget, and future scalability needs.

To determine the best storage option, consider the following:

- To prioritize absolute performance, choose local NVMe.
- If you need large-scale, shared storage, choose SAN.
- If you need to balance performance and sharing, consider SAN with NVMe over Fabrics for faster access.

### Local NVMe storage

NVMe is a high-performance protocol designed for solid-state drives (SSDs). For applications that need fast data access, local NVMe storage offers the following benefits:

- NVMe SSDs connect directly to the Peripheral Component Interconnect Express (PCIe) bus to deliver fast read and write speeds.
- Local NVMe storage provides the lowest latency.
- Local NVMe storage provides the highest throughput.

Scaling local NVMe storage requires adding more drives to individual servers. However, adding more drives to individual servers leads to fragmented storage pools and potential management complexities. Local NVMe storage isn't designed for data sharing between multiple servers. Because local NVMe storage is local, server administrators must protect against disk failures using hardware or software [Redundant Array of Inexpensive Disks (RAID)](https://en.wikipedia.org/wiki/Standard_RAID_levels). Otherwise, the failure of a single NVMe device results in data loss.

### SAN storage

SAN is a dedicated storage network that connects multiple servers to a shared pool of storage devices, often SSDs or centralized NVMe storage. While SAN isn't as fast as local NVMe, modern SANs, especially those using NVMe over Fabrics, still deliver excellent performance for most enterprise workloads.

- SANs are highly scalable. To add more storage capacity or performance, add new storage arrays or upgrade existing ones. SANs provide redundancy at the storage layer, protecting against storage media failures.
- SANs excel at data sharing. For enterprise environments that require high availability, multiple servers can access and share data stored on the SAN. In the event of a server failure, you can present SAN storage to another server in the data center, allowing for faster recovery.

What's next
-----------

- [Install AlloyDB Omni](/alloydb/omni/15.12.0/docs/quickstart)
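The sizing guidance in this document (match the source system, 8 GB of RAM per vCPU, headroom for WAL, indexes, and growth) can be roughed out in a few lines. This is an illustrative sketch only; the growth factor is our assumption, not an official AlloyDB Omni sizing rule:

```python
# Rough first-pass sizing sketch based on the guidance above.
# The 1.5x growth factor is an illustrative assumption for WAL,
# indexes, backups, and projected data growth.

def baseline_sizing(vcpus: int, data_gb: float, growth_factor: float = 1.5):
    """Return a starting-point resource plan for one instance."""
    ram_gb = vcpus * 8                    # 8 GB of RAM per vCPU
    storage_gb = data_gb * growth_factor  # headroom for WAL/indexes/growth
    return {"vcpus": vcpus, "ram_gb": ram_gb, "storage_gb": storage_gb}

plan = baseline_sizing(vcpus=8, data_gb=200)
print(plan)  # {'vcpus': 8, 'ram_gb': 64, 'storage_gb': 300.0}
```

Treat the result as a baseline to validate under load testing, then trim resources once the workload's real TPS, concurrency, and IOPS profile is known.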