This page provides an overview of the AlloyDB Omni Kubernetes Operator, with instructions for using it to deploy AlloyDB Omni onto a Kubernetes cluster. This page assumes basic familiarity with Kubernetes operation.
For instructions on installing AlloyDB Omni onto a standard Linux environment, see Install AlloyDB Omni.
Overview
To deploy AlloyDB Omni onto a Kubernetes cluster, install the AlloyDB Omni Operator, an extension to the Kubernetes API provided by Google.
You configure and control a Kubernetes-based AlloyDB Omni database cluster by pairing declarative manifest files with the kubectl utility, just like any other Kubernetes-based deployment. You do not use the AlloyDB Omni CLI, which is intended for deployments onto individual Linux machines and not Kubernetes clusters.
Before you begin
You need access to the following:
- A Kubernetes cluster, running the following software:
  - Kubernetes version 1.21 or later.
  - The cert-manager service.
- The kubectl utility.
- The helm package manager.
- The Google Cloud CLI.

If you do need to install the gcloud CLI, note that the step of running gcloud init is optional. Installing AlloyDB Omni does not require authentication with a Google Account.
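Before you continue, you can optionally confirm that these tools and cert-manager are available. The following is a minimal sketch, assuming cert-manager is installed in its default cert-manager namespace:

# Verify the client-side tools.
kubectl version --client
helm version
gcloud --version
# Confirm that the cert-manager pods are running.
kubectl get pods --namespace cert-manager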
Each node in the Kubernetes cluster must have the following:
- A minimum of two x86 or AMD64 CPUs.
- At least 8GB of RAM.
- Linux kernel version 4.18 or later.
- Control group v2 (cgroup v2) enabled.
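To check nodes against these requirements, you can inspect them with kubectl; the cgroup check must run on the node itself (for example over SSH), and cgroup2fs in its output indicates cgroup v2. A minimal sketch, where NODE_NAME is a placeholder:

# List nodes along with their kernel version and OS image.
kubectl get nodes -o wide
# Show CPU and memory capacity for a specific node.
kubectl describe node NODE_NAME
# Run on the node itself: prints cgroup2fs when cgroup v2 is enabled.
stat -fc %T /sys/fs/cgroup/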
Install the AlloyDB Omni Operator
To install the AlloyDB Omni Operator, follow these steps:
Define several environment variables:
export GCS_BUCKET=alloydb-omni-operator
export HELM_PATH=$(gsutil cat gs://$GCS_BUCKET/latest)
export OPERATOR_VERSION="${HELM_PATH%%/*}"
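Optionally, confirm that the variables resolved as expected; HELM_PATH comes from the latest file in the bucket and OPERATOR_VERSION is its leading path component:

echo "$HELM_PATH" "$OPERATOR_VERSION"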
Download the AlloyDB Omni Operator:
gsutil cp -r gs://$GCS_BUCKET/$HELM_PATH ./
Install the AlloyDB Omni Operator:
helm install alloydbomni-operator alloydbomni-operator-${OPERATOR_VERSION}.tgz \
--create-namespace \
--namespace alloydb-omni-system \
--atomic \
--timeout 5m
Successful installation displays the following output:
NAME: alloydbomni-operator
LAST DEPLOYED: CURRENT_TIMESTAMP
NAMESPACE: alloydb-omni-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
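To confirm that the operator workloads are running before you continue, you can list the pods in the namespace used above; a minimal check:

kubectl get pods --namespace alloydb-omni-system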
Clean up by deleting the downloaded AlloyDB Omni Operator installation file. The file is named alloydbomni-operator-VERSION_NUMBER.tgz, and is located in your current working directory.
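For example, assuming the OPERATOR_VERSION variable is still set from the earlier step, you can remove the archive with:

rm alloydbomni-operator-${OPERATOR_VERSION}.tgz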
Create a database cluster
An AlloyDB Omni database cluster contains all the storage and compute resources needed to run an AlloyDB Omni server, including the primary server, any replicas, and all of your data.
After you install the AlloyDB Omni Operator on your Kubernetes cluster, you can create an AlloyDB Omni database cluster on the Kubernetes cluster by applying a manifest similar to the following:
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-DB_CLUSTER_NAME
type: Opaque
data:
  DB_CLUSTER_NAME: "ENCODED_PASSWORD"
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: DB_CLUSTER_NAME
spec:
  databaseVersion: "15.5.4"
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-DB_CLUSTER_NAME
    resources:
      cpu: CPU_COUNT
      memory: MEMORY_SIZE
      disks:
      - name: DataDisk
        size: DISK_SIZE
        storageClass: standard
Replace the following:
- DB_CLUSTER_NAME: the name of this database cluster, for example my-db-cluster.
- ENCODED_PASSWORD: the database login password for the default postgres user role, encoded as a base64 string, for example Q2hhbmdlTWUxMjM= for ChangeMe123. A sketch for encoding a password follows this list.
- CPU_COUNT: the number of CPUs available to each database instance in this database cluster.
- MEMORY_SIZE: the amount of memory per database instance of this database cluster. We recommend setting this to 8 gigabytes per CPU. For example, if you set cpu to 2 earlier in this manifest, then we recommend setting memory to 16Gi.
- DISK_SIZE: the disk size per database instance, for example 10Gi.
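To produce the ENCODED_PASSWORD value, base64-encode your plaintext password. A minimal sketch using the example password above; the -n flag keeps a trailing newline out of the encoding:

echo -n 'ChangeMe123' | base64
# Output: Q2hhbmdlTWUxMjM=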
After you apply this manifest, your Kubernetes cluster contains an
AlloyDB Omni database cluster with the specified memory, CPU,
and storage configuration. To establish a test connection with the new
database cluster, see Connect using the preinstalled psql.
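As a minimal sketch, assuming you saved the manifest as db-cluster.yaml (a hypothetical filename), you can apply it and then list the resulting resource; the dbclusters.alloydbomni.dbadmin.goog resource name follows from the apiVersion and kind in the manifest above:

# Create the database cluster from the manifest.
kubectl apply -f db-cluster.yaml
# List DBCluster resources to check their status.
kubectl get dbclusters.alloydbomni.dbadmin.goog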
For more information about Kubernetes manifests and how to apply them, see Managing resources.