This page describes how to enable and use the PgBouncer connection pooler for AlloyDB Omni using the AlloyDB Omni Kubernetes operator.
Many applications, especially web applications, open and close database connections frequently, which can put a significant load on the database instance. PgBouncer helps reduce instance load by managing connections more efficiently. By reusing connections, PgBouncer minimizes the number of connections to the database instance, freeing up resources on the instance.
Create a PgBouncer service
The AlloyDB Omni Kubernetes operator lets you create a dedicated PgBouncer service for your database. You can then access the database through the PgBouncer service to benefit from connection pooling.
There is a dedicated custom resource definition (CRD) for configuring your PgBouncer service as needed.
To create a PgBouncer service for your database, follow these steps:
Create a PgBouncer custom resource in your Kubernetes cluster as follows:

apiVersion: alloydbomni.dbadmin.goog/v1
kind: PgBouncer
metadata:
  name: PGBOUNCER_NAME
spec:
  dbclusterRef: DB_CLUSTER_NAME
  allowSuperUserAccess: true
  podSpec:
    resources:
      memory: 1Gi
      cpu: 1
    image: "gcr.io/alloydb-omni/operator/g-pgbouncer:1.4.0"
  parameters:
    pool_mode: POOL_MODE
    ignore_startup_parameters: IGNORE_STARTUP_PARAMETERS
    default_pool_size: DEFAULT_POOL_SIZE
    max_client_conn: MAXIMUM_CLIENT_CONNECTIONS
    max_db_connections: MAXIMUM_DATABASE_CONNECTIONS
  serviceOptions:
    type: "ClusterIP"
Replace the following:
- PGBOUNCER_NAME: the name of your PgBouncer custom resource.
- DB_CLUSTER_NAME: the name of your AlloyDB Omni database cluster. It's the same database cluster name that you declared when you created it.
- POOL_MODE: specifies when a database connection can be reused by other clients. Set the parameter to one of the following:
  - session: the database connection is released back to the pool after the client disconnects. This is the default.
  - transaction: the database connection is released back to the pool after the transaction finishes.
  - statement: the database connection is released back to the pool after the query finishes. Transactions that span multiple statements are disallowed in this mode.
- IGNORE_STARTUP_PARAMETERS: parameters that PgBouncer doesn't allow, so that they are ignored during startup, for example, extra_float_digits. For more information, see PgBouncer configuration.
- DEFAULT_POOL_SIZE: the number of database connections to allow per user-database pair, for example, 8.
- MAXIMUM_CLIENT_CONNECTIONS: the maximum number of client connections, for example, 800.
- MAXIMUM_DATABASE_CONNECTIONS: the maximum number of database connections, for example, 160.

For a filled-in example that uses these values, see the sample manifest after this procedure.
Apply the manifest:
kubectl apply -f PATH_TO_MANIFEST -n NAMESPACE

Replace PATH_TO_MANIFEST with the path to your manifest file, for example, /fleet/config/pgbouncer.yaml.

To verify that the PgBouncer object that you created is ready, run the following command:

kubectl get pgbouncers.alloydbomni.dbadmin.goog PGBOUNCER_NAME -n NAMESPACE -w

Replace NAMESPACE with the name of the Kubernetes namespace for your PgBouncer object.

The output is similar to the following:
NAMESPACE   NAME          ENDPOINT        STATE
dbv2        mypgbouncer
dbv2        mypgbouncer
dbv2        mypgbouncer
dbv2        mypgbouncer                   WaitingForDeploymentReady
dbv2        mypgbouncer                   Acquiring IP
dbv2        mypgbouncer   10.138.15.231   Ready
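For reference, the following is a sample manifest with the placeholders filled in. The resource name, the database cluster name dbcluster-sample, and the parameter values are illustrative only and are taken from the examples on this page; adjust them to match your environment, and note that the exact value types accepted for parameters depend on the CRD schema:

apiVersion: alloydbomni.dbadmin.goog/v1
kind: PgBouncer
metadata:
  name: mypgbouncer
spec:
  dbclusterRef: dbcluster-sample
  allowSuperUserAccess: true
  podSpec:
    resources:
      memory: 1Gi
      cpu: 1
    image: "gcr.io/alloydb-omni/operator/g-pgbouncer:1.4.0"
  parameters:
    pool_mode: transaction
    ignore_startup_parameters: extra_float_digits
    default_pool_size: 8
    max_client_conn: 800
    max_db_connections: 160
  serviceOptions:
    type: "ClusterIP"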
Connect to the connection pooler endpoint
You can connect to the PgBouncer connection pooler from inside or outside a Kubernetes cluster.
Connect from within a Kubernetes cluster
To connect to the connection pooler endpoint using the psql client, follow these steps:
Create a Pod as follows:
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - image: "docker.io/library/postgres:latest"
    command:
    - "sleep"
    - "604800"
    name: db-client

Apply the manifest:

kubectl apply -f PATH_TO_MANIFEST -n NAMESPACE

Connect to the containerized application:

kubectl exec -it postgres -n NAMESPACE -- bash

Verify the SSL connection to the PgBouncer endpoint using the psql client:

export PGSSLMODE="require"; psql -h HOST -d postgres -U postgres -p PORT

Replace the following:
- HOST: the connection pooler endpoint, which you can obtain by using the kubectl get pgbouncers.alloydbomni.dbadmin.goog -n NAMESPACE command. If PgBouncer isn't exposed as a service, use its IP address.
- PORT: the port that PgBouncer listens on.
The terminal window displays psql login text that ends with a postgres=# prompt.
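For example, if the earlier kubectl get pgbouncers output reported the endpoint 10.138.15.231, and PgBouncer listens on port 6432 as the sample logs later on this page show, the command might look like the following. Treat the address and port as illustrative:

export PGSSLMODE="require"
psql -h 10.138.15.231 -d postgres -U postgres -p 6432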
Connect from outside a Kubernetes cluster
To access the PgBouncer connection pooler from outside a Kubernetes cluster, set the type field in the serviceOptions attribute to LoadBalancer, which creates a load balancer service.
Create a PgBouncer custom resource in your Kubernetes cluster as follows:

apiVersion: alloydbomni.dbadmin.goog/v1
kind: PgBouncer
metadata:
  name: PGBOUNCER_NAME
spec:
  dbclusterRef: DB_CLUSTER_NAME
  allowSuperUserAccess: true
  replicaCount: 2
  parameters:
    pool_mode: POOL_MODE
    ignore_startup_parameters: IGNORE_STARTUP_PARAMETERS
    default_pool_size: DEFAULT_POOL_SIZE
    max_client_conn: MAXIMUM_CLIENT_CONNECTIONS
    max_db_connections: MAXIMUM_DATABASE_CONNECTIONS
  podSpec:
    resources:
      memory: 1Gi
      cpu: 1
    image: "gcr.io/alloydb-omni/operator/g-pgbouncer:1.4.0"
  serviceOptions:
    type: "LoadBalancer"
Apply the manifest:
kubectl apply -f PATH_TO_MANIFEST

To verify that the PgBouncer object that you created is ready, run the following command:

kubectl get pgbouncers.alloydbomni.dbadmin.goog PGBOUNCER_NAME -n NAMESPACE -w

The output is similar to the following:
NAME          ENDPOINT        STATE
mypgbouncer   10.138.15.207   Ready
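You can then connect from a client machine outside the cluster using the endpoint from the preceding output, in the same way as from inside the cluster. The address below is illustrative, and 6432 is the PgBouncer listen port shown in the sample logs later on this page:

export PGSSLMODE="require"
psql -h 10.138.15.207 -d postgres -U postgres -p 6432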
Configure PgBouncer settings
Use the following parameters to configure PgBouncer settings:
| Parameter | Description | Default value |
|---|---|---|
| pool_mode | Specifies when a database connection can be reused by other clients. Allowed values are session, transaction, and statement. | session |
| ignore_startup_parameters | Specifies parameters that PgBouncer doesn't allow, so that they are ignored during startup. | |
| default_pool_size | The number of database connections to allow per user-database pair. | 20 |
| max_client_conn | The maximum number of client connections. | 100 |
| max_db_connections | The maximum number of database connections. A value of 0 means unlimited. | 0 |
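As a rough sizing sketch, these settings interact as follows: max_client_conn caps how many clients can connect to PgBouncer at once, default_pool_size caps the server connections opened per user-database pair, and max_db_connections caps the server connections per database. For example, a parameters block like the following (values illustrative) accepts up to 800 clients while opening at most 8 server connections per user-database pair and at most 160 per database:

parameters:
  pool_mode: transaction
  default_pool_size: 8
  max_client_conn: 800
  max_db_connections: 160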
Customize PgBouncer deployment
AlloyDB Omni uses custom resources to manage its components. To customize PgBouncer within your AlloyDB Omni deployment on Kubernetes, modify the PgBouncer custom resource as follows:
List the PgBouncer custom resources:

kubectl get pgbouncers -n NAMESPACE

Replace NAMESPACE with the namespace where you deployed AlloyDB Omni.
To modify the resource, open the declaration file for the PgBouncer resource in your default editor:

kubectl edit pgbouncers PGBOUNCER_NAME -n NAMESPACE

In the declaration file, find the podSpec section that contains the configuration and modify any of the following fields as needed:

- resources: the cpu and memory defined for your container:

  apiVersion: alloydbomni.dbadmin.goog/v1
  kind: PgBouncer
  metadata:
    name: PGBOUNCER_NAME
  spec:
    dbclusterRef: DB_CLUSTER_NAME
    replicaCount: 2
    parameters:
      pool_mode: POOL_MODE
      ignore_startup_parameters: IGNORE_STARTUP_PARAMETERS
      default_pool_size: DEFAULT_POOL_SIZE
      max_client_conn: MAXIMUM_CLIENT_CONNECTIONS
      max_db_connections: MAXIMUM_DATABASE_CONNECTIONS
    podSpec:
      resources:
        memory: 1Gi
        cpu: 1
  ...

- image: the path to the PgBouncer image tag:

  ...
  podSpec:
    resources:
      memory: 1Gi
      cpu: 1
    image: IMAGE
  ...

- schedulingconfig: include the nodeaffinity section to control where the PgBouncer Pods are scheduled:

  ...
  podSpec:
    resources:
      memory: 1Gi
      cpu: 1
    image: IMAGE
    schedulingconfig:
      nodeaffinity:
        NODE_AFFINITY_TYPE:
          nodeSelectorTerms:
          - matchExpressions:
            - key: LABEL_KEY
              operator: OPERATOR_VALUE
              values:
              - pgbouncer
  ...

Replace the following:
- NODE_AFFINITY_TYPE: set the parameter to one of the following:
  - requiredDuringSchedulingIgnoredDuringExecution: Kubernetes schedules the Pod based exactly on the defined rules.
  - preferredDuringSchedulingIgnoredDuringExecution: the Kubernetes scheduler tries to find a node that meets the defined rule for scheduling. However, if there is no such node, Kubernetes schedules the Pod on a different node in the cluster.
- LABEL_KEY: the node label key that serves as a location indicator and facilitates even Pod distribution across the cluster, for example, nodetype.
- OPERATOR_VALUE: represents a key's relationship to a set of values. Set the parameter to one of the following:
  - In: the values array must be non-empty.
  - NotIn: the values array must be non-empty.
  - Exists: the values array must be empty.
  - DoesNotExist: the values array must be empty.
  - Gt: the values array must have a single element, which is interpreted as an integer.
  - Lt: the values array must have a single element, which is interpreted as an integer.
After making the changes, save the declaration file. The AlloyDB Omni Kubernetes operator automatically applies the changes to your PgBouncer deployment.
Schedule with node affinity
You can use node affinity to control where PgBouncer pods are scheduled, providing more precise placement than regular scheduling. Node affinity ensures that PgBouncer pods are placed on nodes that meet specific criteria, such as having particular labels.
To specify node affinity for PgBouncer pods, modify the PgBouncer custom resource to add the schedulingconfig section to the podSpec section:
apiVersion: alloydbomni.dbadmin.goog/v1
kind: PgBouncer
metadata:
name: PGBOUNCER_NAME
spec:
# ... other PgBouncer configuration ...
podSpec:
# ... other podSpec settings ...
schedulingconfig:
nodeaffinity:
NODE_AFFINITY_TYPE:
nodeSelectorTerms:
- matchExpressions:
- key: LABEL_KEY
operator: OPERATOR_VALUE
values:
- pgbouncer
Replace the following:
- NODE_AFFINITY_TYPE: Set the parameter to one of the following:
  - requiredDuringSchedulingIgnoredDuringExecution: Kubernetes schedules the Pod based exactly on the defined rules.
  - preferredDuringSchedulingIgnoredDuringExecution: The Kubernetes scheduler tries to find a node that meets the defined rule for scheduling. However, if there is no such node, Kubernetes schedules the Pod on a different node in the cluster.
- LABEL_KEY: The node label key, for example, disktype for a node labeled disktype=ssd.
- OPERATOR_VALUE: Represents a key's relationship to a set of values. Set the parameter to one of the following:
  - In: The node label value must match a value in the values array.
  - NotIn: The node label value must not match any value in the values array.
  - Exists: The label LABEL_KEY must exist on the node. The values array must be empty.
  - DoesNotExist: The label LABEL_KEY must not exist on the node. The values array must be empty.
  - Gt: The node label value must be greater than the single value in the values array, which is interpreted as an integer.
  - Lt: The node label value must be less than the single value in the values array, which is interpreted as an integer.
The following example illustrates scheduling PgBouncer pods on nodes labeled nodetype=pgbouncer:
apiVersion: alloydbomni.dbadmin.goog/v1
kind: PgBouncer
metadata:
name: mypgbouncer
spec:
# ... other PgBouncer configuration ...
podSpec:
# ... other podSpec settings ...
schedulingconfig:
nodeaffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: nodetype
operator: In
values:
- pgbouncer
In this example, the requiredDuringSchedulingIgnoredDuringExecution setting ensures that PgBouncer pods are only scheduled on nodes with the label nodetype=pgbouncer.
Delete the PgBouncer resource
To delete a PgBouncer custom resource, run the following command:
kubectl delete pgbouncers.alloydbomni.dbadmin.goog PGBOUNCER_NAME -n NAMESPACE

The output is similar to the following:
pgbouncer.alloydbomni.dbadmin.goog "mypgbouncer" deleted
View PgBouncer logs
You can view and analyze logs of any PgBouncer replica instance in your AlloyDB Omni deployment on Kubernetes as follows:
Get a list of all PgBouncer Pods in your namespace:
kubectl get pods -n NAMESPACE

The output is similar to the following:

NAME                                          READY   STATUS    RESTARTS   AGE
al-092d-dbcluster-sample-0                    3/3     Running   0          3d1h
mypgbouncer-pgb-deployment-659869f95c-4kbgv   1/1     Running   0          27m

View logs for a particular Pod:

kubectl logs -f POD_NAME -n NAMESPACE

The output is similar to the following:

2025-01-21 06:57:39.549 UTC [7] LOG kernel file descriptor limit: 1048576 (hard: 1048576); max_client_conn: 800, max expected fd use: 812
2025-01-21 06:57:39.550 UTC [7] LOG listening on 0.0.0.0:6432
2025-01-21 06:57:39.550 UTC [7] LOG listening on [::]:6432
2025-01-21 06:57:39.550 UTC [7] LOG listening on unix:/tmp/.s.PGSQL.6432
2025-01-21 06:57:39.550 UTC [7] LOG process up: PgBouncer 1.23.0, libevent 2.1.12-stable (epoll), adns: evdns2, tls: OpenSSL 3.0.13 30 Jan 2024
2025-01-21 06:58:17.012 UTC [7] LOG C-0x55f2b1b322a0: (nodb)/(nouser)@10.138.15.215:48682 registered new auto-database: alloydbadmin
2025-01-21 06:58:17.012 UTC [7] LOG S-0x55f2b1b4ecb0: alloydbadmin/alloydbpgbouncer@10.138.0.48:5432 new connection to server (from 10.12.1.113:53156)
2025-01-21 06:58:17.042 UTC [7] LOG S-0x55f2b1b4ecb0: alloydbadmin/alloydbpgbouncer@10.138.0.48:5432 SSL established: TLSv1.3/TLS_AES_256_GCM_SHA384/ECDH=prime256v1
2025-01-21 06:58:17.052 UTC [7] LOG C-0x55f2b1b322a0: pgbouncer/statsuser@10.138.15.215:48682 login attempt: db=pgbouncer user=statsuser tls=TLSv1.3/TLS_AES_256_GCM_SHA384 replication=no
2025-01-21 06:58:19.526 UTC [7] LOG C-0x55f2b1b322a0: pgbouncer/statsuser@10.138.15.215:48682 closing because: client close request (age=2s)
2025-01-21 06:58:20.344 UTC [7] LOG C-0x55f2b1b322a0: pgbouncer/statsuser@10.138.15.215:46796 login attempt: db=pgbouncer user=statsuser tls=TLSv1.3/TLS_AES_256_GCM_SHA384 replication=no
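To focus on problems rather than routine connection messages, you can filter the log stream with standard shell tools. This is a generic sketch, not an operator-specific feature:

kubectl logs POD_NAME -n NAMESPACE | grep -E "WARNING|ERROR|FATAL"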
Monitor PgBouncer performance and activity
To view PgBouncer connection pool metrics, you must use a dedicated statsuser to access the PgBouncer internal stats database. The statsuser account is used for authentication when you connect to the PgBouncer database.
Connect to your AlloyDB Omni database as a superuser or a user with the CREATE ROLE privilege using the psql client:

export PGPASSWORD="ChangeMe123"; export PGSSLMODE="require"; psql -h HOST -d postgres -U postgres -p PORT

The output is similar to the following:
psql (16.6 (Ubuntu 16.6-0ubuntu0.24.04.1), server 15.7)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

Create the statsuser in your AlloyDB Omni database:

postgres=# CREATE USER "statsuser" WITH PASSWORD 'tester';

The output is similar to the following:

CREATE ROLE
postgres=#

Connect to the PgBouncer database as the statsuser, using the password that you set for it:

export PGPASSWORD="tester"; export PGSSLMODE="require"; psql -h HOST -d pgbouncer -U statsuser -p PORT

The output is similar to the following:
psql (16.6 (Ubuntu 16.6-0ubuntu0.24.04.1), server 1.23.0/bouncer)
WARNING: psql major version 16, server major version 1.23.
         Some psql features might not work.
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

Run the SHOW STATS command to view PgBouncer performance and identify potential issues:

pgbouncer=# SHOW STATS;

The output is similar to the following:
 database     | total_server_assignment_count | total_xact_count | total_query_count | total_received | total_sent | total_xact_time | total_query_time | total_wait_time | avg_server_assignment_count | avg_xact_count | avg_query_count | avg_recv | avg_sent | avg_xact_time | avg_query_time | avg_wait_time
--------------+-------------------------------+------------------+-------------------+----------------+------------+-----------------+------------------+-----------------+-----------------------------+----------------+-----------------+----------+----------+---------------+----------------+---------------
 alloydbadmin | 1 | 0 | 0 | 0 | 330 | 0 | 0 | 41730 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 41730
 pgbouncer    | 0 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
(2 rows)
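The PgBouncer admin console also supports other inspection commands, such as SHOW POOLS, SHOW CLIENTS, and SHOW SERVERS. For example, SHOW POOLS reports, for each database and user, the number of active and waiting client connections and the active and idle server connections, which helps you spot pool saturation:

pgbouncer=# SHOW POOLS;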
Configure network access to your AlloyDB Omni instance
To ensure the security of your AlloyDB Omni deployment on Kubernetes, you can limit network access to the PgBouncer connection pooler by defining specific IP address ranges using CIDR (Classless Inter-Domain Routing) notation.
CIDR notation combines an IP address with a prefix length. For example, the CIDR block 192.168.1.0/24 specifies 192.168.1.0 as the address and 24 as the prefix length. The prefix length specifies the number of bits of the IP address that are used for the network portion and defines the subnet mask.
In your deployment's declaration file, locate the serviceOptions section and add or modify the loadBalancerSourceRanges setting:
serviceOptions:
type: "LoadBalancer"
loadBalancerSourceRanges:
- "CIDR_BLOCK"
Replace CIDR_BLOCK with a CIDR string that specifies the IP address ranges allowed for incoming connections to the LoadBalancer service. The loadBalancerSourceRanges setting filters incoming network traffic and controls which clients can access your database instance.
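For example, to allow incoming connections only from the 192.168.1.0/24 subnet described earlier and from a single host at 203.0.113.10, the section might look like the following. The address ranges are illustrative:

serviceOptions:
  type: "LoadBalancer"
  loadBalancerSourceRanges:
  - "192.168.1.0/24"
  - "203.0.113.10/32"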
To verify that your loadBalancerSourceRanges configuration is applied correctly, use the following command to check the external IP address assigned to your LoadBalancer service:
kubectl get svc -n NAMESPACE -w
Replace NAMESPACE with the namespace where your AlloyDB Omni deployment is located.
The output is similar to the following:
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
al-mypgbouncer-pgb-svc   LoadBalancer   34.118.230.149   10.138.0.26   6432:31367/TCP   3m39s
After the LoadBalancer is ready, the EXTERNAL-IP column is populated with the external IP address assigned to the LoadBalancer service.
Connections to this external IP are filtered based on the loadBalancerSourceRanges setting.
After verifying the external IP, test connections from applications within your allowed CIDR ranges to ensure they can connect.
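One quick way to test reachability from a machine in an allowed range is the pg_isready utility that ships with the PostgreSQL client tools. The address comes from the sample output above, and 6432 is the PgBouncer listen port shown in the sample logs; both values are illustrative:

pg_isready -h 10.138.0.26 -p 6432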
Disable the PgBouncer connection pooler
If you need to deactivate or roll back the PgBouncer connection pooler, follow these steps:
Check the status of the connection pooler:
kubectl get deployment fleet-controller-manager --namespace alloydb-omni-system -o json | jq '.spec.template.spec.containers[0].args'

This command shows the arguments that are passed to the primary controller for managing the AlloyDB Omni fleet. Find the --enable-pgbouncer option in the args section of the containers array:

...
spec:
  containers:
  - name: fleet-controller-manager
    image: ...
    args:
    - "--health-probe-bind-address=:8081"
    - "--metrics-bind-address=127.0.0.1:8080"
    - "--leader-elect"
    - "--image-registry=gcr.io"
    - "--data-plane-image-repository=alloydb-omni-staging"
    - "--control-plane-agents-image-repository=aedi-gbox"
    - "--control-plane-agents-tag=jan19v3"
    - "--additional-db-versions-for-test-only=latest"
    - "--enable-multiple-backup-solutions=true"
    - "--enable-pgbouncer=true"

Edit the configuration of the controller deployment to disable the connection pooler:
kubectl edit deployment fleet-controller-manager --namespace alloydb-omni-system

In the deployment manifest, change the value of the --enable-pgbouncer option from true to false:

...
- "--enable-pgbouncer=false"

Save the file and exit the editor. kubectl automatically applies the change.
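To confirm that the change took effect, you can rerun the command from the first step and check that the argument now reads --enable-pgbouncer=false:

kubectl get deployment fleet-controller-manager --namespace alloydb-omni-system -o json | jq '.spec.template.spec.containers[0].args'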