Deploy a PostgreSQL vector database on GKE


This tutorial shows you how to deploy a PostgreSQL vector database cluster on Google Kubernetes Engine (GKE).

PostgreSQL comes with a range of modules and extensions that extend the database's functionality. In this tutorial, you install the pgvector extension on an existing PostgreSQL cluster deployed to GKE. The pgvector extension adds vector data types to PostgreSQL, which lets you store vectors in database tables, and it supports similarity searches through common SQL queries.
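
For illustration, the following minimal sketch shows the kind of SQL that pgvector enables. It is not part of this tutorial's deployment: the DATABASE_URL connection string, the items table, and the vector values are placeholders, and the sketch assumes a PostgreSQL instance that already has the pgvector extension available.

# Illustrative only: DATABASE_URL is a placeholder connection string for a
# PostgreSQL instance with the pgvector extension available.
psql "$DATABASE_URL" <<'SQL'
CREATE EXTENSION IF NOT EXISTS vector;
-- A vector(3) column; real embeddings typically have hundreds of dimensions.
CREATE TABLE IF NOT EXISTS items (id bigserial PRIMARY KEY, content text, embedding vector(3));
INSERT INTO items (content, embedding) VALUES ('example document', '[0.1, 0.2, 0.3]');
-- <=> is pgvector's cosine-distance operator; the closest rows are returned first.
SELECT content FROM items ORDER BY embedding <=> '[0.1, 0.2, 0.25]' LIMIT 2;
SQL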

You simplify the pgvector extension deployment by first deploying the CloudNativePG operator, because the operator provides a bundled version of the extension.

This tutorial is intended for cloud platform administrators and architects, ML engineers, and MLOps (DevOps) professionals interested in deploying PostgreSQL database clusters on GKE.

Objectives

In this tutorial, you learn how to:

  • Deploy GKE infrastructure for PostgreSQL.
  • Install pgvector extension on the PostgreSQL cluster deployed to GKE.
  • Deploy and configure the CloudNativePG PostgreSQL operator with Helm.
  • Upload a demo dataset and run search queries with Jupyter Notebook.

Costs

In this document, you use billable components of Google Cloud, including GKE and Compute Engine.

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.

Before you begin

In this tutorial, you use Cloud Shell to run commands. Cloud Shell is a shell environment for managing resources hosted on Google Cloud. It comes preinstalled with the Google Cloud CLI, kubectl, Helm, and Terraform command-line tools. If you don't use Cloud Shell, you must install the Google Cloud CLI.

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. Install the Google Cloud CLI.
  3. To initialize the gcloud CLI, run the following command:

    gcloud init
  4. Create or select a Google Cloud project.

    • Create a Google Cloud project:

      gcloud projects create PROJECT_ID

      Replace PROJECT_ID with a name for the Google Cloud project you are creating.

    • Select the Google Cloud project that you created:

      gcloud config set project PROJECT_ID

      Replace PROJECT_ID with your Google Cloud project name.

  5. Make sure that billing is enabled for your Google Cloud project.

  6. Enable the Cloud Resource Manager, Compute Engine, GKE, and IAM Service Account Credentials APIs:

    gcloud services enable cloudresourcemanager.googleapis.com compute.googleapis.com container.googleapis.com iamcredentials.googleapis.com
  7. Grant roles to your user account. Run the following command once for each of the following IAM roles: roles/compute.securityAdmin, roles/compute.viewer, roles/container.clusterAdmin, roles/container.admin, roles/iam.serviceAccountAdmin, roles/iam.serviceAccountUser

    gcloud projects add-iam-policy-binding PROJECT_ID --member="user:USER_IDENTIFIER" --role=ROLE
    • Replace PROJECT_ID with your project ID.
    • Replace USER_IDENTIFIER with the identifier for your user account. For example, myemail@example.com.
    • Replace ROLE with each individual role.
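
    Instead of repeating the command by hand, you can loop over the roles. The following is a minimal sketch that uses the same PROJECT_ID and USER_IDENTIFIER placeholders:

    # Grant each required role in a loop; replace the placeholders first.
    for ROLE in roles/compute.securityAdmin roles/compute.viewer \
        roles/container.clusterAdmin roles/container.admin \
        roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser; do
      gcloud projects add-iam-policy-binding PROJECT_ID \
          --member="user:USER_IDENTIFIER" --role="${ROLE}"
    done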

Set up your environment

To set up your environment with Cloud Shell, follow these steps:

  1. Set environment variables for your project, region, and a Kubernetes cluster resource prefix:

    export PROJECT_ID=PROJECT_ID
    export KUBERNETES_CLUSTER_PREFIX=postgres
    export REGION=us-central1
    
    • Replace PROJECT_ID with your Google Cloud project ID.

    This tutorial uses the us-central1 region.

  2. Clone the sample code repository from GitHub:

    git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
    
  3. Navigate to the postgres-pgvector directory:

    cd kubernetes-engine-samples/databases/postgres-pgvector
    

Create your cluster infrastructure

In this section, you run a Terraform script to create a private, highly available, regional GKE cluster to deploy your PostgreSQL database.

You can choose to deploy PostgreSQL using a Standard or Autopilot cluster. Each has its own advantages and different pricing models.

Autopilot

To deploy the Autopilot cluster infrastructure, run the following commands in the Cloud Shell:

export GOOGLE_OAUTH_ACCESS_TOKEN=$(gcloud auth print-access-token)
terraform -chdir=../postgresql-cloudnativepg/terraform/gke-autopilot init
terraform -chdir=../postgresql-cloudnativepg/terraform/gke-autopilot apply \
-var project_id=${PROJECT_ID} \
-var region=${REGION} \
-var cluster_prefix=${KUBERNETES_CLUSTER_PREFIX}

The commands use the following variables:

  • GOOGLE_OAUTH_ACCESS_TOKEN: set to the output of the gcloud auth print-access-token command, which retrieves an access token that authenticates interactions with various Google Cloud APIs.
  • PROJECT_ID, REGION, and KUBERNETES_CLUSTER_PREFIX: the environment variables that you defined in the Set up your environment section, passed to the corresponding Terraform variables for the Autopilot cluster that you are creating.

When prompted, type yes.

Terraform creates the following resources:

  • A custom VPC network and private subnet for the Kubernetes nodes.
  • A Cloud Router to access the internet through Network Address Translation (NAT).
  • A private GKE cluster in the us-central1 region.
  • A ServiceAccount with logging and monitoring permissions for the cluster.
  • Google Cloud Managed Service for Prometheus configuration for cluster monitoring and alerting.

The output is similar to the following:

...
Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
...

Standard

To deploy the Standard cluster infrastructure, run the following commands in the Cloud Shell:

export GOOGLE_OAUTH_ACCESS_TOKEN=$(gcloud auth print-access-token)
terraform -chdir=../postgresql-cloudnativepg/terraform/gke-standard init
terraform -chdir=../postgresql-cloudnativepg/terraform/gke-standard apply \
-var project_id=${PROJECT_ID} \
-var region=${REGION} \
-var cluster_prefix=${KUBERNETES_CLUSTER_PREFIX}

The commands use the following variables:

  • GOOGLE_OAUTH_ACCESS_TOKEN: set to the output of the gcloud auth print-access-token command, which retrieves an access token that authenticates interactions with various Google Cloud APIs.
  • PROJECT_ID, REGION, and KUBERNETES_CLUSTER_PREFIX: the environment variables that you defined in the Set up your environment section, passed to the corresponding Terraform variables for the Standard cluster that you are creating.

When prompted, type yes. It might take several minutes for these commands to complete and for the cluster to show a ready status.

Terraform creates the following resources:

  • A custom VPC network and private subnet for the Kubernetes nodes.
  • A Cloud Router to access the internet through Network Address Translation (NAT).
  • A private GKE cluster in the us-central1 region with autoscaling enabled (one to two nodes per zone).
  • A ServiceAccount with logging and monitoring permissions for the cluster.
  • Google Cloud Managed Service for Prometheus configuration for cluster monitoring and alerting.

The output is similar to the following:

...
Apply complete! Resources: 14 added, 0 changed, 0 destroyed.
...
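
Whichever cluster mode you chose, you can optionally confirm that the cluster reached the RUNNING state before you connect to it:

gcloud container clusters describe ${KUBERNETES_CLUSTER_PREFIX}-cluster \
    --region ${REGION} --project ${PROJECT_ID} --format='value(status)'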

Connect to the cluster

Configure kubectl to fetch credentials and communicate with your new GKE cluster:

gcloud container clusters get-credentials \
    ${KUBERNETES_CLUSTER_PREFIX}-cluster --region ${REGION} --project ${PROJECT_ID}
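
To verify that kubectl can reach the cluster, you can optionally run the following commands:

kubectl cluster-info
# On an Autopilot cluster, the node list might be short or empty until workloads are scheduled.
kubectl get nodes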

Deploy the CloudNativePG operator

Deploy the CloudNativePG operator to your Kubernetes cluster by using a Helm chart:

  1. Check the version of Helm:

    helm version
    

    Update the version if it's older than 3.13:

    curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    
  2. Add the CloudNativePG operator Helm Chart repository:

    helm repo add cnpg https://cloudnative-pg.github.io/charts
    
  3. Deploy the CloudNativePG operator using the Helm command-line tool:

    helm upgrade --install cnpg \
        --namespace cnpg-system \
        --create-namespace \
        cnpg/cloudnative-pg
    

    The output is similar to the following:

    Release "cnpg" does not exist. Installing it now.
    NAME: cnpg
    LAST DEPLOYED: Fri Oct 13 13:52:36 2023
    NAMESPACE: cnpg-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    ...
    
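
    Optionally, confirm that the operator Pod is running and that its custom resource definitions are installed. The Pod name is generated by Helm, so it differs between installations:

    kubectl get pods -n cnpg-system
    kubectl get crd clusters.postgresql.cnpg.io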

Deploy the PostgreSQL vector database

In this section, you deploy the PostgreSQL vector database.

  1. Create a namespace pg-ns for the database:

    kubectl create ns pg-ns
    
  2. Apply the manifest to deploy the PostgreSQL cluster. The cluster manifest enables the pgvector extension.

    kubectl apply -n pg-ns -f manifests/01-basic-cluster/postgreSQL_cluster.yaml
    

    The postgreSQL_cluster.yaml manifest defines the CloudNativePG Cluster resource:

    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: gke-pg-cluster
    spec:
      description: "Standard GKE PostgreSQL cluster"
      imageName: ghcr.io/cloudnative-pg/postgresql:16.2
      enableSuperuserAccess: true
      instances: 3
      startDelay: 300
      primaryUpdateStrategy: unsupervised
      postgresql:
        pg_hba:
          - host all all 10.48.0.0/20 md5
      bootstrap:
        initdb:
          postInitTemplateSQL:
            - CREATE EXTENSION IF NOT EXISTS vector;
          database: app
      storage:
        storageClass: premium-rwo
        size: 2Gi
      resources:
        requests:
          memory: "1Gi"
          cpu: "1000m"
        limits:
          memory: "1Gi"
          cpu: "1000m"
      affinity:
        enablePodAntiAffinity: true
        tolerations:
        - key: cnpg.io/cluster
          effect: NoSchedule
          value: gke-pg-cluster
          operator: Equal
        additionalPodAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.component
                  operator: In
                  values:
                  - "pg-cluster"
              topologyKey: topology.kubernetes.io/zone
      monitoring:
        enablePodMonitor: true
  3. Check the status of the cluster:

    kubectl get cluster -n pg-ns --watch
    

    Wait for the output to show a status of Cluster in healthy state before you move to the next step.
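
    Optionally, confirm that the vector extension was created in the app database. This sketch connects to the primary Pod with psql and assumes the default local authentication inside the CloudNativePG container:

    kubectl exec -n pg-ns "$(kubectl get pod -n pg-ns -l cnpg.io/cluster=gke-pg-cluster,role=primary -o name)" \
        -- psql -U postgres -d app -c '\dx vector'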

Upload demo dataset and run search queries with Jupyter Notebook

In this section, you upload vectors into a PostgreSQL table and run semantic search queries using SQL syntax.

In the following example, you use a dataset from a CSV file that contains a list of books in different genres. The pgvector extension serves as a search engine, and the Pod that you create serves as a client that queries the PostgreSQL database.

  1. Wait for the PostgreSQL primary Pod to be created and ready:

    while [[ $(kubectl get pod -l cnpg.io/cluster=gke-pg-cluster,role=primary -n pg-ns -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]]; do
    sleep 5
    done
    
  2. Create the ConfigMaps with the books-dataset and the notebook, and run the Jupyter Pod to interact with your PostgreSQL cluster:

    kubectl create -n pg-ns configmap books-dataset --from-file=manifests/02-notebook/dataset.csv
    kubectl create -n pg-ns configmap notebook --from-file=manifests/02-notebook/vector-database.ipynb
    kubectl apply -n pg-ns -f manifests/02-notebook/jupyter.yaml
    
    • The Secret named gke-pg-cluster-superuser that is created by the CloudNativePG operator is exposed in the client Pod as the environment variables CLIENTUSERNAME and CLIENTPASSWORD.
    • The books-dataset ConfigMap contains a csv file with book data for the PostgreSQL database.
    • The notebook ConfigMap contains the Jupyter notebook with Python code that creates the PostgreSQL table from books-dataset.

    The jupyter.yaml manifest describes the notebook Deployment and its Service:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels: &labels
        app: jupyter-notebook
      name: notebook
    spec:
      ports:
      - port: 8888
      selector: *labels
      type: LoadBalancer
      # type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: notebook
      labels: &labels
        app: jupyter-notebook
    spec:
      selector:
        matchLabels: *labels
      template:
        metadata: 
          labels: *labels
        spec:
          containers:
          - name: jupyter
            image: tensorflow/tensorflow:2.15.0-jupyter
            resources:
              requests:
                memory: "4500Mi"
                cpu: "1"
              limits:
                memory: "4500Mi"
                cpu: "1"
            ports:
            - containerPort: 8888
            env:
            - name: CLIENTPASSWORD
              valueFrom:
                secretKeyRef:
                  name: gke-pg-cluster-superuser
                  key: password
            - name: CLIENTUSERNAME
              valueFrom:
                secretKeyRef:
                  name: gke-pg-cluster-superuser
                  key: username
            volumeMounts:
            - name: books-dataset
              mountPath: /usr/local/dataset
            - name: notebook
              mountPath: /tf
          volumes:
          - name: books-dataset
            configMap:
              name: books-dataset
          - name: notebook
            configMap:
              name: notebook
  3. Wait for GKE to start the Jupyter Pod:

    kubectl wait pods -l app=jupyter-notebook --for condition=Ready --timeout=300s -n pg-ns
    
  4. Get the URL with the access token to connect to Jupyter:

    export EXTERNAL_IP=$(kubectl -n pg-ns get svc notebook --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
    kubectl logs deploy/notebook -n pg-ns| grep '^ .*http://127'|sed "s|127.0.0.1|${EXTERNAL_IP}|"
    

    The output is similar to the following:

    http://34.123.21.1:8888/tree?token=a1d48d3531c48328695d6901004c94060aa0aa3554ff7463
    
  5. Open this URL and click the vector-database.ipynb file.

  6. Click Run > Run all cells. Jupyter executes the code and performs a search query for the text "drama about people and unhappy love".

    This query performs a semantic search against the documents table in PostgreSQL, retrieving at most the two results with the highest match scores for your query. You can also inspect the table directly, as shown after the example output.

    The output is similar to the following:

    Title: Romeo and Juliet, Author: William Shakespeare, Paul Werstine (Editor),
    Barbara A. Mowat (Editor), Paavo Emil Cajander (Translator)
    In Romeo and Juliet, Shakespeare creates a violent world, in which two young
    people fall in love. It is not simply that their families disapprove; the Montagues
    and the Capulets are engaged in a blood feud. In this death-filled setting, the
    movement from love at first sight to the lovers' final union in death seems
    almost inevitable. And yet, this play set in an extraordinary world has become
    the quintessential story of young love. In part because of its exquisite language,
    it is easy to respond as if it were about all young lovers.
    ---------
    Title: A Midsummer Night's Dream, Author: William Shakespeare, Paul Werstine (Editor),
    Barbara A. Mowat (Editor), Catherine Belsey (Contributor)
    Shakespeare's intertwined love polygons begin to get complicated from the start--Demetrius
    and Lysander both want Hermia but she only has eyes for Lysander. Bad news is,
    Hermia's father wants Demetrius for a son-in-law. On the outside is Helena,
    whose unreturned love burns hot for Demetrius. Hermia and Lysander plan to flee
    from the city under cover of darkness but are pursued by an enraged Demetrius
    (who is himself pursued by an enraptured Helena). In the forest, unbeknownst
    to the mortals, Oberon and Titania (King and Queen of the faeries) are having
    a spat over a servant boy. The plot twists up when Oberon's head mischief-maker,
    Puck, runs loose with a flower which causes people to fall in love with the
    first thing they see upon waking. Throw in a group of labourers preparing a
    play for the Duke's wedding (one of whom is given a donkey's head and Titania
    for a lover by Puck) and the complications become fantastically funny.
    ---------
    
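
    To look at the data directly, you can connect to the primary Pod and inspect the documents table that the notebook populated. This is a sketch; the exact column layout depends on the notebook code, and it assumes the same local authentication as the earlier psql check:

    kubectl exec -n pg-ns "$(kubectl get pod -n pg-ns -l cnpg.io/cluster=gke-pg-cluster,role=primary -o name)" \
        -- psql -U postgres -d app -c '\d documents' -c 'SELECT count(*) FROM documents;'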

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the project

The easiest way to avoid billing is to delete the project you created for this tutorial.

Delete a Google Cloud project:

gcloud projects delete PROJECT_ID

If you deleted the project, your clean up is complete. If you didn't delete the project, proceed to delete the individual resources.

Delete individual resources

  1. Set environment variables:

    export PROJECT_ID=PROJECT_ID
    export KUBERNETES_CLUSTER_PREFIX=postgres
    export REGION=us-central1

    • Replace PROJECT_ID with your Google Cloud project ID.
    
  2. Run the terraform destroy command:

    export GOOGLE_OAUTH_ACCESS_TOKEN=$(gcloud auth print-access-token)
    terraform -chdir=../postgresql-cloudnativepg/terraform/FOLDER destroy \
    -var project_id=${PROJECT_ID} \
    -var region=${REGION} \
    -var cluster_prefix=${KUBERNETES_CLUSTER_PREFIX}
    

    Replace FOLDER with either gke-autopilot or gke-standard, depending on the type of GKE cluster you created.

    When prompted, type yes.
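
    Optionally, confirm that the cluster was deleted by listing the clusters in your project; the output should no longer include ${KUBERNETES_CLUSTER_PREFIX}-cluster:

    gcloud container clusters list --project=${PROJECT_ID}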

What's next