Connect from Cloud Run functions

This page contains information and examples for connecting to a Cloud SQL instance from a service running in Cloud Run functions.

For step-by-step instructions on running a Cloud Run functions sample web application connected to Cloud SQL, see the quickstart for connecting from Cloud Run functions.

Cloud SQL is a fully managed database service that helps you set up, maintain, manage, and administer your relational databases in the cloud.

Cloud Run functions is a lightweight compute solution for developers to create single-purpose, standalone functions that respond to Cloud events without the need to manage a server or runtime environment.

Set up a Cloud SQL instance

  1. Enable the Cloud SQL Admin API in the Google Cloud project that you are connecting from, if you haven't already done so:

    Enable the API

  2. Create a Cloud SQL for SQL Server instance. We recommend that you choose a Cloud SQL instance location in the same region as your Cloud Run service for better latency, to avoid some networking costs, and to reduce cross-region failure risks.

    By default, Cloud SQL assigns a public IP address to a new instance. You also have the option to assign a private IP address. For more information about the connectivity options for both, see the Connecting Overview page.

Configure Cloud Run functions

The steps to configure Cloud Run functions depend on the type of IP address that you assigned to your Cloud SQL instance.

Public IP (default)

Cloud Run functions supports connecting to Cloud SQL for SQL Server over public IP using the Go, Java, and Python connectors.

To configure Cloud Run functions to enable connections to a Cloud SQL instance:
  • Confirm that the instance created above has a public IP address. You can confirm this on the Overview page for the instance in the Google Cloud console. If you need to add a public IP address, see Configure public IP.
  • Get the instance's INSTANCE_CONNECTION_NAME. This value is available:
    • On the Overview page for the instance, in the Google Cloud console, or
    • By running the following command: gcloud sql instances describe [INSTANCE_NAME]
  • Configure the service account for your function. If the authorizing service account belongs to a different project than the Cloud SQL instance, enable the Cloud SQL Admin API and add the IAM permissions listed below in both projects. Confirm that the service account has the appropriate Cloud SQL roles and permissions to connect to Cloud SQL.
    • To connect to Cloud SQL, the service account needs one of the following IAM roles:
      • Cloud SQL Client (preferred)
      • Cloud SQL Editor
      • Cloud SQL Admin
      Or, you can manually assign the following IAM permissions:
      • cloudsql.instances.connect
      • cloudsql.instances.get
  • If you're using Cloud Run functions and not Cloud Run functions (1st gen), the following are required (also see Configure Cloud Run):
    1. Initially deploy your function.
      When you first begin creating a Cloud Run function in the Google Cloud console, the underlying Cloud Run service hasn't been created yet. You can't configure a Cloud SQL connection until that service is created (by deploying the Cloud Run function).
    2. In the Google Cloud console, in the upper right of the Function details page, under Powered by Cloud Run, click the link to access the underlying Cloud Run service.
    3. On the Cloud Run Service details page, select the Edit and deploy new revision tab.
    4. Follow the standard steps (as in the case of any configuration change) for setting a new configuration for a Cloud SQL connection.
      This creates a new Cloud Run revision, and subsequent revisions automatically receive this Cloud SQL connection, unless you explicitly change it.

Private IP

If the authorizing service account belongs to a different project than the one containing the Cloud SQL instance, do the following:

  • In both projects, enable the Cloud SQL Admin API.
  • For the service account in the project that contains the Cloud SQL instance, add the IAM permissions.

A Serverless VPC Access connector uses private IP addresses to handle communication to your VPC network. To connect directly with private IP addresses, you must do the following:
  1. Make sure that the Cloud SQL instance created previously has a private IP address. If you need to add one, see Configure private IP for instructions.
  2. Create a Serverless VPC Access connector in the same VPC network as your Cloud SQL instance. Note the following conditions:
    • Unless you're using Shared VPC, your connector must be in the same project and region as the resource that uses it, but it can send traffic to resources in different regions.
    • Serverless VPC Access supports communication to VPC networks connected using Cloud VPN and VPC Network Peering.
    • Serverless VPC Access doesn't support legacy networks.
  3. Configure Cloud Run functions to use the connector.
  4. Connect using your instance's private IP address and port 1433.
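
For example, here is a minimal sketch of step 4, assuming the private IP address and database credentials are supplied to the function through environment variables (the variable names are placeholders); a fuller TCP example appears under Connect with TCP later on this page.

import os

import pytds


def connect_private_ip() -> pytds.Connection:
    """Opens a direct TCP connection to the instance's private IP on port 1433."""
    return pytds.connect(
        os.environ["INSTANCE_HOST"],  # the instance's private IP address
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
        database=os.environ["DB_NAME"],
        port=1433,
    )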

Connect to Cloud SQL

After you configure Cloud Run functions, you can connect to your Cloud SQL instance.

Public IP (default)

For public IP paths, Cloud Run functions provides encryption and connects using the Cloud SQL connectors.

Connect with Cloud SQL connectors

The Cloud SQL connectors are language-specific libraries that provide encryption and IAM-based authorization when connecting to a Cloud SQL instance.

Python

To see this snippet in the context of a web application, view the README on GitHub.

import os

from google.cloud.sql.connector import Connector, IPTypes
import pytds

import sqlalchemy


def connect_with_connector() -> sqlalchemy.engine.base.Engine:
    """
    Initializes a connection pool for a Cloud SQL instance of SQL Server.

    Uses the Cloud SQL Python Connector package.
    """
    # Note: Saving credentials in environment variables is convenient, but not
    # secure - consider a more secure solution such as
    # Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
    # keep secrets safe.

    instance_connection_name = os.environ[
        "INSTANCE_CONNECTION_NAME"
    ]  # e.g. 'project:region:instance'
    db_user = os.environ.get("DB_USER", "")  # e.g. 'my-db-user'
    db_pass = os.environ["DB_PASS"]  # e.g. 'my-db-password'
    db_name = os.environ["DB_NAME"]  # e.g. 'my-database'

    ip_type = IPTypes.PRIVATE if os.environ.get("PRIVATE_IP") else IPTypes.PUBLIC

    connector = Connector(ip_type)

    connect_args = {}
    # If your SQL Server instance requires SSL, you need to download the CA
    # certificate for your instance and include cafile={path to downloaded
    # certificate} and validate_host=False. This is a workaround for a known issue.
    if os.environ.get("DB_ROOT_CERT"):  # e.g. '/path/to/my/server-ca.pem'
        connect_args = {
            "cafile": os.environ["DB_ROOT_CERT"],
            "validate_host": False,
        }

    def getconn() -> pytds.Connection:
        conn = connector.connect(
            instance_connection_name,
            "pytds",
            user=db_user,
            password=db_pass,
            db=db_name,
            **connect_args
        )
        return conn

    pool = sqlalchemy.create_engine(
        "mssql+pytds://",
        creator=getconn,
        # ...
    )
    return pool
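
The following is a minimal sketch, not part of the sample above, of how the returned pool might be used from a Cloud Run functions HTTP entry point. It assumes the Functions Framework for Python and reuses connect_with_connector from the sample above; the entry point name and the test query are illustrative.

import functions_framework
import sqlalchemy

# Keep the pool in a module-level variable so warm instances reuse it across
# invocations; create it lazily on the first request.
db = None


@functions_framework.http
def handle_request(request):
    global db
    if db is None:
        db = connect_with_connector()
    with db.connect() as conn:
        db_time = conn.execute(sqlalchemy.text("SELECT GETDATE()")).scalar()
    return f"Database time: {db_time}"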

Java

To see this snippet in the context of a web application, view the README on GitHub.

Note:

  • INSTANCE_CONNECTION_NAME should be represented as <MY-PROJECT>:<INSTANCE-REGION>:<INSTANCE-NAME>
  • See the JDBC socket factory version requirements for the pom.xml file here.


import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

public class ConnectorConnectionPoolFactory extends ConnectionPoolFactory {

  // Note: Saving credentials in environment variables is convenient, but not
  // secure - consider a more secure solution such as
  // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
  // keep secrets safe.
  private static final String INSTANCE_CONNECTION_NAME =
      System.getenv("INSTANCE_CONNECTION_NAME");
  private static final String DB_USER = System.getenv("DB_USER");
  private static final String DB_PASS = System.getenv("DB_PASS");
  private static final String DB_NAME = System.getenv("DB_NAME");

  public static DataSource createConnectionPool() {
    // The configuration object specifies behaviors for the connection pool.
    HikariConfig config = new HikariConfig();

    // The following is equivalent to setting the config options below:
    // jdbc:sqlserver://;user=<DB_USER>;password=<DB_PASS>;databaseName=<DB_NAME>;
    // socketFactoryClass=com.google.cloud.sql.sqlserver.SocketFactory;
    // socketFactoryConstructorArg=<INSTANCE_CONNECTION_NAME>

    // See the link below for more info on building a JDBC URL for the Cloud SQL JDBC Socket Factory
    // https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory#creating-the-jdbc-url

    // Configure which instance and what database user to connect with.
    config
        .setDataSourceClassName("com.microsoft.sqlserver.jdbc.SQLServerDataSource");
    config.setUsername(DB_USER); // e.g. "root", "sqlserver"
    config.setPassword(DB_PASS); // e.g. "my-password"
    config.addDataSourceProperty("databaseName", DB_NAME);

    config.addDataSourceProperty("socketFactoryClass",
        "com.google.cloud.sql.sqlserver.SocketFactory");
    config.addDataSourceProperty("socketFactoryConstructorArg", INSTANCE_CONNECTION_NAME);

    // The Java Connector provides SSL encryption, so it should be disabled
    // at the driver level.
    config.addDataSourceProperty("encrypt", "false");

    // ... Specify additional connection properties here.
    // ...

    // Initialize the connection pool using the configuration object.
    return new HikariDataSource(config);
  }
}

Go

To see this snippet in the context of a web application, view the README on GitHub.

package cloudsql

import (
	"context"
	"database/sql"
	"fmt"
	"log"
	"net"
	"os"

	"cloud.google.com/go/cloudsqlconn"
	mssql "github.com/denisenkom/go-mssqldb"
)

type csqlDialer struct {
	dialer     *cloudsqlconn.Dialer
	connName   string
	usePrivate bool
}

// DialContext adheres to the mssql.Dialer interface.
func (c *csqlDialer) DialContext(ctx context.Context, network, addr string) (net.Conn, error) {
	var opts []cloudsqlconn.DialOption
	if c.usePrivate {
		opts = append(opts, cloudsqlconn.WithPrivateIP())
	}
	return c.dialer.Dial(ctx, c.connName, opts...)
}

func connectWithConnector() (*sql.DB, error) {
	mustGetenv := func(k string) string {
		v := os.Getenv(k)
		if v == "" {
			log.Fatalf("Fatal Error in connect_connector.go: %s environment variable not set.\n", k)
		}
		return v
	}
	// Note: Saving credentials in environment variables is convenient, but not
	// secure - consider a more secure solution such as
	// Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
	// keep secrets safe.
	var (
		dbUser                 = mustGetenv("DB_USER")                  // e.g. 'my-db-user'
		dbPwd                  = mustGetenv("DB_PASS")                  // e.g. 'my-db-password'
		dbName                 = mustGetenv("DB_NAME")                  // e.g. 'my-database'
		instanceConnectionName = mustGetenv("INSTANCE_CONNECTION_NAME") // e.g. 'project:region:instance'
		usePrivate             = os.Getenv("PRIVATE_IP")
	)

	dbURI := fmt.Sprintf("user id=%s;password=%s;database=%s;", dbUser, dbPwd, dbName)
	c, err := mssql.NewConnector(dbURI)
	if err != nil {
		return nil, fmt.Errorf("mssql.NewConnector: %w", err)
	}
	dialer, err := cloudsqlconn.NewDialer(context.Background())
	if err != nil {
		return nil, fmt.Errorf("cloudsqlconn.NewDialer: %w", err)
	}
	c.Dialer = &csqlDialer{
		dialer:     dialer,
		connName:   instanceConnectionName,
		usePrivate: usePrivate != "",
	}

	// sql.OpenDB doesn't return an error; any connection problems surface
	// when the pool is first used.
	dbPool := sql.OpenDB(c)
	return dbPool, nil
}

Private IP

For private IP paths, your application connects directly to your instance through a VPC network. This method uses TCP to connect directly to the Cloud SQL instance without using the Cloud SQL Auth Proxy.

Connect with TCP

Connect using the private IP address of your Cloud SQL instance as the host and port 1433.

Python

To see this snippet in the context of a web application, view the README on GitHub.

import os

import sqlalchemy


def connect_tcp_socket() -> sqlalchemy.engine.base.Engine:
    """Initializes a TCP connection pool for a Cloud SQL instance of SQL Server."""
    # Note: Saving credentials in environment variables is convenient, but not
    # secure - consider a more secure solution such as
    # Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
    # keep secrets safe.
    db_host = os.environ[
        "INSTANCE_HOST"
    ]  # e.g. '127.0.0.1' ('172.17.0.1' if deployed to GAE Flex)
    db_user = os.environ["DB_USER"]  # e.g. 'my-db-user'
    db_pass = os.environ["DB_PASS"]  # e.g. 'my-db-password'
    db_name = os.environ["DB_NAME"]  # e.g. 'my-database'
    db_port = os.environ["DB_PORT"]  # e.g. 1433

    pool = sqlalchemy.create_engine(
        # Equivalent URL:
        # mssql+pytds://<db_user>:<db_pass>@<db_host>:<db_port>/<db_name>
        sqlalchemy.engine.url.URL.create(
            drivername="mssql+pytds",
            username=db_user,
            password=db_pass,
            database=db_name,
            host=db_host,
            port=db_port,
        ),
        # ...
    )

    return pool
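
As a quick usage sketch (not part of the sample), the returned engine can be checked with a trivial query; the query shown is only illustrative.

import sqlalchemy

engine = connect_tcp_socket()
with engine.connect() as conn:
    # @@VERSION is a built-in SQL Server variable; any lightweight query works.
    print(conn.execute(sqlalchemy.text("SELECT @@VERSION")).scalar())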

Java

To see this snippet in the context of a web application, view the README on GitHub.

Note:

  • CLOUD_SQL_CONNECTION_NAME should be represented as <MY-PROJECT>:<INSTANCE-REGION>:<INSTANCE-NAME>
  • Using the argument ipTypes=PRIVATE will force the SocketFactory to connect with an instance's associated private IP
  • See the JDBC socket factory version requirements for the pom.xml file here.


import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

public class TcpConnectionPoolFactory extends ConnectionPoolFactory {

  // Note: Saving credentials in environment variables is convenient, but not
  // secure - consider a more secure solution such as
  // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
  // keep secrets safe.
  private static final String DB_USER = System.getenv("DB_USER");
  private static final String DB_PASS = System.getenv("DB_PASS");
  private static final String DB_NAME = System.getenv("DB_NAME");

  private static final String INSTANCE_HOST = System.getenv("INSTANCE_HOST");
  private static final String DB_PORT = System.getenv("DB_PORT");


  public static DataSource createConnectionPool() {
    // The configuration object specifies behaviors for the connection pool.
    HikariConfig config = new HikariConfig();

    // Configure which instance and what database user to connect with.
    config.setJdbcUrl(
        String.format("jdbc:sqlserver://%s:%s;databaseName=%s", INSTANCE_HOST, DB_PORT, DB_NAME));
    config.setUsername(DB_USER); // e.g. "root", "sqlserver"
    config.setPassword(DB_PASS); // e.g. "my-password"


    // ... Specify additional connection properties here.
    // ...

    // Initialize the connection pool using the configuration object.
    return new HikariDataSource(config);
  }
}

Node.js

To see this snippet in the context of a web application, view the README on GitHub.

const mssql = require('mssql');

// createTcpPool initializes a TCP connection pool for a Cloud SQL
// instance of SQL Server.
const createTcpPool = async config => {
  // Note: Saving credentials in environment variables is convenient, but not
  // secure - consider a more secure solution such as
  // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
  // keep secrets safe.
  const dbConfig = {
    server: process.env.INSTANCE_HOST, // e.g. '127.0.0.1'
    port: parseInt(process.env.DB_PORT), // e.g. 1433
    user: process.env.DB_USER, // e.g. 'my-db-user'
    password: process.env.DB_PASS, // e.g. 'my-db-password'
    database: process.env.DB_NAME, // e.g. 'my-database'
    options: {
      trustServerCertificate: true,
    },
    // ... Specify additional properties here.
    ...config,
  };
  // Establish a connection to the database.
  return mssql.connect(dbConfig);
};

Go

To see this snippet in the context of a web application, view the README on GitHub.

package cloudsql

import (
	"database/sql"
	"fmt"
	"log"
	"os"
	"strings"

	_ "github.com/denisenkom/go-mssqldb"
)

// connectTCPSocket initializes a TCP connection pool for a Cloud SQL
// instance of SQL Server.
func connectTCPSocket() (*sql.DB, error) {
	mustGetenv := func(k string) string {
		v := os.Getenv(k)
		if v == "" {
			log.Fatalf("Fatal Error in connect_tcp.go: %s environment variable not set.\n", k)
		}
		return v
	}
	// Note: Saving credentials in environment variables is convenient, but not
	// secure - consider a more secure solution such as
	// Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
	// keep secrets safe.
	var (
		dbUser    = mustGetenv("DB_USER")       // e.g. 'my-db-user'
		dbPwd     = mustGetenv("DB_PASS")       // e.g. 'my-db-password'
		dbTCPHost = mustGetenv("INSTANCE_HOST") // e.g. '127.0.0.1' ('172.17.0.1' if deployed to GAE Flex)
		dbPort    = mustGetenv("DB_PORT")       // e.g. '1433'
		dbName    = mustGetenv("DB_NAME")       // e.g. 'my-database'
	)

	dbURI := fmt.Sprintf("server=%s;user id=%s;password=%s;port=%s;database=%s;",
		dbTCPHost, dbUser, dbPwd, dbPort, dbName)


	// dbPool is the pool of database connections.
	dbPool, err := sql.Open("sqlserver", dbURI)
	if err != nil {
		return nil, fmt.Errorf("sql.Open: %w", err)
	}

	// ...

	return dbPool, nil
}

PHP

To see this snippet in the context of a web application, view the README on GitHub.

namespace Google\Cloud\Samples\CloudSQL\SQLServer;

use PDO;
use PDOException;
use RuntimeException;
use TypeError;

class DatabaseTcp
{
    public static function initTcpDatabaseConnection(): PDO
    {
        try {
            // Note: Saving credentials in environment variables is convenient, but not
            // secure - consider a more secure solution such as
            // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
            // keep secrets safe.
            $username = getenv('DB_USER'); // e.g. 'your_db_user'
            $password = getenv('DB_PASS'); // e.g. 'your_db_password'
            $dbName = getenv('DB_NAME'); // e.g. 'your_db_name'
            $instanceHost = getenv('INSTANCE_HOST'); // e.g. '127.0.0.1' ('172.17.0.1' for GAE Flex)

            // Connect using TCP
            $dsn = sprintf(
                'sqlsrv:server=%s;Database=%s',
                $instanceHost,
                $dbName
            );

            // Connect to the database
            $conn = new PDO(
                $dsn,
                $username,
                $password,
                # ...
            );
        } catch (TypeError $e) {
            throw new RuntimeException(
                sprintf(
                    'Invalid or missing configuration! Make sure you have set ' .
                        '$username, $password, $dbName, and $instanceHost (for TCP mode). ' .
                        'The PHP error was %s',
                    $e->getMessage()
                ),
                $e->getCode(),
                $e
            );
        } catch (PDOException $e) {
            throw new RuntimeException(
                sprintf(
                    'Could not connect to the Cloud SQL Database. Check that ' .
                        'your username and password are correct, that the Cloud SQL ' .
                        'proxy is running, and that the database exists and is ready ' .
                        'for use. For more assistance, refer to %s. The PDO error was %s',
                    'https://cloud.google.com/sql/docs/sqlserver/connect-external-app',
                    $e->getMessage()
                ),
                (int) $e->getCode(),
                $e
            );
        }

        return $conn;
    }
}

Best practices and other information

You can use the Cloud SQL Auth Proxy when testing your application locally. See the quickstart for using the Cloud SQL Auth Proxy for detailed instructions.

Connection Pools

Connections to underlying databases may be dropped, either by the database server itself, or by the infrastructure underlying Cloud Run functions. We recommend using a client library that supports connection pools that automatically reconnect broken client connections. Additionally, we recommend using a globally scoped connection pool to increase the likelihood that your function reuses the same connection for subsequent invocations of the function, and closes the connection naturally when the instance is evicted (auto-scaled down). For more detailed examples on how to use connection pools, see Managing database connections.
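
For example, with the SQLAlchemy engines shown earlier on this page, pool options such as pool_pre_ping and pool_recycle help the pool detect and replace broken connections. This is a sketch, and the option values are assumptions rather than recommendations; getconn stands in for a connection factory like the one in the connector sample.

import sqlalchemy

pool = sqlalchemy.create_engine(
    "mssql+pytds://",
    creator=getconn,     # connection factory, e.g. from the connector sample
    pool_pre_ping=True,  # validate a connection before handing it out
    pool_recycle=1800,   # replace connections older than 30 minutes
)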

Connection Limits

Cloud SQL imposes a maximum limit on concurrent connections, and these limits may vary depending on the database engine chosen (see Cloud SQL Quotas and Limits). We recommend using a connection pool with Cloud Run functions, but it is important to set the maximum number of connections to 1.

Where possible, you should take care to only initialize a connection pool for functions that need access to your database. Some connection pools will create connections preemptively, which can consume excess resources and count towards your connection limits. For this reason, it's recommended to use Lazy Initialization to delay the creation of a connection pool until needed, and only include the connection pool in functions where it's used.
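
Continuing that sketch, a SQLAlchemy pool can be capped at a single connection with pool_size and max_overflow; combining this with the lazy-initialization pattern shown after the Python connector sample keeps the pool out of functions that never touch the database. As before, getconn stands in for a connection factory and the values are illustrative.

import sqlalchemy

pool = sqlalchemy.create_engine(
    "mssql+pytds://",
    creator=getconn,   # connection factory, e.g. from the connector sample
    pool_size=1,       # at most one pooled connection per function instance
    max_overflow=0,    # never open connections beyond pool_size
)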

For more detailed examples on how to limit the number of connections, see Managing database connections.

API Quota Limits

Cloud Run functions provides a mechanism that connects using the Cloud SQL Auth Proxy, which uses the Cloud SQL Admin API. API quota limits apply to the Cloud SQL Auth Proxy. The Cloud SQL Admin API quota used is approximately two times the number of Cloud SQL instances configured times the total number of functions deployed. For example, 10 deployed functions that each connect to 2 Cloud SQL instances consume roughly 2 × 2 × 10 = 40 units of quota. You can set the maximum number of concurrent invocations to modify the expected API quota consumed. Cloud Run functions also imposes rate limits on the number of API calls allowed per 100 seconds.

What's next