# Connect to a cluster from outside its VPC

*Last updated (UTC): 2025-08-25.*

This page examines different ways to connect to an AlloyDB for PostgreSQL
cluster from outside its configured Virtual Private Cloud (VPC). It
assumes that you have already [created an AlloyDB
cluster](/alloydb/docs/cluster-create).

About external connections
--------------------------

Your AlloyDB cluster comprises a number of nodes within a
Google Cloud VPC.
When you create a cluster, you also [configure private services
access](/alloydb/docs/configure-connectivity#about_network_connectivity)
between one of your VPCs and the Google-managed VPC containing your new
cluster. This peered connection lets you use private IP addresses to
access resources on the cluster's VPC as if they were part of your own
VPC.

Situations exist where your application must connect to your cluster
from outside this connected VPC:

- Your application runs elsewhere within the Google Cloud ecosystem,
  outside of the VPC that you connected to your cluster through private
  services access.

- Your application runs on a VPC that exists outside of Google's network.

- Your application runs on-premises, on a machine located somewhere else
  on the public internet.

In all of these cases, you must set up an additional service to enable
this kind of external connection to your AlloyDB cluster.

Summary of external-connection solutions
----------------------------------------

We recommend two general solutions for making external connections,
depending on your needs:

- For project development or prototyping, or for a relatively low-cost
  production environment, [set up an intermediary virtual machine
  (VM)](#vm)---also known as a *bastion*---within your VPC.
  A variety of methods exist to use this intermediary VM as a secure
  connection point between an external application environment and your
  AlloyDB cluster.

- For production environments that require high availability, consider
  [establishing a permanent connection between the VPC and your
  application](#vpn) through either Cloud VPN or Cloud Interconnect.

The next several sections describe these external-connection solutions
in detail.

Connect through an intermediary VM
----------------------------------

To establish a connection to an AlloyDB cluster from outside its VPC
using open-source tools and a minimum of additional resources, run a
proxy service on [an intermediary VM set up within that
VPC](/alloydb/docs/connect-psql#create-vm). You can set up a new VM for
this purpose, or use a VM already running within your AlloyDB cluster's
VPC.

As a self-managed solution, using an intermediary VM generally costs
less and has a faster setup time than [using a Network Connectivity
product](#vpn). It also has downsides: the connection's availability,
security, and data throughput all depend on the intermediary VM, which
you must maintain as part of your project.

### Connect through IAP

Using [Identity-Aware Proxy (IAP)](/iap/docs/concepts-overview), you can
securely connect to your cluster without exposing the intermediary VM's
public IP address. You use a combination of firewall rules and Identity
and Access Management (IAM) to limit access through this route. This
makes IAP a good solution for non-production uses like development and
prototyping.

To set up IAP access to your cluster, follow these steps:

1. [Install Google Cloud CLI](/sdk/docs/install) on your external client.

2. [Prepare your project for IAP TCP forwarding](/iap/docs/using-tcp-forwarding#preparing_your_project_for_tcp_forwarding).

   When defining the new firewall rule, allow ingress TCP traffic to
   port `22` (SSH).
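   For example, a rule that allows SSH ingress from IAP's TCP-forwarding
   range (`35.235.240.0/20`) might look like the following. The rule
   name `allow-ssh-from-iap` and the `default` network are illustrative
   assumptions; adapt them to your VPC:

       gcloud compute firewall-rules create allow-ssh-from-iap \
           --network=default \
           --direction=INGRESS \
           --action=ALLOW \
           --rules=tcp:22 \
           --source-ranges=35.235.240.0/20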
   If you are using [your project's default
   network](/vpc/docs/vpc#default-network) with its [pre-populated
   `default-allow-ssh`
   rule](/vpc/docs/firewalls#more_rules_default_vpc) enabled, then you
   don't need to define an additional rule.

3. Set up port forwarding between your external client and the
   intermediary VM using [SSH through IAP](/iap/docs/using-tcp-forwarding#tunneling_ssh_connections).

       gcloud compute ssh my-vm \
           --tunnel-through-iap \
           --zone=ZONE_ID \
           --ssh-flag="-L PORT_NUMBER:ALLOYDB_IP_ADDRESS:5432"

   Replace the following:

   - ZONE_ID: The ID of the zone where the intermediary VM is
     located---for example, `us-central1-a`.
   - ALLOYDB_IP_ADDRESS: The IP address of the AlloyDB instance you want
     to connect to.
   - PORT_NUMBER: The local port on your external client that you want
     to forward to the instance.

   > **Note:** If you're using [managed connection pooling](/alloydb/docs/configure-managed-connection-pooling),
   > then change the port number from `5432` to `6432`. Apply this
   > change of port throughout the examples on this page.

4. Test your connection using [`psql`](/alloydb/docs/connect-psql) on
   your external client, having it connect to the local port you
   specified in the previous step.
   For example, to connect as the `postgres` user role to port `5432`:

       psql -h localhost -p 5432 -U USERNAME

   Replace the following:

   - USERNAME: The PostgreSQL user role that you want to connect as---for
     example, the default role `postgres`.

### Connect through a SOCKS proxy

Running a SOCKS service on the intermediary VM provides a flexible and
scalable connection to your AlloyDB cluster, with end-to-end encryption
provided by the AlloyDB Auth Proxy. With appropriate configuration, you
can make it suitable for production workloads.

This solution includes these steps:

1. Install, configure, and run a SOCKS server on the intermediary VM.
   One example is [Dante](https://www.inet.no/dante/), a popular
   open-source solution.

   Configure the server to bind to the VM's `ens4` network interface
   for both external and internal connections. Specify any port you
   want for internal connections.

2. [Configure your VPC's firewall](/vpc/docs/firewalls) to allow TCP
   traffic from the appropriate IP address or range to the SOCKS
   server's configured port.

3. Install [the AlloyDB Auth Proxy](/alloydb/docs/auth-proxy/overview)
   on the external client.

4.
   Run the AlloyDB Auth Proxy on your external client, with the
   `ALL_PROXY` environment variable set to the intermediary VM's IP
   address, and specifying the port that the SOCKS server uses.

   This example configures the AlloyDB Auth Proxy to connect to the
   database at `my-main-instance`, by way of a SOCKS server running at
   `198.51.100.1` on port `1080`:

       ALL_PROXY=socks5://198.51.100.1:1080 ./alloydb-auth-proxy \
           /projects/PROJECT_ID/locations/REGION_ID/clusters/CLUSTER_ID/instances/INSTANCE_ID

   If you are connecting from a peered VPC, you can use the intermediary
   VM's internal IP address; otherwise, use its external IP address.

5. Test your connection using [`psql`](/alloydb/docs/connect-psql) on
   your external client, having it connect to the port that the
   AlloyDB Auth Proxy listens on. For example, to connect as the
   `postgres` user role to port `5432`:

       psql -h IP_ADDRESS -p PORT_NUMBER -U USERNAME

### Connect through a PostgreSQL pooler

If you need to install and run the AlloyDB Auth Proxy on the
intermediary VM, instead of an external client, then you can enable
secure connections to it by pairing it with a *protocol-aware proxy*,
also known as a *pooler*. Popular open-source poolers for PostgreSQL
include [Pgpool-II](https://pgpool.net/) and
[PgBouncer](https://www.pgbouncer.org/).

In this solution, you run both the AlloyDB Auth Proxy and the pooler on
the intermediary VM. Your client or application can then securely
connect directly to the pooler over SSL, without the need to run any
additional services.
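As a minimal sketch of this arrangement, a PgBouncer configuration might
look like the following. It assumes the Auth Proxy listens locally on
port `5432`; the `mydb` alias and the certificate and auth-file paths
are hypothetical examples, not values prescribed by AlloyDB:

    [databases]
    ; The AlloyDB Auth Proxy runs on this same VM, listening locally.
    mydb = host=127.0.0.1 port=5432 dbname=postgres

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; Require TLS from external clients; hypothetical certificate paths.
    client_tls_sslmode = require
    client_tls_key_file = /etc/pgbouncer/server.key
    client_tls_cert_file = /etc/pgbouncer/server.crt

External clients then connect to the VM's address on port `6432` over
TLS, while PgBouncer forwards traffic to the locally running Auth Proxy.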
The pooler takes care of passing PostgreSQL queries along to your
AlloyDB cluster through the Auth Proxy.

Because every instance within an AlloyDB cluster has a separate internal
IP address, each proxy service can communicate with only one specific
instance: either the primary instance, the standby, or a read pool.
Therefore, you need to run a separate pooler service, with an
appropriately configured SSL certificate, for every instance in the
cluster.

Connect through Cloud VPN or Cloud Interconnect
-----------------------------------------------

For production work requiring high availability (HA), we recommend the
use of a Google Cloud
[Network Connectivity](/network-connectivity/docs/how-to/choose-product)
product: either [Cloud VPN](/network-connectivity/docs/vpn) or
[Cloud Interconnect](/network-connectivity/docs/interconnect), depending
on your external service's needs and network topology. You then
configure
[Cloud Router](/network-connectivity/docs/router/concepts/overview) to
advertise the appropriate routes.

While using a Network Connectivity product is a more involved process
than setting up an intermediary VM, this approach shifts the burdens of
uptime and availability from you to Google. In particular, HA VPN offers
a 99.99% availability SLA, making it appropriate for production
environments.

Network Connectivity solutions also free you from the need to maintain a
separate, secure VM as part of your application, avoiding the
single-point-of-failure risks inherent in that approach.

To start learning more about these solutions, see [Choosing a Network
Connectivity product](/network-connectivity/docs/how-to/choose-product).

What's next
-----------

- Learn more about [Private services access and on-premises
  connectivity](/vpc/docs/private-services-access#on-premises-connectivity)
  in Google Cloud VPCs.