Private Service Connect architecture and performance
This page explains how Private Service Connect works.
Private Service Connect is implemented by using software-defined
networking (SDN) from Google Cloud called
Andromeda
(PDF). Andromeda is the distributed control plane and data plane for
Google Cloud networking that enables networking for
Virtual Private Cloud (VPC) networks. The Andromeda networking fabric processes
packets on the physical servers that host VMs. As a result, the data plane is
fully distributed and has no centralized bottlenecks on intermediate proxies or
appliances.
Because Private Service Connect traffic is processed fully on the
physical hosts, it has significant performance benefits over a proxy-oriented
model:
- There are no additional bandwidth limits imposed by Private Service Connect. The combined bandwidth of the source and destination VM interfaces is effectively the bandwidth limit of Private Service Connect, as illustrated in the sketch after this list.
- Private Service Connect adds minimal latency to traffic. The traffic path is the same as for VM-to-VM traffic within a single VPC network. Network address translation is the only additional processing step, and it is performed entirely on the destination host.
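To make the bandwidth point concrete, here is a minimal sketch of one way to read it for a single flow. The function name and throughput figures are hypothetical illustrations, not published Google Cloud limits.

```python
# Conceptual sketch: because no proxy or appliance sits in the path,
# a Private Service Connect flow is bounded only by the VM network
# interfaces at each end. All numbers here are hypothetical, not
# published Google Cloud limits.

def psc_flow_bound_gbps(source_vm_egress_gbps: float,
                        destination_vm_ingress_gbps: float) -> float:
    """Upper bound on a single consumer-to-producer flow."""
    # There is no third term for an intermediate proxy or appliance
    # to contribute to this minimum.
    return min(source_vm_egress_gbps, destination_vm_ingress_gbps)

print(psc_flow_bound_gbps(32.0, 16.0))  # -> 16.0
```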
The following diagram shows a typical traffic path for
Private Service Connect traffic between a consumer
VPC network and a producer VPC network.
Physical hosts perform client load balancing to determine which target host
to send the traffic to.
From a logical perspective, there are consumer
Private Service Connect endpoints and producer load balancers.
However, from a physical perspective, traffic goes directly from the physical
server that hosts the client VM to the physical server that hosts the producer
load balancer VM.
Andromeda applies functions to Private Service Connect traffic as
shown in the following diagram:
- Client-side load balancing is applied on the source host (Host 1), which decides which target host to send the traffic to. This decision is based on location, load, and health.
- The inner packet from VPC1 is encapsulated in an Andromeda header with the destination network of VPC2.
- The destination host (Host 2) applies SNAT and DNAT to the packet, using the NAT subnet as the source IP address range of the packet and the producer load balancer IP address as the destination IP address. A conceptual sketch of all three steps follows this list.
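The following Python sketch models these three steps end to end. It is a conceptual illustration of the data path described above, not Andromeda's implementation; every class, field, and address is invented for the example.

```python
# Conceptual model of the three Private Service Connect data-path steps.
# All names and addresses are invented for illustration; this is not
# Andromeda's implementation.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Packet:
    src_ip: str  # client VM address in the consumer VPC
    dst_ip: str  # Private Service Connect endpoint address


@dataclass(frozen=True)
class TargetHost:
    name: str
    healthy: bool
    load: float    # 0.0 (idle) to 1.0 (saturated)
    distance: int  # lower is closer to the source host


@dataclass(frozen=True)
class EncapsulatedPacket:
    destination_network: str  # producer VPC named in the Andromeda header
    target_host: str          # physical host chosen by the source host
    inner: Packet


def pick_target_host(hosts: list[TargetHost]) -> TargetHost:
    """Step 1: client-side load balancing on the source host.

    The real decision weighs location, load, and health; this toy
    version drops unhealthy hosts and prefers close, lightly loaded ones.
    """
    return min((h for h in hosts if h.healthy),
               key=lambda h: (h.distance, h.load))


def encapsulate(packet: Packet, producer_network: str,
                target: TargetHost) -> EncapsulatedPacket:
    """Step 2: wrap the inner packet in an Andromeda header that names
    the destination VPC network."""
    return EncapsulatedPacket(producer_network, target.name, packet)


def apply_nat(encapsulated: EncapsulatedPacket, nat_subnet_ip: str,
              producer_lb_ip: str) -> Packet:
    """Step 3, on the destination host: SNAT the source to an address
    taken from the NAT subnet and DNAT the destination to the producer
    load balancer address."""
    return replace(encapsulated.inner,
                   src_ip=nat_subnet_ip, dst_ip=producer_lb_ip)


# Example flow: client 10.0.0.5 sends to endpoint 10.0.0.100; the packet
# arrives at the producer load balancer 192.168.0.10 with a NAT-subnet
# source address.
hosts = [TargetHost("host-2a", healthy=True, load=0.7, distance=1),
         TargetHost("host-2b", healthy=True, load=0.2, distance=1),
         TargetHost("host-2c", healthy=False, load=0.0, distance=1)]
pkt = Packet(src_ip="10.0.0.5", dst_ip="10.0.0.100")
enc = encapsulate(pkt, "vpc2", pick_target_host(hosts))
print(apply_nat(enc, nat_subnet_ip="10.10.0.7", producer_lb_ip="192.168.0.10"))
# Packet(src_ip='10.10.0.7', dst_ip='192.168.0.10')
```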
There are exceptions where traffic is processed by intermediate routing hosts,
such as inter-regional traffic or very small or intermittent traffic flows.
However, Andromeda dynamically offloads traffic flows to direct, host-to-host
networking whenever possible to optimize latency and throughput.
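As a rough illustration of that behavior, the sketch below shows how a fabric might keep new or sparse flows on intermediate routing hosts and offload sustained flows to direct paths. The threshold and class names are invented; Andromeda's actual heuristics are not public.

```python
# Conceptual sketch of dynamic flow offload. The threshold is invented
# for illustration; Andromeda's real heuristics are not public.

class FlowTable:
    """Tracks per-flow packet counts to decide when to offload."""

    def __init__(self, offload_threshold: int = 100):
        self._counts: dict[tuple[str, str], int] = {}
        self._threshold = offload_threshold

    def route(self, src_host: str, dst_host: str) -> str:
        """Return the path the next packet of this flow takes."""
        key = (src_host, dst_host)
        self._counts[key] = self._counts.get(key, 0) + 1
        # Small or intermittent flows never accumulate enough packets
        # to cross the threshold, so they stay on routing hosts.
        if self._counts[key] < self._threshold:
            return "intermediate-routing-host"
        # Sustained flows are offloaded to direct host-to-host
        # networking for lower latency and higher throughput.
        return "direct-host-to-host"
```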
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-03 UTC."],[],[],null,["# Private Service Connect architecture and performance\n====================================================\n\nThis page explains how Private Service Connect works.\n\nPrivate Service Connect is implemented by using software-defined\nnetworking (SDN) from Google Cloud called\n[Andromeda](https://www.usenix.org/system/files/conference/nsdi18/nsdi18-dalton.pdf)\n(PDF). Andromeda is the distributed control plane and data plane for\nGoogle Cloud networking that enables networking for\nVirtual Private Cloud (VPC) networks. The Andromeda networking fabric processes\npackets on the physical servers that host VMs. As a result, the data plane is\nfully distributed and has no centralized bottlenecks on intermediate proxies or\nappliances.\n\nBecause Private Service Connect traffic is processed fully on the\nphysical hosts, it has significant performance benefits over a proxy-oriented\nmodel:\n\n- **There are no additional bandwidth limits imposed by\n Private Service Connect.** The combined bandwidth of the source and destination VM interfaces is effectively the bandwidth limit of Private Service Connect.\n- **Private Service Connect adds minimal latency to traffic.** The traffic path is the same as VM-to-VM traffic within a single VPC network. Network address translation of traffic is the only additional traffic processing step which is done entirely on the destination host.\n\nThe following diagram shows a typical traffic path for\nPrivate Service Connect traffic between a consumer\nVPC network and a producer VPC network.\n[](/static/vpc/images/psc-architecture.svg) Physical hosts perform client load balancing to determine which target host to send the traffic to (click to enlarge).\n\nFrom a logical perspective, there are consumer\nPrivate Service Connect endpoints and producer load balancers.\nHowever, from a physical perspective traffic goes directly from the physical\nserver that hosts the client VM to the physical server that hosts the producer\nload balancer VM.\n\nAndromeda applies functions to Private Service Connect traffic as\nshown in the following diagram:\n\n- Client-side load balancing is applied on the source host (`Host 1`) which decides which target host to send the traffic to. 
This decision is based on location, load and health.\n- The inner packet from `VPC1` is encapsulated in an Andromeda header with the destination network of `VPC2`.\n- The destination host (`Host 2`) applies SNAT and DNAT to the packet, using the [NAT subnet](/vpc/docs/about-vpc-hosted-services#psc-subnets) as the source IP address range of the packet and the producer load balancer IP address as the destination IP address.\n\nThere are exceptions where traffic is processed by intermediate routing hosts,\nsuch as inter-regional traffic or very small or intermittent traffic flows.\nHowever, Andromeda dynamically offloads traffic flows for direct, host-to-host\nnetworking whenever possible to optimize for best latency and throughput.\n\nWhat's next\n-----------\n\n- Learn more about [Private Service Connect](/vpc/docs/private-service-connect).\n- View [Private Service Connect compatibility\n information](/vpc/docs/private-service-connect-compatibility)."]]