Private Service Connect architecture and performance

This page explains how Private Service Connect works.
Private Service Connect is implemented by using software-defined networking (SDN) from Google Cloud called [Andromeda](https://www.usenix.org/system/files/conference/nsdi18/nsdi18-dalton.pdf) (PDF). Andromeda is the distributed control plane and data plane for Google Cloud networking that enables networking for Virtual Private Cloud (VPC) networks. The Andromeda networking fabric processes packets on the physical servers that host VMs. As a result, the data plane is fully distributed and has no centralized bottlenecks on intermediate proxies or appliances.
Because Private Service Connect traffic is processed fully on the physical hosts, it has significant performance benefits over a proxy-oriented model:
- **There are no additional bandwidth limits imposed by Private Service Connect.** The combined bandwidth of the source and destination VM interfaces is effectively the bandwidth limit of Private Service Connect.
- **Private Service Connect adds minimal latency to traffic.** The traffic path is the same as VM-to-VM traffic within a single VPC network. Network address translation (NAT) of the traffic is the only additional traffic processing step, and it is done entirely on the destination host.
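The bandwidth property above can be illustrated with a small sketch. Because no proxy sits between the VMs, the effective Private Service Connect bandwidth is simply bounded by the slower of the two VM interfaces; the function name and the figures below are hypothetical, not actual Google Cloud limits.

```python
# Hypothetical illustration: Private Service Connect adds no extra
# bandwidth cap, so the effective limit is set by the VM interfaces
# themselves (the numbers below are made up for illustration).

def effective_psc_bandwidth_gbps(source_vm_egress_gbps: float,
                                 dest_vm_ingress_gbps: float) -> float:
    """Traffic can go no faster than the slower of the two VM interfaces."""
    return min(source_vm_egress_gbps, dest_vm_ingress_gbps)

print(effective_psc_bandwidth_gbps(32.0, 16.0))  # -> 16.0
```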
The following diagram shows a typical traffic path for Private Service Connect traffic between a consumer VPC network and a producer VPC network.

Physical hosts perform client-side load balancing to determine which target host to send the traffic to.
From a logical perspective, there are consumer Private Service Connect endpoints and producer load balancers. However, from a physical perspective, traffic goes directly from the physical server that hosts the client VM to the physical server that hosts the producer load balancer VM.
Andromeda applies functions to Private Service Connect traffic as shown in the following diagram:
- Client-side load balancing is applied on the source host (`Host 1`), which decides which target host to send the traffic to. This decision is based on location, load, and health.
- The inner packet from `VPC1` is encapsulated in an Andromeda header with the destination network of `VPC2`.
- The destination host (`Host 2`) applies SNAT and DNAT to the packet, using the [NAT subnet](/vpc/docs/about-vpc-hosted-services#psc-subnets) as the source IP address range of the packet and the producer load balancer IP address as the destination IP address.
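The three steps above can be sketched as a minimal simulation. Everything here is hypothetical (the `Packet` structure, addresses, and function names are invented for illustration); the real data plane runs inside Andromeda on the physical hosts, not in user code.

```python
# Hypothetical sketch of the three Andromeda processing steps:
# client-side load balancing, encapsulation, and SNAT/DNAT.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_ip: str
    vpc: str  # network the packet currently belongs to

def client_side_load_balance(healthy_backends: list) -> str:
    """Step 1 (on Host 1): pick a target host. The real decision uses
    location, load, and health; here we just take the first backend."""
    return healthy_backends[0]

def encapsulate(inner: Packet, dest_network: str) -> dict:
    """Step 2: wrap the inner packet in an Andromeda header that names
    the destination VPC network."""
    return {"andromeda_dest_network": dest_network, "inner": inner}

def apply_nat(encapsulated: dict, nat_subnet_ip: str, lb_ip: str) -> Packet:
    """Step 3 (on Host 2): SNAT the source to an address from the NAT
    subnet and DNAT the destination to the producer load balancer."""
    inner = encapsulated["inner"]
    return replace(inner, src_ip=nat_subnet_ip, dst_ip=lb_ip,
                   vpc=encapsulated["andromeda_dest_network"])

pkt = Packet(src_ip="10.0.0.5", dst_ip="10.0.1.10", vpc="VPC1")
target = client_side_load_balance(["host-2"])
delivered = apply_nat(encapsulate(pkt, "VPC2"),
                      nat_subnet_ip="192.168.100.7", lb_ip="10.20.0.100")
print(target, delivered)
```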
There are exceptions where traffic is processed by intermediate routing hosts, such as inter-regional traffic or very small or intermittent traffic flows. However, Andromeda dynamically offloads traffic flows to direct host-to-host networking whenever possible to optimize for the best latency and throughput.
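The offload behavior described above can be summarized as a simple decision rule. This is a hypothetical sketch only; the actual criteria and thresholds are internal to Andromeda.

```python
# Hypothetical sketch of Andromeda's path selection: inter-regional or
# very small/intermittent flows may traverse intermediate routing hosts,
# while sustained same-region flows are offloaded to direct
# host-to-host paths. The rule below is invented for illustration.

def traffic_path(same_region: bool, sustained_flow: bool) -> str:
    if same_region and sustained_flow:
        return "direct host-to-host"
    return "intermediate routing host"

print(traffic_path(True, True))   # -> direct host-to-host
print(traffic_path(False, True))  # -> intermediate routing host
```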
What's next

- Learn more about [Private Service Connect](/vpc/docs/private-service-connect).
- View [Private Service Connect compatibility information](/vpc/docs/private-service-connect-compatibility).

Last updated 2025-08-19 UTC.