Control egress and ingress traffic to a VPN tunnel on a per-project basis.
- By default, all projects deny incoming traffic from a VPN tunnel.
- By default, projects with data exfiltration protection enabled deny outgoing traffic to a VPN tunnel.
Use the following directions to change the default VPN traffic egress and ingress rules for a project.
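Before changing anything, it can help to see which policies currently apply to a project. This is a convenience sketch, not part of the documented procedure; it assumes the `ProjectNetworkPolicy` resource is exposed under the plural name `projectnetworkpolicies`:

```
# List the ProjectNetworkPolicy objects in a project's namespace.
# ORG_ADMIN_CLUSTER_KUBECONFIG and PROJECT_NAME are the same placeholders
# used in the steps below.
kubectl --kubeconfig ORG_ADMIN_CLUSTER_KUBECONFIG get projectnetworkpolicies -n PROJECT_NAME
```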
Configure ingress traffic
By default, all projects deny incoming traffic from a VPN tunnel. To allow a project to receive traffic from a VPN tunnel, use a `ProjectNetworkPolicy` object that targets the routes received over the Border Gateway Protocol (BGP) session used on the VPN tunnel:
Retrieve all received routes from the status of the `VPNBGPPeer` object in question:

```
kubectl --kubeconfig ORG_ADMIN_CLUSTER_KUBECONFIG get -n platform vpnbgppeer VPN_BGP_PEER_NAME -ojson | jq '.status.received'
```
Replace the following:

- `ORG_ADMIN_CLUSTER_KUBECONFIG`: the org admin cluster's kubeconfig path.
- `VPN_BGP_PEER_NAME`: the name of your VPN BGP session.

For more information, see Create a VPN BGP session.
The output looks like the following example:

```
[
  {
    "prefix": "192.168.100.0/24"
  }
]
```
Add all of the received routes from the `VPNBGPPeer` status to a `ProjectNetworkPolicy` object in the namespace of the project in question:

```
kubectl --kubeconfig ORG_ADMIN_CLUSTER_KUBECONFIG create -n PROJECT_NAME -f - <<EOF
apiVersion: networking.gdc.goog/v1alpha1
kind: ProjectNetworkPolicy
metadata:
  name: allow-ingress-vpn-traffic
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.100.0/24
EOF
```
Replace `PROJECT_NAME` with the name of your GDC project.
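If the `VPNBGPPeer` status lists more than one received route, add one `ipBlock` entry per prefix. The following `spec` fragment is a sketch; the second CIDR (`192.168.101.0/24`) is hypothetical, and you should substitute the prefixes from your own `VPNBGPPeer` status:

```
ingress:
- from:
  - ipBlock:
      cidr: 192.168.100.0/24
  # Hypothetical second received route.
  - ipBlock:
      cidr: 192.168.101.0/24
```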
Configure egress traffic
By default, a project with data exfiltration protection enabled denies outgoing traffic to a VPN tunnel. To allow a project to send traffic to a VPN tunnel, disable data exfiltration protection for the project. For more information, see Prevent data exfiltration.
Access the user VM
Hosts in the remote network with a VPN tunnel connection to a GDC organization can access the primary interface of organization user VMs, assuming egress and ingress traffic to the project containing the user VM is allowed.
Follow these steps to access the primary interface of the user VM:
Get the interfaces of the user VM by viewing its respective `VirtualMachine` object in the org admin cluster:

```
kubectl --kubeconfig ORG_ADMIN_CLUSTER_KUBECONFIG get -n PROJECT_NAME gvm VM_NAME -ojson | jq '.status.network'
```
Replace `VM_NAME` with the name of the `VirtualMachine` object.

The output looks like the following example:
[ { "ipAddresses": [ "172.16.19.189" ], "macAddress": "8a:fc:81:0b:41:dc", "name": "eth0" }, { "ipAddresses": [ "172.20.128.15/21" ], "macAddress": "56:1b:07:85:50:b3", "name": "eth1" } ] ```
Hosts in the remote network with a VPN tunnel connection to a GDC organization can access user VMs through the primary `eth0` interface:

```
/home/ubuntu# ssh -i ~/vm-access user@172.16.19.189
```
Note: For SSH access details, including how to retrieve a key for a VM, see Connect to a VM.
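If the SSH connection fails, it can help to confirm basic reachability over the tunnel first. A minimal sketch, assuming `ping` and `nc` are available on the remote host:

```
# Check that the VPN tunnel routes traffic to the VM's primary interface.
ping -c 3 172.16.19.189

# Check that the SSH port on the VM is reachable.
nc -vz 172.16.19.189 22
```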