To set a network policy for virtual machine (VM) workloads at the project
namespace level, use the ProjectNetworkPolicy resource, a multi-cluster
network policy for Google Distributed Cloud air-gapped (GDC). It lets
you define policies that allow communication within projects, between
projects, and to external IP addresses.
For traffic within a project, GDC applies a predefined project network policy, the intra-project policy, to each project by default. To enable and control traffic across projects within the same organization, define cross-project policies. When multiple policies are present, GDC combines their rules additively for each project: traffic is allowed if at least one rule matches.
Request permission and access
To perform the tasks on this page, you must have the Project NetworkPolicy Admin role. Ask your Project IAM Admin to grant you the Project NetworkPolicy Admin (`project-networkpolicy-admin`) role in the namespace of the project where the VM resides.
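If role grants in your organization are managed with kubectl, the grant might look like the following sketch. The binding name and user identity are placeholders, and your IAM workflow may differ; confirm the procedure with your Project IAM Admin.

```shell
# Hypothetical grant of the Project NetworkPolicy Admin role in the
# project namespace; your admin runs the equivalent for your identity.
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG create rolebinding \
  networkpolicy-admin-binding \
  --role=project-networkpolicy-admin \
  --user=USER_EMAIL \
  -n PROJECT_1
```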
Intra-project traffic
By default, VM workloads in a project namespace can communicate with each other without exposing services externally, even if the VMs are part of different clusters within the same project.
Ingress intra-project traffic network policy
When you create a project, GDC creates a default base ProjectNetworkPolicy on
the org admin cluster that allows intra-project communication. This policy allows
ingress traffic from other workloads in the same project. You can remove it, but
do so with caution: removing it denies both intra-project and container
workload communication.
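Before removing the default base policy, you can review the policies that exist in the project namespace. This is a sketch; the default policy's exact name may vary in your environment:

```shell
# Dump all project network policies in the project namespace,
# including the default base policy created with the project.
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG \
  get projectnetworkpolicy -n PROJECT_1 -o yaml
```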
Egress intra-project traffic network policy
By default, you don't need to take action for egress: in the absence of any egress policy, all egress traffic is allowed. However, as soon as you set a single egress policy, only the traffic that the policy specifies is allowed.
When you disable Data Exfiltration Prevention and apply an egress
ProjectNetworkPolicy to the project, for example to prevent access to an
external resource, you must also apply a policy that allows
intra-project egress:
```shell
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-intra-project-egress-traffic
spec:
  policyType: Egress
  subject:
    subjectType: UserWorkload
  egress:
  - to:
    - projects:
        matchNames:
        - PROJECT_1
EOF
```
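To confirm that the intra-project egress policy was created, you can retrieve it by name (a sketch using the name from the preceding example):

```shell
# Retrieve the policy applied above; an error here means it was not created.
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG get projectnetworkpolicy \
  allow-intra-project-egress-traffic -n PROJECT_1
```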
Cross-project (within org) traffic
VM workloads from different project namespaces but within the same organization can communicate with each other by applying a cross-project network policy.
Ingress cross-project traffic network policy
To allow project workloads to accept connections from workloads in another project, you must configure an ingress policy that allows workloads in the other project to initiate connections.
The following policy lets workloads in the PROJECT_1 project accept connections from workloads in the PROJECT_2 project, and allows the return traffic for the same flows. You can also use this policy to access your VM in PROJECT_1 from a source inside PROJECT_2.
Run the following command to apply the policy:
```shell
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-ingress-traffic-from-PROJECT_2
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - PROJECT_2
EOF
```
The preceding command allows PROJECT_2 to initiate connections to PROJECT_1, but doesn't allow connections initiated from PROJECT_1 to PROJECT_2.
For the latter, you need a reciprocal policy in the PROJECT_2 project. Run the following command to apply the reciprocal policy:
```shell
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_2
  name: allow-ingress-traffic-from-PROJECT_1
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - PROJECT_1
EOF
```
You've now permitted connections initiated in either direction between PROJECT_1 and PROJECT_2.
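To confirm that both policies are in place, you can list them by name (a sketch using the names from the preceding examples):

```shell
# Each command errors if the corresponding policy is missing.
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG get projectnetworkpolicy \
  allow-ingress-traffic-from-PROJECT_2 -n PROJECT_1
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG get projectnetworkpolicy \
  allow-ingress-traffic-from-PROJECT_1 -n PROJECT_2
```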
Use the following definitions for your variables.
Variable | Definition |
---|---|
ORG_ADMIN_KUBECONFIG | The org admin cluster kubeconfig path. |
PROJECT_1 | The name of a GDC project corresponding to PROJECT_1 in the example. |
PROJECT_2 | The name of a GDC project corresponding to PROJECT_2 in the example. |
Egress cross-project traffic network policy
When you apply an ingress cross-project traffic policy to let workloads in
one project, PROJECT_1, accept connections from workloads in another project,
PROJECT_2, the policy also allows the return traffic for the same flows.
Therefore, you don't need an egress cross-project traffic network policy.
Cross-organization traffic
Connecting a VM workload to a destination outside of your project that resides in a different organization requires explicit approval. That approval is necessary because of Data Exfiltration Prevention, which GDC enables by default and which prevents a project from sending egress traffic to workloads outside the organization where the project resides. This section shows how to add a specific egress policy and enable data exfiltration.
Ingress cross-organization traffic network policy
To allow ingress traffic across different organizations, you must apply a
ProjectNetworkPolicy that allows traffic from clients outside the organization
to your project, for example, to connect to the VM using SSH.
A corresponding egress policy is not required for the reply traffic; return traffic is implicitly allowed.
To access your VM in PROJECT_1 from a source outside the organization that the
VM resides in, apply the following policy. You must get the CIDR block that
contains your source IP address and specify it in network/len notation,
for example, 192.0.2.0/24.
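For example, to allow a single source host you can use a /32 prefix. This sketch only illustrates the notation; the address is an example from the TEST-NET-1 documentation range:

```shell
# Build a single-host CIDR from a source IP address.
SRC_IP="192.0.2.7"      # example source address
CIDR="${SRC_IP}/32"     # a /32 prefix matches exactly one host
echo "${CIDR}"          # prints 192.0.2.7/32
```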
Configure and apply your ingress ProjectNetworkPolicy, following the kubectl example. Apply the policy to all user workloads in PROJECT_1. It allows ingress traffic from all hosts in CIDR, which reside outside the organization.
Apply your ProjectNetworkPolicy configuration:
```shell
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-external-traffic
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - ipBlock:
        cidr: CIDR
EOF
```
Egress cross-organization traffic network policy
To enable data transfer out to services outside of the organization, customize
your project network policy, ProjectNetworkPolicy. Because Data
Exfiltration Prevention is enabled by default, your customized egress
ProjectNetworkPolicy shows a validation error in the status field, and the
dataplane ignores it. This behavior is by design.
As stated in
Security and connectivity,
workloads can transfer data out when you allow data exfiltration for a given
project. GDC applies source network address translation (NAT) to the permitted
traffic, using a known IP address allocated for the project.
Security and connectivity also provides details about project network policy
(ProjectNetworkPolicy) enforcement.
A corresponding ingress policy is not required for the reply traffic; return traffic is implicitly allowed.
Enable your customized egress policy:
Configure and apply your own customized egress ProjectNetworkPolicy, following the kubectl example. Apply the policy to all user workloads in PROJECT_1. It allows egress traffic to all hosts in CIDR, which reside outside the organization. Your first attempt causes a status error, which is intended.
Apply your ProjectNetworkPolicy configuration:
```shell
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-egress-traffic-to-NAME
spec:
  policyType: Egress
  subject:
    subjectType: UserWorkload
  egress:
  - to:
    - ipBlock:
        cidr: CIDR
EOF
```
When you finish, confirm that you see a validation error in the policy's status field.
Ask the administrator user to disable Data Exfiltration Prevention. This enables your configuration, while preventing all other egress.
Check the ProjectNetworkPolicy that you just created and verify that the error in the validation status field is gone and that the `Ready` status is `True`, indicating that your policy is in effect:
```shell
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG get projectnetworkpolicy allow-egress-traffic-to-NAME -n PROJECT_1 -o yaml
```
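To check only the readiness condition instead of reading the full YAML, you can query it with a JSONPath expression. This sketch assumes the policy reports a standard Kubernetes-style `Ready` condition in `status.conditions`:

```shell
# Prints "True" once the policy is in effect.
kubectl --kubeconfig ORG_ADMIN_KUBECONFIG \
  get projectnetworkpolicy allow-egress-traffic-to-NAME -n PROJECT_1 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```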
Replace the variables, using the following definitions.
Variable | Definition |
---|---|
ORG_ADMIN_KUBECONFIG | The org admin cluster kubeconfig path. |
PROJECT_1 | The name of the GDC project. |
CIDR | The Classless Inter-Domain Routing (CIDR) block of the permitted destination. |
NAME | A name associated with the destination. |
After you apply this policy, and provided that you have not defined other egress policies, all other egress traffic is denied for PROJECT_1.