This document describes the level of compliance that Google Distributed Cloud has with the CIS Ubuntu Benchmark.
Versions
This document refers to these versions:
Anthos version | Ubuntu version | CIS Ubuntu Benchmark version | CIS level |
---|---|---|---|
1.9 | 18.04 LTS | v2.0.1 | Level 2 Server |
Access the benchmark
The CIS Ubuntu Benchmark is available on the CIS website.
Configuration profile
In the CIS Ubuntu Benchmark document, you can read about configuration profiles. The Ubuntu images used by Google Distributed Cloud are hardened to meet the Level 2 - Server profile.
Evaluation on Google Distributed Cloud
We use the following values to specify the status of Ubuntu recommendations in Google Distributed Cloud.
Status | Description |
---|---|
Pass | Complies with a benchmark recommendation. |
Fail | Does not comply with a benchmark recommendation. |
Equivalent control | Does not comply with the exact terms in a benchmark recommendation, but other mechanisms in Google Distributed Cloud provide equivalent security controls. |
Depends on environment | Google Distributed Cloud does not configure items related to a benchmark recommendation. Your configuration determines whether your environment complies with the recommendation. |
Status of Google Distributed Cloud
The Ubuntu images used with Google Distributed Cloud are hardened to meet the CIS Level 2 - Server profile. The following table gives justifications for why Google Distributed Cloud components did not pass certain recommendations.
# | Recommendation | Scored/Not Scored | Status | Justification | Affected Components |
---|---|---|---|---|---|
1.1.2 | Ensure /tmp is configured | Scored | Fail | Canonical has no plan to modify the cloud image partitions at this time. | All cluster nodes, Admin workstation, Seesaw |
1.1.6 | Ensure separate partition exists for /var | Scored | Fail | Canonical has no plan to modify the cloud image partitions at this time. | All cluster nodes, Admin workstation, Seesaw |
1.1.7 | Ensure separate partition exists for /var/tmp | Scored | Fail | Canonical has no plan to modify the cloud image partitions at this time. | All cluster nodes, Admin workstation, Seesaw |
1.1.11 | Ensure separate partition exists for /var/log | Scored | Fail | Canonical has no plan to modify the cloud image partitions at this time. | All cluster nodes, Admin workstation, Seesaw |
1.1.12 | Ensure separate partition exists for /var/log/audit | Scored | Fail | Canonical has no plan to modify the cloud image partitions at this time. | All cluster nodes, Admin workstation, Seesaw |
1.1.13 | Ensure separate partition exists for /home | Scored | Fail | Canonical has no plan to modify the cloud image partitions at this time. | All cluster nodes, Admin workstation, Seesaw |
1.1.21 | Ensure sticky bit is set on all world-writable directories | Scored | Fail | This could interfere with the functionality of Anthos and its services and is not enabled by default. | All cluster nodes, Admin workstation |
1.5.1 | Ensure permissions on bootloader config are configured | Scored | Fail | Permissions have been left as default. | All cluster nodes, Seesaw |
1.5.2 | Ensure bootloader password is set | Scored | Depends on Environment | No root password is set on Ubuntu cloud images. | All cluster nodes, Admin workstation, Seesaw |
1.5.3 | Ensure authentication required for single user mode | Scored | Depends on Environment | No root password is set on Ubuntu cloud images. | All cluster nodes, Admin workstation, Seesaw |
1.8.1.2 | Ensure local login warning banner is configured properly | Scored | Equivalent Control | Anthos also applies DISA-STIG hardening to nodes, which updates the warning banner accordingly. | All cluster nodes, Seesaw |
3.1.2 | Ensure IP forwarding is disabled | Scored | Fail | IP forwarding is necessary for Kubernetes (GKE) to function correctly and route traffic. | All cluster nodes, Admin workstation, Seesaw |
3.2.7 | Ensure Reverse Path Filtering is enabled | Scored | Depends on Environment | Asynchronous routing and reverse path origination are required to deliver cluster load balancing. | Seesaw |
3.5.2.5 | Ensure firewall rules exist for all open ports | Not Scored | Depends on Environment | It is recommended that Anthos on VMware be deployed on a private network with appropriate firewall protections. The required firewall rules can be found here. | All cluster nodes, Admin workstation, Seesaw |
3.5.4.1.1 | Ensure default deny firewall policy | Scored | Depends on Environment | It is recommended that Anthos on VMware be deployed on a private network with appropriate firewall protections. The required firewall rules can be found here. | All cluster nodes, Admin workstation, Seesaw |
3.5.4.1.2 | Ensure loopback traffic is configured | Scored | Depends on Environment | Loopback interface usage is limited given the load balancing functionality used. | Seesaw |
3.5.4.2.1 | Ensure IPv6 default deny firewall policy | Scored | Depends on Environment | It is recommended that Anthos on VMware be deployed on a private network with appropriate firewall protections. The required firewall rules can be found here. Additionally, Anthos has no requirement for IPv6 under GA support. | All cluster nodes, Admin workstation, Seesaw |
3.5.4.2.2 | Ensure IPv6 loopback traffic is configured | Scored | Depends on Environment | Anthos has no requirement for IPv6 under GA support. | Admin control plane, Seesaw |
4.1.1.3 | Ensure auditing for processes that start prior to auditd is enabled | Scored | Fail | A known issue with our build process flags this as failed; however, this should be considered a false positive. This will be remedied in the future. | All cluster nodes, Seesaw |
4.1.1.11 | Ensure use of privileged commands is collected | Scored | Fail | Some binaries are installed at runtime, so the runtime remediation is needed. | All cluster nodes, Admin workstation, Seesaw |
4.2.1.5 | Ensure rsyslog is configured to send logs to a remote log host | Scored | Depends on Environment | Anthos on VMware currently collects all journald logs (from system services). You can view these logs under "k8s_node". | All cluster nodes, Admin workstation, Seesaw |
4.2.3 | Ensure permissions on all logfiles are configured | Scored | Fail | This specific test is overly restrictive and unrealistic as many services may require a group to write log files. This item may be removed in a future benchmark. | All cluster nodes, Admin workstation, Seesaw |
5.2.12 | Ensure SSH PermitUserEnvironment is disabled | Scored | Fail | This setting conflicts with DISA-STIG hardening settings. | All cluster nodes, Seesaw |
5.2.13 | Ensure only strong Ciphers are used | Scored | Equivalent Control | The application of DISA-STIG hardening uses an alternative list of supported ciphers that does not align one-to-one with those used by this benchmark. | All cluster nodes |
5.2.18 | Ensure SSH access is limited | Scored | Depends on Environment | This is not configured by default. This can be configured to meet your specific requirements. | All cluster nodes, Admin workstation, Seesaw |
5.2.19 | Ensure SSH warning banner is configured | Scored | Equivalent Control | The SSH warning banner is modified by the application of the DISA-STIG hardening configuration. | All cluster nodes, Seesaw |
6.1.6 | Ensure permissions on /etc/passwd are configured | Scored | Fail | This specific test is overly restrictive and is being updated by Canonical (link). | All cluster nodes, Admin workstation, Seesaw |
6.1.10 | Ensure no world writable files exist | Scored | Fail | Permissions have been left as default. | All cluster nodes |
6.1.11 | Ensure no unowned files or directories exist | Scored | Fail | Permissions have been left as default. | All cluster nodes |
6.1.12 | Ensure no ungrouped files or directories exist | Scored | Fail | Permissions have been left as default. | All cluster nodes |
6.2.10 | Ensure users' dot files are not group or world writable | Scored | Fail | The default settings for Ubuntu permit group permissions on dot files for compatibility reasons. | Admin workstation |
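If you want to spot-check individual items from the preceding table directly on a node, the following sketch shows two illustrative commands run over SSH on a cluster node. The checks are adapted from common CIS audit procedures and assume a default Google Distributed Cloud Ubuntu image; they do not replace a full benchmark scan.

# Check 3.1.2: IP forwarding is expected to be enabled (value 1) because Kubernetes needs it to route traffic.
sysctl net.ipv4.ip_forward

# Check 1.1.21: list world-writable directories that are missing the sticky bit (any output indicates a finding).
df --local -P | awk '{if (NR!=1) print $6}' | xargs -I '{}' find '{}' -xdev -type d \( -perm -0002 -a ! -perm -1000 \) 2>/dev/null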
Configure AIDE cron job
AIDE is a file integrity checking tool that ensures compliance with CIS L1 Server benchmark 1.4, Filesystem Integrity Checking. In Google Distributed Cloud, the AIDE process has been causing high resource usage issues.
Starting with version 1.9.7, the AIDE process on nodes is disabled by default to prevent resource issues. This affects compliance with CIS L1 Server benchmark 1.4.2: Ensure filesystem integrity is regularly checked.
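To see whether the AIDE daily cron job is currently disabled on a node, you can check the permissions on its cron.daily entry. This is an illustrative check based on the re-enable step below, which restores the executable bit; the exact disabled mode may vary.

# A non-executable mode (for example, 644) means cron.daily skips the AIDE job; 755 means it will run.
stat -c '%a %n' /etc/cron.daily/aide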
If you want to opt in to run the AIDE cron job, complete the following steps to re-enable AIDE:
Create a DaemonSet.
Here's a manifest for a DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: enable-aide-pool1
spec:
  selector:
    matchLabels:
      app: enable-aide-pool1
  template:
    metadata:
      labels:
        app: enable-aide-pool1
    spec:
      hostIPC: true
      hostPID: true
      nodeSelector:
        cloud.google.com/gke-nodepool: pool-1
      containers:
      - name: update-audit-rule
        image: ubuntu
        command: ["chroot", "/host", "bash", "-c"]
        args:
        - |
          set -x
          while true; do
            # change daily cronjob schedule
            minute=30;hour=5
            sed -E "s/([0-9]+ [0-9]+)(.*run-parts --report \/etc\/cron.daily.*)/$minute $hour\2/g" -i /etc/crontab
            # enable aide
            chmod 755 /etc/cron.daily/aide
            sleep 3600
          done
        volumeMounts:
        - name: host
          mountPath: /host
        securityContext:
          privileged: true
      volumes:
      - name: host
        hostPath:
          path: /
In the above manifest:
- The AIDE cron job will only run on node pool pool-1, as specified by the nodeSelector cloud.google.com/gke-nodepool: pool-1. You can configure the AIDE process to run on as many node pools as you wish by specifying the pools under the nodeSelector field. To run the same cron job schedule across different node pools, remove the nodeSelector field. However, to avoid host resource congestion, we recommend that you maintain separate schedules.
- The cron job is scheduled to run daily at 5:30 AM, as specified by the configuration minute=30;hour=5. You can configure different schedules for the AIDE cron job as required; see the example after this list.
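To illustrate what the sed command in the manifest changes, here is how a typical Ubuntu 18.04 /etc/crontab entry for cron.daily would look before and after the rewrite. The exact line can differ between images, so treat this as an assumed example.

# Before: default cron.daily entry (runs at 06:25)
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )

# After the sed rewrite with minute=30 and hour=5 (runs at 05:30)
30 5    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )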
Copy the manifest to a file named enable-aide.yaml, and create the DaemonSet:
kubectl apply --kubeconfig USER_CLUSTER_KUBECONFIG -f enable-aide.yaml
Replace USER_CLUSTER_KUBECONFIG with the path of the kubeconfig file for your user cluster.
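To confirm that the DaemonSet is running and has re-enabled AIDE, you can, for example, list its pods and then check a node in pool-1 over SSH. These verification commands are illustrative and assume the names used in the manifest above.

# List the DaemonSet pods and the nodes they run on.
kubectl get pods --kubeconfig USER_CLUSTER_KUBECONFIG -l app=enable-aide-pool1 -o wide

# On a node in pool-1, verify the AIDE job is executable again and the schedule was updated.
stat -c '%a %n' /etc/cron.daily/aide                       # expect 755
grep 'run-parts --report /etc/cron.daily' /etc/crontab     # expect the 30 5 schedule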