Using AppArmor and SecComp Profiles for Security Audits

Martin Wimpress

One of the most trusted ways to standardize and control access to Linux servers is to use powerful access control mechanisms like Mandatory Access Control (MAC). Essentially, these are software components that centralize control of security policies and protect the underlying systems from unauthorized access to resources. AppArmor and SecComp are two of the most popular access control methods in the industry. This tutorial will define MAC in more detail and compare AppArmor and SecComp. Then, it will explain profiles and the ways in which they can be used to audit environments. By the end of this tutorial, you will understand two of the most common security controls and know how to perform security audits with them.

Let’s get started.

What Is Mandatory Access Control (MAC)?

Mandatory Access Control is a general security mechanism that governs access between subjects (such as users and processes) and resources. Under this mechanism, a system-wide policy, rather than the individual subject, determines which resources a subject may access, including resources that the subject itself created. For example, access may be restricted in such a way that even the creator of a file cannot open it later.

Before MACs, many mainstream operating systems utilized Discretionary Access Control (DAC). Under DAC, users normally have absolute control over the resources that they create. DAC is still the standard access control mechanism for everyday end users. Just imagine how many users would complain if they were unable to open files that they just created on their home Windows machine!
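To make the contrast concrete, here is a minimal shell sketch of DAC in action: the owner of a file sets its permissions with standard tools such as chmod and can loosen them again at will, something a MAC policy could forbid regardless of ownership.

$ touch notes.txt
$ chmod 600 notes.txt   # owner-only read/write, chosen by the owner
$ chmod 644 notes.txt   # the owner can relax the rule again whenever they like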

MAC systems were originally developed to harden the baseline security of network daemons and to reduce reliance on an all-powerful root account. Relying on a root account to control all files and processes increases security risks, and MAC alleviates this problem by letting system or security admins configure a set of permissions that even the root user cannot override. This provides a better security model for rolling out policies that are enforced for all users.

Next, we will explain and compare two of the most popular access control methods: AppArmor and SecComp.

What Is AppArmor?

AppArmor is a MAC implementation that has been maintained by Canonical since 2009. It works by confining programs to a limited set of resources. In this context, confinement means granting each specified program only an explicit set of permissions. For example, this could be network access, access to specific files or folders, or particular Linux capabilities.

AppArmor works by loading security profiles written in its own configuration language and parsed by the apparmor_parser tool. AppArmor comes with a whole list of bundled profiles, and once you install them, you can check their status using the apparmor_status tool.
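As a rough illustration of that language (these are isolated rule fragments, not a complete profile), the kinds of permissions mentioned above are expressed like this:

# Allow IPv4 TCP networking
network inet tcp,
# Allow binding to ports below 1024
capability net_bind_service,
# Read-only access to files under /var/www
/var/www/** r,
# Write access to the nginx log files
/var/log/nginx/*.log w,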

The easiest way to get started is to pull the official nginx image (which is Debian-based, so apt is available) and install the AppArmor user-space tools inside a container. Make sure that your Docker host has AppArmor listed in the security options, or else loading the profiles will have no effect:

$ docker info | grep apparmor
Security Options: apparmor seccomp

$ docker pull nginx
$ docker run -it ba6acccedd29 /bin/bash
root@ac41bd807860:/# apt-get update
root@ac41bd807860:/# apt install apparmor-easyprof apparmor-notify apparmor-utils

The process of building AppArmor profiles is straightforward. With the tools installed, you can use the aa-easyprof tool to generate a skeleton profile for the nginx binary:

root@ac41bd807860:/# aa-easyprof /usr/sbin/nginx
# vim:syntax=apparmor

# AppArmor policy for nginx
# ###AUTHOR###
# ###COPYRIGHT###
# ###COMMENT###

#include <tunables/global>

# No template variables specified

"/usr/sbin/nginx" {

    #include <abstractions/base>

    # No abstractions specified

    # No policy groups specified

    # No read paths specified

    # No write paths specified
}

Then, you can try to load it like this:

$ aa-easyprof /usr/sbin/nginx > usr.sbin.nginx
$ mv usr.sbin.nginx /etc/apparmor.d
$ apparmor_parser -r /etc/apparmor.d/usr.sbin.nginx

By default, this profile does not allow nginx to access any resources, so it won’t be able to run at this point. You can download a working nginx profile from the genuinetools/bane repository and apply it either from inside the Docker container or from the Docker host command line as follows:

$ curl https://raw.githubusercontent.com/genuinetools/bane/master/docker-nginx-sample --output docker-nginx
$ mkdir -p /etc/apparmor.d/containers
$ mv ./docker-nginx /etc/apparmor.d/containers/docker-nginx
$ sudo apparmor_parser -r -W /etc/apparmor.d/containers/docker-nginx

$ docker run --security-opt "apparmor=docker-nginx" -p 80:80 -d --name apparmor-nginx ba6acccedd29

Using this profile, the application starts as expected but runs within the secure AppArmor confinement.
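To confirm that the confinement is actually in effect, you can check which profile Docker attached to the container and verify that it is loaded on the host; a quick sketch using the container started above:

# Shows the profile attached to the container (should print docker-nginx)
$ docker inspect apparmor-nginx --format '{{ .AppArmorProfile }}'

# Confirms the profile is loaded and in enforce mode on the host
$ sudo aa-status | grep docker-nginx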

What Is SecComp?

Secure Computing Mode, or SecComp, is a security facility of the Linux kernel. It’s not technically a MAC module, although it has several similar characteristics, such as allowing or denying access to specific kernel operations.

SecComp is a special process confinement mechanism. In its original strict mode, it puts a process into a "secure" state by disabling all system calls except exit(), sigreturn(), and read() and write() on file descriptors that are already open; any other syscall results in the kernel terminating the process with a SIGKILL or SIGSYS signal. In the more flexible filter mode (seccomp-bpf), which is what Docker uses, you define per-syscall rules that allow, deny, or log each call.

As long as the host kernel is compiled with seccomp support (CONFIG_SECCOMP), you’ll mainly use it for securing Docker containers. The default SecComp profile in Docker blocks some dangerous syscalls but allows more than 300 others.
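For reference, a custom profile follows the same JSON structure as Docker’s default one: a defaultAction plus a list of syscall rules. Here is a minimal default-deny sketch (the allow-list is illustrative and far too short for a real workload):

{
    "defaultAction": "SCMP_ACT_ERRNO",
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        {
            "names": ["read", "write", "exit", "exit_group", "rt_sigreturn"],
            "action": "SCMP_ACT_ALLOW"
        }
    ]
}

You would pass a file like this to Docker with the --security-opt seccomp=<path> flag, as shown in the following sections.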

When using SecComp, it’s important to find the most suitable allowed syscalls for each application. You don't want to allow more than the minimum amount, but you also don’t want to be overly restrictive. When you are deploying different instances of applications, each with different requirements and access control permissions, capturing and auditing those capabilities becomes a big maintenance issue.

Next, we’ll show you how you can use these profiles in practice.

What Are Profiles?

A profile works like a sandbox definition: it acts as a firewall for syscalls and restricts the actions a Docker container can perform against the host’s Linux kernel.

Both AppArmor and SecComp profiles are used to secure containers by limiting the actions they can perform. With SecComp, you restrict the available syscalls within the containers, and with AppArmor, you apply process confinements that enforce MAC rules.

If you are running workloads in untrusted containers, then it makes sense to use as many security profiles as you can apply without negatively impacting performance and stability. For example, let’s say that you are building container images in Kubernetes that will host user applications (like web apps). These applications are considered to be untrusted because users could install anything. To ensure that this doesn’t affect other containers within the same node, your security team can install AppArmor and SecComp profiles for each container you deploy.

There are several different ways to install these profiles in Kubernetes. For example, you can develop a special mutating admission controller to ensure that pod deployments contain the required AppArmor and SecComp profile configurations pulled from a central repository. This way, the security team can update the list of active profiles and roll out the changes to all current and future deployments with GitOps.
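As a rough sketch, a pod spec carrying both controls might look like the following; the profile names are placeholders for whatever your security team publishes:

apiVersion: v1
kind: Pod
metadata:
    name: untrusted-app
    annotations:
        # AppArmor profile previously loaded on the node (placeholder name)
        container.apparmor.security.beta.kubernetes.io/app: localhost/org-default-profile
spec:
    securityContext:
        seccompProfile:
            type: Localhost
            # seccomp JSON placed under /var/lib/kubelet/seccomp on the node (placeholder name)
            localhostProfile: org-default.json
    containers:
    - name: app
      image: nginx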

How to Use Profiles to Audit Environments

AppArmor and SecComp security profiles can be used to audit the runtime actions of each application. We will show you how below.

AppArmor

You can enforce an AppArmor profile when running a container using the --security-opt apparmor=<profile> flag. You can enforce different profiles depending on the kind of audit requirements you have. For example, you can create an audit profile that logs every write operation. Create the file and add the following contents to it:

$ touch /etc/apparmor.d/containers/apparmor-audit-writes

#include <tunables/global>

profile apparmor-audit-writes flags=(attach_disconnected) {
    #include <abstractions/base>
    file,

    # Audit all file writes.
    audit /** w,
}

Now, you need to load the profile on the Docker host machine (or on all K8s nodes) using the apparmor_parser tool:

$ apparmor_parser -r -W /etc/apparmor.d/containers/apparmor-audit-writes

For testing purposes, you can force the profiles into complain mode:

$ aa-complain /etc/apparmor.d/*

Complain mode logs violations instead of blocking them. With the audit profile loaded, writing anything to the container filesystem will generate an audit log entry in the journal.

When deploying a pod using K8s, you will need to add an annotation referencing the profile name and the container name:

container.apparmor.security.beta.kubernetes.io/<container_name>: localhost/<profile_name>

For example:

container.apparmor.security.beta.kubernetes.io/hello: localhost/apparmor-audit-writes

When you run the container, you’ll see audit log messages for each write the container performs:

$ tail -f /var/log/audit/audit.log
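To generate entries, trigger a write from inside the confined container; a quick sketch, assuming a pod named hello-pod with the annotated container hello:

# Pod and container names are illustrative
$ kubectl exec -it hello-pod -c hello -- touch /tmp/audit-test

The write should then appear in the audit log on the node running the pod.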

SecComp

You can apply SecComp profiles in Docker with the --security-opt seccomp=<path> flag, which takes the path to a JSON file containing the rules for that container. When writing SecComp rules, you define a defaultAction for the profile and an action for each syscall rule. For example, you can use:

  • SCMP_ACT_KILL: This terminates the thread that made the matching syscall (or, as the defaultAction, any syscall not matched by a rule).
  • SCMP_ACT_TRAP: The thread that made the syscall is sent a SIGSYS signal.
  • SCMP_ACT_LOG: This logs an audit event and lets the syscall proceed.
  • SCMP_ACT_ALLOW: This lets the syscall proceed.

You can see the full list of available actions in the Docker seccomp documentation. What is important is that there are several ways to log violations, either within the application or on the container host, as the example below shows.
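For instance, a sketch of a profile that allows everything but logs a handful of sensitive syscalls could look like this (the syscall names are illustrative):

{
    "defaultAction": "SCMP_ACT_ALLOW",
    "syscalls": [
        {
            "names": ["ptrace", "mount", "unshare"],
            "action": "SCMP_ACT_LOG"
        }
    ]
}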

If you are interested in auditing events, you can set SCMP_ACT_LOG as the defaultAction, which will log syscalls without blocking them:

audit-profile.json
{
    "defaultAction": "SCMP_ACT_LOG"
}
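To apply this profile with Docker, you would run something along these lines (the container name is illustrative):

$ docker run --security-opt seccomp=./audit-profile.json -d --name seccomp-audit-nginx nginx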

Once you’ve applied the profile, you can inspect the audit event logs inside the Docker host or node by querying the syslog:

$ tail -f /var/log/syslog

Next, we’ll show you an example of how you can use SecComp to audit a Kubernetes deployment.

Example of a Security Audit

Now, we’ll show you how to use SecComp profiles to audit syscall usage in a public cloud environment such as Google Cloud. To begin, you will need access to a development K8s cluster. You can create one quickly using the gcloud CLI:

$ gcloud container clusters create hello-cluster --num-nodes=1
Creating cluster hello-cluster in europe-west2-a...done.

Next, download the kubeconfig credentials and verify that you can query the cluster nodes:

$ gcloud container clusters get-credentials hello-cluster
Fetching cluster endpoint and auth data.
kubeconfig entry generated for hello-cluster.

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
gke-hello-cluster-default-pool-3642361e-0qxz   Ready    <none>   2m42s   v1.20.10-gke.301

Then, you want to create an auditing profile and upload it to the node instance using scp or rsync:

$ cat << EOF > audit.json
{
    "defaultAction": "SCMP_ACT_LOG"
}
EOF

$ gcloud compute ssh gke-hello-cluster-default-pool-3642361e-0qxz
$ sudo mkdir /var/lib/kubelet/seccomp
$ exit

$ gcloud compute scp audit.json gke-hello-cluster-default-pool-3642361e-0qxz:~/
$ gcloud compute ssh gke-hello-cluster-default-pool-3642361e-0qxz
$ sudo mv audit.json /var/lib/kubelet/seccomp
$ exit

Now, create a sample pod that references the SecComp file that you uploaded:

$ cat << EOF > example-pod.yml
apiVersion: v1
kind: Pod
metadata:
    name: audit-nginx
    labels:
        app: audit-nginx
spec:
    securityContext:
        seccompProfile:
            type: Localhost
            localhostProfile: audit.json
    containers:
    - name: nginx
      image: nginx
      securityContext:
          allowPrivilegeEscalation: false
EOF

For this example, we used a securityContext config with the seccompProfile option set to type Localhost. K8s will attempt to load the profile located at the /var/lib/kubelet/seccomp/audit.json path on the node:

$ kubectl apply -f example-pod.yml
$ kubectl get pod/audit-nginx
NAME          READY   STATUS    RESTARTS   AGE
audit-nginx   1/1     Running   0          25s

$ kubectl expose pod/audit-nginx --type LoadBalancer --port 80

It will take some time to reserve a LoadBalancer IP address. You can watch from the command line as follows:

$ kubectl get svc -w

Once the service is exposed, you can query it with curl and inspect the syslog for audit events from inside the node:

$ curl 35.242.171.254
$ gcloud compute ssh gke-hello-cluster-default-pool-3642361e-0qxz
$ tail -f /var/log/syslog | grep 'audit-nginx'

Now, try to create a new pod by applying the following SecComp profile that denies all syscalls by default:

$ cat << EOF > deny.json
{
    "defaultAction": "SCMP_ACT_ERRNO"
}
EOF
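The pod manifest itself is unchanged apart from the profile name. A sketch of the remaining steps: copy deny.json to the same seccomp directory on the node, point localhostProfile at it, and recreate the pod:

$ gcloud compute scp deny.json gke-hello-cluster-default-pool-3642361e-0qxz:~/
$ gcloud compute ssh gke-hello-cluster-default-pool-3642361e-0qxz
$ sudo mv deny.json /var/lib/kubelet/seccomp
$ exit

# Edit example-pod.yml so that localhostProfile reads deny.json, then recreate the pod
$ kubectl delete pod audit-nginx
$ kubectl apply -f example-pod.yml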

As you can see, the container won’t even start:

$ kubectl get pods -o wide
NAME          READY   STATUS             RESTARTS   AGE   IP          NODE                                           NOMINATED NODE   READINESS GATES
audit-nginx   0/1     CrashLoopBackOff   1          15s   10.0.0.13   gke-hello-cluster-default-pool-3642361e-0qxz   <none>           <none>

By examining the system profiles across all your infrastructure, you can compare them to master policies and help ensure parity. I hope this tutorial has helped you understand how to use AppArmor and SecComp profiles to create secure application environments.

Embarking on a New Journey

Farewell, Slim — Transitioning to a new and larger mission!

We're excited to share some big news from Slim.AI. We're taking a bold new direction, focusing all our energy on software supply chain security, now under our new name root.io. To meet this opportunity head-on, we’re building a solution focused on transparency, trust, and collaboration between software producers and consumers.

When we started Slim.AI, our goal was to help developers make secure containers. But as we dug deeper with our early adopters and key customers, we realized a bigger challenge exists within software supply chain security — namely, fostering collaboration and transparency between software producers and consumers. The positive feedback and strong demand we've seen from our early customers made it crystal clear: This is where we need to focus.

This new opportunity demands a company and brand that meet the moment. To that end, we’re momentarily stepping back into stealth mode, only to emerge with a vibrant new identity, and a groundbreaking product very soon at root.io. Over the next few months, we'll be laser-focused on working with design partners and building up the product, making sure we're right on the mark with what our customers need.

Stay informed and up-to-date with our latest developments at root.io. Discover the details about the end of life for Slim services, effective March 31, 2024, by clicking here.
