Securing and monitoring containers in enterprise environments
All Boxed Up
A recent flurry of activity in the container space raises several interesting questions about security, among other operational aspects, in the enterprise environment.
Docker doubtlessly still reigns supreme in the container run-time space, but various industry projects mean that the Docker stronghold will almost certainly shift in one form or another over the coming months. The recent release of Docker Enterprise Edition (Docker EE) [1] shows that this fact hasn't escaped Docker, and to my mind, they should quite rightly take advantage of their market share and fully monetize their current standing.
The Docker EE offering advises you to meld all parts of your containerization and orchestration workflow together using one vendor to avoid sticking pieces of duct tape between the components to integrate them. In their words: "An application-centric platform, Docker EE is designed [to] accelerate and secure across the entire software supply chain, from development to production running on any infrastructure" [1]. More easily digestible details can be seen in Figure 1.
Just One Moment
My interest from a DevSecOps perspective is security, and in Figure 1 you can see that image scanning for common vulnerabilities and exposures (CVEs) [2] is indeed bundled with the EE flavor of Docker. That is not so, however, for the less fully featured Docker Community Edition (Docker CE) [3], which is promoted for developers and small teams; thankfully, scanning is available free, as a preview for a limited period, to those using a paid plan for private repositories. As you can see in Figure 2, it's highly efficacious.
The Docker site states: "During the free period, Docker Security Scanning scans the three most recently updated tags in each of your private repositories. You can push an update to an older tag to trigger a scan. The scan runs on each new image push and updates the scan results when new information comes in from the CVE databases" [4].
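In practice, then, forcing a rescan of an older tag is just a matter of pushing an update to it. The following is a minimal sketch; the myorg/myapp repository and tag names are hypothetical:

```
# Rebuild (or otherwise update) the image behind an existing tag and
# push it; per the scanning docs, the new push queues a fresh scan.
docker build -t myorg/myapp:1.0 .
docker push myorg/myapp:1.0
```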
Although this functionality is undoubtedly very much required, it is unfortunately only one critical aspect of securing your containers, and you might be forgiven for being lulled into a false sense of security.
Not Quite as Simple as That
Although I have no doubt that Docker EE offers a sophisticated security posture built on a more refined set of security principles, such as verifying that images have come from a trusted registry (and not an unknown registry populated with a selection of images of nefarious intent), those businesses feeling satisfied that all their bases are covered by just running CVE scans are definitely missing a trick.
In simple terms, the three major areas to worry about when securing your containers and continuous integration/continuous deployment (CI/CD) pipelines are as follows:
- Images. It's imperative that images are signed, so you know precisely which version of an image you are pulling from your trusted registry (with which you authenticate and methodically log each transaction). Once you know what you're dealing with, you can scan that image for common vulnerabilities across multiple third-party feeds.
- Run Time. Another critical area is your container run-time security. For example, what if a container suddenly spawns an anomalous process or tries to create a volume mount when it has never done so in the past? Monitoring for these changes, automatically limiting the damage they can cause, and alerting on-call staff when such changes occur in production makes reliably running an estate much easier.
- Host Security. A third aspect often overlooked is access to the run-time daemon itself, which integrates into the host's kernel with impunity. Take the popular docker run command as a case in point: Think of a Jenkins job firing a docker run – the daemon acts as the superuser, the root user, on the host. Even the suggested docker system group offers no protection, because any access to the Docker daemon effectively grants superuser permissions on the host. The terrifying result of a successful, sophisticated compromise is that you would lose all the containers on that host, followed by the host itself. (A short session after this list sketches both image trust and this daemon risk.)
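To make the first and third points concrete, here is a minimal sketch. The myorg/myapp repository name is hypothetical, whereas the content-trust variable and the bind-mount trick are stock Docker behavior:

```
# Image trust: with Docker Content Trust enabled for this command,
# pulls of unsigned or tampered tags are refused outright.
DOCKER_CONTENT_TRUST=1 docker pull myorg/myapp:1.0

# Host security: anyone who can reach the Docker daemon can bind
# mount the host's root filesystem and chroot into it -- that is,
# become root on the host without ever typing sudo.
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```

The second command is exactly why membership in the docker group deserves the same scrutiny as handing out root.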
Approaching your container security from a traditional security perspective for a moment, remember that effective mitigation is all about layering your protection to provide defense in depth.
There is some solace in that containers are declarative. In other words, a Dockerfile might be treated as a shipping manifest of sorts (pun intended).
You can use that predefined shipping manifest to describe how a container should work and, following that, how it was intended to interact with your estate. With some forethought, you can translate this key information into a useful ruleset.
Thankfully, developers mostly use popular software components inside their containers, so these rulesets can be relevant to many decoupled containers, and they are much easier to define than the rules that would have been required for old-school bare metal servers running many different services simultaneously.
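As a rough sketch of that idea using nothing more than stock Docker tooling (the container name myapp-prod is hypothetical, and a dedicated run-time security tool would automate these checks):

```
# A container built from a known Dockerfile should not sprout new
# files or binaries at run time; docker diff lists filesystem
# changes made since the container started.
docker diff myapp-prod

# Stream daemon events (exec sessions and the like) for the
# container, so that anything the manifest never called for can
# page the on-call staff.
docker events --filter container=myapp-prod --filter event=exec_create
```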
Clearly, running containers introduces a number of security challenges. However, modern containers are undoubtedly fantastic for deploying software. Developers have fully embraced containers because of their ease of use, hence their unparalleled popularity across the industry relative to earlier container technologies. That said, one oft-repeated fact cannot be denied …
Containers Are Not Virtual Machines
For good reason, developers love containers for the convenience of packaging dependencies together into a portable unit. Unsurprisingly, however, they commonly expect containers to work like virtual machines (VMs) from a security standpoint. This expectation helps neither party, because the two are distinctly different animals.
As vigilant security becomes increasingly important in this brave new containerized world, you need to accept that applications will always be compromised, and you therefore need a way of defending against these threats.
After all, containers are not like the Solaris Zones or BSD Jails of the past, which were relatively prebuilt and uniformly defined. Modern containers are instead assembled from a collection of features that the Linux kernel has made available over recent years. Their configuration is not set in stone, which makes them more flexible; of course, this flexibility adds its own security challenges.
Behind the scenes, containers now comprise namespaces and control groups (cgroups), which are kernel primitives, or the system's Lego blocks, from which you build upward.
Kernel namespaces offer a running process a defined amount of visibility of a system (e.g., a specifically grouped set of other processes or a local routing table of its own). Separately, control groups offer a sys admin granular control over what that process can use.
Consider that namespaces are just a simple form of virtualizing system resources, and cgroups simply control, for example, how much CPU or RAM a process can use.
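You can poke at both primitives directly from a shell. The following is a minimal sketch, assuming util-linux's unshare and a cgroup v2 unified hierarchy; the demo cgroup name is arbitrary:

```
# Namespaces: start a shell with its own PID and network namespaces;
# inside, ps sees almost no processes and only a lone loopback device.
sudo unshare --pid --net --mount-proc --fork /bin/sh

# cgroups: cap a process group at 64MB of RAM, then enroll a shell.
sudo mkdir /sys/fs/cgroup/demo
echo $((64*1024*1024)) | sudo tee /sys/fs/cgroup/demo/memory.max
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs
```

Container run times assemble exactly these building blocks on your behalf, which is why they feel like magic but are really just careful kernel configuration.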
By combining the functionality of namespaces and cgroups, you can build a type of isolation between your containers and host (and indeed intracontainer isolation), but you certainly need to provide additional hardening on top, such as a Mandatory Access Control (MAC) system (e.g., SELinux on Red Hat derivatives or AppArmor on Ubuntu and Debian). It's no exaggeration to say that Red Hat has put a lot of effort into SELinux for containers, and for good reason.
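As a hedged sketch of what that extra MAC layer looks like at the command line (the volume path here is illustrative; the flags are standard Docker options):

```
# Ubuntu/Debian: pin the container to an AppArmor profile and forbid
# privilege escalation via setuid binaries.
docker run --security-opt apparmor=docker-default \
           --security-opt no-new-privileges \
           --rm -it ubuntu /bin/sh

# Red Hat derivatives: SELinux confines the container process, and the
# :Z volume suffix relabels the mount so only this container can use it.
docker run -v /srv/data:/data:Z --rm -it fedora /bin/sh
```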