Security by Design

Principles for Making Your Linux System More Secure


Security by design not only makes for a more secure system, it also provides a better understanding of how your Linux system is constructed. Here are 10 of the most common security by design principles.

To many users, security is a matter of using the right tools – a matter, for instance, of setting up a firewall or perhaps an antivirus application. Such tools should be part of any security policy, but they are lacking in two important ways. First, they do not add up to any coherent understanding of security. Users install these tools, but without much understanding of the security principles that lie behind them. Second, many are reactive tools, designed to respond to an intrusion rather than prevent one in the first place. A more fruitful approach is security by design, which offers basic approaches to writing software or configuring a system.

Security by design is guided by a handful of basic principles that can be found throughout a system. A few of these principles have generally agreed upon names; others do not. All the same, if you study security by design, you will see the same ideas consistently referenced. There is general agreement about what matters, although not in nomenclature.

Here are 10 of the most commonly mentioned principles that govern security by design, and some of the ways that they affect Linux computing.


Design for Failure

A secure system is not designed primarily for normal operation. Rather, it is designed and maintained to minimize the problems when something goes wrong – from a perspective of pessimism. You can often see this principle at work in bug reports. The description of a bug can make it seem as though it would almost never occur or would require some unlikely circumstance, such as a user running a web browser as root. Yet even low-probability events sometimes happen, so bugs are patched as rapidly as possible in free software development. At most, bugs are triaged by priority, and those with the lowest priority may be patched last. Sooner or later, though, all bugs will be given attention.

Open Design

One major advantage of free software is that development is public. Anyone can access the code, and the engineering standards are freely available, which means that with open design there is a greater chance of improvements or of bugs being detected. This principle was expressed in Eric S. Raymond's The Cathedral and the Bazaar as "given enough eyeballs, all bugs are shallow." It is named Linus's Law in honor of Linus Torvalds.

The opposite of open design is security by obscurity, which is frequently considered a practice of proprietary software development. Instead of reporting a bug as soon as it is discovered – the common practice in free software – security by obscurity delays reporting the bug until a patch is released. The problem with this practice is that no one knows if the bug is exploited during the wait for the patch, which can take months. By contrast, open design provides an incentive to write a speedy patch and allows more than one person or team to write a patch. Open design is not foolproof, and it is not always followed, but at the very least, it minimizes security risks.

System Logs

All transactions on a system should be logged so that a record of them exists. Although ordinary users might ignore logs, admins know that they are invaluable for troubleshooting. Logs are especially useful for complex applications like Git, in which constant changes and multiple development branches can easily make it hard for users to orient themselves or to be sure that they have done what they intended. However, even a simple utility like fstrim, which trims unused blocks on SSDs, can benefit from a log to show that it is performing as expected. In Linux, most logs are stored in /var/log and rotated periodically so that one file cannot easily fill an entire drive.
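On most distributions, you can browse these logs directly. As a rough sketch, the commands below list the system's logs and keep a simple timestamped record for a script of your own (the maintenance.log file and its entry are hypothetical examples, not a standard convention):

```shell
# List current logs and their rotated predecessors
ls /var/log/

# Keep a simple timestamped record for a maintenance script of your own
log="$HOME/maintenance.log"    # hypothetical log file
printf '%s %s\n' "$(date -Iseconds)" "fstrim run completed" >> "$log"
tail -n 1 "$log"
```

Writing one line per event, each prefixed with a timestamp, is enough to answer the basic question a log exists for: did the task run, and when?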

Economy of Design

A piece of software should be as simple as possible to fulfill its function. The implications of this principle were summarized decades ago in the Unix philosophy. As explained by Doug McIlroy in 1978, the core value of the Unix philosophy is “make each program do one thing well.” Some of the implications of this practice were spelled out in 1994 by Peter H. Salus, who added that programs needed to be designed to work together, and that text was the universal interface for communication between programs.
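The values McIlroy and Salus describe can still be seen in any shell pipeline, where small single-purpose tools cooperate over plain text. For example:

```shell
# Each tool does one thing well; text streams connect them.
# Count how many accounts on the system use each login shell:
cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn
```

None of the four programs knows anything about the others; the text interface between them is what makes the composition possible.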

In practice, economy of design is not always followed. Desktop applications like LibreOffice could hardly be farther away from the Unix philosophy, and one of the criticisms of systemd centers on whether it violates the Unix philosophy, especially when compared to earlier init systems. However, in administration, the concept continues to be valid. After all, the simpler a piece of code is, the less chance exists of something going wrong.

Fail-Safe Defaults

This principle states that software should be designed to have intelligent defaults that are safe in themselves and minimize damage in the case of a crash or intrusion. As an example of a fail-safe default, the umask command, which sets the default permissions for new files, is never set so that the owner, the owner's group, and other users all have the ability to read and write. More frequently, permissions are set so that other users have neither read nor write permissions. On systems configured for a higher level of security, the owner's group may only be able to read files, or may not be able to access them at all.
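A quick sketch shows how umask shapes the permissions of newly created files (the values below are common conventions, and the file names are only illustrative):

```shell
# A permissive but sane default: group and others lose write permission
umask 022
touch shared.txt
ls -l shared.txt     # -rw-r--r--

# A stricter default: only the owner can read or write new files
umask 077
touch private.txt
ls -l private.txt    # -rw-------
```

The mask subtracts permissions from the maximum a new file could receive, so even a careless application starts from a safe baseline rather than an open one.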

Least Privilege (Least Access)

Least privilege states that processes, applications, and users should have access only to the system resources that are absolutely necessary. On a fresh installation, all three should have minimal access, with rights added later after due consideration. Administrators may even use the nice command to lower a process's scheduling priority, or ulimit to cap resources such as memory. The system of permissions, user accounts, and groups is based on this principle, although the setup on Linux is somewhat dated. The use of sudo extends this principle by allowing only temporary root privileges and opening the possibility of spreading administrative functions over more than one account.
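As a minimal sketch of restricting a process to what it needs (the echoed commands stand in for hypothetical jobs):

```shell
# Run a long batch job at the lowest CPU scheduling priority
nice -n 19 sh -c 'echo "batch job would run here"'

# In a subshell, cap the virtual memory available to a job's
# children (in KiB) so a leak cannot exhaust the whole system
( ulimit -v 1048576; sh -c 'echo "constrained job would run here"' )
```

Setting the ulimit inside a subshell keeps the restriction local to that job, which is itself an application of least privilege: the limit applies only where it is needed.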

Least Astonishment

To minimize mistakes, design should follow users' expectations, so they know what they are doing. The principle of least astonishment holds true both at the command line and on the desktop. For example, commands run at the prompt generally share many of the same options, ranging from ones that override the default configuration, output, or logfile locations to ones that apply the command recursively or make the output more verbose. Similarly, when Linux moved towards the desktop, it adopted the menu structure already used by Windows, with the File and Edit menus on the left and the Window and Help menus on the right. As a result, this sequence of the top-level menu is almost universal today.
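The convention is easy to verify: the same flag letters often carry the same meaning across unrelated tools (the directory and file names below are illustrative):

```shell
mkdir -p demo/sub
touch demo/sub/file.txt

ls -R demo                    # -R: list recursively
cp -rv demo demo2             # -r: recurse, -v: report each copy
grep -r "" demo2 >/dev/null   # -r again means recursive
```

Because a user who has learned `-r` on one tool can guess its meaning on the next, each new command carries less risk of an astonishing, and possibly destructive, surprise.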

Defense in Depth

Defense in Depth (DiD) is a principle straight from the battlefield. It states that a system should not depend on a single feature for protection. On a simple level, DiD explains why reactive security features are not enough. Antivirus software, for example, is only as good as the virus definitions it includes. If a virus is newer than the definitions, it is likely to be undetected, except by general characteristics that might identify it as a virus. However, if a virus does slip through, the damage it can do may be limited by user accounts and their permissions.

Fault Tolerance (Containment of Failure)

Fault tolerance means that, when an application fails or a system is breached, the first priority is to isolate the problem and keep the rest of the system running if at all possible. If a single application or process fails, the kill command can terminate it without compromising the rest of the system, with any luck. Similarly, in a well-designed system, an intruder who gains access to a regular user’s account might gain access to files, but has to use sudo or find another exploit to take control of the root account and the entire system. So long as the user whose account is compromised has a recent backup, they can have a new account created and be up and running after they have transferred their files.
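The containment step can be sketched with an ordinary process (a sleep stands in for the misbehaving application; in practice you might locate it with pgrep first):

```shell
# Stand-in for a misbehaving process
sleep 300 &
pid=$!

kill -TERM "$pid"                # polite request to shut down
wait "$pid" 2>/dev/null || true  # reap it; a nonzero status is expected
# Escalate to SIGKILL only if the process is somehow still alive
kill -0 "$pid" 2>/dev/null && kill -KILL "$pid" || true
echo "process $pid terminated; rest of the system untouched"
```

Trying SIGTERM before SIGKILL gives the process a chance to clean up after itself, which is itself a form of containment: the failure ends without leaving corrupted state behind.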

Current Backups

Backups are the last line of defense in security. An account or system can be completely compromised, yet so long as you have reliable and safely stored backups, the inconvenience is reduced to a few hours of transferring files. To fulfill their function, backups should be current, reliable, and stored on a separate disk from your system. You may want them stored off-site or encrypted on a cloud (or both).

If they are honest, most home users will admit that they are sometimes tempted to skip backups. Backups can be a drag on system resources and can take hours to complete. However, the inconvenience can be reduced by scheduling them when no one is using the system (often in the early morning). Where and how often your files should be backed up can be determined by your answer to one simple question: Can you afford to lose them?
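A minimal backup along these lines might look like the following, assuming an external drive mounted at /mnt/backup and a home directory for a user named user (adjust both paths for your system):

```shell
# Create a dated, compressed archive of a home directory
tar -czf "/mnt/backup/home-$(date +%F).tar.gz" -C /home user

# To run it unattended at 3 a.m. daily, a crontab entry might read
# (note that % must be escaped as \% inside crontab):
# 0 3 * * * tar -czf /mnt/backup/home-$(date +\%F).tar.gz -C /home user
```

Dating each archive means older backups are never overwritten, so a file deleted weeks ago can still be recovered from an earlier snapshot.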

Why Design Principles Matter

Design principles are not the only way that security can be defined. For example, security can be defined as striking a balance between safety and user convenience, in which case, the definition may be expanded to include goals like privacy and access to data.

However, security design principles are the criteria that best explain how a Unix-like system such as Linux is built. You may never design a system or extensively configure one, but if you understand security design principles, you should have a clearer understanding of how your system is constructed and the criteria to make it as secure as possible. At times, a feature may illustrate more than one of these principles, but together they are the reason why Linux and related systems have a reputation for security.


Issue 237/2020
