Zack's Kernel News

Article from Issue 205/2017
Author(s): Zack Brown

This month we discuss replacing the random number generator, checking when a process dumps core, fixing filesystem security issues, and adding build dependencies to clean the source tree.

Replacing the Random Number Generator

Stephan Müller ran into difficulties when he tried to replace the kernel's random number generator wholesale with his own implementation, the Linux Random Number Generator (LRNG). A good source of random numbers is crucial for securing running systems against certain kinds of attacks. Stephan felt that the existing RNG code suffered from design flaws that required a full rewrite.

In particular, he said that the old /dev/random implementation had once been sufficient, but now was having trouble providing good randomness for embedded systems and other newer hardware on the market. Stephan felt that LRNG could work as a simple drop-in replacement for /dev/random so that user code would never notice the change.

However, regardless of the value of Stephan's implementation, Greg Kroah-Hartman said that making such a big change all at once, to such a crucial piece of the kernel, was not a good idea. He suggested submitting a series of smaller patches that would gradually implement what Stephan had in mind.

But Stephan said he'd tried that and rarely got any response to his smaller patches. Not even a thumbs-down. Also, he said, any patches that were more than just cleanups, but were actual logic changes, were going to be relatively big. There was no getting around that.

But Greg insisted that "Evolution is the correct way to do this; kernel development relies on that. We don't do the 'use this totally different and untested file instead!' method."

Stephan pointed out that "The offered patch set does not rip out existing code. It adds a replacement implementation which can be enabled during compile time. Yet it is even disabled per default." He said that many other areas of the kernel used such an approach, and it seemed to be a standard accepted practice.

Meanwhile, Theodore Ts'o said that, in fact, he had been taking a variety of Stephan's patches for quite some time and incorporating them into the existing /dev/random code, but he hadn't taken some of the more extreme logic changes because, he said, H. Peter Anvin had disagreed with Stephan about their value.

Sandy Harris replied here, pointing out some of the issues with the existing /dev/random design. He said, "you cannot generate good output without a good seed and just after boot, especially first boot on a new system, you may not have enough entropy. A user space process cannot do it soon enough, and all the in-kernel solutions (unless you have a hardware RNG) pose difficulties."

He added that Stephan's approach had a lot of good theory behind it, in terms of being able to get a proper seed as early as possible in the boot process. Nevertheless, he agreed that there wasn't enough justification to merge the whole patch as-is.

But Theodore replied that he was skeptical of the theoretical underpinnings of Stephan's approach. He said:

Most of them are done using the x86 as the CPU. This is true of the McGuire, Okech, and Schiesser paper you've cited above. But these are largely irrelevant on the x86, because we have RDRAND. And while I like to mix in environmental noise before generating personal long-term public keys, I'm actually mostly OK with relying on RDRAND for initializing the seeds for hash tables to protect against network denial of service attacks. (Which is currently the first user of the not-yet-initialized CRNG on my laptop during kernel boot.)

The real problem is with the non-x86 systems that don't have a hardware RNG, and there, depending on timing events which don't depend on external devices is much more dodgy. Remember that on most embedded devices there is only a single oscillator driving the entire system. It's not like you even have multiple crystal oscillators beating against one another.

So if you are only depending on CPU timing loops, you basically have a very complex state machine, driven by a single oscillator, and you're trying to kid yourself that you're getting entropy out the other end. How is that any different from using AES in counter mode and claiming because you don't know the seed, that it's "true randomness"? It certainly passes all of the statistical tests!

Hence, we have to rely on external events outside of the CPU, and so we need to depend on interrupt timing – and that's what we do in drivers/char/random.c already! You can debate whether we are being too conservative when we judge that we've collected enough unpredictability to count it as a "bit" of randomness. So it's trivially easy to turn the knob and make sure the CRNG gets initialized more quickly using fewer interrupt timings, and boom! Problem solved.

Simply turning the knob to make our entropy estimator more lax makes people uncomfortable, and since they don't have access to the internal microarchitecture of the CPU, they take comfort in the fact that it's really, really complicated, and so something like the Jitter RNG *must* be a more secure way to do things. But that's really an illusion.

If the real unpredictability is really coming from the interrupts changing the state of the CPU microarchitecture, the real question is how many interrupts do you need before you consider things "unpredictable" to an adequate level of security? Arguing that we should turn down the "interrupts per bit of entropy" in drivers/char/random.c is a much more honest way of having that discussion.

In a later email, Theodore also added, "Practically no one uses /dev/random. It's essentially a deprecated interface; the primary interfaces that have been recommended for well over a decade are /dev/urandom and, now, getrandom(2). We only need 384 bits of randomness every 5 minutes to reseed the CRNG, and that's plenty, even given the very conservative entropy estimation currently being used. This was deliberate. I care a lot more that we get the initial boot-time CRNG initialization right on ARM32 and MIPS embedded devices, far, far more than I care about making plenty of information-theoretic entropy available at /dev/random on an x86 system. Further, I haven't seen an argument for the use case where this would be valuable. If you don't think they count because ARM32 and MIPS don't have a high-res timer, then you have very different priorities than I do."
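
Since Theodore cites getrandom(2) as the recommended modern interface, here is a minimal, illustrative sketch (not taken from the discussion) of how a user-space program typically obtains key material today. With no flags, getrandom() draws from the urandom pool but blocks until the kernel's CRNG has been initialized at boot – exactly the early-boot window the thread is arguing about. This assumes a glibc that provides the getrandom() wrapper (2.25 or later).

/* Minimal sketch: fetch 256 bits of key material via getrandom(2). */
#include <stdio.h>
#include <sys/random.h>   /* glibc >= 2.25 wrapper for getrandom(2) */

int main(void)
{
    unsigned char key[32];                      /* 256-bit key buffer */
    ssize_t n = getrandom(key, sizeof(key), 0); /* blocks until the CRNG is ready */

    if (n != (ssize_t)sizeof(key)) {
        perror("getrandom");
        return 1;
    }
    printf("got %zd random bytes\n", n);
    return 0;
}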

But Stephan pointed out that his concerns were not limited to /dev/random alone but included /dev/urandom as well. As for Theodore's claim that 384 bits of randomness every five minutes was sufficient, Stephan countered that on certain embedded systems it would not be.

Jeffrey Walton also objected to Theodore's statement that /dev/random was a deprecated interface; he said if this were true, it should be better documented. He quoted the random(4) man page as saying, "/dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation." He added, "If the generator is truly deprecated, then it may be prudent to remove it completely or remove it from userland. Otherwise, improve its robustness. At minimum, update the documentation."

The discussion petered out shortly afterward. The debate seems to focus on whether Stephan has identified a truly better source of random number seeds for all hardware setups. At stake is the security of the system, particularly early in the boot process, when good sources of random number seed data can be scarce. For the moment, it doesn't seem as though Stephan has convinced Theodore or the other big-time hackers in that area, but at least one of Stephan's concerns has been answered – his smaller fixes are being considered and, in some cases, adopted into the kernel.

Checking When a Process Dumps Core

Roman Gushchin from Facebook wanted to be able to check to see if a process was in the midst of producing a core dump. The idea was, you don't want to kill a process while it's dumping core, because then you get an undebuggable dump. He posted a small patch to add a core-dump flag to /proc/$PID/status.

There was no significant dissent, but there were questions about the proper interface to use. Alexey Dobriyan felt that instead of adding the flag, it might be better to introduce a new "core-dumping" state that any process ID might be in. Roman didn't like this idea because, even while dumping core, the process could be in other states, such as sleeping or running.

In that case, Alexey said, it might be faster to have a /proc/$PID/coredump file that was simply either 1 or 0, rather than having to open and parse the status file each time. Roman, however, felt that speed was not really an issue, since there was no need for anyone to loop on checking for a core dump; they'd just check once and then kill the process.

Konstantin Khlebnikov also had a couple of suggestions. For one thing, he thought it might reduce clutter to have the line in the /proc/$PID/status file only if a core dump was actually taking place. But Roman didn't like this idea because no other data element appears and disappears like that.

Konstantin also suggested exposing the process ID of the core-dump helper, but Roman felt this would risk introducing race conditions, and he couldn't think of any valid use for it. At the most, Roman felt this would be a separate feature, not part of the current patch.

With no further suggestions and no objections, Roman asked Andrew Morton to accept the patch into his tree to feed up to Linus Torvalds at some point, but before doing so, Andrew wanted to know what real-world situations would actually use this feature.

Roman replied that, at Facebook, they were seeing "corrupted coredump files on machines in our fleet, just because processes are being killed by timeout in the middle of the core writing process. We do have a process health check, and some agent is responsible for restarting processes which are not responding for health check requests. Writing a large coredump to the disk can easily exceed the reasonable timeout (especially on an overloaded machine). This flag will allow the agent to distinguish processes which are being coredumped, extend the timeout for them, and let them produce a full coredump file."
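
To make the intended use concrete, here is a hedged sketch of the kind of check such a health-check agent could perform. The exact field name (CoreDumping: here) is an assumption based on the patch description, not something quoted in the thread.

/* Sketch: return 1 if /proc/<pid>/status reports the process is dumping core. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

static int is_dumping_core(pid_t pid)
{
    char path[64], line[256];
    int dumping = 0;

    snprintf(path, sizeof(path), "/proc/%d/status", (int)pid);
    FILE *f = fopen(path, "r");
    if (!f)
        return 0;                        /* process gone or unreadable */

    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "CoreDumping:", 12) == 0) {
            dumping = (strchr(line, '1') != NULL);   /* e.g. "CoreDumping:  1" */
            break;
        }
    }
    fclose(f);
    return dumping;                      /* agent would extend its timeout if 1 */
}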

Andrew also wanted the feature explained in documentation, and Roman updated the patch to do so.

In general, there's no guarantee that a patch like this would go into the kernel. The fact that this particular one had an easy path means nothing. A security hole can always emerge from an unexpected angle, and that would be that for the patch. Or, Andrew or Linus could decide that the use case envisioned by Roman was too specific to the company producing the patch and didn't justify the added cruft to the proc file. It could turn out that the "proper" place to perform a given check is in user space. Any number of things can put the kibosh on a patch of this sort. In this case, however, it looks like smooth sailing.

Fixing Filesystem Security Issues

Security fixes sometimes resemble a game of whack-a-mole. As long as new features keep being added to the kernel, it seems there will always be security holes to exploit, and new features must continue to be added, as long as there is new hardware to support.

Salvatore Mesoraca recently closed a security hole in which an attacker would create a file that would normally be created by a piece of user software, and then the user software would write its sensitive data to the attacker's file instead of one owned and readable only by itself.

The solution was simply to disallow this at the filesystem level: if the user software could not create the file itself, its attempt to open the file would fail. This way, in the worst case, the user would be unable to run their software; however, this is better than running the software and falsely believing it to be secure.
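
For illustration only (this is not Salvatore's kernel-side fix), the user-space half of the hazard looks roughly like this: opening a predictable path in a shared directory with plain O_CREAT will happily reuse a file an attacker pre-created, while adding O_EXCL makes the open fail instead – the "refuse to run rather than run insecurely" behavior described above. The file path shown is hypothetical.

/* Sketch: create-or-fail instead of create-or-reuse in a shared directory. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* O_EXCL ensures we never adopt a file someone else created here. */
    int fd = open("/tmp/myapp.secret", O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd < 0) {
        perror("open");                  /* fails if the file already exists */
        return 1;
    }
    /* ... write sensitive data to fd ... */
    close(fd);
    return 0;
}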

There was a general cheer among those watching Salvatore's work. In particular, Alexander Peslyak pointed out that there were a number of related security problems that could be better identified with this fix in the code. He had used a similar feature to help debug security issues in an old Postfix privsep implementation, and he expected others would find similar benefits.

It's interesting to note that in Linux – and in free software generally – security fixes take precedence over all other considerations, even to the point of extensive design changes and disabling significant features, and that's exactly the right ethic to have about it. Contrast this with proprietary operating systems that tend to spawn entire economies based on using simple scripts to crack millions of vulnerable systems, using them to deliver spam, launch DoS attacks, and so on.
