Zack's Kernel News

Article from Issue 225/2019

Chronicler Zack Brown reports on the latest news, views, dilemmas, and developments within the Linux kernel community.

Cleaning Up Dependencies on Older GCC Versions

When Linus Torvalds recently agreed to raise the minimum supported GCC version to 4.8, it meant that any older system still relying on older build tools, including older versions of GCC, wouldn't be able to compile new kernels without first upgrading its toolchain. The justification for his decision was partly that the various kernel ports had come to depend on newer GCC versions and partly that the Linux distributions had started shipping with newer GCC versions as well. To Linus, this meant that regular users would almost certainly not be inconvenienced by the change; as for kernel developers – well, Linus didn't mind inconveniencing them so much.

But it wasn't entirely an inconvenience to developers, as Steven Rostedt recently demonstrated. Now that GCC 4.8 was the new minimum, the kernel no longer had to support older versions of GCC that lacked some of the more modern features. The way this generally works is that the kernel build system checks which version of GCC is installed and then compiles certain kernel features with code written specifically for that GCC version. This way, by hook or by crook, all kernel features get implemented, even if they have to work around deficiencies in an older compiler. When the older compilers aren't supported anymore, all of that targeted kernel code can simply be torn out by the roots, without anyone making a fuss.
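
In practice, the kernel encodes the compiler's version in a single comparable number, the GCC_VERSION macro from include/linux/compiler-gcc.h, which version-specific code can then test with the preprocessor. The following is a simplified sketch of that pattern; the WORKAROUND_OLD_GCC fallback is hypothetical, purely for illustration:

/* Fold GCC's version into one number; GCC 4.8.2 becomes 40802. */
#define GCC_VERSION (__GNUC__ * 10000 \
                   + __GNUC_MINOR__ * 100 \
                   + __GNUC_PATCHLEVEL__)

#if GCC_VERSION < 40800
/* Older compiler: take the (hypothetical) workaround code path. */
# define WORKAROUND_OLD_GCC 1
#endif

Once 4.8 is the minimum, the #if branch can never be taken, so both the test and the fallback code behind it can be deleted outright.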

Steven had noticed that function tracing had traditionally used two possible implementations: one using only GCC's -pg flag and one using the -pg flag in conjunction with -mfentry. Everyone understood that -pg plus -mfentry was better, but the kernel build system had always had to support the -pg flag by itself, because older versions of GCC didn't support the -mfentry flag. To make sure function tracing was supported in all kernels, there needed to be multiple implementations: some using the excellent -pg plus -mfentry and others using the less excellent -pg alone. Now that the minimum GCC version was 4.8, all supported GCC versions implemented the -mfentry flag, so the -pg-only solution was no longer needed.
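
The difference between the two is easy to see first-hand. With plain -pg, GCC inserts a call to mcount() after each function's prologue; with -mfentry (an x86 option), the call goes to __fentry__() as the very first instruction, before the stack frame is even set up, giving the tracer a cleaner, more predictable hook point. A tiny test file (the name trace-demo.c is just for illustration) makes the point:

/* trace-demo.c: compile twice and compare the assembly:
 *   gcc -pg -S trace-demo.c           (mcount called after the prologue)
 *   gcc -pg -mfentry -S trace-demo.c  (__fentry__ called before it)
 */
int add(int a, int b)
{
        return a + b;
}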

Steven posted a patch to remove the -pg-only implementation.

There was wide applause for this violent and merciless act. Peter Zijlstra and Jiri Kosina both shouted with glee, while Linus smiled contentedly.

It's common with this sort of adjustment that developers will start to look deeper into the given piece of code to identify yet more areas that can be simplified or removed. In this case, as Linus pointed out, there were a bunch of conditionals that no longer had more than one choice, now that the second choice had been taken away. So it would now be possible to get rid of all those conditionals and simplify the code still further.
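
The pattern Linus had in mind looks roughly like the sketch below, modeled on the CC_USING_FENTRY conditionals in the kernel's x86 ftrace header. Once the mcount() alternative is gone, the preprocessor scaffolding around the surviving branch is dead weight too:

/* Before: two choices, selected at build time. */
#ifdef CC_USING_FENTRY
# define MCOUNT_ADDR ((unsigned long)(__fentry__))
#else
# define MCOUNT_ADDR ((unsigned long)(mcount))
#endif

/* After: every supported GCC has -mfentry, so one choice remains. */
#define MCOUNT_ADDR ((unsigned long)(__fentry__))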

This type of patch is very popular with Linus and his core group of developers – anything that shrinks the code base, or that shrinks the size of the compiled kernel, is already a patch they like, before they've even looked at what it does. After that, they may object for other reasons. But just tell people your patch removes X hundreds of lines of code, and they start nosing around interestedly, hoping to find an excuse to apply this patch to the tree.

Protecting RAM from Hostile Eyes

Security issues are always weird. They're always something nobody has thought of yet. And then lo and behold, there it is, like a rabbit in a hat. Where did that come from? Or else you're implementing what seems to be a perfectly good feature, like a macro language for your closed source word processor product; what could ever go wrong with that? And lo and behold, you create an entirely new universe of attacks against perfectly innocent users.

In the Linux world, Matthew Garrett saw a sliver of danger lurking in the crack of time between when a piece of RAM is freed by a program and when it gets wiped clean by the kernel. In that moment, a hostile attacker could potentially reboot the system into a different OS, read the RAM, and see the data belonging to that program. If that data was secret and sensitive, there could be a problem.

There are already mechanisms to prevent sensitive data from being sent into swap and to make sure RAM gets thoroughly overwritten when the program is done with it. But there was always that sliver of time between release and overwrite, when a reboot might leave the data exposed.
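
Those existing mechanisms are visible from user space. A careful program locks its secrets out of swap with mlock() and scrubs them with explicit_bzero(), which, unlike a plain memset(), the compiler may not optimize away. Here is a minimal sketch, assuming glibc 2.25 or later for explicit_bzero():

#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        char key[32];

        /* Keep the secret's pages out of swap... */
        if (mlock(key, sizeof(key)) != 0)
                return 1;

        /* ...use the key for something sensitive... */

        /* ...then scrub it before releasing the memory. */
        explicit_bzero(key, sizeof(key));
        munlock(key, sizeof(key));
        return 0;
}

The sliver Matthew was worried about is exactly the case where this discipline fails: the program dies, or is killed, before explicit_bzero() ever runs.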

Now, it was also possible to instruct UEFI firmware to wipe all RAM before booting into another OS. But as Matthew pointed out, this would make booting take several times longer, a truly unacceptable slowdown.

A better approach, he felt, was for Linux itself to wipe all RAM before shutting down. Assuming the attacker didn't simply cut the power to force a cold boot, Linux could then shut down gracefully, deleting all sensitive data and leaving the attacker with nothing to target. There was real value in this, Matthew said. Consider the case where the OOM killer suddenly kills a process without giving it any time to exit cleanly and delete its own data. The operating system itself would have to be the last line of defense. Matthew posted a patch to implement his idea.

There was a bunch of support for this. However, Christoph Lameter pointed out that Matthew's patch only wiped RAM a little bit earlier than it would otherwise be wiped, given that RAM was always scrubbed clean before being allocated to another process. If anything, he said, Matthew's patch would reduce the width of the dangerous sliver of time, but it wouldn't eliminate it completely. And since Matthew's patch extended the behaviors of system calls and created whole new kernel behaviors, it seemed to Christoph that this slight reduction of the attack surface was not worth the added complexity.

It's unclear whether Christoph's objection will stop the patch from going into the kernel. His point is clearly valid, but there was also a fair amount of initial support for the patch, and in some ways it seems like a common-sense precaution. Ultimately, I believe, if the security hole can't be closed completely, Linus Torvalds may be less willing to accept a patch that adds any sort of complexity to the kernel in exchange for only a partial fix. And closing the hole entirely may prove to be impossible.

Verifying Return Addresses

Some security fixes look cool but are nonstarters for other reasons. For example, Mike Rapoport from IBM wanted to strengthen Linux address space isolation, which on its own merits could be very useful. Address space isolation is where a given process can only "see" a certain range of RAM addresses, and anything outside of that range is completely cut off from potential attack.

Mike's approach, he explained, was to have the kernel confirm every address that popped off the stack. This way, the return address of any function call made in an address space would be guaranteed to bring execution back to the calling function. Without Mike's patch, a hostile user might be able to insert a fake return address that didn't exist in the kernel's predefined symbol table. Then, when that address was popped off the stack, control would be handed over to a hostile piece of code, possibly with root access. Mike's patch stopped that dead in its tracks.
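
A user-space analogue gives the flavor of the check, though not the kernel's actual implementation: walk the stack with backtrace() and ask dladdr() whether each return address resolves to a known symbol, much as the kernel would consult its own symbol table. Build with gcc -rdynamic (plus -ldl on older glibc) so the program's own symbols are visible:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <execinfo.h>
#include <stdio.h>

/* Check every return address on the current stack against the
 * symbol table: a rough user-space sketch of Mike's idea. */
static void verify_stack(void)
{
        void *addrs[32];
        int i, n = backtrace(addrs, 32);

        for (i = 0; i < n; i++) {
                Dl_info info;

                if (dladdr(addrs[i], &info) && info.dli_sname)
                        printf("ok:      %p in %s()\n", addrs[i], info.dli_sname);
                else
                        printf("suspect: %p has no known symbol\n", addrs[i]);
        }
}

int main(void)
{
        verify_stack();
        return 0;
}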

Unfortunately, as Mike himself admitted, confirming that every single return address had a corresponding entry in the symbol table would be a costly operation, performed many times per second on a normal system. He saw no way to accomplish this without the kernel taking a generalized speed hit.

Not only that, but as Jiri Kosina said, Mike's code was actually incompatible with other security projects that attempted to solve similar problems. So between the speed hit and the project conflicts, the otherwise useful aspects of Mike's patch were not sufficient to gain any interested supporters among the kernel developers.

It's not such a rare outcome – someone works many hours to prove that a given feature is possible, only to find that it conflicts with some other project that seems to have more favor, or that the feature itself has some inescapable drawback, like a speed hit, that dooms it.

At the same time, in many cases, a failed project points the way towards a better approach that later on succeeds. It's almost never the case that a project is simply bad and useless, especially if a developer thought it was important enough to invest so much time.

The Author

The Linux kernel mailing list comprises the core of Linux development activities. Traffic volumes are immense, often reaching 10,000 messages in a week, and keeping up to date with the entire scope of development is a virtually impossible task for one person. One of the few brave souls to take on this task is Zack Brown.
