Zack's Kernel News
Splitting the Scheduler
Yuyang Du proposed splitting the Linux process scheduler into two halves. The problem, he said, was that computers were shipping with more and more CPU cores, and also that a lot of embedded systems were shipping with cores of different strengths. Trying to schedule processes while balancing the load between CPU cores, Yuyang said, made the scheduler code unnecessarily complex. He suggested dividing the scheduler into two halves – a top half to handle process scheduling and a bottom half to balance the load among the available CPUs.
Part of the problem, he said, was that the scheduler also had to deal with a variety of issues such as power consumption and data throughput. Yuyang's idea was that the two halves of the scheduler would each optimize for these things selfishly. This would allow the logic to be much easier to understand and maintain.
Peter Zijlstra replied bluntly, "Yeah, that's not _ever_ going to happen. We've had that discussion many times."
Morten Rasmussen more circumspectly told Yuyang that he wasn't sure the scheduler logic could be as cleanly separated into two halves as Yuyang hoped.
Morten's interpretation of Yuyang's proposal was that Yuyang wanted to have a more straightforward plug-n-play ability to use different bottom half (load balancing) approaches, depending on whether the system used all the same type of CPU, or CPUs of different strengths.
But Morten said that any load-balancing approach would inevitably have to modify the heuristics used for process scheduling. Any scheduler that limited the ability of the top and bottom halves to influence each other, he said, would severely restrict each half's freedom to explore diverse implementations.
In other words, the two halves were essentially locked together.
Peter also elaborated his response, saying, "You cannot treat them completely independent, as fairness must extend across CPUs. And there's good reasons to integrate them further still." He gave some technical reasons, but he added that even if Yuyang's proposal were possible, he didn't see the benefits Yuyang claimed. The discussion didn't go very far. Yuyang tried to present the case that a good design might make the problem easier to tackle, but no one engaged.
I like this sort of thing, where a visionary developer tries to split some code that everyone thinks is unsplittable. Who knows; it's possible that Yuyang could find the elusive dividing line that would allow top and bottom to separate cleanly, while still allowing diverse implementation explorations in both halves.
The argument against it seems fairly unyielding, though. No one seemed to consider even for a moment the possibility that the scheduler could be divided up this way. So, it's unclear how much support Yuyang would have if he decided to take on the long-term project alone.
Tainting Kernels Compiled with Experimental Modules
Daniel Vetter remarked sardonically, "Users just love to set random piles of options since surely enabling all the experimental stuff helps. Later on we get bug reports because it all fell apart."
Daniel's idea was to "Make it clear that users are playing with fire here." He wanted to taint the kernel whenever a user enabled any of a variety of experimental modules. Tainting lets developers see at a glance when a bug report comes from a tainted source. Instead of tracking down a bug in what could well be a wild goose chase, developers could simply ask the user to reproduce the problem on a kernel that had not been tainted with experimental modules.
Andrew Morton replied, "Seems harmless and potentially useful to others so yes, I'd vote for putting it in core kernel."
Rusty Russell and Andrew raised various technical concerns. Daniel was encouraged and offered to post a new patch that addressed those issues.
Inefficiencies with Large Numbers of Mounts
Al Viro remarked that some modern systems used a "serious amount of mounts." Apparently large numbers of filesystem mounts had never been thoroughly tested before, because there'd been no need – nobody used them. With the experience of modern users, Al was able to identify some efficiency issues in the kernel.
For one thing, the percpu_free() routines did not seem to be very efficient. He clocked them at O(N^2), meaning that their execution time grew with the square of the number of mounts. This would cause increasingly nightmarish slowdowns as mounts accumulated.
The problem seemed difficult to solve entirely, but Al did identify various "low-hanging fruit" that could help speed things up a bit in the short term. For example, one piece of existing code needed to add up the lengths of all existing mount allocations to find a desired offset. Things like avoiding doing all that addition, Al said, could make a relatively big difference.
In another case, Al said that the propagate_mnt() routine had already been known to be inefficient when it was first written, although it was only now that the inefficiencies had started to bite users. He said, "we recognized the problem, but decided to go for simpler logics unless and until somebody runs into situations where it creates a lot of copies only to tear almost all of them down." But, he said, it turned out to be quite easy to trigger the bad behavior. He figured this would not be difficult to fix – the new code would just have to be a bit more complicated than the current code.
Another problem that Al said would be easy to solve was that reading /proc/mounts from one end to the other was currently slow, because for each entry, the code had to walk through the entire list of entries to identify its current place. For a large number of entries, this could cause big delays. Al said he could fix this by simply caching the relevant data and keeping track of the current list item.