Facebook releases its own OOM implementation
The Unpopular OOM Killer
The kernel's OOM killer is definitely only a helper. However, it fulfills an important role if your primary goal is to keep the system from crashing: In an emergency, it kills programs that would otherwise, with high probability, cause the system to crash due to RAM over-allocation. Put simply, the task of the OOM killer is to buy the admin some time to deal with the actual problem in detail, without the entire environment blowing up in the admin's face.
As a helper construct, the kernel's internal OOM killer often does more harm than good, at least if you believe Facebook. I'll explain why the social media giant has been critical of the kernel's OOM solution, but first I need to explain a little more about how the current OOM killer in Linux works.
Holistics Instead of Heuristics
The way the OOM killer identifies its potential victims on a running system has changed dramatically over the years. The first OOM implementation in Linux, which was in use for many years, was essentially based on heuristics: Using many parameters, the kernel tried to work out which programs were unimportant for providing the system's elementary functions and could be sent to the happy hunting grounds without causing too much fuss.
The function responsible for this in the kernel's memory management is appropriately known as badness(): It calculates which process on the running system has the highest level of badness and thus generates a list sorted in descending order.
However, the heuristics used were not particularly comprehensible, and so Google established a completely new implementation of the OOM killer in Linux in 2010. The function responsible for identifying the problematic processes is still known as badness(), but beyond that, not much has remained the same.
The badness() function now follows a rather simple approach: It is interested almost exclusively in the memory consumption of individual processes. It throws all of the system's processes onto the scales and then asks how it can free up as much memory as possible while killing as few processes as possible.
For each process, the kernel calculates the OOM score, which, by the way, can also be read with cat from the /proc filesystem (/proc/PID/oom_score). If an OOM situation arises, the OOM killer starts to terminate the processes with the highest OOM scores, one after another.
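To get a feel for this ranking, the following small Python sketch (my own example, not part of any kernel tooling) walks /proc, reads each process's oom_score and name, and prints the top candidates roughly in the order the OOM killer would consider them:

```python
#!/usr/bin/env python3
# List the processes with the highest OOM scores by reading
# /proc/<PID>/oom_score -- the same value the OOM killer consults.
import os

def read_oom_scores():
    scores = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/oom_score") as f:
                score = int(f.read().strip())
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
        except (FileNotFoundError, ProcessLookupError, PermissionError):
            continue  # process exited or is inaccessible; skip it
        scores.append((score, int(pid), name))
    return sorted(scores, reverse=True)

if __name__ == "__main__":
    for score, pid, name in read_oom_scores()[:10]:
        print(f"{score:6d}  {pid:7d}  {name}")
```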
How exactly the OOM score is calculated is much easier to answer since the patches were introduced: For each process, the kernel evaluates how much memory it actually uses, defined as the sum of its resident memory (RAM) and its swap usage, although swap is becoming increasingly rare on today's systems. The rule of thumb is: The more memory a process uses at the time of the OOM rating, the higher it appears on the kill list.
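If you want to reproduce this rule of thumb from user space, a rough approximation is to add up a process's resident set and swap usage as reported in /proc/PID/status. The kernel's internal calculation differs in detail, so the sketch below is only an estimate:

```python
#!/usr/bin/env python3
# Approximate the size metric behind the OOM score for one process:
# resident memory (VmRSS) plus swap usage (VmSwap), both reported in kB
# in /proc/<PID>/status. The kernel's own calculation differs in detail.
import os
import sys

def memory_footprint_kb(pid):
    rss = swap = 0
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                rss = int(line.split()[1])   # value is given in kB
            elif line.startswith("VmSwap:"):
                swap = int(line.split()[1])  # value is given in kB
    return rss + swap

if __name__ == "__main__":
    pid = sys.argv[1] if len(sys.argv) > 1 else str(os.getpid())
    print(f"PID {pid}: ~{memory_footprint_kb(pid)} kB (RSS + swap)")
```

Kernel threads report no VmRSS or VmSwap lines at all, which is why they come out at zero here and are effectively invisible to this kind of ranking.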
Exceptions Prove the Rule
The Google team has implemented a few exceptions to avoid annoying effects. Basically, the kernel always subtracts 30 points from the OOM score for root processes, because the system administrator's processes are potentially more important than those in user space. In addition, the admin can also influence the score: For this purpose, there is an oom_score_adj file in the /proc filesystem that you can use to adjust the value for each process; values from -1000 to +1000 are possible.
If the admin gives a process -1000, the OOM killer will not kill this process. However, setting the value to +1000 is like painting a large X on the process's belly; it increases the probability that this process will be the first to die an OOM death.