Zack's Kernel News
Dealing with Loose Build Dependencies
The Linux kernel build system goes far beyond simple makefiles; it is practically a separate software development project enclosed within the kernel project itself. Recently Linus Torvalds himself submitted a bug report on the kernel build system, in which he found that a large portion of build time was spent invoking cc1plus (GCC's C++ compiler proper) to perform tests in the GCC plugins directory. Apparently the same goal could be accomplished much faster by simply checking for the existence of a particular .h file, rather than launching an entire compiler instance to read that file.
Masahiro Yamada pointed this out and posted a patch that not only removed the cc1plus invocation altogether, but also made no attempt to reimplement the test it had performed. As he put it, failing to perform the test would produce only a single unimportant warning. He remarked, "modern C++ compilers should be able to build the code, and hopefully skipping this test should not make any practical problem."
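The difference between the two probes can be sketched in shell. This is a hypothetical illustration, not the kernel's actual check (which lives in its Kconfig and script machinery); the gcc/g++ invocations and paths are assumptions:

```shell
# Hypothetical sketch: two ways to probe for GCC plugin support.
# GCC reports the directory holding its plugin headers:
PLUGIN_DIR=$(gcc -print-file-name=plugin 2>/dev/null)

# Old-style probe: compile a snippet that includes the plugin header,
# which forces a full cc1plus startup every time the check runs.
slow_check() {
    echo '#include "gcc-plugin.h"' |
        g++ -x c++ -fsyntax-only -I"$PLUGIN_DIR/include" - 2>/dev/null
}

# New-style probe: merely test that the header file exists on disk;
# no compiler instance is launched at all.
fast_check() {
    test -e "$PLUGIN_DIR/include/gcc-plugin.h"
}
```

The speedup comes from the fact that `test -e` is a few system calls, while the old probe pays the full startup cost of a C++ compiler on every build.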
Linus approved the patch without comment. Kees Cook also approved and accepted the patch into his own tree.
However, a couple of weeks later, Marek Szyprowski reported that this simple patch "causes a build break with my tests setup, but I'm not sure whether it is really an issue of this commit or a toolchain I use. However I've checked various versions of the gcc cross-compilers released by Linaro [...] and all fails with the same error."
He added, "Compilation works if I use the cross-gcc provided by gcc-7-arm-linux-gnueabi/gcc-arm-linux-gnueabi Ubuntu packages."
Masahiro replied, "I can compile gcc-plugins with Linaro toolchains," and proceeded to try to track down the missing component that he felt must exist in Marek's setup.
Specifically, Masahiro suggested that Marek install the libgmp-dev package, which included header files for the GNU Multiple Precision Arithmetic Library.
Marek installed the headers and reported the problem solved.
However, Jon Hunter of NVIDIA also complained about the identical problem, saying, "this change also breaks the build on our farm build machines and while we can request that packages are installed on these machines, it takes time. Is there any way to avoid this?"
Marek replied that reverting Masahiro's patch would do the trick, though of course then the build process would be slower once again.
However, this would not do the trick for Jon. He said, "that works locally, but these automated builders just pull the latest -next branch and build." But on further reflection, Jon added, "if you are saying that this is a problem/bug with our builders, then of course we will have to get this fixed."
And Masahiro confirmed that yes, the problem was probably with NVidia's build system, and they should take steps to make sure the package dependency was updated in their toolchain.
Masahiro explained:
"Kconfig evaluates $(CC) capabilities, and hides CONFIG options it cannot support.
"In contrast, we do not do that for $(HOSTCC) capabilities because it is just a matter of some missing packages.
"For example, if you enable CONFIG_SYSTEM_TRUSTED_KEYRING and fail to build scripts/extract-cert.c due to missing <openssl/bio.h>, you need to install the openssl dev package.
"It is the same pattern."
In other words, $(CC) is the compiler that builds the kernel itself, so Kconfig probes its capabilities and hides any build options that compiler cannot support. $(HOSTCC), by contrast, compiles the helper tools that run on the build machine, and its requirements usually come down to installed development packages. Kconfig won't hide options that are merely missing such host-side dependencies – the user is expected to install those dependencies in order to get the kernel features they need.
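On a cross build, the two variables name different compilers. A typical invocation looks something like the following (the ARM triplet is only an example):

```shell
# Illustrative only: CC (derived here from CROSS_COMPILE) targets the
# machine the kernel will run on, while HOSTCC builds the helper tools
# in scripts/ that run on the build machine itself.
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- HOSTCC=gcc defconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- HOSTCC=gcc
```

Kconfig probes the cross compiler and silently drops options it cannot support, whereas a package missing on the build machine (such as GMP's development headers) only surfaces later, as a compile error from a $(HOSTCC) invocation.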
This made sense to Jon, and he said he'd speak to the engineers about updating their build environment.
Elsewhere in the thread, Thierry Reding pointed out that the original code, prior to Masahiro's patch, actually attempted to build a test plugin, while Masahiro's patch simply verified the existence of a header file, without trying to use that header file to build anything.
Thierry felt this could explain the recent breakages seen by people like Marek and Jon. He said, "where previously the check would fail [...] the same check now succeeds (i.e. $CC was built with plugins support, but we no longer check if the plugin support is also functional). That means after your change the builders will now by default try to build the plugins and fail, whereas previously they wouldn't attempt to do so because the dependency wasn't met."
He concluded, "that makes the new check a bit less useful than the old one, because rather than defaulting to 'no' when GCC plugins can't be built, we now default to 'yes' when they should be able to get built but can't."
However, Thierry also acknowledged, "it's probably reasonable to expect the installation to be good and that plugins can be built if the gcc-plugin.h header can be found, so I'm not objecting to this patch." But he wondered if simply installing the missing dependency packages was actually the right solution. He explained, "In case where CC != HOSTCC, it's possible that CC was not built against the same version of GMP/MPC as HOSTCC. And even HOSTCC might not necessarily have been built against the versions provided by libgmp-dev or libmpc-dev."
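One way to see which plugin headers a given compiler would use – and therefore which bundled GMP headers, if any, the plugin build must match – is GCC's -print-file-name=plugin option. The cross-compiler name below is a hypothetical example:

```shell
# Print the plugin header directory of each compiler of interest.
# The cross-compiler name is a hypothetical example.
plugin_dir_of() {
    "$1" -print-file-name=plugin 2>/dev/null
}

for compiler in gcc arm-linux-gnueabi-gcc; do
    dir=$(plugin_dir_of "$compiler") || continue
    echo "$compiler: $dir"
    # Some toolchains bundle their own gmp.h alongside the plugin
    # headers; if so, that copy is the one the plugin build sees.
    ls "$dir/include/gmp.h" 2>/dev/null
done
```

If the directories differ between HOSTCC and a cross CC, installing the distro's libgmp-dev may not be enough, which is exactly the version mismatch Thierry was worried about.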
In which case, he said, the dependency might not be as easy to meet as simply installing a particular package on a particular Linux distribution – the package may need a certain version number.
At this point, Linus came into the discussion, saying:
"This seems to be a package dependency problem with the gcc plugins – they clearly want libgmp, but apparently the package hasn't specified that dependency.
"If this turns out to be a big problem, I guess we can't simplify the plugin check after all.
"We historically just disabled gcc-plugins if that header didn't build, which obviously meant that it 'worked' for people, but it also means that clearly the coverage can't have been as good as it could/should be.
"So if it's as simple as just installing the GNU multiprecision libraries ('gmp-devel' on most rpm-based systems, 'libgmp-dev' on most debian systems), then I think that's the right thing to do. You'll get a working build again, and equally importantly, your build servers will actually do a better job of covering the different build options."
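The fix Linus describes amounts to a single package-manager command per distribution family. The commands below assume sudo access, and the MPC development package is included as well, since Thierry noted the plugin headers can depend on it too:

```shell
# Install the GMP (and MPC) development headers needed to build the
# gcc plugins; package names follow Linus's note above.
sudo apt-get install libgmp-dev libmpc-dev    # Debian/Ubuntu systems
sudo dnf install gmp-devel libmpc-devel       # Fedora and other RPM systems
```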
Ultimately, this seems like the sort of issue that will be solved with minimal pain. Jon, for example, mentioned that "I have reported this issue to the team that administers the builders. So hopefully, they will install the necessary packages for us now." And it's likely that a small number of other organizations may have to implement similar fixes.
Probably the speed fix will stay in the kernel, and users will need to install the necessary packages from their distro. It doesn't seem likely that this will turn into the kind of problem that would lead to reverting a significant build-time speedup, just to avoid a minor inconvenience to a relatively small number of users.