Stay nimble
Doghouse – Portability and Costs
Minimizing CPU overhead from the beginning can help you lower costs and maximize portability over time.
A number of years ago cloud computing came on the scene, with one of the first suppliers being Amazon. Amazon's use of its data centers peaked during the Christmas season, leaving it with spare capacity the rest of the year to sell Internet-accessible server computing to companies that needed it, at a fraction of what it would cost a small company to supply it themselves. An industry was born.
Almost overnight (at least by most industry terms) companies started to turn over their server computing to other companies (AWS, Google, Microsoft, and others) who had the computers, staff, physical plant, security, and so on necessary to do the work.
I have advocated for the use of many of these cloud services when a fledgling company is starting out. In the open source space you could think of places like SourceForge, GitHub, GitLab, and others as "cloud services" that make the development and collaboration of software (and even hardware) easier and less expensive.
The problem comes with two issues: lock-in and growth. Both of these have been with the computer industry for decades.
Lock-in can happen when the developer uses interfaces or platform features that are nonstandard. Even in the days of simpler programs there were standards that allowed programs to be moved from computer to computer if the programmer coded only to the formal standard. Almost every commercial compiler offered "extensions" to the standard that gave the programmer easier methods of coding or more efficient execution of the code. These extensions were usually set off in gray (or some other color) boxes in the documentation, with a warning that this was an extension, allowing the programmer to steer clear of it if they wanted portable code.
A good programmer might then code what is called a "fallback," which would execute using only the standard code, but if operating in an environment where the extension existed then the extension could be used, either with a recompile or a runtime determination.
I have used a simple example, but these "extensions" to standards occur at every level, from programming languages to library and system calls, to the interfaces of your cloud systems, which cloud service providers sometimes offer as their advantage over their competitors.
All of this might be fine if your cloud service were guaranteed to always be the least expensive, the most stable, and able to give you all the services you need as your company grows. I have, however, worked for some of the largest companies on earth, which I thought would be there "forever" and which are now completely gone. You have to be ready to move, and sometimes quickly.
The second reason for being nimble with regard to portability is the growing expense of cloud computing versus the cost of running your own server systems, especially in certain environments. These costs can be both economic and political. The more your company grows in size and scale, the more you should plan for the contingency of having to move or separate your server loads.
This article was inspired by a whitepaper recently published showing how the cloud service costs of some users rapidly increased over time due to the growth of the data or computer load to the point where it might have been practical for some large companies to create their own data centers again, after moving to cloud services several years ago. Unfortunately, moving even to another cloud service, much less your own physical plant, is often complex and expensive.
Besides maintaining flexibility through portability, working with tools to modify the amount of CPU, data storage, data transfer, and Internet usage can reduce (sometimes dramatically) these charges from whatever cloud supplier you use.
Recently a programmer I work with went through some older code on their project and found an application that did some dynamic allocation of memory for each partial transaction. For reasons too complex to explain here, this caused an overhead of 1,200 milliseconds (yes, if you do the math, 1.2 seconds) per transaction: painfully slow for the user, but also an unnecessary strain on the server. The programmer changed the algorithm to calculate how many allocations would be needed and then allocated the space a single time, and (in effect) the CPU overhead dropped to zero. These savings are one thing if there is only one user of the program, or if it runs on a laptop, but with hundreds of users in a server environment, the CPU utilization mounts up quickly.
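The fix described above can be sketched as follows. The column does not show the actual code, so the record type and sizes here are invented for illustration; the point is the shape of the change, from one `malloc()` per record to a single up-front allocation:

```c
#include <stdlib.h>

/* Hypothetical per-transaction record; the real structure in the
 * story is unknown. */
typedef struct {
    int id;
    double amount;
} record_t;

/* Slow pattern: one heap allocation per partial transaction.
 * Each call pays allocator overhead, and the records end up
 * scattered across the heap. */
record_t **allocate_one_by_one(size_t count)
{
    record_t **records = malloc(count * sizeof(record_t *));
    if (!records)
        return NULL;
    for (size_t i = 0; i < count; i++)
        records[i] = malloc(sizeof(record_t)); /* repeated overhead */
    return records;
}

/* Fixed pattern: compute the count first, then allocate the whole
 * block once. The allocator is invoked a single time, and the
 * records are contiguous in memory. */
record_t *allocate_once(size_t count)
{
    return malloc(count * sizeof(record_t));
}
```

The single-allocation version also simplifies cleanup: one `free()` instead of a loop, and no chance of leaking individual records on a partial failure.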
In summary, what is inexpensive in small quantities can rapidly become a big expense in larger quantities, so plan to minimize overhead from the very beginning. Remember that "performance" is not just how fast your application runs, but how portable your code is and how many programming resources it saves if you ever have to move it.