Stay nimble

Doghouse – Portability and Costs

Article from Issue 264/2022
Author(s): Jon "maddog" Hall

Minimizing overhead and maintaining portability from the beginning can help you lower costs and stay nimble over time.

A number of years ago cloud computing came on the scene, and Amazon was one of its first suppliers. Amazon's use of its data farms peaked during the Christmas season, which left it with spare capacity the rest of the year to sell Internet-accessible server computing to companies that needed it, at a fraction of what it would cost a small company to provide it on its own. An industry was born.

Almost overnight (at least in industry terms), companies started to turn over their server computing to other companies (AWS, Google, Microsoft, and others) that had the computers, staff, physical plant, security, and so on necessary to do the work.

I have advocated for the use of many of these cloud services when a fledgling company is starting out. In the open source space, you could think of places like SourceForge, GitHub, GitLab, and others as "cloud services" that make developing and collaborating on software (and even hardware) easier and less expensive.

The trouble comes from two issues: lock-in and growth. Both have been with the computer industry for decades.

Lock-in can happen when the developer uses interfaces or platform features that are nonstandard. Even in the days of simpler programs, there were standards that allowed a program to be moved from computer to computer, as long as the programmer coded only to the formal standard. Almost every commercial compiler, however, offered "extensions" to the standard that gave the programmer easier ways of coding or more efficient execution of the code. These extensions were usually set off in the documentation in gray (or some other color), with a warning that they were extensions, allowing the programmer to avoid them if they wanted portable code.

A good programmer might then code what is called a "fallback": the program normally executes using only standard code, but in an environment where the extension exists, the extension can be used instead, selected either with a recompile or a runtime determination.
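To make the idea concrete, here is a minimal C sketch of a compile-time fallback (my own illustration, not code from any particular product). It assumes a compiler that may or may not provide the nonstandard GCC/Clang __builtin_popcountl() extension; the preprocessor picks the extension when it is available and the plain standard C version otherwise:

  /* Compile-time fallback: use the nonstandard builtin when the
     compiler provides it, otherwise fall back to portable C. */
  #include <stdio.h>

  static unsigned count_bits(unsigned long v)
  {
  #if defined(__GNUC__) || defined(__clang__)
      /* Extension path: typically compiles to a single instruction. */
      return (unsigned)__builtin_popcountl(v);
  #else
      /* Standard C fallback: works with any conforming compiler. */
      unsigned n = 0;
      while (v) {
          n += (unsigned)(v & 1UL);
          v >>= 1;
      }
      return n;
  #endif
  }

  int main(void)
  {
      printf("bits set in 0xF0F0: %u\n", count_bits(0xF0F0UL));
      return 0;
  }

A runtime determination works the same way in spirit: The program tests once at startup whether the faster path is available and then calls the appropriate version, for example through a function pointer.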

I have used a simple example, but these "extensions" to standards occur at every level, from programming languages to library and system calls, to the interfaces of your cloud systems, which cloud service providers sometimes promote as their advantage over competitors.

All of this might be fine if your cloud service were guaranteed always to be the least expensive and most stable and to give you all the services you need as your company grows. I have, however, worked for some of the largest companies on earth, companies I thought would be there "forever," and they are now completely gone. You have to be ready to move, sometimes comparatively quickly.

The second reason for being nimble with regard to portability is the growing expense of cloud computing versus the cost of running your own server systems, especially in certain environments. These costs can be both economic and political. The more your company grows in size and scale, the more you should plan for the contingency of having to move or separate your server loads.

This article was inspired by a recently published whitepaper showing how some users' cloud service costs grew rapidly over time, along with their data and compute loads, to the point where it might now be practical for some large companies, which moved to cloud services several years ago, to build their own data centers again. Unfortunately, moving even to another cloud service, much less to your own physical plant, is often complex and expensive.

Besides maintaining flexibility through portability, using tools to trim the amount of CPU, data storage, data transfer, and Internet usage you consume can reduce (sometimes dramatically) the charges from whatever cloud supplier you use.

Recently a programmer I work with went through some older code on their project and found an application that did some dynamic allocation of memory for each partial transaction. For reasons too complex to explain here, this caused an overhead of 1,200 milliseconds (yes, if you do the math, that means 1.2 seconds) for each transaction … painfully slow for the user, but also putting an unnecessary strain on the server. The programmer changed the algorithm to calculate how many allocations would be needed and then allocated the space just once, and (in effect) the CPU overhead dropped to zero. Such a savings is one thing if there is only one user of the program, or if it is running on a laptop, but with hundreds of users running it in a server environment, the CPU utilization mounts up quickly.
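The code itself is not shown in this column, but the pattern is common enough to sketch in C (the function names and the item type below are hypothetical, purely for illustration). The slow version grows its buffer with realloc() for every item of a transaction, paying for a possible copy each time; the fast version counts the items first and allocates the space exactly once:

  #include <stdlib.h>
  #include <string.h>

  typedef struct { long id; double amount; } item_t;

  /* Slow: one realloc() (and possibly a copy) per item. */
  item_t *collect_items_slow(const item_t *src, size_t n)
  {
      item_t *buf = NULL;
      for (size_t i = 0; i < n; i++) {
          item_t *tmp = realloc(buf, (i + 1) * sizeof *buf);
          if (!tmp) { free(buf); return NULL; }
          buf = tmp;
          buf[i] = src[i];
      }
      return buf;
  }

  /* Fast: the size is known up front, so allocate exactly once. */
  item_t *collect_items_fast(const item_t *src, size_t n)
  {
      item_t *buf = malloc(n * sizeof *buf);
      if (!buf) return NULL;
      memcpy(buf, src, n * sizeof *buf);
      return buf;
  }

The fix trades a little bookkeeping (counting first) for the removal of repeated allocator calls on the hot path.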

In summary, what is inexpensive in small quantities can rapidly become a big expense in larger quantities, so plan to minimize the overhead from the very beginning, and remember that "performance" is not just how fast your application runs, but also how portable your code is and how many programming resources it can save if you ever have to move it.

The Author

Jon "maddog" Hall is an author, educator, computer scientist, and free software pioneer who has been a passionate advocate for Linux since 1994 when he first met Linus Torvalds and facilitated the port of Linux to a 64-bit system. He serves as president of Linux International®.
