The limits of affordability

maddog's Doghouse

Article from Issue 221/2019
Author(s): Jon "maddog" Hall

Fifty years ago, limitations in computing had more to do with cost than know-how; maddog takes us back to 1969.

When you have been in the computer industry for 50 years, you tend to have gray in your hair (assuming you still have some hair left). You also have a series of friends who talk about "way back when."

At the same time, you notice a group of "younger people" who have questions about why things are the way they are and sometimes have some slightly strange ideas about how things came about.

So, old-timers, pardon me while I walk through time and loosely trace a path of where we are and how we got here.

Imagine the year 1969 (not really so long ago), when even mainframe computers ran only one task at a time. A machine that could fill a room, used water cooling, and consumed enough electricity to make a significant dent in your electric bill had only one-quarter to one-half a megabyte of core memory.

That machine might also have one or two disk drives that cost $32,000 apiece, each holding 180MB of storage. The processors could execute anywhere from 35,000 to a bit over 16 million instructions per second, depending on the model.

A task's address space on this mainframe was 24 bits, or about 16 million bytes. The entire program had to fit into memory at one time, unless you used a trick called "overlays," which caused pain and suffering for the programmer.

Only large companies and governments could afford anything more, and often even they could not. Since most computer companies only built machines that they could sell, improvements in size and capacity came slowly.

It was not that we did not know how to solve big problems; it was that we could not afford to solve them.

"Operators" mounted magnetic tapes, fed punched cards, and tore off printer listings to be distributed to the employees who needed them.

Programs read directly from card readers and wrote directly to printers, which limited a program's speed to how fast the card reader could read and the printer could print. Good programs started reading the next card while processing the current one and scheduled the next line of output as soon as the printer's interrupt signaled that the last line had finished printing. The operators would not launch the next job until the previous one had finished printing.

Then "spooling" was invented … a trade off of disk space and memory for faster access to data and printing. Jobs could now be "queued" to run whenever resources in the machine allowed, and more memory was added to allow multiple jobs to run at one time.

Security was still locking the door at night (no terminals attached to this machine), networking was carrying the boxes of cards and tapes around, and graphics was printing ASCII art on line printers.

Graphics, audio, and video that we take for granted today were unknown to the general public, and most people never saw a real computer or touched a computer keyboard until they went to university or started working for a company. There were no computers in high schools or at home.

A single transistor in those days might cost $1.50, and integrated circuits were only beginning to replace discrete transistors in the late 1960s, so larger memories and faster CPUs were simply not affordable.

I remember buying 128,000 bytes of semiconductor memory in 1978 for about $128,000, roughly a dollar a byte and about twice what you might pay for a decent house at the time. A few years later (1981), I paid $10,000 for a megabyte of RAM, and a couple of years after that, the same megabyte cost $2,000.

You needed to balance the speed of the CPU with the amount of RAM, the amount of disk storage, and a number of other factors. What would be the sense (in most cases) of having a CPU that could execute billions of instructions a second accessing memory that was only kilobytes in size, with a disk measured in megabytes? Or having 10GB of RAM with a 5MB storage device?

Supercomputers had been around for a long time, and we knew how to solve big problems, but the average company could not afford a monolithic supercomputer, so a lot of the "big problems" went unsolved.

The concept of the Beowulf cluster, a supercomputer built from commodity machines, allowed people to solve the same problems for approximately 1/40th the price, or to solve problems 40 times bigger for the same amount of money.

The graphics, networking, and processing capabilities we have today (including in mobile phones) grew out of a combination of need and technology that developed over time and continues to develop, but these capabilities are still limited by our ability to afford them.

The Author

Jon "maddog" Hall is an author, educator, computer scientist, and free software pioneer who has been a passionate advocate for Linux since 1994 when he first met Linus Torvalds and facilitated the port of Linux to a 64-bit system. He serves as president of Linux International®.
