The continuing drama of RISC vs. CISC
Full Speed Ahead
Crank the compilers and turn up the system clock; maddog explains the basic differences between RISC and CISC architectures.
Recently, it was announced that Hewlett Packard had won a long-running lawsuit against Oracle, with the court finding that Oracle had violated a business agreement requiring continued support for the Itanium processor. I remember when the Itanium was announced. All I could do was slap my head and say, "Again?" How many times did we have to invent Ultra Wide Instruction Set Computing (UWISC) in computer science?
At that time Digital was putting all of its thrust behind a Reduced Instruction Set Computing (RISC) processor named Alpha, after years of making Complex Instruction Set Computing (CISC) hardware, such as the PDP-11 and VAX.
The basic difference between CISC and RISC architectures is that for every instruction the programmer codes, tens, hundreds, or even thousands of tiny little instructions (called "microcode") in a CISC computer tell the hardware exactly what to do to solve a potentially complex set of tasks, whereas in a RISC computer, each machine language instruction basically does one tiny little thing. You might need tens, hundreds, or thousands of RISC instructions to do the same work as a CISC instruction – but then again, you might not. It depends on what the compiler generates.
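As a rough illustration of that expansion, here is a toy Python model of the two styles. The mnemonics and the one-to-four instruction ratio are invented for illustration; they don't correspond to any real instruction set:

```python
# Toy model: one CISC-style memory-to-memory instruction vs. the
# sequence of simple RISC-style instructions that does the same work.
# (Hypothetical mnemonics, not a real ISA.)

memory = {"a": 5, "b": 7, "c": 0}
registers = {}

# CISC style: a single instruction that reads two memory operands and
# writes the result back -- internally expanded by microcode into many
# tiny hardware steps the programmer never sees.
def cisc_add_mem(dst, src1, src2):
    memory[dst] = memory[src1] + memory[src2]

# RISC style: each instruction does one tiny thing; the compiler emits
# one instruction per step.
def risc_load(reg, addr):
    registers[reg] = memory[addr]

def risc_add(dst, r1, r2):
    registers[dst] = registers[r1] + registers[r2]

def risc_store(addr, reg):
    memory[addr] = registers[reg]

cisc_add_mem("c", "a", "b")     # 1 CISC instruction ...

risc_load("r1", "a")            # ... vs. 4 RISC instructions
risc_load("r2", "b")            # for the same effect
risc_add("r3", "r1", "r2")
risc_store("c", "r3")

print(memory["c"])  # 12 either way
```

Which version runs faster in practice depends, as the article says, on what the compiler generates and how the hardware executes it.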
CISC instructions are fabricated to reflect what the user may do. For example, you might have a single CISC instruction that compresses a string of bytes in memory or creates a JPEG. Naturally, these instructions tend to lag the industry, because the standard for what is being done has to be created first. Unfortunately, as the industry shifts, the CISC instruction may become less useful, yet it still has to remain in the instruction set until it can be retired – a long and painful process.
In a CISC computer, a significant amount of space on the chip holds the microcode. This space takes away from the space available for more registers, more cache, and more threads than would be allocated on a RISC processor chip. This is especially true of a 64-bit architecture, which tends to take up more space for registers and cache than a 32-bit system. To be fair, the extra machine-code-level instructions that a RISC processor needs to do the work of a single CISC instruction also slow down the task and take up cache inside the CPU.
Because it can be difficult for a compiler of a high-level language to match up exactly what it is trying to do with the instruction set of a particular processor, CISC machines may be slower in some processes than RISC machines. RISC machines tend to put a lot of emphasis on highly optimized compilers and high-speed clocks for the CPU.
The argument of CISC vs. RISC has been going on since the days of Maurice Wilkes and Alan Turing, with Dr. Wilkes taking the side of CISC and Alan Turing taking the side of RISC. I am firmly in the RISC camp.
In a UWISC design, such as the Itanium, multiple instructions are processed in parallel to make the machine execute "faster." The compiler packs instructions and data into wide bundles, and the CPU executes all of the instructions in a bundle at one time, making the flow "parallel." This is great in theory, if the parallelism your application offers matches the parallelism the CPU is capable of exploiting. Unfortunately, most of the time it does not.
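A minimal sketch of why the bundles so often go underfilled, assuming a hypothetical 3-wide issue machine: independent operations pack into one full bundle, but a chain of data dependencies forces each operation into its own bundle, leaving the parallel slots empty.

```python
# Toy VLIW/UWISC-style bundler (assumed 3-wide issue, hypothetical ops).
# Each op is (destination, list-of-source-dependencies).

def bundle(ops, width=3):
    """Greedily pack ops into bundles; an op that depends on a value
    produced in the current bundle must start a new bundle."""
    bundles, current, produced = [], [], set()
    for dst, srcs in ops:
        if len(current) == width or produced & set(srcs):
            bundles.append(current)
            current, produced = [], set()
        current.append(dst)
        produced.add(dst)
    if current:
        bundles.append(current)
    return bundles

independent = [("a", []), ("b", []), ("c", [])]
dependent   = [("a", []), ("b", ["a"]), ("c", ["b"])]

print(bundle(independent))  # [['a', 'b', 'c']] -- one full bundle
print(bundle(dependent))    # [['a'], ['b'], ['c']] -- serialized, 2/3 of the slots wasted
```

Real compilers work much harder than this greedy sketch, but the underlying limit is the same: the hardware can only be as parallel as the program's data dependencies allow.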
Complex instruction sets have other issues. In the VAX line, a program called AXE tested each instruction (and there were a lot of them), each addressing mode (and there were a lot of them), and each data type (and there were a lot of them). Running at a million instructions a second, the AXE program took a month to run through all the combinations ONE TIME.
On a RISC machine, with its small number of instructions and near absence of addressing modes, the same AXE program might take a day, shortening the design cycle for the processor.
How long would it take to run AXE for a UWISC system, with all the combinations for a CISC computer and multiple instructions in parallel? AXE might take a year to complete one run.
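The combinatorial blowup is easy to see with back-of-the-envelope arithmetic. The counts below are invented for illustration (the article doesn't give the real VAX figures), but the shape of the math is the point:

```python
# Back-of-the-envelope: why exhaustive instruction-set testing explodes.
# All counts are assumed for illustration, not real VAX/AXE numbers.

instructions, addr_modes, data_types = 300, 20, 10  # invented CISC counts
operand_cases = 1_000_000   # assumed operand-value cases per combination
ops_per_test = 50           # assumed instructions executed per test case
rate = 1_000_000            # one million instructions per second

combos = instructions * addr_modes * data_types
cisc_days = combos * operand_cases * ops_per_test / rate / 86_400

# A RISC design: few instructions, almost no addressing modes.
risc_combos = 100 * 2 * 5
risc_days = risc_combos * operand_cases * ops_per_test / rate / 86_400

print(f"CISC-style exhaustive run: about {cisc_days:.0f} days")  # about 35 days
print(f"RISC-style exhaustive run: about {risc_days:.1f} days")  # about 0.6 days
```

Multiply the CISC combinations again by the ways instructions can be packed together in parallel, and a year-long UWISC run stops sounding like an exaggeration.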
I believe that Intel created the Itanium (which many people called the Itanic) to stop AMD from duplicating its instruction set. But AMD fooled them, extended the 32-bit instruction set to make a 64-bit system, and blew past Intel in the 64-bit space. The Itanic was taking on water and starting to sink.
I am sure that this article will generate lots of comments. Some people will disagree, but I will stick with Alan Turing. Crank the compilers, turn up the system clock, add more cores and more cache, full steam ahead!