The continuing drama of RISC vs. CISC

Full Speed Ahead

Article from Issue 190/2016
Author(s): Jon "maddog" Hall

Crank the compilers and turn up the system clock; maddog explains the basic differences between RISC and CISC architectures.

Recently, it was announced that Hewlett Packard had won a long-running lawsuit against Oracle, on the grounds that Oracle had violated a business agreement requiring continued support for the Itanium processor. I remember when the Itanium was announced. All I could do was slap my head and say, "Again?" How many times did we have to invent Ultra Wide Instruction Set Computing (UWISC) in computer science?

At that time Digital was putting all of its thrust behind a Reduced Instruction Set Computing (RISC) processor named Alpha, after years of making Complex Instruction Set Computing (CISC) hardware, such as the PDP-11 and VAX.

The basic difference between CISC and RISC architectures is that for every instruction the programmer codes, tens, hundreds, or even thousands of tiny little instructions (called "microcode") in a CISC computer tell the hardware exactly what to do to solve a potentially complex set of tasks, whereas in a RISC computer, each machine language instruction basically does one tiny little thing. You might need tens, hundreds, or thousands of RISC instructions to do the same work as a CISC instruction – but then again, you might not. It depends on what the compiler generates.
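The trade-off can be sketched with a toy simulation. The machine below is entirely hypothetical (the function names, the "4 simple instructions per byte" accounting, and the memory model are all invented for illustration): one CISC-style block-copy instruction does the same architectural work as a long sequence of RISC-style loads, stores, increments, and branches.

```python
# Toy illustration on a hypothetical machine: one CISC-style block-copy
# instruction vs. the equivalent sequence of simple RISC-style instructions.
# Memory is modeled as a bytearray; instruction counts are illustrative.

def cisc_copy(mem, src, dst, n):
    """One architectural instruction: microcode hides the loop in the CPU."""
    mem[dst:dst + n] = mem[src:src + n]
    return 1  # the programmer coded a single instruction

def risc_copy(mem, src, dst, n):
    """Same work spelled out as simple instructions: load, store, add, branch."""
    executed = 0
    for i in range(n):
        byte = mem[src + i]      # load
        mem[dst + i] = byte      # store
        executed += 4            # load + store + increment + branch
    return executed

mem = bytearray(b"hello world.....")
print(cisc_copy(mem, 0, 11, 5))   # 1 architectural instruction
print(risc_copy(mem, 0, 5, 5))    # 20 simple instructions for the same copy
```

Which side wins in practice depends, as the column says, on what the compiler generates: the RISC sequence is longer, but each step is simple enough to run at a very high clock rate.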

CISC instructions are designed to reflect what the user may do. For example, you might have a single CISC instruction that compresses a string of bytes in memory or creates a JPEG. Naturally, these instructions tend to lag the industry, because the standard for what is being done has to be created first. Unfortunately, as the industry shifts, the CISC instruction may become less useful, yet it still has to remain in the instruction set until it can be retired – a long and painful process.

In a CISC computer, a significant amount of space on the chip holds the microcode. This space takes away from the space available for more registers, more cache, and more threads than would be allocated on a RISC processor chip. This is especially true of a 64-bit architecture, which tends to take up more space for registers and cache than a 32-bit system. To be fair, the extra machine-code-level instructions that a RISC processor needs to do the work of a single CISC instruction also slow down the task and take up cache inside the CPU.

Because it can be difficult for a compiler of a high-level language to match up exactly what it is trying to do with the instruction set of a particular processor, CISC machines may be slower in some processes than RISC machines. RISC machines tend to put a lot of emphasis on highly optimized compilers and high-speed clocks for the CPU.

The argument of CISC vs. RISC has been going on since the days of Maurice Wilkes and Alan Turing, with Dr. Wilkes taking the side of CISC and Alan Turing taking the side of RISC. I am firmly in the RISC camp.

In a UWISC, such as the Itanium, multiple instructions are processed in parallel to make the machine execute "faster." The compiler loads up the CPU with instructions and data and then the CPU executes all of the instructions at one time, making the flow "parallel." This is great in theory if your application allows the parallelism that the CPU is capable of doing to match up with what is needed. Unfortunately, most of the time it does not.

UWISC has other issues, and testing is one of them. In the VAX line, a program called AXE tested each instruction (and there were a lot of them), each addressing mode (and there were a lot of them), and each data type (and there were a lot of them). Running at a million instructions a second, the AXE program took a month to run through all the combinations ONE TIME.

On a RISC machine, because of the small number of instructions and the almost non-existence of addressing modes, the same AXE program might take a day, shortening the design cycle for the processor.

How long would it take to run AXE for a UWISC system, with all the combinations for a CISC computer and multiple instructions in parallel? AXE might take a year to complete one run.

I believe that Intel created the Itanium (which many people called the Itanic) to stop AMD from duplicating their instruction set. But AMD fooled them, extended the 32-bit instruction set to make a 64-bit system, and blew past Intel in the 64-bit space. The Itanic was taking on water and starting to sink.

I am sure that this article will generate lots of comments. Some people will disagree, but I will stick with Alan Turing. Crank the compilers, turn up the system clock, add more cores and more cache, full steam ahead!

The Author

Jon "maddog" Hall is an author, educator, computer scientist, and free software pioneer who has been a passionate advocate for Linux since 1994 when he first met Linus Torvalds and facilitated the port of Linux to a 64-bit system. He serves as president of Linux International®.
