The continuing drama of RISC vs. CISC
Full Speed Ahead

Crank the compilers and turn up the system clock; maddog explains the basic differences between RISC and CISC architectures.
Recently, it was announced that Hewlett-Packard had won a long-running lawsuit against Oracle; the court found that Oracle had violated a business agreement requiring continued support for the Itanium processor. I remember when the Itanium was announced. All I could do was slap my head and say, "Again?" How many times did we have to invent Ultra Wide Instruction Set Computing (UWISC) in computer science?
At that time Digital was putting all of its weight behind a Reduced Instruction Set Computing (RISC) processor named Alpha, after years of making Complex Instruction Set Computing (CISC) hardware, such as the PDP-11 and VAX.
The basic difference between CISC and RISC architectures lies in what happens for each instruction the programmer codes. In a CISC computer, tens, hundreds, or even thousands of tiny internal instructions (called "microcode") tell the hardware exactly what to do to carry out a potentially complex task. In a RISC computer, each machine language instruction basically does one tiny little thing. You might need tens, hundreds, or thousands of RISC instructions to do the same work as a single CISC instruction, but then again, you might not; it depends on what the compiler generates.
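As a minimal sketch of that difference, consider the short C function below, which adds a value to a variable in memory. The assembly shown in the comments is illustrative of typical optimized output for x86-64 (a CISC architecture) and RISC-V (a RISC architecture); the exact instructions you get depend on the compiler and its options.

/* Illustrative sketch: one C statement, two very different instruction
 * sequences. The assembly in the comments is typical optimized output,
 * not guaranteed output from any particular compiler. */
#include <stdio.h>

/* Add 'increment' to the value that 'counter' points to. */
static void add_to(long *counter, long increment)
{
    /* On x86-64 (CISC), a compiler can fold this into a single
     * instruction that reads memory, adds, and writes memory:
     *
     *     addq  %rsi, (%rdi)
     *
     * On RISC-V (RISC), arithmetic works only on registers, so the
     * same statement becomes a load, an add, and a store:
     *
     *     ld    a5, 0(a0)
     *     add   a5, a5, a1
     *     sd    a5, 0(a0)
     */
    *counter += increment;
}

int main(void)
{
    long total = 0;
    add_to(&total, 5);
    printf("total = %ld\n", total);
    return 0;
}

In this example the RISC machine needs three instructions where the CISC machine needs one, but each of those three instructions is simple enough to execute quickly, which is exactly the trade-off the two camps argue about.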
[...]