Comments on the “Great ISA Debate”

20 May, 2010

As computer architectures have grown and developed over time, many competing solutions have been proposed. This paper discusses the competition between the IBM System/360 and the Burroughs B5000, as well as its legacy in the debate between the Reduced Instruction Set Computer (RISC) and the Complex Instruction Set Computer (CISC). The paper closes with comments on the current state of computer architecture and where things are headed.

IBM System/360 vs Burroughs B5000

The System/360 and the B5000 differed in many fundamental ways. Figure 1 shows a table of just some of the differences:

Figure 1. Key decisions made by the IBM System/360 and the Burroughs B5000

                    IBM System/360 [4]               Burroughs B5000 [1]
Address size        24 bits                          Program Reference Table with 1024 entries
Character size      4 bits (binary coded decimal)    6 bits (8 characters per 48-bit word)
FP size             32/64 bits                       48 bits
Instruction size    Variable: 16/32/64 bits          12 bits
Integer size        32 bits                          48 bits
Register style      General-purpose registers        Stack based

The IBM System/360 made the significant decision to keep its entire family of machines binary compatible, all working off a common instruction set [1]. Binary compatibility survives strongly to this day: the x86 architecture on desktop PCs allows programs written ten years ago to run on machines built yesterday.

Another significant introduction in the IBM System/360 was byte addressability, whereas the B5000 was word addressed [1]. Byte addressability allowed an address size larger than the word size, and the ability to access a character at any location.
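The difference is easy to see in code. Below is a small sketch, using the sizes from the table above (the B5000 packs eight 6-bit characters into a 48-bit word): a word-addressed machine must fetch a whole word and shift the character out, while a byte-addressed machine gives every character its own address. The function names and high-to-low packing order are illustrative assumptions, not actual B5000 or System/360 behavior.

```python
# Contrast word addressing vs byte addressing.
# Assumed layout: eight 6-bit characters packed high-to-low in a 48-bit word.

WORD_BITS, CHAR_BITS = 48, 6
CHARS_PER_WORD = WORD_BITS // CHAR_BITS  # 8

def char_from_word_memory(words, char_index):
    """Word-addressed: fetch the containing word, then shift and mask."""
    word = words[char_index // CHARS_PER_WORD]
    slot = char_index % CHARS_PER_WORD
    shift = (CHARS_PER_WORD - 1 - slot) * CHAR_BITS
    return (word >> shift) & ((1 << CHAR_BITS) - 1)

def char_from_byte_memory(memory, char_index):
    """Byte-addressed: every character has its own address."""
    return memory[char_index]
```

The word-addressed path pays for a divide, a shift, and a mask on every character access; byte addressing makes that a single indexed load.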

The B5000 used a stack-based register style [4], which is still in use today in the Java Virtual Machine. There are no registers visible to the assembly programmer; anything that needs to be referenced must be pushed onto the stack. Another important side effect of the stack-based architecture was that the B5000's descriptor system used a base and limit. This, along with the separation between instructions and data, removed the possibility of stack overflow attacks. It is what allows modern virtual machines, like the JVM, to offer a more secure sandbox than binary-compatible systems that lack the stack architecture.
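The stack discipline described above can be sketched in a few lines: operands are pushed, and each operator pops its arguments off the top of the stack, with no programmer-visible registers at all. The instruction names below are invented for illustration and are not B5000 or JVM opcodes.

```python
# Minimal stack-machine sketch: no registers, everything goes through the stack.

def run(program):
    stack = []
    for op, *arg in program:
        if op == "push":
            stack.append(arg[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 expressed as a stack program:
result = run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)])
```

Note that the expression structure is encoded purely in instruction order; this is why stack architectures map so naturally to compiling high-level language expressions.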

The B5000's descriptor-based memory system is an early form of what is today called segmented virtual memory and is used by many machines. Burroughs was also the first to discuss how multiple processors would work in the system [4]; not long after, the IBM 360 series added support for multiple processors.


RISC vs CISC

In the early 1980s, David Patterson and David Ditzel proposed a reversal of direction for instruction set architectures, away from the Complex Instruction Set Computer (CISC) and toward their Reduced Instruction Set Computer (RISC) [5].

At the time, the processor field was dominated by the VAX-11/78x systems, which used a CISC architecture and provided the best available performance [3]. Patterson and Ditzel believed they could top CISC with a reduced instruction set: by exploiting the memory hierarchy and its improving performance per dollar, by shortening chip design time, and by optimizing what programs actually use most instead of creating special instructions for edge cases.

RISC took note of Moore's Law and the exponential improvement in memory, and proposed trimming the instruction set down to the instructions programs actually use [5]. A reduced instruction set, the argument went, could be implemented in far less time than a CISC; by the time a newly designed CISC shipped, top-of-the-line performance would have doubled and its gains would be worthless [5]. But these arguments had their detractors. Clark and Strecker, architects of the VAX, countered that the performance gap between the processor and main memory remained large, and that this gap continued to favor instruction sets oriented toward high-level languages, similar to the design of the B5000 [2].

Patterson and Ditzel cited Amdahl's Law: optimize the most heavily used parts of a system for the largest overall effect. In doing so, they pulled work away from the processor and gave it to the compiler. This allowed the CPU to shrink, making room for other components on the chip. Time has agreed with this argument, as the dominant CPU in mobile devices today is the Advanced RISC Machine (ARM) chip.
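Amdahl's Law makes the "optimize the common case" argument quantitative: if a fraction f of execution time is sped up by a factor s, the overall speedup is 1 / ((1 - f) + f / s). A one-line sketch:

```python
def amdahl_speedup(f, s):
    """Overall speedup when a fraction f of execution time is sped up by factor s."""
    return 1.0 / ((1.0 - f) + f / s)

# Doubling the speed of the 90% common case nearly doubles overall performance,
# while even an infinite speedup of a rare 10% case caps out below 1.12x.
common_case = amdahl_speedup(0.9, 2.0)
rare_case = amdahl_speedup(0.1, 1e9)
```

This is exactly why trimming rarely used complex instructions costs little, while streamlining the frequent simple ones pays off broadly.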

One of the main arguments for RISC was that its reduced instruction set would allow implementers to innovate at a much higher pace and surpass the developments of CISC implementers. This claim has not held its ground, as Intel has continued to use the x86 architecture (CISC) in its desktop CPUs and has been able to keep up with the pace of innovation seen in other ISAs.
Neither paper was clear as to the definition of RISC or CISC. As time has gone on, RISC has remained a load/store architecture, but the complexity of both styles has increased.

In Conclusion

The IBM System/360 was designed with an ISA well suited to microcoded processors, paving the way for CISC, while the Burroughs B5000 anticipated pipelining and VLSI, leading toward RISC.

The key difference between RISC and CISC is the separation of hardware and software responsibilities. RISC's goal was to move complexity out of the hardware and into software, where bugs can be fixed more cheaply and quickly; this also gave compilers more work in choosing efficient instruction sequences. CISC's goal was to make the hardware more intelligent and the compiler writer's job easier.
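That division of labor can be sketched by expressing the same memory-to-memory add both ways: one "complex" instruction whose internal steps the hardware performs, versus an explicit load/store sequence the compiler must emit. The instruction names below are invented for illustration.

```python
# Hypothetical sketch: the same work as one CISC-style instruction
# vs a RISC-style load/store sequence. Names are made up for illustration.

memory = {"x": 5, "y": 7, "z": 0}
regs = {}

def add_mem(dst, a, b):
    """CISC-ish: one instruction; the hardware loads, adds, and stores internally."""
    memory[dst] = memory[a] + memory[b]

def ld(r, addr): regs[r] = memory[addr]          # RISC-ish primitives:
def add(rd, ra, rb): regs[rd] = regs[ra] + regs[rb]  # only loads, stores,
def st(addr, r): memory[addr] = regs[r]          # and register-to-register ops

add_mem("z", "x", "y")           # one complex instruction, or...
ld("r1", "x"); ld("r2", "y")     # ...the same work as four simple ones,
add("r3", "r1", "r2")            # scheduled explicitly by the compiler
st("z", "r3")
```

In the RISC version the compiler sees (and can optimize) every step; in the CISC version those steps are hidden inside the hardware's implementation of one instruction.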

Today, RISC is used in embedded devices and high-end servers, whereas CISC dominates the desktop market and the lower-end server market.


  1. Amdahl, G. M., Blaauw, G. A., and Brooks, F. P. Architecture of the IBM System/360. IBM Journal of Research and Development, 8(2):87-101, April 1964.
  2. Clark, D. W. and Strecker, W. D. Comments on "The Case for the Reduced Instruction Set Computer," by Patterson and Ditzel. ACM SIGARCH Computer Architecture News, October 1980.
  3. Hennessy, J. L. and Patterson, D. A. Computer Architecture: A Quantitative Approach, 4th edition, p. 3. 2006.
  4. Lonergan, W. and King, P. Design of the B5000 system. Datamation, 7(5):28-32, May 1961.
  5. Patterson, D. A. and Ditzel, D. R. The Case for the Reduced Instruction Set Computer. ACM SIGARCH Computer Architecture News, October 1980.


§ 2 Responses to Comments on the “Great ISA Debate”

  • A.J. says:

    Good article! I still feel Complex Instruction Set Computer is the best. Probably because I am under the assumption that all things equal you get better performance.

    Anyhoo, I like the image with the post. It looks like it is from the board game Risk. We should play that on the computer (over network of course) sometime!

  • Ian Joyner says:

    It’s interesting that by some of your criteria the B5000 can be classified as RISC. Once again these machines blur widely accepted industry distinctions, which annoys many people on both sides of debates. Hennessy and Patterson really didn’t like the B5000 and stack architecture. One attribute of RISC is that instructions should execute in a single clock cycle. The B5000 certainly did not obey this. One instruction could invoke other instructions and not complete for ages, e.g., VALC on a function. In fact VALC is polymorphic and would retrieve a simple stored value or the result of another computation. Not really RISC.

    Niklaus Wirth also defined his Lilith architecture – a lot like the B5000 – where he said RISC should stand for ‘Regular Instruction Set Computing’. Indeed we should have simple and elegant architectures like Lilith and the B5000 and these seem to fit into neither classic RISC, nor CISC.
