HiveBrain v1.2.0

Data General MV/8000 virtues of "No mode bit"

Submitted by: @import:stackexchange-cs

Problem

I'm reading Tracy Kidder's "The Soul of a New Machine", in which a team at Data General designs a new machine (codenamed "Eagle", later named the MV/8000). It is a 32-bit extension of a previous architecture (the 16-bit Eclipse). One of the recurring themes seems to be that they did not want to create a machine with a mode bit, and that they succeeded in this.

However, the book leaves out how this was technically achieved, and it also doesn't go into why it was so attractive to create a machine without a mode bit. It is not a technical book, so the details may have been somewhat distorted. Still, reading it you get the feeling that a "mode bit" solution was common (and hence feasible) at the time, but was deemed unattractive by the engineers, perhaps for aesthetic reasons. The book also makes it seem like an immensely difficult task to create a design without a mode bit, which was somehow overcome by this particular team.

I found this description of how it was achieved:

http://people.cs.clemson.edu/~mark/330/kidder/no_mode_bit.txt

It seems to basically be about using a previously unused portion of the opcode space for the new instructions. I must admit I was a bit disappointed that it was "just that". Also, I think this still leaves some questions unanswered:
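The opcode-space idea can be sketched in a few lines. This is an illustrative model only: the opcode values and mnemonics below are invented, not the real Eclipse/MV-8000 encodings. The point is that one decoder serves both instruction sets, so old binaries decode exactly as before and no mode flag is ever consulted.

```python
# Hypothetical sketch of decoding without a mode bit: the legacy 16-bit
# opcode map had unused encodings ("holes"), and the new 32-bit
# instructions were assigned to those holes. All opcode values here
# are made up for illustration.

ECLIPSE_OPS = {0x1: "ADD16", 0x2: "LDA16"}    # pre-existing 16-bit ops
EAGLE_OPS   = {0xE: "WADD32", 0xF: "WLDA32"}  # new ops in unused slots

def decode(opcode):
    """Which instruction set an opcode belongs to is determined by its
    encoding alone; there is no mode flag to check."""
    if opcode in ECLIPSE_OPS:
        return ECLIPSE_OPS[opcode]
    if opcode in EAGLE_OPS:
        return EAGLE_OPS[opcode]
    raise ValueError(f"undefined opcode {opcode:#x}")
```

For example, `decode(0x1)` yields the legacy `ADD16` while `decode(0xE)` yields the new `WADD32`, with no state outside the instruction stream deciding between them.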

Firstly, how did the 16-bit processes live in the 32-bit address space? I think that is the key challenge in making a 32-bit extension "without a mode bit"; extending the instruction set, on the other hand, is a relatively common undertaking. Since there's no description of how it happened, one could assume that the 16-bit code simply accesses memory as it always did, perhaps seeing some kind of virtualized/banked view of memory (with new CPU registers controlling where the first address is), or something like that. But I don't know if there's more to it than that. In that case one could argue it sort of was a "mode bit" solution: the 16-bit processes could run alongside the other processes by virtue
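The speculation above (a banked/virtualized view with a CPU register controlling where the window starts) amounts to simple base-register relocation. The sketch below illustrates that guess only; the register name and widths are assumptions, not the MV/8000's actual addressing scheme.

```python
# Sketch of the questioner's speculation: a 16-bit process keeps
# issuing 16-bit addresses, and the CPU relocates them into the 32-bit
# space by adding a per-process base register. No mode bit is needed;
# the relocation is plain address arithmetic applied to every access.

def translate(addr16, base_reg):
    """Map a legacy 16-bit address into the 32-bit address space
    (base_reg is a hypothetical per-process relocation register)."""
    assert 0 <= addr16 < 1 << 16, "legacy address must fit in 16 bits"
    return (base_reg + addr16) & 0xFFFFFFFF
```

Under this guess, a 16-bit process whose window starts at 0x00100000 would see its address 0x1234 land at physical 0x00101234, and several such processes could coexist, each with a different base value.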

Solution

The answer is that Ed deCastro, the president of Data General, had established a team of engineers in North Carolina specifically to design the next-generation CPU. He assigned the task of support and incremental enhancements to us, the Massachusetts team. Three times we proposed a major new architecture, each time with a very sensible mode bit, and described it as a modest incremental enhancement. Each time, Ed saw through our disguise and rejected the proposal, expecting the North Carolina team to succeed. Ed believed that however we disguised our proposals, he would know it was a new-generation architecture if it had a mode bit. So we had to propose a new-generation architecture with no mode bit, even if that made it less efficient. That's how we got it past Ed deCastro. See The Soul of a New Machine, by Tracy Kidder.

Context

StackExchange Computer Science Q#44915, answer score: 10
