Section Three: The Great Dark Cloud Falls: IBM's Choice

Part I: DEC PDP-11, benchmark for the first 16/32 bit generation (1970)

The DEC PDP-11 was the most popular of the PDP (Programmed Data Processor) line of minicomputers, a successor to the previously popular PDP-8, designed in part by Gordon Bell. It remained in production until the line was discontinued as of September 30, 1997 (over 25 years - see note on the DEC Alpha intended lifetime). Many PDP-11 features have been carried forward to newer processors because the PDP-11 was the machine on which the C programming language was developed. C became the most prolific programming language in the world (in terms of variety of applications, not number), and includes several low level processor dependent features which were useful to replicate in newer CPUs for this reason. The PDP-8 continued for a while in certain applications, while the PDP-10 (1967) was a higher capacity 36-bit mainframe-like system (sixteen general registers and floating point operations), much adored and rumoured to have a soul.

The PDP-11 had eight general purpose 16-bit registers (R0 to R7 - R6 was also the SP and R7 was the PC). It featured powerful register oriented (little-endian, byte addressable) addressing modes. Since the PC was treated as a general purpose register, constants were loaded using autoincrement mode on R7, which fetched the 16 bit word following the current instruction and stepped the PC past it to the next instruction (sketched in C below). The SP could be accessed the same way, and any register could be used for a user stack (useful for FORTH). A CC (or PSW) register held results from every instruction that executed. Adjacent registers could be implicitly grouped into a 32 bit register for multiply and divide: a multiply stored its full result in two registers if the destination was an even register, but not if it was odd, while a divide source had to be grouped, with the quotient stored in the high order (low numbered) register and the remainder in the low order one. A floating point unit could be added, containing six 64 bit accumulators (AC0 to AC5, which could also be used as six 32-bit registers - values could only be loaded or stored through the first four).

PDP-11 addresses were 16 bits, limiting program space to 64K, though an MMU could be used to expand the total address space (to 18 bits or 22 bits in different PDP-11 versions). The LSI-11 (1975-ish) was a popular microprocessor implementation of the PDP-11 using the Western Digital MCP1600 microprogrammable CPU, and the architecture influenced the Motorola 68000, NS 320xx, and Zilog Z-8000 microprocessors in particular. There were plans for a 32-bit PDP-11 as far back as 1969. The PDP-11 was finally replaced by the VAX architecture (early versions included a PDP-11 emulation mode, and were called VAX-11).
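Here's a rough C sketch of that immediate-constant trick - a toy fetch routine, not DEC's hardware. The register array, word-addressed memory, and the autoincrement helper are all invented for illustration:

    /* Toy model: autoincrement mode "(Rn)+" applied to R7 (the PC)
     * yields immediate constants on the PDP-11. */
    #include <stdint.h>
    #include <stdio.h>

    enum { SP = 6, PC = 7 };    /* R6 is the stack pointer, R7 the PC */

    uint16_t regs[8];
    uint16_t mem[32];           /* word-addressed toy memory */

    /* "(Rn)+": read the word Rn points at, then step Rn to the next
     * word. On R7 this reads the word following the instruction - an
     * immediate - and leaves the PC at the next instruction. */
    uint16_t autoincrement(int n)
    {
        uint16_t value = mem[regs[n]];
        regs[n] += 1;           /* +2 bytes on real hardware; words here */
        return value;
    }

    int main(void)
    {
        mem[0] = 0;             /* imagine "MOV (PC)+, R0" encoded here */
        mem[1] = 01234;         /* the in-line constant */
        regs[PC] = 1;           /* PC already advanced past the opcode */
        regs[0] = autoincrement(PC);
        printf("R0 = %o, PC = %o\n", regs[0], regs[PC]); /* 1234, 2 */
        return 0;
    }

The deferred form of the same mode, @(PC)+, gave absolute addressing for free by the same mechanism.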
Part II: TMS 9900, first of the 16 bits (June 1976)

One of the first true 16 bit microprocessors was the TMS 9900, by Texas Instruments (the first were probably the National Semiconductor PACE or IMP-16P, or AMD 2901 bit slice processors in a 16 bit configuration). It was designed as a single chip version of the TI 990 minicomputer series, much like the Intersil 6100 was a single chip PDP-8, and the Fairchild 9440 and Data General mN601 were both one chip versions of Data General's Nova. Unlike the Intersil 6100, however, the TMS 9900 had a mature, well thought out design.

It had a 15 bit address space and only a handful of internal 16 bit registers (program counter, status register, and workspace pointer). One unique feature, though, was that all user registers were actually kept in memory - this included stack pointers and the program counter. The single workspace register pointed to the 16 register set in RAM, so when a subroutine was entered or an interrupt was processed, only the workspace register had to be changed - unlike some CPUs which required a dozen or more register saves before acknowledging a context switch (sketched below). This was feasible at the time because RAM was often faster than the CPU. A few modern designs, such as the INMOS Transputers, use the same idea with caches or rotating buffers, for the same reason of improved context switches. Other chips of the time, such as the 650x series, had a similar philosophy, using index registers, but the TMS 9900 went the farthest in this direction. Later versions added a write-through register buffer/cache.

That wasn't the only positive feature of the chip. It had good interrupt handling features and a very good instruction set. Serial I/O was available through address lines. In typical comparisons with the Intel 8086, the TMS 9900 had smaller and faster programs. The only disadvantages were the small address space and the need for fast RAM. The TMS 9995 was a later version, and the 99000 added fast on-chip memory and several instructions (arithmetic, stack, parallel I/O, in-memory bit manipulation) and expanded the addressing range. The 99110 added floating point support (as a sort of on-chip ROM library, I believe - the 99105 version dropped the floating point routines). Memory access could be further expanded with the 99610 MMU. Despite very poor support from Texas Instruments, the TMS 9900 had the potential at one point to surpass the 8086 in popularity. TI also produced an embedded version, the TMS 9940.
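The workspace scheme is easy to sketch in C - this toy model (my own names and layout, nothing from TI's data sheets) shows why a context switch is a single assignment:

    /* Toy model of the TMS 9900 idea: the sixteen "registers" live
     * in RAM, and a workspace pointer (WP) selects the active set. */
    #include <stdint.h>
    #include <stdio.h>

    uint16_t ram[256];
    uint16_t wp;                /* workspace pointer (word address) */

    /* Register n of the current context is RAM[WP + n]. */
    uint16_t *reg(int n) { return &ram[wp + n]; }

    int main(void)
    {
        wp = 0;                 /* task A's workspace at RAM 0 */
        *reg(0) = 0xAAAA;

        uint16_t saved_wp = wp; /* BLWP-style switch to task B */
        wp = 16;                /* task B's workspace at RAM 16 */
        *reg(0) = 0xBBBB;       /* doesn't disturb task A's R0 */

        wp = saved_wp;          /* return: one store, not sixteen */
        printf("task A R0 = %04X\n", *reg(0)); /* still AAAA */
        return 0;
    }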
Part III: Zilog Z-8000, another direct competitor

The Z-8000 was introduced not long after the 8086, but had superior features. It was basically a 16 bit processor, but could address up to 23 bits of memory in some versions by using segment registers (which supplied the upper 7 bits of the address). There was also an unsegmented version, and both could be extended further with an additional MMU that used 64 segment registers. The Z-8070 was a memory mapped FPU.

Internally, the Z-8000 had sixteen 16 bit registers, but register size and use were exceedingly flexible - the first eight Z-8000 registers could be used as sixteen 8 bit subregisters (identified RH0, RL0, RH1 ...), or all sixteen could be grouped into eight 32 bit registers (RR0, RR2, RR4 ...) or four 64 bit registers. They were all general purpose registers - the stack pointer was typically register 15, with register 14 holding the stack segment (both accessed as one 32 bit register (RR14) for painless address calculations). The instruction set included 32-bit multiply (into 64 bits) and divide (a sketch of the register overlays follows below). The Z-8000 was one of the first to feature two modes, one for the operating system and one for user programs. The user mode prevented the user from messing about with interrupt handling and other potentially dangerous stuff (each mode had its own stack register). Finally, like the Z-80, the Z-8000 featured automatic RAM refresh circuitry. Unfortunately the processor was somewhat slow, but the features generally made up for that.

A later version, the Z-80000, was introduced at the beginning of 1986, at about the same time as the 32 bit MC68020 and Intel 80386 CPUs, though the Z-80000 was quite a bit more advanced. It was fully expanded to 32 bits internally, giving it sixteen 32 bit physical registers (the sixteen 16 bit registers became subregisters, alongside the sixteen 8-bit subregisters, with eight 64-bit double registers). The system stack remained in RR14. In addition to the addressing modes of the Z-8000, larger 24 bit (16MB) segment addressing was added, as well as an integrated MMU (absent in the 68020 but added later in the 68030) which included an on chip 16 line, 256-byte fully associative write-through cache (which could be set to cache only data, instructions, or both, and could also be frozen by software once 'primed' - a feature also found on later versions of the AMD 29K). It also featured multiprocessor support by defining some memory pages to be exclusive and others to be shared (and non-cacheable), with separate memory signals for each (including GREQ (Global memory REQuest) and GACK lines). There was also support for coprocessors, which would monitor the data bus and identify instructions meant for them (the CPU had two coprocessor control lines, one in and one out, and would produce any needed bus transactions). Finally, the Z-80000 was fully pipelined (six stages), while fully pipelined CPUs such as the 80486 and 68040 weren't introduced until 1989 and 1991 respectively.

But despite being technically advanced, the Z-8000 and Z-80000 series never met mainstream acceptance, due to initial bugs in the Z-8000 (the complex design did not use microcode - it used only 17,500 transistors) and to delays in the Z-80000. There was a radiation resistant military version, and a CMOS version of the Z-80000 (the Z-320). Zilog eventually gave up and became a second source for the AT&T WE32000 32-bit CPU (1986) instead (a VAX-like microprocessor derived from the Bellmac 32A minicomputer, which also became obsolete).
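The register overlays can be sketched in C - a toy model with invented helper names, showing a long multiply in the spirit of the Z-8000's MULTL. The high-word-in-the-lower-numbered-register convention here is my reading, so treat the details as illustrative:

    /* Toy model of Z-8000 register grouping: R0..R15 viewed as
     * 32-bit pairs RR0, RR2, ... (and 64-bit quads RQ0, RQ4, ...). */
    #include <stdint.h>
    #include <stdio.h>

    uint16_t r[16];             /* R0..R15 */

    /* Pair RRn (n even): Rn holds the high word, Rn+1 the low word.
     * RR14 = R14:R15 is the stack segment:pointer pair. */
    uint32_t read_rr(int n)  { return ((uint32_t)r[n] << 16) | r[n + 1]; }
    void write_rr(int n, uint32_t v) { r[n] = v >> 16; r[n + 1] = (uint16_t)v; }

    int main(void)
    {
        /* 32-bit multiply into 64 bits, MULTL-style: the 64-bit
         * result lands in the quad RR0:RR2. */
        write_rr(2, 0x00010000u);
        uint64_t product = (uint64_t)read_rr(2) * 0x00020000u;
        write_rr(0, (uint32_t)(product >> 32));
        write_rr(2, (uint32_t)product);
        printf("RQ0 = %04X%04X%04X%04X\n", r[0], r[1], r[2], r[3]);
        return 0;               /* prints 0000000200000000 */
    }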
The Z-8001 was used for Commodore's CBM 900 prototype, but the Unix based machine was never released - instead, Commodore bought Amiga, and released the 68000 based machine it was designing. A few companies did produce Z-8000 based computers, with Olivetti being the most famous, and the Plexus P40 being the last - the 68000 quickly became the processor of choice, although the Z8000 continued to be used in embedded systems.
Part IV: Motorola 68000, a refined 16/32 bit CPU (September 1979)

The initial 8MHz 68000 was actually a 32 bit architecture internally, but had only a 16 bit data bus and 24 bit address bus to fit in a 64 pin package (address and data shared a bus in the 40 pin packages of the 8086 and Z-8000). Later the 68008 reduced the data bus to 8 bits and the address bus to 20 bits (very slow and not used for much - the cheap and quirky Sinclair QL being the most prominent), and the 68020 was fully 32 bit externally. Addresses were computed as 32 bits (without using segment registers) - unused upper address bits in the 68000 and 68008 were ignored, but some programmers stored type tags in the upper 8 bits, causing compatibility problems with the 68020's 32 bit addresses (sketched below). The lack of forced segments made programming the 68000 easier than some competing processors, without the 64K size limit on directly accessed arrays or data structures. Looking back, it was a logical design decision, since most 8 bit processors featured direct 16 bit addressing without segments.

The 68000 had sixteen 32-bit registers, split into eight data and eight address registers. One address register was reserved for the stack pointer. Data registers could be used for any operation, including as an offset from an address register, but not as the source of an address itself. Operations on address registers were limited to move, add/subtract, and load effective address. Like the Z-8000, the 68000 featured a supervisor and a user mode (each with its own stack pointer). The Z-8000 and 68000 were similar in capabilities, but the 68000 handled 32 bit units internally using 16 bit ALUs (two operating in parallel for address calculations, one for data, so some 32-bit data operations were slower than 16-bit ones), making it faster overall and eliminating forced segments. It was designed for expansion, including specifications for floating point and string operations (floating point was added in the 68040 (1991), with eight 80 bit floating point registers compatible with the 68881/2 coprocessor). Like many other CPUs of the time, the 68000 could fetch the next instruction during execution (a 2 stage pipeline). An instruction prefix (0xF) indicated coprocessor instructions (similar to the 80x86), so a coprocessor could "listen" to the instruction stream and execute the instructions it recognized, without a dedicated coprocessor bus.

The 68010 (1982) added virtual memory support (the 68000 couldn't restart interrupted instructions) and a special loop mode - small decrement-and-branch loops could be executed from the instruction fetch buffer. The 68020 (1984) expanded the external data and address buses to 32 bits, added a simple 3-stage pipeline and a 256 byte instruction cache (loop buffer), and could use either a segmented (68451?) or paged (68851, which supported two level pages (logical, physical) rather than the segment/page mapping of the Intel 80386 and IBM S/360 mainframe) memory management unit. The 68020 also added a coprocessor interface. The 68030 (1987) integrated the paged MMU onto the chip. The 68040 (January 1991) added fully cached Harvard buses (4K each for data and instructions, with a new MMU), a 6 stage pipeline, and an on chip FPU (a subset of the 68882, with some operations emulated). Someone told me a Motorola techie indicated the 68000 was originally planned to use the IBM S/360 instruction set, but the MMU and architectural differences make this unlikely. The 68000 design was later involved in microprocessor versions of the IBM S/370.
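The tag trick mentioned above is simple to sketch in C. The mask and tag values here are invented for illustration (early Macintosh memory managers and some Lisp systems did essentially this):

    /* Toy model: hiding a type tag in the top 8 bits of a pointer,
     * which the 68000's 24-bit address bus ignored. */
    #include <stdint.h>
    #include <stdio.h>

    #define ADDR_MASK 0x00FFFFFFu   /* bits the 68000 actually decodes */

    uint32_t tag_pointer(uint32_t addr, uint8_t tag)
    {
        return ((uint32_t)tag << 24) | (addr & ADDR_MASK);
    }

    int main(void)
    {
        uint32_t p = tag_pointer(0x0001F300u, 0x80); /* e.g. a "locked" flag */
        /* On a 68000 the tag is ignored by the bus; on a 68020 the same
         * value addresses a different location unless software masks it. */
        printf("68000 sees %06X, 68020 sees %08X\n", p & ADDR_MASK, p);
        return 0;
    }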
The 68060 (April 1994) expanded the design into a superscalar version, like the Intel Pentium and NS320xx (Swordfish) series before it. Like the National Semiconductor Swordfish, and the later Nx586, AMD K5, and Intel "Pentium Pro", the third stage of the 10-stage 68060 pipeline translates the 680x0 instructions into a decoded RISC-like form (stored in a 16 entry buffer in stage four). There is also a branch cache, and branches are folded into the decoded instruction stream as in the AT&T Hobbit and other more recent processors, then dispatched to two pipelines (three stages: decode, address generate, operand fetch) and finally to two of three execution units (two integer, one floating point) before reaching two 'writeback' stages. Cache sizes are doubled over the 68040. The 68060 also includes many innovative power-saving features (3.3V operation, execution unit pipelines could actually be shut down, reducing power consumption at the expense of slower execution, and the clock could be reduced to zero), so power use is lower than the 68040's (3.9 to 4.9 watts vs. 4 to 6). Another innovation is that simple register-register instructions which don't generate addresses may use the address stage ALU to execute 2 cycles early.

The embedded market became the main market for the 680x0 series after workstation vendors (and the Apple Macintosh) turned to faster load-store processors, so a variety of embedded versions were introduced. Later, Motorola designed a successor called Coldfire (early 1995), in which the complex instructions and addressing modes (added to the 68020) were removed and the instruction set was recoded, simplifying it at the expense of compatibility (source only, not binary) with the 680x0 line. The Coldfire 52xx (version 2 - the 51xx version 1 was a 68040-based/compatible core) architecture resembles a stripped (single pipeline) 68060. The 5 stage pipeline is literally folded over itself - after two fetch stages and a 12-byte buffer, instructions pass through the decode and address generate stages, then loop back so the decode stage becomes the operand fetch stage and the address generate stage becomes the execute stage (so only one ALU is required for both address and execution calculations); simple (non-memory) instructions don't need to loop back (a toy model of this fold follows below). There is no translator stage as in the 68060 because Coldfire instructions are already in RISC-like form. The 53xx added a multiply-accumulate (MAC) unit and internal clock doubling. The 54xx added branch and assignment folding with other instructions for a cheap form of superscalar execution with little added complexity, used a Harvard architecture for faster memory access, and enhanced the instruction set to improve code density and performance and to add flexibility to the MAC unit. At a quarter the physical size and a fraction of the power consumption, Coldfire is about as fast as a 68040 at the same clock rate, but the smaller design allows a faster clock rate to be achieved.
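Here's a toy C model of that fold - purely illustrative, not from Motorola documentation - showing that memory instructions make two passes through the shared stages while register-only instructions make one:

    /* Toy model of the Coldfire folded pipeline: one decode stage and
     * one ALU stage are traversed twice for memory instructions
     * (decode/address-generate, then operand-fetch/execute), once for
     * register-only instructions. */
    #include <stdbool.h>
    #include <stdio.h>

    struct insn { const char *name; bool touches_memory; };

    int alu_passes(const struct insn *i)
    {
        /* Pass 1: decode + address generation (uses the single ALU).
         * Pass 2 (memory instructions only): operand fetch + execute,
         * looping back through the same two stages and the same ALU. */
        return i->touches_memory ? 2 : 1;
    }

    int main(void)
    {
        struct insn add_rr = { "add d0,d1",   false };
        struct insn add_rm = { "add (a0),d1", true  };
        printf("%s: %d ALU pass(es)\n", add_rr.name, alu_passes(&add_rr));
        printf("%s: %d ALU pass(es)\n", add_rm.name, alu_passes(&add_rm));
        return 0;
    }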
Few people wonder why Apple chose the Motorola 68000 for the Macintosh, while IBM's decision to use Intel's 8088 for the IBM PC has baffled many. It wasn't a straightforward decision though. The Apple Lisa was the predecessor of the Macintosh, and also used a 68000 (eventually - the 8086 and slower bitslice CPUs (which Steve Wozniak thought were neat) were initially considered before the 68000 was available). It also included a fully multitasking, GUI based operating system, highly integrated software, high capacity (but incompatible) 'twiggy' 5 1/4" disk drives, and a large workstation-like monitor. It was better than the Macintosh in almost every way, but was correspondingly more expensive. The Macintosh was to include the best features of the Lisa, but at an affordable price - in fact the original Macintosh came with only 128K of RAM and no expansion slots. Cost was such a factor that the 8 bit Motorola 6809 was the original design choice, and some prototypes were built, but they quickly realised that it didn't have the power for a GUI based OS, and they used the Lisa's 68000, borrowing some of the Lisa's low level functions (such as graphics toolkit routines) for the Macintosh. Competing personal computers such as the Amiga and Atari ST, and early workstations by Sun, Apollo, NeXT and most others also used 680x0 CPUs, including one of the earliest workstations, the Tandy TRS-80 Model 16, which used a 68000 CPU and a Z-80 for I/O and VM support - the 68000 could not restart an instruction stopped by a memory exception, so it was suspended while the Z-80 loaded the page. Early Apollo workstations used a similar solution with a second 68000 handling paging.
Part V: National Semiconductor 32032, similar but different

Like the 68000, the 320xx family consisted of a CPU which was 32-bit internally and 16 bits externally (later also 32 and 8), as indicated by the first and last two digits (originally reversed, but 16032 just seemed less impressive). It appeared a little later than the others here, and so was not really a choice for the IBM PC, but is still representative of the era.

Elegance and regularity of design were main goals of this processor, as well as completeness. It was similar to the 68000 in basic features, such as byte addressing, a 24-bit address bus in the first version, memory to memory instructions, and so on (the 320xx also included string and array instructions). Unlike the 68000, the 320xx had eight rather than sixteen 32-bit registers, and they were all general purpose, not split into data and address registers. There was also a useful scaled-index addressing mode, and unlike other CPUs of the time, only a few operations affected the condition codes (as in more modern CPUs). Also different, the PC and stack registers were separate from the general register set - they were special purpose registers, along with the interrupt stack and several "base registers" providing multitasking support: the base data register pointed to the working memory of the current module (or process), the interrupt base register pointed to a table of interrupt handling procedures anywhere in memory (rather than at a fixed location), and the module register pointed to a table of active modules. The 320xx also had a coprocessor bus, similar to the 8-bit Ferranti F100-L CPU, and coprocessor instructions. Coprocessors included an MMU and a floating point unit, the latter with eight 32-bit registers which could be used as four 64-bit registers.

The series found use mainly in embedded applications, and was expanded to that end, with timers, graphics enhancements, and even a Digital Signal Processor unit in the Swordfish version (1991, also known as the 32732 and 32764). The Swordfish was among the first truly superscalar microprocessors, with two 5-stage pipelines (integer pipeline A, and pipeline B, which consisted of an integer and a floating point pipeline - an instruction dispatched to B would execute in the appropriate pipe, leaving the other with an empty slot; the integer pipe could cycle twice in the memory stage to synchronise with the result of the floating point pipe, ensuring in-order completion when floating point operations could trap, and B could also execute branches). This strategy was influenced by the Multiflow VLIW design. Instructions were always fetched two at a time from the instruction cache, which partially decoded instruction pairs and set a bit to indicate whether they were dependent or could be issued simultaneously (effectively generating two-word VLIWs in the cache from an external stream of instructions - a sketch of such a pairing check follows below). The cache decoder also generated branch target addresses to reduce branch latency, as in the AT&T CRISP/Hobbit CPU. The Swordfish implemented the NS32K instruction set using a reduced instruction core - NS32K instructions were translated by the cache decoder into either one internal instruction, a pair of internal instructions in the cache, or a partially decoded NS32K instruction which would be fully decoded into internal instructions after being fetched by the CPU.
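The pairing bit is easy to sketch in C - a toy dependency check with invented instruction fields, not National's actual decode logic:

    /* Toy model: decide whether two adjacent instructions may issue
     * together, as the Swordfish's cache decoder did with its pair bit.
     * A pair is "independent" if the second neither reads nor rewrites
     * the first's destination register. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct insn {
        uint8_t dst;            /* register written */
        uint8_t src1, src2;     /* registers read */
    };

    bool can_dual_issue(const struct insn *a, const struct insn *b)
    {
        bool raw = (b->src1 == a->dst) || (b->src2 == a->dst); /* read-after-write */
        bool waw = (b->dst == a->dst);                         /* write-after-write */
        return !raw && !waw;
    }

    int main(void)
    {
        struct insn i0 = { 1, 2, 3 };  /* r1 = r2 op r3 */
        struct insn i1 = { 4, 5, 6 };  /* r4 = r5 op r6: independent */
        struct insn i2 = { 7, 1, 6 };  /* r7 = r1 op r6: depends on i0 */
        printf("pair bit (i0,i1) = %d\n", can_dual_issue(&i0, &i1)); /* 1 */
        printf("pair bit (i0,i2) = %d\n", can_dual_issue(&i0, &i2)); /* 0 */
        return 0;
    }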
The Swordfish also had dynamic bus resizing (8, 16, 32, or 64 bits, allowing 2 instructions to be fetched at once), clock doubling, 2 DMA channels, and in circuit emulation (ICE) support for debugging. The Swordfish was later simplified into a load-store design and used to implement an instruction set called CompactRISC (also known as Pirhana, an implementation independent instruction set supporting designs from 8 to 64 bits). CompactRISC has been implemented in three stage pipeline versions with 16-bit (CR16A), 20-bit (CR16B), and 32-bit (CR32A) addresses (the CR16B also included bit-oriented memory operations).
It is interesting to note that in the case of the NS320xx and Z-80000, non-mainstream processors gained many advanced design features well ahead of the more mainstream processors, which presumably had more development resources available. One possible reason for this is the greater importance of compatibility in processors used for computers and workstations, which limits the freedom of the designers. Or perhaps the non-mainstream processors were just more flexible designs to begin with. Or some might not have made it to the mainstream because their more ambitious designs resulted in more implementation bugs than competitors had.
Part VI: MIL-STD-1750 - Military artificial intelligence (February 1979)

The USAF created a draft standard for a 16-bit microprocessor meant to be used in all airborne computers and weapons systems, allowing software developed for one such system to be portable to other similar applications - similar to the intent behind the creation of Ada as the standard high level programming language for the U.S. Department of Defense (MIL-STD-1815, accepted October 1979 - 1815 was the year Ada Augusta, Countess of Lovelace and the world's first programmer, was born). Like other 16 bit designs of the time, the 1750 was inspired by the PDP-11, but differs significantly - the exact origin of the design isn't known, and may be significantly older.

Sixteen 16-bit registers were specified, and any adjacent pair (such as R0+R1 or R1+R2) could be used as a 32-bit register (the Z-8000 and PDP-11 could only use even pairs, and the PDP-11 only for specific uses) for integer or floating point (FP) values (there were no separate FP registers), or a triple for 48-bit extended precision FP (with the extra mantissa bits concatenated after the exponent - e.g. 32-bit FP was [1s][23mantissa][8exp], 48-bit was [1s][23mantissa][8exp][16ext], meaning any 48-bit FP value was also a valid 32-bit FP value, only losing the extra precision). Also, only the upper four registers (R12 to R15) could be used as an address base (2 instruction bits instead of 4), and R0 can't be used as an index (using R0 implies no indexing, similar to the PowerPC). R15 is used as an implicit stack pointer; the program counter is not user accessible.

The address space is 16 bits, addressing words (not bytes), but the design allows for an MMU to extend this to 20 bits. In addition, program and data memory can be separated using the MMU. A 4-bit Address State (AS) field in the processor status word (PSW) selects one of sixteen page groups, each containing sixteen page registers for data memory and another sixteen for program memory (16x16x2 = 512 total). The top 4 bits of an address select a register from the current AS group, which provides the upper 8 bits of a 20 bit physical address. Each page register also has a 4-bit access key. While other CPUs at the time provided user and supervisor modes, the 1750 provided sixteen: a supervisor mode (mode 0, which could access all pages), fourteen user modes (modes 1 to 14, which could only access pages with the same key, or key 15), and an unprivileged mode (mode 15, which could only access pages with key 15). Program memory can occupy the same logical address space as data, but selects from the program page registers. Pages can also be write or execute protected (a sketch of the translation follows below). Several I/O instructions are also included, and are used to access processor state registers.

The 1750 is a very practical 16 bit design, and is still being produced, mainly in expensive radiation resistant forms. It did not achieve widespread acceptance, likely because of the rapid advance of technology and the rise of the RISC paradigm.
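Here's a rough C sketch of that translation and key check - structure and function names are mine, and the details are simplified (no write/execute protection bits):

    /* Toy model of the 1750 MMU scheme: the top 4 bits of a 16-bit
     * word address pick a page register in the current Address State
     * group; that register supplies the upper 8 bits of a 20-bit
     * physical address and carries a 4-bit access key. */
    #include <stdint.h>
    #include <stdio.h>

    struct page_reg { uint8_t ppa; uint8_t key; }; /* 8-bit page, 4-bit key */

    struct page_reg data_pages[16][16];  /* 16 AS groups x 16 registers */

    /* mode 0: supervisor, all pages; modes 1-14: same key or key 15;
     * mode 15: key-15 pages only. Returns -1 on an access violation. */
    int32_t translate(uint16_t logical, unsigned as, unsigned mode)
    {
        struct page_reg pr = data_pages[as][logical >> 12];
        int allowed = (mode == 0) || (pr.key == 15) || (pr.key == mode);
        if (!allowed)
            return -1;
        return ((uint32_t)pr.ppa << 12) | (logical & 0x0FFF);
    }

    int main(void)
    {
        data_pages[0][3] = (struct page_reg){ .ppa = 0xA5, .key = 2 };
        printf("%05X\n", (unsigned)translate(0x3123, 0, 2)); /* A5123 */
        printf("%d\n", translate(0x3123, 0, 5));   /* -1: key mismatch */
        return 0;
    }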
Part VII: Intel 8086, IBM's choice (1978)

The Intel 8086 was based on the design of the 8080/8085 (it was source compatible with the 8080) with a similar register set, but was expanded to 16 bits. The Bus Interface Unit fed the instruction stream to the Execution Unit through a 6 byte prefetch queue, so fetch and execution were concurrent - a primitive form of pipelining, sketched below (8086 instructions varied from 1 to 6 bytes).
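A toy C sketch of that BIU/EU split follows; the queue logic is invented for illustration (the real BIU fetched aligned 16-bit words):

    /* Toy model: a Bus Interface Unit keeps a 6-byte prefetch queue
     * topped up while the Execution Unit drains it, so instruction
     * fetch overlaps execution. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    uint8_t memory[64] = { 0x90, 0x90, 0xB8, 0x34, 0x12 }; /* toy code */
    uint8_t queue[6];
    int q_len = 0;
    unsigned fetch_addr = 0;

    /* BIU: on a free bus cycle, pull the next code byte into the queue. */
    void biu_cycle(void)
    {
        if (q_len < 6)
            queue[q_len++] = memory[fetch_addr++];
    }

    /* EU: take the next instruction byte from the front of the queue. */
    uint8_t eu_next_byte(void)
    {
        uint8_t b = queue[0];
        memmove(queue, queue + 1, --q_len);
        return b;
    }

    int main(void)
    {
        for (int i = 0; i < 6; i++)
            biu_cycle();        /* queue fills while the EU is busy */
        printf("EU sees opcode %02X, queue holds %d bytes\n",
               eu_next_byte(), q_len);
        return 0;
    }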