Arithmetic Processors: Then and Now
In the beginning there was the microprocessor: a collection of logic centered around an ALU (Arithmetic Logic Unit) and a set of registers. It was capable of doing most tasks just fine; simple math and Boolean logic covered the bulk of programming needs. As the processor and its extension, the microcontroller, matured, computing needs grew. Programmers wanted to manipulate larger numbers, and floating-point ones at that. Add and subtract were no longer sufficient; division, multiplication, and a host of other mathematical functions were needed. In the 1970s, transistor counts were in the thousands, frequencies were in the MHz, and line widths were measured in microns. It was not feasible to build these math functions, in hardware, on the same chip (or rather die) as the processor.
Several companies worked to solve this. Perhaps the most successful, and famous, was AMD, which in 1977 introduced the AM9511 Arithmetic Processing Unit. It is best described as a scientific calculator on a chip. It could handle 32-bit floating-point math, along with 16- and 32-bit fixed-point operations, and supported not just the basic ADD, SUB, MUL, and DIV, but SIN, COS, TAN, ASIN, ACOS, ATAN, LOG, LN, EXP, and PWR: 14 floating-point instructions, in hardware, on a single chip. It ran at up to 3 MHz (4 MHz in the 'A' version) and could interface with pretty much any microprocessor or microcontroller, providing much-needed processing power. It was designed as a peripheral, so the main processor could assign it a task and then go on about its program while the AM9511 crunched the math. The AM9511 would then notify the host processor via interrupt that it was finished and the data/status was ready to be read.
AMD updated the design to support……