Michael Frank Deering: AI: FAIM1
At Schlumberger Palo Alto Research (SPAR) in the early 1980s, we had the microprocessor revolution going on (the Fairchild Clipper CPU was being designed just down the hall from us), custom LISP machines coming from several companies, and new massively parallel processor architectures being designed at several universities (Danny Hillis's Connection Machine work at MIT being the most prominent). SPAR decided that it needed some research in this area, and hired Al Davis from the University of Utah to look at putting not just Lisp, but more sophisticated AI algorithms (for which we had many of the world's experts in-house) into hardware. Thus the "Fairchild AI Machine One" (FAIM1) was born. I had finished up my work on in-process IC inspection and was looking for some new combined hardware / software work to get into, and joined his team.
The large scale architecture was audacious even in today's light: wafer scale integration, with many wafers stacked into one machine. (Gene Amdahl's Trilogy company's attempt to use similar techniques at that time had not yet failed.) Each wafer would consist of an intermix of processors, smart RAM, and packet router units (which we named "post-offices"). The processor was a 40-bit tagged architecture, with special instructions not just for LISP but for more advanced AI algorithms. The structured data in the smart RAM was to be addressable by pattern matching rules. The router was to route messages or remote load/store operations from a local processor to any other processor in the fabric, based on a modified hexagonal interconnect. The router was to offload all the overhead of processing packets from the processor proper: a message would go out, and the response (if any) would arrive back in the memory space of the processor some time later.
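The fire-and-forget post-office model described above can be sketched as a small simulation. All names here are invented for illustration; the real FAIM1 router was hardware, and its actual interfaces are not described in this document.

```python
# Hypothetical sketch of the "post-office" offload model: the processor
# enqueues a remote load and continues working; the router later delivers
# the response directly into the requesting processor's memory space.
from collections import deque

class Node:
    """One processor node with its local memory (a dict of addr -> value)."""
    def __init__(self, memory):
        self.memory = memory

class PostOffice:
    """Per-node router: queues outbound requests, delivers replies later."""
    def __init__(self):
        self.outbound = deque()

    def remote_load(self, target_node, addr, reply_slot):
        # Non-blocking from the processor's point of view: just enqueue.
        self.outbound.append((target_node, addr, reply_slot))

    def pump(self, requester_memory):
        # Simulate the fabric servicing queued requests: each response is
        # written back into the requester's memory, no CPU involvement.
        while self.outbound:
            target, addr, slot = self.outbound.popleft()
            requester_memory[slot] = target.memory[addr]

node_a = Node({"slot0": None})
node_b = Node({0x10: 42})

po = PostOffice()
po.remote_load(node_b, 0x10, "slot0")  # processor issues request and moves on
# ... local computation continues here ...
po.pump(node_a.memory)                 # some time later, the reply lands
print(node_a.memory["slot0"])          # -> 42
```

The point of the design was exactly what the sketch shows: between `remote_load` and the arrival of the reply, the processor does useful work instead of blocking on packet handling.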
I designed a first cut instruction set for the 40-bit processor. The publication below barely describes this; I'm still looking for a more complete ISP document that I have somewhere. The instruction set at the macro assembler level looked like a very CISC VAX crossed with a Lisp machine instruction set: addressing modes were things like "CDDAR", and operations were things like tagged add. But at the hardware level, the machine was really RISC: "CDDAR" was really three simple conditional load instructions that placed the results into internal argument registers.
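The CISC-to-RISC expansion described above can be made concrete with a small sketch. All names and tag conventions here are invented; the point is only the shape of the decomposition, where a macro-level "CDDAR" becomes a sequence of three simple tag-checked loads through an internal argument register.

```python
# Hypothetical sketch: a "CDDAR" addressing mode expanding into three
# simple conditional (tag-checked) loads, as on the FAIM1 hardware.
CONS_TAG = "cons"  # assumed tag value marking a pair cell

def cons(car, cdr):
    # A tagged cons cell: (tag, car, cdr).
    return (CONS_TAG, car, cdr)

def load_car(cell):
    # One RISC-style conditional load: verify the tag, then fetch CAR.
    tag, car, _ = cell
    assert tag == CONS_TAG, "tag trap: not a cons cell"
    return car

def load_cdr(cell):
    # Same, fetching CDR.
    tag, _, cdr = cell
    assert tag == CONS_TAG, "tag trap: not a cons cell"
    return cdr

def cddar(cell):
    # Macro-assembler "CDDAR" reads right to left: CAR, then CDR, then
    # CDR, each result landing in an internal argument register.
    reg = load_car(cell)   # ...A R
    reg = load_cdr(reg)    # ..D
    reg = load_cdr(reg)    # C
    return reg

x = cons(cons(1, cons(2, 3)), 0)
print(cddar(x))  # -> 3
```

Each of the three loads is simple enough to be a single-cycle hardware operation, which is what made the machine RISC underneath its CISC-looking assembler surface.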
The other parts of the machine were being prototyped as simulations and individual test chips. Because of my previous work on PEARL, I was one of the few people in AI at the time who was experienced in compiling high level knowledge representation techniques into optimized machine code. Unfortunately, that meant I knew about a lot of things that wouldn't work, things other AI people hadn't experienced yet. As a fairly junior guy (two years out of school), this created some problems.
Because AI was supposed to be mostly symbolic computation, the initial CPU design had no dedicated floating point hardware, just the (then common) technique of trapping to an internal software floating point library. I thought that floating point would be more important, and was dismayed when I worked out that the entire local RAM space of a processor node (where all algorithms and most data were to reside) was barely big enough to hold just the floating point library. Nothing else. I was in the process of being told how unimportant floating point was, when DARPA, which was interested in co-funding some of the project, decided that floating point was important to them too and changed this to a requirement. (This was why the 1-bit Connection Machines suddenly acquired full 64-bit floating point units hung off every aggregate of a certain number of processors.)
The FAIM1 was the only project I ever worked directly on for which I wasn't chief architect. I didn't know enough (at that time) to be the chief architect of that machine, but I did know enough to know that I couldn't continue to work on a project where too many design decisions went against my gut instincts.
The FAIM1 project was never completed at SPAR. Al Davis started a similar project later at HP Labs (with some of the same chip designers); some chips were built, but a complete system was never commercialized.
Michael F. Deering, “Hardware and Software Architectures for AI”, in Proc. of AAAI-84, 1984. Reprinted as “Architectures for AI” in BYTE, 10, 4 (April 1985), 193-206.
Postscript on AI Machines
As one of the few people who designed an AI machine, and who did go on to design a number of commercially successful custom architectures, I should eventually put down some twenty-years-after-the-fact thoughts on why people ever tried to build such things, and reasons why they may or may not do so in the future. (But for now, this is just a placeholder.) (Beyond Symbolics and LMI/TI, there was the Japanese 5th generation effort, as well as a custom Prolog machine, and of course the line of machines produced by Thinking Machines Corporation.)