Billion-buck voyage into unknown

Strategic Computing
February 6, 2004

The subject of this book is a US initiative dubbed Strategic Computing (SC), which channelled a large amount of additional research funding into the quest for artificial intelligence (AI). The initiative supported research at all levels, from the high-speed computer hardware required to make it effective, through to high-level software for algorithms handling areas such as vision recognition and expert systems.

The authors estimate that just over $1 billion was spent on the endeavour by Darpa (Defense Advanced Research Projects Agency). How and why was such a sum spent on what is deemed by most to be a failure?

Part one presents the three instigators of the initiative: Robert Kahn was the "visionary", whose credentials include co-developing the underlying protocols used by all internet traffic; Robert Cooper acted as the "salesman", whose skills lay in selling the idea to politicians; and Lynn Conway was the "executor", whose reputation was in VLSI (Very Large Scale Integration) hardware design and who developed a process for the initiative.

Without Kahn, the initiative would never have gained enough momentum to happen. Unfortunately, his vision of research-led activity conflicted with Cooper's view of application-driven projects, and this was never really resolved. Kahn thought of SC as a pyramid of technologies, whereas Cooper concentrated on a timeline of activities. In addition, all three of these major instigators left SC in 1985, quite early on, and a succession of subsequent directors tried to imprint their own particular and differing views on the initiative.

Part two covers the various main areas that SC tackled, from the bottom up, approximately following Kahn's original pyramidal model. It looks at Mosis (the Metal Oxide Semiconductor Implementation Service), covering the infrastructure for microelectronics support in SC, and at SC's concentration on Lisp (List Processing) machines with good support for symbolic processing in AI applications.

At the architecture level, the most famous SC project was the "connection machine", a highly parallel supercomputer that led to the formation of the company Thinking Machines, which was ultimately unsuccessful and filed for bankruptcy protection in 1994.

Also unsuccessful was the "autonomous land vehicle", an application project that was meant to drive itself, unaided by humans, at increasing speeds as the project progressed. It depended on effective vision algorithms, which would allow the vehicle to interpret the surrounding landscape and take appropriate action, but the algorithms proved much harder to determine than was anticipated. Huge processing power was required and problems such as changing weather conditions and lighting added to the difficulties. In addition, although this was a real application, there was no real customer, such as the army, who had bought into the idea.

Part three covers the middle (1985-89) and final years of SC, together with a short but useful conclusion. The succession of directors with different views caused changes in direction and emphasis as SC progressed, and there was also the political angle to consider to ensure continued funding. These were the Reagan years of the "strategic defense initiative" and, to some at least, SC formed a computing component of this. The very name has SDI connotations.

As SC neared its end, there was talk of SC 2. But ultimately SC died rather quietly (especially considering its $1 billion price tag) and was, in effect, subsumed by the "high performance computing and communications initiative". The emphasis on AI was removed. SC had largely ignored network research, which is odd given Kahn's pivotal role in developing the internet protocol infrastructure. Ultimately, this proved far more important and long lasting, something he may not have originally predicted.

For those who do not have time to read the entire book, the conclusion provides a good summary and analysis. However, like SC itself, the book is very inward looking. Apart from a brief mention of the Japanese "fifth-generation" programme that threatened to outstrip the US in the race for AI expertise and was used as a reason for the vast funding raised for SC, there is no real international context. Other collaborative programmes, such as the European Strategic Programme for Research in Information Technology (Esprit) or Alvey in the UK, also during the 1980s, are not mentioned.

Research is high risk by nature and Darpa took a high-risk approach that unsurprisingly failed overall, given the number of unknown strands required for success. Darpa funded much of the early research that led to the internet, so it has not wasted all its research funding over the years.

However, while SC expended huge sums on AI, Tim Berners-Lee, with virtually no financial support, developed the fledgling World Wide Web. It is interesting to think what might have happened if SC had supported networking research rather than AI.

The book includes more than 70 pages of notes as well as a comprehensive index. It is a worthwhile scholarly work giving a useful overview. While I suspect that not many individual researchers will buy it, it would be a worthwhile addition to any library with a comprehensive computing history section.

Jonathan Bowen is professor of computing, South Bank University, London.

Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993

Author - Alex Roland with Philip Shiman
Publisher - MIT Press
Pages - 4
Price - £34.95
ISBN - 0 262 18226 2
