This volume describes the history of attempts to provide formal proofs that computer programs, and latterly computer hardware, will work as intended. It tackles the many attempts to prove mathematical theorems with computers or to demonstrate that a program conforms to its specification. The story is told chronologically from the early 1950s to the late 1990s, and draws on interviews with leaders in the field.
But 50 years of work has failed to create anything of significant practical use, so the book's subtitle, "Computing, Risk, and Trust", is misleading. There has been almost no impact on how computers are operated, no real reduction of the risks we run in using them, and no justification created for our ever-increasing trust in their correct operation.
The book contains a detailed account of the four-colour theorem and the way in which various types of "reducibility" of the coloured-in maps led to a method of proof that is feasible only with the assistance of a computer. It is now clear that the original 1976 proof is correct (though an easy-to-correct "bug" was found in 1982), but the result is not pretty enough to make mathematicians happy. The proof leaves one with no idea as to why the theorem is true.
Meanwhile, in the computer-science departments, early efforts at program verification foundered because complexity increased exponentially. There was a false dawn in the mid-1960s with the "resolution" technique, but by the end of the decade it was clear that verification was not going to make a real impact on the "software crisis", famously identified at the Garmisch-Partenkirchen conference in 1968.
Just as the usual sources of research money were drying up, the dollars started to pour in from the spooks at the National Security Agency, who wanted to prove that their systems would never leak data. Eventually, it was realised that you could never design away the "covert channels" that can divulge a few bits at a time. Formally verified systems also brought significant delays and costs, and they dropped out of favour, ironically just at the stage when ideas of "information warfare" were putting new emphasis on making computer systems secure against hackers and cyber-terrorists.
The book ends with the century, and therefore omits promising systems such as the pi calculus and spi calculus, which work at the protocol level to show that there are no systemic faults in what you are asking a program to do.
It does not ignore the critics. A famous 1979 attack by Richard A. de Millo, Richard J. Lipton and Alan J. Perlis asserted that mathematical proof was far more of a social process than a rigorous procedure, and that the pursuit of program verification was based on a misunderstanding of what a proof really was. In 1988, James H. Fetzer took the view that the whole thing was a "category mistake" because verified programs would still need to run on physical machines, with all their unexpected imperfections. To a working programmer, the latter critique smacks of philosophers dancing on pinheads, but the earlier paper still rings true, with its incisive remarks on the mismatch between laboriously ground-out formal proofs and the messy mutability of real-world specifications and code.
The book's author, Donald MacKenzie, is a sociologist, although you would hardly know this from his command of the minutiae of computer science, maths and philosophy. There is not quite enough on the basics to allow the non-technical reader to grasp what is going on, but with a smattering of computer science to draw on, this is a most readable account of how program verification came to promise so much and deliver so little.
Richard Clayton is researching a PhD in the security group, University of Cambridge, and has worked in the ISP industry.
Mechanizing Proof: Computing, Risk, and Trust
Author - Donald MacKenzie
ISBN - 0 262 13393 8
Publisher - MIT Press
Price - £30.95
Pages - 427