Mathematicians must accept that computers, not humans, will be the only ones able to check ever more proofs, says Brian Davies
Two years ago, the editors of one of the world's most prestigious pure mathematics journals made a momentous decision. They agreed to accept a paper for publication in the Annals of Mathematics despite admitting that they had no way of fully confirming its contents.
The paper concerned Kepler's conjecture, a 400-year-old problem that is as easy to state - the most efficient way to pack oranges (or any other spheres) is in regular pyramid arrays - as it has been difficult to prove.
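For reference, the conjecture - now Hales's theorem - asserts that no packing of equal spheres can fill a greater fraction of space than the pyramid-style (face-centred cubic) arrangement does:

```latex
\frac{\pi}{3\sqrt{2}} \approx 0.74048
```

That is, at most about 74 per cent of space can ever be occupied by the spheres.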
But in 1998, Thomas Hales, a mathematician at the University of Pittsburgh, announced that he had indeed devised a proof and submitted it to the journal. Eventually, the editors conceded that their panel of referees had tried but been unable to check it in its entirety.
For although Hales's strategy was compelling, it depended on many crucial steps being carried out by computer. There were sharp disagreements about whether the editors should declare their reservations about the completeness of the proof. Eventually, the editors contented themselves with a general statement about computer-based proofs.
Ever since the classical Greeks, many mathematicians have believed their subject to be quite different from all other enterprises. It is considered to be certain in a way that no other human knowledge has ever been: even God cannot change the fact that 2+2=4. Particularly gifted individuals are supposed by some people to be able to catch a glimpse of a mathematical realm existing beyond the limits of space and time.
Fortunately, mathematicians have to convince their peers that their insights are correct by producing arguments that can be examined in detail. Every step can be challenged, and publication has until recently always depended on the referee finally agreeing that a proof is sound. That certainty is now being challenged.
In the past few decades, it has become commonplace for mathematicians to use computers to test conjectures extensively, especially in more applied fields. This convenient tool is now leading us towards a new era. For instance, the fundamental theorem of arithmetic - that every whole number can be written uniquely as a product of primes - is now less relevant in some situations than the existence of a practical algorithm for finding such a factorisation.
We are told that the encryption of financial transactions depends on sophisticated ideas in number theory. What is more, for this task it is better to have a fast algorithm that fails occasionally than a mathematically perfect one that takes a computer longer to perform. The famous Cambridge mathematician G. H. Hardy relished the uselessness of number theory in the first half of the 20th century. As a convinced pacifist, he would probably have been horrified at the establishment of the Heilbronn Institute in Bristol, where number theorists are paid to solve problems relating to cryptography.
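The trade-off described here - a fast algorithm that occasionally fails versus a perfect but slower one - is exactly how primes are found for cryptographic keys in practice. A minimal sketch of the probabilistic Miller-Rabin primality test illustrates it (the function name and parameters are illustrative, not drawn from the article): each random round can be fooled with probability at most 1/4, so twenty rounds leave an error chance below one in a trillion, yet the test runs far faster than any method that is guaranteed never to err.

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin test: fast, but wrong with probability at most 4**-rounds."""
    if n < 2:
        return False
    # Dispose of small cases by trial division against a few small primes.
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)       # random base to test against
        x = pow(a, d, n)                     # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # a is a witness that n is composite
    return True                              # n is prime with very high probability
```

A composite number slipping through is possible in principle but so unlikely that banking software happily accepts the risk - the point the author is making.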
We accept that computers are better at numerical calculations than humans and that applied mathematics relies on them. But the machines are now doing something much more significant, as the publication of the proof of Kepler's conjecture acknowledges. Back in 1976, the computer-based proof of the four-colour theorem seemed an oddity that could be dismissed. But over the past ten years, the trickle of new computer proofs has become a stream and looks likely to become a river. Pure mathematicians still have to devise the strategies that computers follow, but the footwork is increasingly being left to the machines. As a result, we can no longer survey in their entirety the proofs of a growing number of important theorems as we once could, and we have to accept our computers' word that they have carried out our instructions and obtained the result that we suspected. The machines' own calculations are literally unreadable.
At present, only a small number of fields are affected by these developments. But as time passes, more mathematics of this type will appear. Eventually, computers will be used so heavily to extend the boundaries of our understanding that we will have conceded that mathematicians are no longer like Plato's philosopher, looking directly at the ultimate mathematical reality. Instead, we mathematicians will have turned into generals, who marshal great forces and test our troops and advisers but ultimately have to rely on them to win important battles. This does not mean that the classical notion of proof will have been forgotten. Some mathematicians will continue as they always have. But most of us will be playing the new game.
What a change from the days when we told our best students that they should not believe any theorem unless they had personally checked its proof and were themselves satisfied that it was true. This golden age may be drawing to its close, but a new one dawns and we should welcome it. With the assistance of the computer, the future undoubtedly holds discoveries that will be as impressive as any that have been made to date.
E. Brian Davies is professor of mathematics at King's College London and author of Science in the Looking Glass, published by Oxford University Press, Pounds .50.