Cutting edge

October 23, 1998

Today's computers allow us to do large-scale calculations. But just how trustworthy are the results they produce?

When I started doing research at Manchester University in 1959, the number of computers in the United Kingdom could be counted on one's fingers. At that time nobody would have envisaged that today there would be several million computers in the UK, each one more powerful than the Mercury then in use at Manchester. Moreover, Mercury filled two huge rooms, while most new computers fit on a small table.

Together with many others of that era, I was drawn to identifying which areas of research could be assisted by computers. Despite their current popularity, we would certainly not have regarded computer games as an area worth pursuing. My early investigations were related to applying computers to solving problems in number theory. Probably because some of the underlying techniques involved the solution of non-linear equations, I was offered a job in the mathematics division of Shell Research when my research grant expired.

Shell was trying to model a number of situations of interest to the oil industry, such as the storage of liquefied petroleum gas, the stresses on road surfaces and the reaction of oxygen and hydrogen in a combustion chamber. It was this latter problem that impressed on me the importance of ensuring that input data and results are checked and that the method of solution is stable.

To understand what I mean by stability, consider the accompanying diagram. It illustrates solutions to the equation shown when n equals 0, 1, 2 and 3. A formula can be established that relates the area under each curve to the area for the previous value of n, so we can find the area for any value of n if the area is known for n=0. As n gets bigger, the computed areas get smaller, but however accurate the starting value, at some point there is a blip - for example, if the starting area is 0.6321 the area for n=7 comes out bigger than that for n=6.

The reason this formula is unsuitable for computation is that by the time n reaches 7, the original small error in the area for n=0 has been multiplied by 1×2×3×4×5×6×7 = 5,040.
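The diagram and its equation are not reproduced here, but the standard textbook example that matches every figure quoted above is the integral of x^n × e^(x-1) from 0 to 1, whose recurrence is I(n) = 1 - n × I(n-1) with starting area I(0) = 1 - 1/e ≈ 0.6321. Assuming that is the equation the author had in mind, a short sketch reproduces the blip:

```python
# Forward recurrence I(n) = 1 - n * I(n-1), assumed (not stated in the
# article) to come from the integral of x^n * e^(x-1) over [0, 1].
# The true areas shrink as n grows, but any error in the starting
# value is multiplied by n! along the way.
I = 0.6321            # four-figure approximation to I(0) = 1 - 1/e
areas = [I]
for n in range(1, 8):
    I = 1.0 - n * I
    areas.append(I)

# The starting error (~2e-5) has grown by 7! = 5,040 ~ 0.1, so the
# computed "area" for n = 7 (0.216) overshoots that for n = 6 (0.112).
print(areas[6], areas[7])
```

Running the recurrence backwards, from a rough guess at a large n down towards n=0, divides the error by n at each step instead of multiplying it, which is the standard cure for this particular instability.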

There is another feature of computers that should send a shiver down our spines, which is that computer arithmetic is not exact. Two distinct computers can produce different answers to the same problem. To illustrate this I have computed the sum of the reciprocals of the first two million natural numbers - in other words 1 + 1/2 + 1/3 + ... + 1/2,000,000. Computer 1 gave an answer of 14.01743, while computer 2 said 15.31103 - the correctly rounded answer is 15.08587. Ideally both computers should have produced this answer.
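The article does not say which machines or precisions produced its two figures, but the effect is easy to reproduce today. A minimal sketch in Python (using NumPy to emulate single-precision accumulation; the choice of single precision is my assumption) contrasts a naive running sum with Python's correctly rounded `math.fsum`:

```python
import math
import numpy as np

N = 2_000_000

# math.fsum tracks the exact sum internally and returns the correctly
# rounded double-precision result: approximately 15.08587.
exact = math.fsum(1.0 / n for n in range(1, N + 1))

# The same sum accumulated term by term in single precision.
# np.add.accumulate applies the additions sequentially, left to right,
# so rounding errors pile up instead of cancelling pairwise.
terms = (1.0 / np.arange(1, N + 1)).astype(np.float32)
single = float(np.add.accumulate(terms)[-1])

print(f"fsum:    {exact:.5f}")
print(f"float32: {single:.5f}")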

This is of particular concern when trying to solve large-scale scientific and engineering problems using arrays of personal computers in parallel. Here we want to ensure that an accurate answer is obtained for each job done by a PC, and we want that same answer to be returned whichever PC runs the job. Moreover, we would like the overall result to be correct to some specified precision.
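The root of the reproducibility problem is that floating-point addition is not associative: a sum split across machines depends on how the partial results are grouped. A small Python sketch illustrates this, along with one standard remedy (`math.fsum`, which returns the correctly rounded sum regardless of order - my choice of fix, not necessarily the author's):

```python
import math

# Floating-point addition is not associative, so the grouping chosen
# by each machine (or each parallel chunk) can change the final bits:
left = (0.1 + 0.2) + 0.3    # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)   # 0.6
print(left == right)        # False

# math.fsum computes the correctly rounded exact sum, so every machine
# that uses it returns the same answer however the terms are ordered.
data = [0.1, 0.2, 0.3] * 1000
print(math.fsum(data) == math.fsum(reversed(data)))  # True
```

Reproducible parallel summation in production systems is usually achieved the same way: either by fixing the reduction order across all configurations, or by using an order-independent exact accumulator like the one behind `fsum`.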

Attempts are being made to address these problems. If they can be solved we can be confident, from the results of our model calculations, that the bridges of the future will not collapse, nuclear power stations will not melt down, and no disasters of any kind will occur as a result of computer simulations.

Alan M Cohen, senior lecturer, school of mathematics, Cardiff University.
