The parallel universe in need of an invisible hand

High Performance Computing

February 14, 1997

Can computer speed grow for ever? In principle, parallelism offers this possibility, avoiding the natural limits on the speed of individual electronic components. The idea is to build ever faster computers by harnessing the combined power of many smaller computers, either through increasing miniaturisation or through accessing the vast unused time on an organisation's network of PCs. This is the hope examined in Kuck's book.

Unfortunately, the physical limits are not far in the future; components perhaps ten times faster than today's may be the most we can achieve with known technology, and we'll have reached that point in the first few years of the next decade. If we are to continue to solve the ever more complex problems of the environment, engineering, medicine, business, fundamental science, and much more, then conquering the Greatest Grand Challenge, that of making parallel computing a practical reality, is an imperative for the modern world. Kuck is not optimistic. His book is a warning that we may not succeed.

Parallel computing has been a commercial proposition since the early 1980s. But it has been dogged by too many failures and too few successes. Many parallel computer suppliers have gone out of business. Even so, considerable progress has been made in developing more robust hardware and standard software. As a result, aficionados, at least, are optimistic about the future. However, the reality is that parallel computing is still much too difficult. It is far beyond the reach of the vast majority of computer users, who do not write programs but instead use what Kuck calls problem-solving environments, which focus on the problem at hand, not on the program that solves it. Obvious examples are word processors, spreadsheets and mathematical packages. Kuck believes that the market forces operating in high performance computing cannot alone put the power of parallel computing in the hands of the masses.

What are we missing? Kuck's plea is for quantitative, publicly available information about the performance efficiencies and deficiencies of the parallel computers on the market, as well as the machines of the past. Surely he is right about this. Such a database would inform and guide future designers, aid purchasers and users of high performance computers by dispelling the hype of sales staff, and accelerate the evolutionary survival of the fittest systems.

The computer scientist in Kuck then takes over and he builds an elaborate theoretical edifice for measuring parallel computer performance. This is overdone, but necessary, and the ideas need to be taken seriously by computer scientists if they are to continue to address important real-world problems. How Kuck's database might come about is left to the reader's speculation, although he points out that the cost would be tiny compared with the capital expenditure on high performance computers. Kuck defines five tests for practical parallelism. The lack of data makes it hard for him to demonstrate the effectiveness of these tests, but the examples he gives are tantalising.

Kuck wants not only to direct the market and better inform policy-makers, but to reunite computer science and computational science. The breadth and importance of the problem posed by the Greatest Grand Challenge requires a broad attack. Those theorising about computers, those designing hardware and software, and those applying high performance computers to real-world problems must work together. The academic barriers between computer scientists and the users of advanced computers, erected so quickly on many campuses to deny any derivative aspect of computer science, need to be broken down if this multidisciplinary challenge is to be tackled effectively.

Computer science students, computer users and science policymakers will all find new insights in this book, but, like me, many may be frustrated that, after posing a fascinating problem, Kuck takes too narrow and theoretical a view of it. His perspective is irritatingly US-centric and, worse, he ignores economic factors.

For instance, Kuck does not mention the growing importance of Japan, which today has most of the successful high performance computer manufacturers. There is no discussion of the factors that dictate prices. When only a few hundred organisations can afford the most powerful machines, there is a special price for everybody. When a particular machine is competitive only over a two or three-year period, the time at which it is brought to market relative to competing systems can make all the difference to its sales. Because the market is small and the profitability of high performance computers is questionable, the technological pressure is going to come from dual-use components: every component will have to be mass-produced for something else. These forces and their consequences are barely alluded to, let alone analysed. Kuck's metrics and database may inject some sanity, but their effect may be secondary to the market dynamics.

Kuck has presented us with the Greatest Grand Challenge. Now that the issue is in focus, who will pick up the ball and run with it?

Richard Kenway is director of the Edinburgh Parallel Computing Centre, Edinburgh University.

High Performance Computing: Challenges for Future Systems

Author - David J. Kuck
ISBN - 0 19 509550 2 & 0 19 509551 0
Publisher - Oxford University Press
Price - £46.95 and £29.95
Pages - 336
