Slide rule maths doubles computer speeds

February 26, 1999

Computer arithmetic is not a fast-moving field. World conferences on the subject happen only once every two years, which in "internet time" is like waiting for Halley's comet to come round again.

April's conference on computer arithmetic, in Adelaide, will be worth the wait. Nick Coleman, an electronic engineering lecturer at the University of Newcastle upon Tyne, will tell fellow engineers how to build a microprocessor that runs programs twice as fast. He will probably add that they need a licence to do so. A patent application has been filed.

This speedup does not depend on better silicon technology or more megahertz. It is all about carrying out the four basic operations of arithmetic in fewer steps. It is done by logarithms. As every former slide-rule user knows, numbers can be multiplied quickly by adding their logarithms. Division is performed by subtracting one logarithm from another. Difficult operations are replaced by easy ones. But there is a price for this. Logarithmic notation may make multiplication and division easier, but it makes adding and subtracting more difficult.
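The slide-rule trick above can be sketched in a few lines. This is a minimal illustration in Python, not the hardware scheme: real logarithmic number systems store fixed-point binary logarithms rather than calling a maths library, and the function names here are my own.

```python
import math

def lns_encode(x):
    """Represent a positive number by its base-2 logarithm."""
    return math.log2(x)

def lns_decode(e):
    """Recover the ordinary value from its logarithmic representation."""
    return 2.0 ** e

# In the log domain, multiplication is just addition of the stored
# logarithms, and division is just subtraction.
a, b = lns_encode(6.0), lns_encode(3.0)
product = lns_decode(a + b)    # 6 * 3
quotient = lns_decode(a - b)   # 6 / 3
```

The expensive multiply and divide circuits are thereby replaced with adders, which is where the speedup comes from.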

Numbers could be converted back into standard notation before adding or subtracting, but a computer cannot convert them quickly on the fly. "There is a way of adding while they are still in logs," Dr Coleman explained. But it involves calculating a function so ugly and complicated that most researchers thought it must slow down computers instead of speeding them up.
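The standard way of adding while still in logs uses a so-called Gaussian logarithm: with X = log2(x) and Y = log2(y), X ≥ Y, one has log2(x + y) = X + F(Y − X), where F(z) = log2(1 + 2^z). F is the kind of awkward function the text refers to; the sketch below is an assumption about the general technique, not Dr Coleman's algorithm, and the names are mine.

```python
import math

def F(z):
    """Gaussian logarithm for addition: F(z) = log2(1 + 2**z), z <= 0."""
    return math.log2(1.0 + 2.0 ** z)

def lns_add(X, Y):
    """Add two numbers held as base-2 logarithms, without leaving the log domain."""
    if X < Y:
        X, Y = Y, X            # ensure the argument of F is non-positive
    return X + F(Y - X)

# log2(5) "plus" log2(3) should come out as log2(8).
S = lns_add(math.log2(5.0), math.log2(3.0))
```

In hardware, F cannot be computed directly like this; it has to be approximated, which is the problem the new algorithms address.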

Two new algorithms have changed that. One of them carries out the basic operations of addition and subtraction. It looks up rough values of the ugly function in a table, and then uses a new, rapid interpolation method to get an accurate answer. Unfortunately some subtraction sums hit a "singularity" which makes the calculation slow, inaccurate or both. A second algorithm is needed to transform these awkward subtractions into more tractable ones.
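The flavour of both problems can be shown with plain linear interpolation between table entries, which is simpler and slower than the rapid method the article describes. For subtraction the relevant function is D(z) = log2(1 − 2^z) for z < 0, which plunges to minus infinity as z approaches 0, i.e. when the two operands are nearly equal; near that singularity a fixed-spacing table interpolates badly. All names and the table spacing below are illustrative assumptions.

```python
import math

STEP = 1.0 / 64  # table spacing, chosen for illustration

def F(z):
    """Addition function F(z) = log2(1 + 2**z): smooth everywhere."""
    return math.log2(1.0 + 2.0 ** z)

def D(z):
    """Subtraction function D(z) = log2(1 - 2**z), z < 0: singular at z = 0."""
    return math.log2(1.0 - 2.0 ** z)

def interp(f, z, step=STEP):
    """Look up the two table entries bracketing z and interpolate linearly."""
    z0 = math.floor(z / step) * step      # nearest table point below z
    f0, f1 = f(z0), f(z0 + step)          # the two "table lookups"
    return f0 + (f1 - f0) * (z - z0) / step

# F is smooth, so interpolation is accurate to a tiny fraction of a bit.
err_add = abs(interp(F, -1.3) - F(-1.3))

# Near the singularity the same method goes badly wrong for subtraction.
err_sub = abs(interp(D, -0.02) - D(-0.02))
```

This is why a second, transformation algorithm is needed: it moves awkward subtractions away from the singular region before the table is consulted.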

The transformation algorithm was published in October 1995. Dr Coleman will present the addition-subtraction algorithm at the Adelaide conference.

This could be the beginning of the end for the "floating point" notation that has been used to represent numbers since the early days of computing. "We can do log additions and subtractions as fast as floating point. We can do log multiplication and division much faster," Dr Coleman said.

With current floating point techniques, division is the slowest operation, taking three times as long as addition, subtraction or multiplication. "If you use logs, multiplication is five times faster than usual, and division is 15 times faster than usual," Dr Coleman explained.

Faster computer arithmetic is likely to benefit users of machine tools, mobile telephones, digital television, aircraft and computer games. Early indications are that microprocessors using the new technique will run programs twice as fast and with twice the precision.

A feasibility study was carried out in 1997-98, and a three-year research project has just begun. Overall the project will receive funding of €717,750 (approximately £500,000) from the EU's Esprit programme, and additional funds from industrial sources.

The University of Newcastle's partners in the project are University College Dublin, Massana Ltd, the Czech Academy of Sciences and Philips Research. Philips plans to develop a microprocessor which does arithmetic by logarithms. Samples of the chip should be available in two years' time.
