Computers take elocution lessons

May 10, 1996

A research team from the University of Nottingham is developing a speech synthesiser that should give computers personalised voices.

Mervyn Curtis, a lecturer in the department of electrical and electronic engineering, is leading the team's efforts to end the sci-fi robotic intonations that accompany most speech-enabled computers - made familiar in the United Kingdom by physicist Stephen Hawking's BT advertisement.

The team is seeking funding to test a personalised speech synthesiser system in which clients choose the voice they want incorporated into the communication device. In some cases it could be their own voice.

Dr Curtis said: "We have been looking at employing a variety of trained speakers to record the sounds but where the patients lose speech progressively, it might be possible to pre-record sound so that they can keep their own voices."

The team has been working for the past eight years on the project with the aim of creating high-quality natural sounding speech synthesisers to be used as communication aids for the disabled.

In the UK an estimated 800,000 people have moderate to severe communication problems that could be eased by such a device. More than 250 different conditions may result in the need for an aid of some sort.

The problem at present is the highly artificial sound of all speech synthesisers used with computers. Dr Curtis said that despite the huge need for communication aids the range of choice was also fairly limited.

"Language is the highest skill possessed by mankind and it is vital when verbal language is restricted by handicap to provide an alternative means of communication," he said.

It was essential, Dr Curtis said, that technology should not alienate or distance people from their society, yet the majority of speech synthesisers sound robotic and unnatural. Many have an American accent.

The system works by recording the 800 neutral demi-syllables in English and adding intonation. The computer would be able to recognise vowel clusters and sentence structure to incorporate emphasis and feeling. Dr Curtis said: "The voice sounds much more fluid, not monotone, and far more representative of human speech."
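The concatenation approach described above can be sketched in a few lines of code. The unit naming, the tiny inventory, and the rule for splitting a syllable at the vowel are illustrative assumptions for this sketch, not the Nottingham team's actual design:

```python
# Sketch of demi-syllable concatenation: a syllable is split at the
# vowel into an initial unit (onset + first half of vowel) and a
# final unit (second half of vowel + coda); synthesis looks up each
# unit in a pre-recorded inventory and joins the samples.

def split_demisyllables(onset, vowel, coda):
    """Return the two demi-syllable unit names for one syllable."""
    return [f"{onset}-{vowel}", f"{vowel}-{coda}"]

# Toy inventory mapping unit names to recorded "waveforms"
# (placeholder sample lists stand in for real audio).
inventory = {
    "k-a": [0.1, 0.3],
    "a-t": [0.3, 0.0],
}

def synthesise(syllable):
    """Concatenate the recorded units that make up one syllable."""
    onset, vowel, coda = syllable
    samples = []
    for unit in split_demisyllables(onset, vowel, coda):
        samples.extend(inventory[unit])  # join units end to end
    return samples

print(synthesise(("k", "a", "t")))  # "cat" built from two demi-syllables
```

Because each unit carries half a vowel, adjacent units join in the middle of a steady vowel sound rather than at a consonant boundary, which is what makes the joins less audible than whole-phoneme splicing; intonation would then be applied over the concatenated result.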

The project will be in three phases, the first being the development of the demi-syllable database.
