Unleashing AI as a force for good

The impact of artificial intelligence on future societies will be shaped by the decisions that we make today, says Toby Walsh

March 28, 2019

Computer vision systems that can spot shoplifters, machine learning algorithms that can identify the best fertilised embryos to transfer, and machine translation software that can effortlessly turn English into German at the touch of a button are just three recent developments in artificial intelligence.

The introduction of AI into our lives is creating a range of challenges and a need for practical solutions to ensure AI serves the common good. Chief among those challenges is how to increase the public’s trust in the use of AI, given the many recent examples where that trust has been misplaced.

Take, for instance, the emergence of “deep fakes”, where AI is used to make audio and video of real people saying things they never said, and doing things they never did. What are the political, legal and ethical implications of such a capability?

There are a few situations where such technology might be of benefit. It can, for instance, be used to replace actors who die mid-production or, all too often these days, are disgraced.

But there are perhaps many more situations where such technology might be used to cause harm. There is, for example, a very convincing deep fake video online of former US President Barack Obama delivering a speech he never made about – you guessed it – fake news.  

My colleagues are now placing bets on how long it will be before an important political election is swung by a “deep fake” video being released at the last moment.

The tech companies are starting to wake up to their responsibilities in unleashing AI on a largely unsuspecting public. Commercial aspirations will need to be tempered. Important questions around ethical considerations, controls and consequences are now being more seriously contemplated as tech reputations take a deserved battering.

Ultimately, such deep fake capability could become so advanced and prevalent that people will not believe anything unless they see it with their own eyes.

We are therefore entering a period of technologically driven turbulence. History has seen similar periods of disruption. There was the first industrial revolution in the 19th century, following the introduction of the steam engine; the second revolution in the 20th century with the introduction of mass production (think of the Ford motor car); and then the third revolution with the emergence of computers over the past 50 years. 

Early forms of automation had a profound impact on labour markets as well as on domestic and social life. This in turn led to diversification of jobs and responsibilities, and to a reduction in manual labour and an increase in cognitive tasks. Unions, industrial practices and laws developed and, after a period, behaviours adapted to the various revolutions of industry.

We are now similarly placed with respect to AI and human society: at the start of our fourth industrial revolution.

Does this mean there is a case for global regulation of some form? Yes and no.

The main area where I can see a pressing need for global regulation of AI is in the use of autonomous weapons. Killer robots, as the media like to call autonomous weapons, would create a revolution in the way war is fought. We urgently need control here before the industrialised nations get locked into another unwanted arms race.

Synthetic voice manipulation and fake social media accounts are other areas likely to need regulation, given the profound impact misuse of this technology can have on our political discourse and process. However, such regulation is perhaps more likely and desirable at the national rather than the international level.

Besides this, pioneers in AI need to take responsibility and build confidence in their products and services so we can continue to develop AI for the benefit of humanity.

My new book, 2062: The World that AI Made, talks about the different futures AI could give us: some good, some bad. 

In 2062, all of our devices will be online and interconnected. Most won’t have a keyboard or a screen, but will be voice activated. You will just walk into a room, speak and one of these interconnected devices will obey your commands.

But life in 2062 isn’t yet fixed. We don’t have to worry about technological determinism. How the world looks in 2062 is very much the product of the choices we make today. 

However, we are, I believe, at a critical juncture in history where there is a lot to play for. As a result, we all need to start making choices so that everyone can benefit from this industrial revolution.

Ultimately AI will help deliver solutions to some of the wicked problems that confront us, like climate change. If we make the right decisions now, we can build a future where the machines do the sweat work and we can focus on the more important things in life.

Toby Walsh is Scientia Professor of artificial intelligence at UNSW Sydney and leads the algorithmic decision theory group at the Commonwealth Scientific and Industrial Research Organisation’s Data61, Australia's Centre of Excellence for ICT Research.
