AI education must help students see inside the black box

Graduates’ common lack of deep understanding of how AI works is hindering industry take-up, says Min Wanli

April 14, 2020

The ripple effect of AlphaGo’s triumph in 2016 has been growing ever since. The comprehensive defeat of world Go champion Lee Sedol by Google DeepMind’s artificial intelligence application signified that the promise of AI – first widely publicised by Deep Blue’s defeat of then world chess champion Garry Kasparov two decades earlier – was set to be realised.

Today, it is clear that the consequences of failing to be in the vanguard of the new technology could be devastating for national policymakers, corporate chief executives and, most importantly, the next generation of workers. China has launched new initiatives to promote AI education, particularly by boosting the number of AI schools and departments at universities. Meanwhile, the US recently launched its own programme, focused on advanced computing.

Nevertheless, this rush to action on AI education and research could be just as costly as inaction – if not more so – unless it is properly thought through. First, we must fully understand the strengths and weaknesses of current AI capacity and properly position AI education efforts within a framework of coherent short-term objectives and long-term strategy.

The academic community is still debating the scientific nature of AI. Some claim that it is more like alchemy than a modern system of research because researchers don’t fully understand either their AI programs’ problem-solving techniques or the tools we use to build those programs in the first place. It has been reported many times that facial recognition technology can be misled by fake images, which could be serious if false positives end up being passed on to law enforcement authorities, for instance.

Unlike classical physics, with its well-established system of axioms, the current generation of AI – primarily deep neural networks and their variants – is not yet mathematically well established. For better or worse, convenient access to supercomputing and large datasets has made it relatively easy to develop projects with an AI flavour while treating the underlying models as complex black boxes.

The contrast between the popularity of AI applications such as chatbots, image searches and video analytics and the lack of strong theoretical foundations poses challenges to educational institutes in terms of how they teach AI as an independent academic degree programme.

Such programmes should strike the right balance between teaching “know-how” for researchers and “know-what” for practitioners. And while the latter is all about coding, the former – contrary to popular assumptions – is much more about mathematics. Knowing how algorithms work and under what circumstances they may fail is crucial. The common lack of this know-how among so-called AI researchers is hindering the spread of AI into industry.

The core curriculum of the AI research track should take an interdisciplinary approach, incorporating mathematical and algorithmic theory, with the aim of training a new generation of researchers able either to fill out the theoretical framework around the deep neural network approach or invent a completely new theoretical framework.

The application track, on the other hand, should cover a wide range of uses of AI across industry, from chatbots to medical imaging analysis. The core ability for AI practitioners is hands-on engineering skills and business analytical skills, along with understanding of the processes of specific industries in order to develop relevant AI.

Scarcity of teachers with real experience of AI is becoming a serious problem and calls for far closer collaboration between tech giants and institutions. To that end, AI education programmes must loop in industry experts to co-design curricula and teaching material. Students should learn how to apply standard AI toolsets effectively to solve practical problems, preferably with access to field study projects. The project could be as simple as training a chatbot, on top of a basic assistant such as Siri, to converse about a specific specialist topic, such as sports or weather. With enough “know-what”, graduates from this application track could spearhead AI adoption by industry.
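A field project of the kind described above need not start from deep learning at all. The sketch below, in Python, shows the simplest possible version of a single-topic chatbot: a keyword-matched set of canned replies about the weather. The intents and responses are invented for illustration, not drawn from any real assistant.

```python
# A minimal sketch of a specialist-topic chatbot: it answers only weather
# questions, by matching keywords to canned replies. All intents and
# responses here are illustrative placeholders, not a production design.

WEATHER_INTENTS = {
    ("rain", "umbrella", "wet"): "Rain is expected; carrying an umbrella is wise.",
    ("sunny", "sun", "clear"): "Clear skies are forecast for today.",
    ("temperature", "hot", "cold"): "Temperatures should stay mild this week.",
}

def reply(message: str) -> str:
    """Return the canned response for the first matching weather intent."""
    words = message.lower().split()
    for keywords, response in WEATHER_INTENTS.items():
        if any(k in words for k in keywords):
            return response
    return "I can only talk about the weather. Try asking about rain or sun."

print(reply("Will it rain tomorrow?"))
print(reply("Tell me about football"))
```

A student project would then replace the keyword table with a trained intent classifier, which is exactly where the “know-what” of standard toolsets comes in.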

Autonomous vehicles and robots are progressing very fast, and the effect this could have on human employment has become a popular topic of social debate. But the annihilation and creation of professional occupations is a constant companion to technological evolution: there is nothing special about AI in this regard. The number of jobs requiring AI skills has grown by 450 per cent since 2013, according to the US Bureau of Labor Statistics, and demand currently outstrips supply.

But AI is not a panacea. AI education should also provide students with a balanced view of its limitations, such as its susceptibility to biased training samples or its fundamental lack of creativity; smart moves in Go might look creative, but the technology is really doing no more than systematically searching and evaluating vast numbers of candidate scenarios.
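The susceptibility to biased training samples can be made vivid with a deliberately trivial example. The sketch below, with entirely invented data, shows a “model” that learns nothing but the majority label of its training set: when the sample is skewed, every input gets the over-represented answer, regardless of its merits.

```python
# A toy illustration of bias from skewed training samples: a trivial
# majority-class "model" trained on unbalanced labels predicts the
# over-represented class for every input. The data is invented.

from collections import Counter

def train_majority(labels):
    """Learn nothing but the most common label in the training set."""
    return Counter(labels).most_common(1)[0][0]

# 90 "approve" examples, 10 "deny": the sample, not the cases, sets the answer.
biased_training = ["approve"] * 90 + ["deny"] * 10
model = train_majority(biased_training)

# Every applicant receives the same decision, however different their cases.
for applicant in ["case A", "case B", "case C"]:
    print(applicant, "->", model)
```

Real models are subtler, but the mechanism is the same: whatever regularity dominates the training sample, fair or not, dominates the predictions.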

As with any previous technological revolution, we should be thinking all the time about what AI really means to us, and not privilege it over human beings. Ultimately, it is the people who use the AI that count. Accordingly, AI education programmes should balance the short-term goal of training AI researchers and practitioners with the long-term one of equipping students with skills that the machines cannot replace. 

Wanli Min is the CEO of North Summit Capital, a tech investment firm, and was formerly chief scientist at Alibaba Cloud, a Chinese cloud computing company.
