In the summer of 2009, I sat in a conference room in the heart of Silicon Valley and listened to two contrasting speeches about the future of artificial intelligence.
The protagonists were Marvin Minsky of the Massachusetts Institute of Technology and Larry Page, co-founder of Google. Both were engaging, profoundly inspiring and left me with a bad case of impostor syndrome – but it was clear that AI meant wildly different things to the two speakers. Minsky, the enfant terrible of AI then in his early eighties, was pitching for the classic goal of “general AI”: the backdrop to HAL 9000 in Stanley Kubrick’s 2001, Iron Man’s cinematic sidekick J.A.R.V.I.S. and every other artificial sentient being from fiction. Page was promoting a much more limited, but pragmatically deliverable, “narrow AI” – systems far from true intelligence but able to provide limited solutions based on big data and intensive number crunching.
Almost a decade later, AI has come by default to mean “narrow AI” in most contexts, and many new books promote this still-developing field with something approaching religious fervour, appearing to regard it as the inevitable solution to pretty much every challenge we face. Yet the vibrant tech future painted by these authors, in a rich palette of deeply marketing-led language, deserves, I believe, to be tempered with a healthy scepticism.
Thankfully, Meredith Broussard is – among many other things – a coder, which gives her important new book a depth of understanding missing from some other titles. Assuming little prior knowledge, she leads us carefully through the foothills of current computer technology, giving a real insight into how AI systems actually work, how limited they currently are in scope and understanding, and why we should be cautious about accepting their decisions without careful scrutiny.
Grounding us in sound engineering practice, Broussard lays out a practical example of how a simple machine learning project can be built and operated – along with the potential pitfalls and problems – using the Python programming language as the medium and the passenger list from the sinking of the Titanic as the dataset. Among the lessons are that real-world data collections are dirty, messy and often incomplete, and that how AI systems deal with this is almost inevitably based on the assumptions, background and biases of whoever develops the algorithm. It is reassuring, in the light of recent global events, to see decision-support systems in judicial environments, election expenditure reporting structures and the internal ethics of self-driving cars linked to a common set of arguments in favour of truly human oversight and accountability.
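The flavour of that lesson can be conveyed with a minimal sketch – not Broussard’s actual code, and using a handful of invented Titanic-style records rather than the real passenger list – showing how the modeller’s choice for handling missing data quietly changes what the system predicts:

```python
# Hypothetical, invented records for illustration only: a toy passenger
# list with the kind of gaps real-world data always has (missing ages).
records = [
    {"name": "A", "sex": "female", "age": 29,   "survived": 1},
    {"name": "B", "sex": "male",   "age": None, "survived": 0},  # age unknown
    {"name": "C", "sex": "male",   "age": None, "survived": 0},  # age unknown
    {"name": "D", "sex": "male",   "age": 40,   "survived": 1},
    {"name": "E", "sex": "female", "age": 22,   "survived": 1},
]

def mean_age(rows):
    """Average of the ages we actually know."""
    known = [r["age"] for r in rows if r["age"] is not None]
    return sum(known) / len(known)

# Assumption 1: fill missing ages with the mean of known ages (keeps all rows).
imputed = [dict(r, age=r["age"] if r["age"] is not None else mean_age(records))
           for r in records]

# Assumption 2: simply drop any row with a missing age.
dropped = [r for r in records if r["age"] is not None]

def fit(rows):
    """A deliberately crude 'model': predict survival from sex alone,
    using the majority outcome within each group of the training rows."""
    model = {}
    for sex in ("female", "male"):
        group = [r["survived"] for r in rows if r["sex"] == sex]
        model[sex] = round(sum(group) / len(group)) if group else 0
    return model

# With imputation, the male rows with unknown ages (both non-survivors)
# stay in the data and the model predicts 0 for men; dropping those rows
# flips the male prediction to 1. Same data, same algorithm, different
# "truth" – purely because of the developer's assumption.
print("imputed:", fit(imputed))
print("dropped:", fit(dropped))
```

The point of the toy is exactly the one Broussard makes at proper scale: the cleaning step is not a neutral chore but a series of judgement calls, and whoever makes them is silently encoded in the model’s output.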
Illustrated with examples from Broussard’s own work and experience, this is an intensely personal journey that gives a real sense of travelling with a friend. Her descriptions of hackathons and other aspects of start-up culture are honest and atmospheric, capturing the social as well as the technical aspects of the marketplace in a way that anchors moments of technical innovation in their time and place. Hopefully, this book will gather a wide general, as well as academic, audience. It deserves to become a classic – but, even more, it deserves to be read and debated.
John Gilbey teaches in the department of computer science at Aberystwyth University.
Artificial Unintelligence: How Computers Misunderstand the World
By Meredith Broussard
MIT Press, 248pp, £20.00
Published 29 May 2018