Will ChatGPT transform research? It already has, say Nobelists

Nobel-winning scientists are now using large language models, but experts say their impact on research is only just starting

September 17, 2023

“I think ChatGPT can make anyone 30 per cent smarter – that’s impressive,” reflected Michael Levitt, the South Africa-born biophysicist who took the Nobel Prize in Chemistry in 2013.

“It’s a conversational partner that makes you think outside the box, or a research team that has read a million books and many millions of journal papers.”

A pioneer of the computer modelling of molecules, Professor Levitt is not easily dazzled by technological wizardry but admits he has been impressed by the large language models (LLMs) that have emerged over the past year. “I didn’t expect to see this kind of stuff in my lifetime – they’re a very powerful tool. I still write code every day, but ChatGPT also writes programs very well,” he said.

Based at Stanford University, the biophysicist has seen first-hand how technology can rapidly alter how knowledge is accessed – but nothing compares to the potential of LLMs, he insisted. “I started using Google in 1998 – two years before it was released publicly – because its co-founder Sergey Brin was in my class. A very smart guy who rejected my suggestion to make it a subscription service. Google has similar mind-bending powers, but ChatGPT is even more potent,” he said.

ChatGPT’s so-called “hallucinations” – in which it invents fictitious scientific papers and authors – concern some scientists but Professor Levitt was not perturbed, reckoning that researchers should be able to spot troublesome results. “It’s like having an incredibly clever friend who doesn’t always tell the truth – we’re capable of spotting these errors. Half of the work that scientists do is flawed but we’re good at sorting the data,” he said.

That enthusiasm for AI to aid research is shared by Martin Chalfie, the Columbia University biochemist who won the chemistry Nobel in 2008. “I was visiting my doctor recently and he did all the usual checks and made his diagnosis but mentioned he’d also used AI to analyse the results – I almost stood up and cheered,” recalled Professor Chalfie, known for his work on the green fluorescent protein used in microscopic imaging.

“He was doing everything that a doctor does but also getting a second opinion which might cause him to think differently,” he added, drawing a parallel with how researchers might use AI to think differently about their results. “Obviously if my doctor suggested that he plugged me into a machine and let it decide my treatment, I wouldn’t have been happy. But that’s not what’s happening in research – I don’t see why you wouldn’t want this kind of assistance.”

Other Nobelists are, however, not entirely convinced that the outputs of ChatGPT and other chatbots trained on vast swathes of the scientific literature should be treated as an unalloyed good. In a discussion at the annual Lindau Nobel Laureates Meeting, which saw dozens of Nobel winners gather in the island town of Lindau, in southern Germany, this summer, Israeli chemistry laureate Avram Hershko worried that researchers were too trusting of the insights provided by LLMs.

“We have to know what datasets it is using – it should be transparent,” said Professor Hershko, who is based at Technion Israel Institute of Technology. Regulation should require LLMs to “say what the margin of certainty is” or, at least, acknowledge scientific papers with contradictory conclusions that could prompt researchers to seek out different views, he argued.

That said, AI would be an important force for good in coming years, Professor Hershko acknowledged. Others go further, saying the Nobel committee should give serious thought to changing its rules to allow AI – or AI researchers, at the very least – to become eligible for science’s top prize. DeepMind’s AlphaFold technology, which solved the “protein folding problem” that had vexed science for nearly 50 years, allowing scientists to determine a molecule’s 3D shape from its amino acid sequence, is a good example of a discipline-changing advance that should be eligible, some say.

“The Nobel Prize lives on its reputation and its history is deeply important to them, so I understand why its committee would not want to give it to a computer – this is the same prize that Albert Einstein won a century ago,” said Professor Levitt. “But it’s a fair question to ask because AI has changed everything.”

Indeed, the question of whether AI will win a Nobel may already be settled, because several nailed-on future Nobel prizes have relied heavily on the technology, said Shwetak Patel, winner of the 2018 ACM Prize in Computing. That $250,000 (£197,000) award, given to outstanding early and mid-career researchers, is the second-biggest prize in computing after ACM’s $1 million Turing Award, which is dubbed the “Nobel of computing”.

“Whoever wins the Nobel for the Covid vaccine will certainly have used AI, which was crucial in sequencing the SARS-CoV-2 genome so quickly,” said Professor Patel, director of Google’s health technologies section and an endowed professor of computer science and electrical engineering at the University of Washington.

His own research field, collecting health data using mobile phones and wearable technology such as smartwatches, has been transformed by the emergence of LLMs in the past few months, he admitted. Methods created by his lab to monitor a patient’s heart rate or check blood insulin levels using a standard mobile phone camera, or to screen for tuberculosis using a phone’s microphone, are undoubtedly exciting innovations, but the American computer scientist explained that a major barrier to this kind of research has been processing the mountains of real-time data arriving from digital devices. Thanks to LLMs, researchers no longer need to write bespoke algorithms for each incoming dataset and can process, and even interpret, the data with a minimal amount of training, said Professor Patel. “It’s almost as accurate as the system that we’d been working to develop for five years,” he added.
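Professor Patel did not spell out the mechanics, but the general pattern he describes, in which raw sensor readings are handed to a general-purpose language model for interpretation rather than to a bespoke signal-processing pipeline, can be sketched in a few lines of code. The Python example below is purely illustrative and is not code from Patel’s lab: query_llm is a hypothetical stand-in for whichever chat-model API a researcher might use, and the heart-rate readings are invented.

```python
# Illustrative sketch only; not code from Professor Patel's lab.
# query_llm() is a hypothetical stand-in for any chat-style LLM API,
# and the heart-rate samples below are invented for demonstration.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted chat model.

    In a real experiment this would send `prompt` to an LLM provider and
    return the model's text reply; here it returns a canned response so
    the sketch runs end to end.
    """
    return "Resting rate is steady near 72 bpm, with a brief spike to ~120 bpm mid-recording."


# Simulated per-second heart-rate estimates (beats per minute), of the kind
# a phone-camera measurement method might produce.
heart_rate_bpm = [72, 74, 71, 118, 121, 119, 117, 75, 73, 72]

prompt = (
    "You are assisting a health researcher. Below are per-second heart-rate "
    "estimates (bpm) taken from a smartphone camera.\n"
    f"Readings: {heart_rate_bpm}\n"
    "Summarise the pattern, flag any anomalous segments and note whether the "
    "data merit follow-up. Do not offer a medical diagnosis."
)

print(query_llm(prompt))
```

The appeal of this pattern, as Patel suggests, is that the same prompt-driven approach can be pointed at a new sensor or a new question without months spent building and training a task-specific model.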

With LLMs able to parse and interpret data from wearable devices, health researchers used to running checks on a handful of patients could soon be receiving data from millions of people, explained Professor Patel.

“That’s incredibly useful if you want to tackle ‘long tail’ problems like diagnosing rare diseases before symptoms begin to appear – we’ve already been able to train a model to find a certain health problem based on just three things we were looking for in the data,” he said.

According to Professor Patel, combining AI with the ubiquitous digital devices of modern life will “push the boundaries of what research can achieve in an unprecedented way”, adding that LLM-enabled devices could also be used to create bespoke fitness and nutrition plans to improve public health.

“Instead of telling people that they should exercise or eat less, health ministries should give out smart watches and AI would create very specific plans for fitness and nutrition based around individuals’ personalities and routines – if these could access your phone, then each health plan would be tailored to that individual, making them more likely to succeed,” he said.

Some pundits have publicly wondered whether declining scientific productivity is now the norm, with larger teams, costlier equipment and more time required to find truly novel ideas, which have far less impact than the breakthroughs of the past. A 2020 study in the American Economic Review, titled “Are Ideas Getting Harder to Find?”, estimated that research productivity is about 3 per cent of what it was in the 1930s.

Like Professor Levitt, Professor Patel could not disagree more. “This kind of research has exploded in the past few years, but it’s really gone to a new level in the past few months,” he said. “Now is the most exciting time to be a researcher.”

jack.grove@timeshighereducation.com

