
Harnessing AI to expand scientific discovery
When it comes to generative artificial intelligence, or GenAI for short, I am an optimist. Sure, universities need to be cautious. The technology is powerful, fast-moving and, in the wrong hands, potentially risky. But AI – especially the emerging class of agentic AI, systems that can assist with complex tasks such as setting goals and making decisions – is not a threat to scholarship, provided that meaningful human oversight and control over important decisions are maintained. In fact, it is an opportunity to extend scholarship far beyond what we humans could achieve alone.
As a chemical engineer, I study how AI can be used in catalysis science, applying intelligent systems to accelerate the discovery of catalytic materials and the optimisation of catalytic systems for energy and environmental applications. The ability to analyse vast datasets, simulate complex processes and automate routine tasks helps science progress faster.
The idea of an autonomous agent is not new. It dates back to the 1960s, when researchers first explored systems that could perceive, reason and act far more quickly than humans can. What is different today is the language models underpinning these tools. Modern AI can comprehend what we are saying: large language models (LLMs) have unlocked the ability for these systems to interpret human intent, break goals into sub-tasks and take consequential actions – sometimes using tools or code they generate themselves.
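To make that loop concrete, here is a minimal, purely illustrative Python sketch of the plan-then-act cycle such agents run. It is not any particular product's API: the llm() stub and the toy tool registry are hypothetical placeholders.

```python
from typing import Callable

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted large language model."""
    # A real system would send the prompt to a model; here we return a canned reply.
    return "search: perovskite catalysts for CO2 reduction"

# Toy tool registry: in practice these would be literature databases,
# simulation codes or lab-automation interfaces.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(top results for '{query}')",
}

def run_agent(goal: str, max_steps: int = 3) -> None:
    """Plan-then-act loop: the model decomposes the goal into sub-tasks and
    picks a tool at each step; a human reviews anything consequential."""
    for step in range(max_steps):
        action = llm(f"Goal: {goal}\nStep {step}: reply as '<tool>: <input>'")
        tool_name, _, tool_input = action.partition(": ")
        if tool_name not in TOOLS:  # unknown tool: stop and defer to a human
            break
        print(f"step {step}: {tool_name} -> {TOOLS[tool_name](tool_input)}")

run_agent("survey catalysts that convert CO2 into fuels")
```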
That autonomy is what makes AI exciting – with the proper safeguards, of course. To focus only on the risks inherent in AI is to miss its immense potential. AI agents can already execute complex computational workflows – sequences of tasks that would take humans months or years to complete. Unlike the human brain, AI can operate in enormous, high-dimensional spaces. It can sift through millions of possible molecular structures, physical configurations or unstructured datasets to identify patterns and solutions that would be invisible to us. In materials science, for example, AI systems can explore vast chemical spaces to identify compounds that capture carbon dioxide, purify water or catalyse chemical reactions more efficiently, as the sketch below illustrates. Similar approaches could advance drug design, climate modelling and sustainable manufacturing.
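As a hedged illustration of that kind of search – not the specific methods of any lab – the following toy Python sketch shows how a surrogate model can rank an enormous candidate pool so that only a handful of materials go forward for expensive simulation or synthesis. The scoring function, weights and feature vectors are all hypothetical.

```python
import random

def surrogate_score(features: list[float]) -> float:
    """Hypothetical stand-in for a trained model predicting a property
    such as CO2-binding strength from a material's feature vector."""
    weights = (0.8, -0.3, 0.5)  # made-up coefficients for illustration
    return sum(f * w for f, w in zip(features, weights))

random.seed(0)
# A pool of random 3-feature candidates, standing in for the millions of
# molecular structures a real screening campaign would enumerate.
pool = [[random.uniform(0.0, 1.0) for _ in range(3)] for _ in range(100_000)]

# Rank the entire pool in seconds; only the shortlist proceeds to costly
# simulation, synthesis and human evaluation.
shortlist = sorted(pool, key=surrogate_score, reverse=True)[:5]
for candidate in shortlist:
    print([round(f, 3) for f in candidate], round(surrogate_score(candidate), 3))
```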
This is the power of AI: it can not only chat with us but also explore possibilities beyond our perception. Humans bring creativity, intuition and ethical judgement – qualities AI lacks. Machines bring speed, memory and the ability to analyse complexity, making leaps in innovation achievable.
Together, humans and machines achieve what neither could without the other.
Harnessing AI responsibly in research
Scientists must use AI tools with a clear purpose; that is where responsible use begins. Before introducing AI into a research workflow, we must define the goal and the boundaries: What problem are we trying to solve? What data will we use? Which decisions will remain human-made?
Safeguards exist at two levels. Internally, developers must design architectures that respect ethical and operational limits. Externally, policymakers and institutions must create governance structures to prevent misuse. Just as we have spam filters to counter malicious email, we will need “anti-AI” systems to detect and mitigate harmful behaviour from other models.
Overcoming fear and collaborating systems-wide
Fear often creeps into conversations about agentic AI. For all the talk of artificial general intelligence, the notion that machines will soon replace human researchers is unrealistic. AI and humans learn in fundamentally different ways. Humans can generalise from a single example; AI often needs many. AI is exceptional at pattern recognition but weak at contextual understanding and common sense. Creativity, curiosity and moral reasoning remain distinctly human strengths.
The most productive mindset is collaboration between humans and AI, not competition. AI can manage data overload and identify hidden relationships, while humans frame the questions, guide enquiry, interpret meaning and safeguard integrity. This balance mirrors how scientific teams already operate, with distributed expertise contributing to a shared goal.
No single lab, company or university can manage this transformation alone. We need collective effort – shared standards for ethics, reproducibility and data literacy. Researchers should understand not just how to prompt an AI system but how to interrogate its outputs. Verification and transparency must remain non-negotiable.
A human-centred future
Science advances when we expand the tools available to ask and answer questions. Generative and agentic AI represent the next such expansion – one that allows us to explore ideas at a scale previously impossible. AI can process data that would take humans lifetimes to analyse, revealing patterns and solutions beyond human perception.
But tools are only as wise as their users. If we treat AI as a collaborator rather than a competitor, it can help us tackle humanity's hardest problems, from environmental change and disease to sustainable energy, within our lifetimes. And with ethics and governance in place, we can do so without inadvertently magnifying our worst tendencies.
The choice is ours. AI will not replace the human mind, but it can amplify human purpose. The future of research will not simply be automated; it will be augmented – by intelligence, human and machine, working together.
Hongliang Xin is a professor of chemical engineering at Virginia Tech.