The AI dilemma: balancing innovation and regulation
AI is advancing quickly, but different nations have different appetites for the risks and rewards associated with this evolving technology. Researchers at Sorbonne University Abu Dhabi are exploring how to find the right balance.

Sponsored by

AI offers great potential for driving innovation and solving complex challenges in the world. However, keeping up with the risks posed by AI is a challenge for industries and nations, as reflected in the growing number of regulatory frameworks across the world. Their approaches are diverse: from the comprehensive European Union (EU) AI Act to the facilitative approach adopted by countries such as Singapore. Many categorise regulation by the different levels of risk posed by different AI systems and use cases, and increasingly look to international collaborations to inform policy.
Nicolas Catelan, associate professor of law at Sorbonne University Abu Dhabi, explains the complexity: “AI is a tool like any other. But digital borders do not exist, so we need regulation. The EU AI Act covers 27 countries, but on a global stage, it’s hard to find a compelling standard emerging.”
Building international regulatory frameworks is challenging because different countries have different appetites for risk and for innovation, Catelan says. The economic stakes are high: according to a study by PwC, AI could contribute up to $15.7 trillion (£11.6 trillion) to the global economy by 2030.
“The UAE feels that regulations are not good for innovation and that it’s too early,” says Catelan. With innovation and investment in mind, the UAE has avoided creating “hard laws” around the use of AI, favouring broad principles instead, he says.
“There’s a balance to be found between innovation and protecting human rights. AI can be a tremendous tool but there are certain areas where we must be cautious,” he says. “If you don’t want to stifle innovation, you can introduce broad principles – such as that AI must be explainable and transparent and not biased. But this will also depend on the industry, level of risk and the culture of the country.” For example, the use of AI in surgical settings is approached differently from students’ use of generative AI tools in academic work.
Good legislation can maintain this balance, he argues. The EU AI Act, which came into effect in 2024, is a detailed and technical piece of legislation with more than 110 articles. But this suits the compliance-friendly culture of many European countries. For decades, countries have had to develop laws around new technologies without a full picture of their risks and implications. In 1988, for example, France introduced a law against computer fraud that is still used as a legal provision today. “Sub-clauses can make it easier to keep policies relevant, but the goal is to find something that you don’t have to change,” says Catelan. “Ultimately, regulation supports good business because compliance is less expensive than not abiding by the law.”
Find out more about Sorbonne University Abu Dhabi.
