Academic publishers and AI do not need to be enemies

Generative AI firms should stop ripping off publishers and instead work with them to enrich scholarship, says Oxford University Press’ David Clark

April 4, 2024

Fear, despair, optimism, anxiety – for much of the past year, all these emotions and more have surfaced whenever generative AI is mentioned.

We know that this technology, driven by large language models, will shape all our futures, not least as a tool for the discovery and retrieval of knowledge. This might feel like it is out of our hands, but scholarly publishers, and the researchers we partner with, are at a crossroads – do we resist or do we engage?

Earlier this month, the Publishers Association, the member organisation for UK publishers, wrote a letter to technology companies to express our concerns about the use of copyrighted works in the training, development and operation of AI models. It underlined that we “do not, outside of any agreed licensing arrangements to the contrary, authorise or otherwise grant permission for the use of any of their copyright-protected works in relation to, without limitation, the training, development or operation of AI models including large language models or other generative AI products”. The bottom line is that we are not willing to let the works we have published, the work of hard-working researchers and authors across the world, be used unless appropriate licences have been agreed.

But this is not about resisting generative AI and what it can offer. This is a request to engage, and to engage in good faith.

Experience tells us that we should not stand aside at this moment. Just as Google and other search engines became the leading way in which most scholars retrieve academic literature, with academic publishers engaging to make that happen, large language models will create the tools that enable scholars and students to access and understand the latest developments in research.


But how we allow AI to shape the future of scholarly communications must be insight-led, grounded in an understanding of the perceptions, concerns and potential opportunities within the scholarly community. At Oxford University Press, we are currently conducting a survey of academic researchers to better understand the impact of AI technology throughout the research process. We need to understand experiences across the research spectrum, whether early-career or established researchers, different disciplines and subject areas, or different countries and languages. How students, researchers and librarians engage with generative AI will be critical to determining how we should engage with these technologies and the companies developing them.

There are, of course, good reasons to be concerned. As stated in the Publishers Association’s letter, publishers across the industry are aware of the use of “vast amounts of copyright-protected works without the authorisation of the right holder in the training, development, and operation of AI models”. The risk for publishers and, fundamentally, for research authors is the potential power of AI technologies to absorb, retain and re-use knowledge. Against these risks, publishers are balancing the need to adapt – and quickly – to this new world, with the need to ensure that published material is neither overlooked as a critical source of knowledge nor simply taken without appropriate authorisation, remuneration and attribution.

Scholarly publishers and authors have a responsibility to play an active role in how the knowledge paradigm shifts and, in doing so, create the opportunity to preserve the ecosystem that supports academia and the intellectual property which sustains it. Chief among the opportunities is the chance to ensure that generative AI respects authorship and intellectual property, discovers content and refers users to the original or primary sources, and does not encourage intentional or unintentional plagiarism.

The recently proposed policy for monographs to be made freely available under open-access licences within two years of publication as part of the requirements for the UK’s Research Excellence Framework also raises critical questions about the intersection of AI and open access for the scholarly community. Making books available for open access under Creative Commons Attribution (CC BY) licences risks enabling commercial generative AI uses of those works with limited safeguards or recompense for authors.

It is unclear, for example, how authors can be properly attributed for their work within a generative AI environment. We advocate strongly for a broader working relationship between technology companies and publishers that centres on the core principles of authorisation and attribution, whatever the publishing model by which research is made available.

Over time, new uses of generative AI will emerge, driving new ways of using content. This will lead to new uses of scholarship and new scholarship itself, as well as new funding opportunities. If future AI technologies are developed working with publishers, researchers and authors, it will lead to better, more sustainable and less biased tools, which will in turn be used to create improved research outcomes. A winning outcome for all.

David Clark is managing director of Oxford University Press’ Academic Division.
