More academic journals allowing AI-generated manuscripts


While many university educators are worried about AI chatbots being used to skip the hard work on assignments, academic publishers appear ready to accept that texts will soon be co-written by artificial intelligence. — Photo: Hannes P. Albert/dpa

LONDON: In a sign that academia is warming to the idea that many published manuscripts could soon be co-written by artificial intelligence, a growing number of journals are now allowing the use of text generated by AI.

The International Studies Association, a partner of Oxford University Press and overseer of six journals covering international affairs and geopolitics, announced on June 11 that “recent developments” with AI mean that “human authors” would have to give “detailed statements of the exact use of AI tools” in what they submit in future.

Authors “should”, the ISA said, “include information on the exact AI tool and where it was used in the creation of the manuscript” and give “rough percentages of reliance on AI tools in writing.”

But while the ISA said AI bots “do not qualify as authors”, it did not rule out the eventual publication of manuscripts conjured up solely by AI, saying only that its editors “are not reviewing or accepting manuscripts compiled exclusively by an AI tool at this time.”

The organization urged “the ISA community to check back for further guidance” as “the situation is changing rapidly.”

The ISA’s guidelines follow a February 2023 statement by the Committee on Publication Ethics (COPE), an umbrella group of university publishers, which told authors that, rather than avoiding AI altogether, they should be transparent about “how the AI tool was used,” and warned that they would be “liable for any breach of publication ethics.”

COPE’s statement in turn came hot on the heels of the American Medical Association’s JAMA network of journals calling for “responsible use of AI language models and transparent reporting of how these tools are used.”

JAMA at the time described ChatGPT’s answers to questions as “mostly well written” but at the same time “formulaic, not up to date.”

More worrying, perhaps, given that it applies to medical research, was JAMA’s citation of findings that the AI chatbot sometimes comes out with “concocted nonexistent evidence for claims or statements it makes” and provides material that is “false or fabricated, without accurate or complete references.” – dpa
