More academic journals allowing AI-generated manuscripts


While many university educators are worried about AI chatbots being used to skip the hard work on assignments, academic publishers appear ready to accept that texts will soon be co-written by artificial intelligence. — Photo: Hannes P. Albert/dpa

LONDON: In a sign that academia is warming to the idea that many published manuscripts could soon be co-written by artificial intelligence, a growing number of journals are now allowing the use of text generated by AI.

The International Studies Association, a partner of Oxford University Press and overseer of six journals covering international affairs and geopolitics, announced on June 11 that “recent developments” with AI mean that “human authors” would have to give “detailed statements of the exact use of AI tools” in what they submit in future.

Authors “should”, the ISA said, “include information on the exact AI tool and where it was used in the creation of the manuscript” and give “rough percentages of reliance on AI tools in writing.”

But while the ISA said AI bots “do not qualify as authors”, it did not rule out the eventual publication of manuscripts conjured up solely by AI, saying only that its editors “are not reviewing or accepting manuscripts compiled exclusively by an AI tool at this time.”

The organization urged “the ISA community to check back for further guidance” as “the situation is changing rapidly.”

The ISA’s guidelines follow those of the Committee on Publication Ethics (COPE), an umbrella group of university publishers, which in February 2023 told authors that, rather than avoiding AI altogether, they should be transparent about “how the AI tool was used,” warning them they would be “liable for any breach of publication ethics.”

COPE’s statement in turn came hot on the heels of the American Medical Association’s JAMA network of journals calling for “responsible use of AI language models and transparent reporting of how these tools are used.”

JAMA at the time described ChatGPT’s answers to questions as “mostly well written” but at the same time “formulaic, not up to date.”

More worrying, perhaps, given that it applies to medical research, was JAMA’s citation of findings that the AI chatbot sometimes produces “concocted nonexistent evidence for claims or statements it makes” and provides material that is “false or fabricated, without accurate or complete references.” – dpa
