AI detectors ‘biased’ against non-Anglophones, Stanford tests suggest


AI chatbots like ChatGPT have spurred fears about increasing cheating and plagiarism in schools and academia, but tools designed to detect AI-generated content are also causing unease as they tend to ‘incorrectly’ label writing by non-native English speakers as generated by AI, according to researchers at Stanford University. — dpa

DUBLIN: The spread of generative artificial intelligence (AI) has stoked concerns about cheating and plagiarism in education and academia.

In response, a cottage industry of watchdog systems has sprung up, as teachers and publishers turn to so-called detector programmes to scan essays and articles for signs that they were generated by the AI chatbot ChatGPT.

But these watchdog tools, too, are causing unease, as they tend to "incorrectly" label writing by non-native English speakers as generated by AI, according to researchers at Stanford University.

They put seven of the most widely used detectors to the test, using 91 essays written by non-native English speakers for the benchmark Test of English as a Foreign Language (TOEFL).

More than half the essays were dismissed as AI-generated, with one detector almost completely off the mark, labelling nearly 98% of the essays as written by AI.

When it came to native speakers, the detectors proved more accurate and were able to "correctly classify more than 90% of essays written by eighth-grade students from the US as human-generated," the researchers said.

At the same time, however, the tests suggested that essays using "complex and fancier words" – not typically a staple of eighth-grade writing – were "more likely to be classified as human written."

Either way, the researchers said their findings mean the detectors should not be seen as reliable.

"Our current recommendation is that we should be extremely careful about and maybe try to avoid using these detectors as much as possible," said Stanford's James Zou.

"It can have significant consequences if these detectors are used to review things like job applications, college entrance essays or high school assignments," Zou warned.

Part of the problem, it seems, is that the detectors are geared up to red-flag "low perplexity" English as AI-generated.

In other words, if you use layman's terms or plain English – or "common words," as the Stanford team puts it – the detector bots might dismiss you as another bot. – dpa
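The "perplexity" idea can be made concrete with a toy sketch. Perplexity measures how surprising a piece of text is to a language model: common, predictable wording scores low, rare wording scores high. The snippet below is a minimal illustration using a simple unigram model with add-one smoothing over a made-up corpus – an assumption for demonstration only, not the far larger language models that real detectors use – to show why plain phrasing yields lower perplexity than fancier vocabulary.

```python
import math
from collections import Counter

def unigram_perplexity(text, corpus_counts, total):
    """Perplexity of `text` under a unigram model with add-one smoothing."""
    words = text.lower().split()
    vocab = len(corpus_counts)
    log_prob = 0.0
    for w in words:
        # Smoothed probability: unseen words still get a small, non-zero mass.
        p = (corpus_counts.get(w, 0) + 1) / (total + vocab)
        log_prob += math.log(p)
    # Perplexity is the exponentiated average negative log-probability.
    return math.exp(-log_prob / len(words))

# Toy "training" corpus dominated by everyday words.
corpus = ("the cat sat on the mat and the dog ran to the park "
          "a man walked to the shop and bought some milk").split()
counts = Counter(corpus)
total = len(corpus)

plain = "the dog ran to the shop"          # common words -> low perplexity
fancy = "the canine perambulated thither"  # rarer words -> high perplexity

print(unigram_perplexity(plain, counts, total))
print(unigram_perplexity(fancy, counts, total))
```

In this sketch the plain sentence scores a much lower perplexity than the fancy one, which mirrors the bias the researchers describe: a detector that red-flags low-perplexity text will tend to flag writers who favour common words, including many non-native speakers.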
