GPTZero app seeks to thwart AI plagiarism in schools, online media

Journalists, screenwriters and college professors are among a widening group of people concerned about eventually losing their livelihoods to artificial intelligence programs like ChatGPT, which can produce copy faster and possibly better than humans. But one entrepreneur is building technology to make it easier to distinguish text written by people from text composed by a machine.

Edward Tian, a 22-year-old Princeton University student studying computer science and journalism, developed an app called GPTZero to deter the misuse of the viral chatbot ChatGPT in classrooms. The app has racked up 1.2 million registered users since January.

He’s now launching a new program called Origin, aimed at “saving journalism” by distinguishing AI-generated disinformation from fact in online media. Tian has secured US$3.5mil (RM15.5mil) in funding co-led by Uncork Capital and Neo Capital, with tech investors including Emad Mostaque, chief executive officer of Stability AI Ltd, and Jack Altman.

GPTZero analyzes the randomness of text, known as perplexity, and the uniformity of that randomness across the text, called burstiness, to identify when AI is being used. The tool has an accuracy rate of 99% on human text and 85% on AI text, according to the company.
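In rough terms, perplexity measures how surprised a language model is by each successive word, and burstiness measures how much that surprise varies from sentence to sentence; human prose tends to swing between plain and surprising sentences, while machine text is often uniformly predictable. The sketch below illustrates the concept only (it is not GPTZero’s implementation), using the open-source GPT-2 model as a stand-in scorer via the Hugging Face transformers library:

```python
# Illustrative sketch of perplexity and burstiness, NOT GPTZero's method:
# GPT-2 is used here as a stand-in scoring model chosen for availability.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(sentence: str) -> float:
    """How 'surprised' the model is by a sentence (lower = more predictable)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids makes the model return mean cross-entropy.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def burstiness(sentences: list[str]) -> float:
    """Standard deviation of per-sentence perplexity: human writing tends
    to vary more from sentence to sentence than machine-generated text."""
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

sample = [
    "The cat sat on the mat.",
    "Quantum decoherence complicates any macroscopic superposition.",
]
print([round(perplexity(s), 1) for s in sample])  # per-sentence perplexity
print(round(burstiness(sample), 1))               # spread across sentences
```

A real detector would calibrate decision thresholds on large corpora of human and AI text; the raw numbers above are only meaningful relative to each other.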

The 10-person team now wants to empower journalism and is talking with large media organizations such as the BBC, as well as industry executives including former New York Times Chief Executive Officer Mark Thompson, about partnerships for AI detection and analysis. The company also sees uses for its technology in trust and safety, government, copyright, finance, law and more.

"We believe we can get the smartest people working on AI detection in a room together,” said Tian. "The field of detection is so new and we believe it deserves more attention and support.”

Lack of tools

OpenAI, the company behind ChatGPT, has launched an AI text classifier to detect machine-generated content, but it’s far from foolproof. The tool correctly identifies only 26% of AI-written text as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time. The classifier also works “significantly worse” in languages other than English and is “unreliable” on code and shorter texts. For inputs that are very different from the text in the tool’s training set, the classifier can also be wrong, according to OpenAI.

"Our classifier has a number of important limitations,” the company acknowledges on the website. "It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text.”

The unreliability of detection tools poses a dilemma for educators. Even if a student’s assignment is flagged with a 70% likelihood of being AI-generated, it is hard for teachers to take decisive action as long as the tools fall short of 100% accuracy.

"I don’t think we know what to do with a flag that says there might be an issue,” said Jack Cushman, director of the Harvard Library Innovative Lab, which explores topics such as the impact of the internet. "All you can do at that point is talk with a student and say you might have committed academic dishonesty according to this tool.”

Meanwhile, the definition of plagiarism is also evolving with the emergence of AI. “It is going to challenge the whole notion of academic honesty, because sometimes having a tool that recommends a sentence or two or helps with citations is going to be legitimate, in the same way as using a calculator to do math work,” he said. “The best answer is you shouldn’t let it write the whole thing.”

Rise of deepfakes

Nick Loui, co-founder and CEO of PeakMetrics, a startup that helps governments and large companies combat disinformation, said his clients are less concerned about the threat of AI-generated text than about the proliferation of deepfake videos, for example, where manipulated content has been put to more malicious use.

The technical limitations of detection technology so far, along with the lack of a clear path to monetization, have made it difficult to attract investment. Current detection tools are transitory products, said Sheila Gulati, managing director at Tola Capital, a VC firm that focuses on AI startups, because blocking a new and emergent technology is generally not a great way to leverage it. “I think the eventual state of this will just be much more sophisticated.”

Some industry observers say open sourcing, which makes software’s source code publicly available and allows users to view, modify and distribute it freely, is good for large language model products as it reduces costs, increases transparency and promotes innovation.

However, open-source software is also more easily hackable and can make detection tools more prone to exploits. “It’s a bit like showing a burglar the blueprint for how your home surveillance network is set up,” said Alex Cui, chief technology officer and co-founder of GPTZero. – Bloomberg
