NEW YORK: The news rating group NewsGuard has found dozens of news websites generated by AI chatbots proliferating online, according to a report, raising questions about how the technology may supercharge established fraud techniques.
The 49 websites, which were independently reviewed by Bloomberg, run the gamut. Some are dressed up as breaking news sites with generic-sounding names like News Live 79 and the Daily Business Post, while others share lifestyle tips and celebrity news, or publish sponsored content.
But none disclose that they’re populated using AI chatbots such as OpenAI Inc’s ChatGPT and potentially Alphabet Inc’s Google Bard, which can generate detailed text based on simple user prompts.
Many of the websites began publishing this year as AI tools began to be widely used by the public.
In several instances, NewsGuard documented how the chatbots generated falsehoods for published pieces.
In April alone, a website called CelebritiesDeaths.com published an article titled: “Biden dead. Harris acting President, address 9am”.
Another concocted facts about the life and works of an architect as part of a falsified obituary.
And a site called TNewsNetwork published an unverified story about the deaths of thousands of soldiers in the Russia-Ukraine war based on a YouTube video.
The majority of the sites appear to be content farms, low-quality websites run by anonymous operators that churn out posts to bring in advertising.
The websites are based all over the world and are published in several languages, including English, Portuguese, Tagalog and Thai, NewsGuard said in its report.
A handful of sites generated some revenue by advertising “guest posting”, in which people can order mentions of their business on the websites for a fee to help their search ranking.
Others appeared to attempt to build an audience on social media, such as ScoopEarth.com, which publishes celebrity biographies and whose related Facebook page has a following of 124,000.
More than half the sites make money by running programmatic ads, where space for ads on the sites is bought and sold automatically using algorithms.
The concerns are challenging for Google, whose AI chatbot Bard may have been utilised by the sites and whose advertising technology generates revenue for half of them.
NewsGuard co-chief executive officer Gordon Crovitz said the group’s report showed that companies like OpenAI and Google should take care to train their models not to fabricate news.
“Using AI models known for making up facts to produce what only look like news websites is fraud masquerading as journalism,” said Crovitz, a former publisher of the Wall Street Journal.
OpenAI didn’t immediately respond to a request for comment but has previously stated that it uses a mix of human reviewers and automated systems to identify and act on misuse of its models, including issuing warnings or, in severe cases, banning users.
In response to questions from Bloomberg about whether the AI-generated websites violated their advertising policies, Google spokesperson Michael Aciman said that the company doesn’t allow ads to run alongside harmful or spammy content or content that has been copied from other sites.
“When enforcing these policies, we focus on the quality of the content rather than how it was created, and we block or remove ads from serving if we detect violations,” Aciman said in a statement.
Google added that after Bloomberg got in touch, it removed ads from serving on some individual pages across the sites, and in instances where the company found pervasive violations, it removed ads from the websites entirely.
Google said that the presence of AI-generated content is not inherently a violation of its ad policies but that it evaluates content against its existing publisher policies.
And it said that using automation, including AI, to generate content with the purpose of manipulating ranking in search results violates the company’s spam policies.
The company regularly monitors abuse trends within its advertising ecosystem and adjusts its policies and enforcement systems accordingly, it said.
Noah Giansiracusa, an associate professor of data science and mathematics at Bentley University, said the scheme may not be new, but it’s gotten easier, faster and cheaper.
The actors pushing this brand of fraud “are going to keep experimenting to find what’s effective,” Giansiracusa said.
“As more newsrooms start leaning into AI and automating more, and the content mills are automating more, the top and the bottom are going to meet in the middle” to create an online information ecosystem with vastly lower quality. — Bloomberg