GENEVA: "This". "I can’t even". "Ratio'd". "Beast mode". "Cheat code". "Same". Questions written minus the question mark as if they’re statements.
Social media platforms such as Twitter are riddled with such quick-to-age neologisms, ironically often used in attempts at a quip or a profundity.
Their use brings to mind a scene in the cult Monty Python film “The Life of Brian”, in which the eponymous central character addresses a crowd of devotees gathered under his bedroom window.
"You’re all individuals," Brian tells the throng, to which they reply, in unison, "Yes, we are all individuals.” "You’re all different,” he continues. "Yes, we are all different,” they chime.
With life unwittingly imitating art, perhaps it is little wonder that AI bots can generate tweets that, to some eyes, read no differently from tweets posted by humans.
That has been shown in a survey by the University of Zurich of almost 700 people in Australia, Canada, Ireland, the UK and the US, who, it turns out, “had trouble distinguishing between tweets made by humans versus those generated by an artificial intelligence (AI).”
Not only that, but the survey results, published in June in the journal Science Advances, showed that respondents were sometimes stumped when it came to telling accurate AI-generated tweets from inaccurate ones.
"The findings imply that AI model GPT-3 and other large language models (LLMs) may both inform and disinform social media users more effectively than humans can,” said Giovanni Spitale of university’s Institute of Biomedical Ethics and History of Medicine (IBME).
The people surveyed were asked by Spitale and colleagues to “evaluate human- and GPT-3-generated tweets containing accurate and inaccurate information about a range of topics, including vaccines and autism, 5G technology, Covid-19, climate change and evolution.” – dpa