Hostile and hateful remarks proliferate on social networks despite persistent efforts by Facebook, Twitter, Reddit and YouTube to curb them. Now researchers at the comment platform OpenWeb have turned to artificial intelligence to moderate Internet users' comments before they are even posted. The approach appears effective: one third of users modified the text of their comments after receiving a nudge from the new system warning that what they had written might be perceived as offensive.
The study, conducted by OpenWeb using the Perspective API, analysed 400,000 comments that some 50,000 users were preparing to post on sites such as AOL, Salon, Newsweek, RT and Sky Sports.
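To make the mechanism concrete, here is a minimal sketch of how a pre-submission check against the publicly documented Perspective API could work. The threshold value, the API key placeholder and the wording of the nudge are assumptions for illustration only; they are not OpenWeb's actual implementation or parameters.

```python
# Sketch: score a draft comment with the Perspective API and, if it looks
# toxic, return a nudge asking the user to reconsider before posting.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"        # hypothetical placeholder
TOXICITY_THRESHOLD = 0.8        # assumed cut-off, not taken from the study


def toxicity_score(text: str) -> float:
    """Return the Perspective API's estimated probability that `text` is toxic."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(
        PERSPECTIVE_URL, params={"key": API_KEY}, json=payload, timeout=10
    )
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def nudge_before_posting(comment: str) -> str | None:
    """Return a warning message if the draft comment scores as likely offensive."""
    if toxicity_score(comment) >= TOXICITY_THRESHOLD:
        return (
            "Your comment may be perceived as offensive. "
            "Would you like to edit it before posting?"
        )
    return None  # comment is posted as written
```

In such a design the comment is only flagged, never blocked, which matches the article's description of a nudge that leaves the final wording up to the user.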