The performance of conversational AI tools like ChatGPT represents a real challenge for review platforms like TripAdvisor. These sites have to deploy new tools and expertise to check whether reviews left by users are trustworthy. In this context, the American giant recently published its "Review Transparency Report," shining light on how the platform spots fake or fraudulent reviews.
Whether you're looking for a restaurant or a hotel for your next vacation, or browsing ideas for days out and activities, TripAdvisor has long been a key tool for planning a trip. It can be especially helpful when you have no local knowledge of a place, particularly overseas, as the comments of travelers who have already been there can give you an idea of the experience that awaits. But sometimes, fake and fraudulent reviews can find their way onto the American platform, tarnishing the reputation of the reviews site. In fact, some travelers might now question whether they can trust what they read.
But TripAdvisor is taking action. To shine light on the site's moderation practices, the platform recently published a "Review Transparency Report" based on 2022 data, according to which 30.2 million reviews were posted by its 17.4 million members. More than half of these reviews were written from a European country (51.86%), with the next largest proportion hailing from North America (25.21%). Of this total, the share of reviews determined by TripAdvisor to be fake or fraudulent amounts to some 1.3 million posts, or 4.4%. According to the platform, 72% of them were detected before they were even visible, and contested reviews are reportedly dealt with in less than six hours in 78.80% of cases.
Fake reviews written in India and Russia
To achieve this, two methods are used, starting with an algorithm that does most of the sorting. A team of moderators complements this by conducting further investigations to verify whether a review is genuinely valid. It is not only a question of establishing whether a given review complies with TripAdvisor's terms of use, but above all of investigating whether a negative review might come from a competing company or, on the contrary, whether a glowing review might come from a person closely linked to the business in question. In addition, the American giant claims to be able to detect reviews posted by workers paid for this task. It reports that this type of fake review tends to come from certain specific countries, such as India, the source of 15.68% of these fake reviews. Such misleading content also originates from Russia (13%), and to a lesser extent from the United States (8.18%), Turkey (6.60%) and Italy (5.95%).
At a time when artificial intelligence is in the spotlight, and given the many possibilities offered by ChatGPT, the AI chatbot developed by OpenAI, this technology represents a new challenge for TripAdvisor, the American platform says. While the firm identifies AI as a major innovation, it is also aware that this tool could be used as a way "to manipulate content." "Our 'Trust & Safety' team will continue to monitor the use of these tools on the platform and will take all available steps to stay ahead of threats to TripAdvisor’s brand integrity," the platform says. – AFP Relaxnews