MILAN (Reuters) - Italy's data protection authority has told OpenAI that its artificial intelligence chatbot application ChatGPT breaches data protection rules, the watchdog said on Monday, as it presses ahead with an investigation started last year.
The authority, known as Garante, is one of the European Union's most proactive in assessing AI platform compliance with the bloc's data privacy regime. Last year, it banned ChatGPT over alleged breaches of EU privacy rules.
The service was reactivated after OpenAI addressed issues concerning, amongst other things, the right of users to decline to consent to the use of personal data to train algorithms.
At the time, the regulator said it would continue its investigations. It has since concluded that elements indicate one or more potential data privacy violations, it said in a statement without providing further detail.
In an emailed statement, OpenAI said it believes its practices are aligned with the EU's privacy laws. "We actively work to reduce personal data in training our systems like ChatGPT," it said, adding that it "plans to continue to work constructively with the Garante".
The Garante on Monday said Microsoft-backed OpenAI has 30 days to present defence arguments, adding that its investigation would take into account work done by a European task force comprising national privacy watchdogs.
Italy was the first West European country to curb ChatGPT, whose rapid development has attracted attention from lawmakers and regulators.
Under the EU's General Data Protection Regulation (GDPR), which came into force in 2018, any company found to have broken the rules faces fines of up to 4% of its global turnover.
In December, EU lawmakers and governments agreed provisional terms for regulating AI systems such as ChatGPT, moving a step closer to setting rules to govern the technology.
(Reporting by Elvira Pollina; Editing by Emelia Sithole-Matarise, David Goodman and Barbara Lewis)