STOCKHOLM/MILAN/BERLIN, April 3 (Reuters) – Italy’s move to temporarily ban ChatGPT has inspired other European countries to study whether harsher measures are needed to rein in the wildly popular chatbots and whether to coordinate such actions.
While European parliamentarians disagree over the content and reach of the EU AI Act, some regulators are finding that existing tools, such as the General Data Protection Regulation (GDPR) that gives users control over their personal information, can apply to the rapidly emerging category of generative AI companies.
Generative AI, such as OpenAI’s ChatGPT, relies on algorithms to generate remarkably human responses to text queries based on analyzing large volumes of data, some of which may be owned by internet users.
Italy’s data protection agency, known as Garante, accused Microsoft Corp-backed (MSFT.O) OpenAI of failing to check the age of ChatGPT users and of the “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” the chatbot.
“The points they raise are fundamental and show that GDPR does offer tools for the regulators to be involved and engaged in shaping the future of AI,” said Dessislava Savova, partner at law firm Clifford Chance.
Privacy regulators in France and Ireland have reached out to counterparts in Italy to find out more about the basis of the ban. Germany could follow in Italy’s footsteps by blocking ChatGPT over data security concerns, the German commissioner for data protection told the Handelsblatt newspaper.
“We are following up with the Italian regulator,” said a spokesperson for Ireland’s Data Protection Commissioner. “We will coordinate with all EU data protection authorities in relation to this matter.”
The privacy regulator in Sweden, however, said it had no plans to ban ChatGPT, nor was it in contact with the Italian watchdog. Spain’s regulator said it had not received any complaint about ChatGPT but did not rule out a future investigation.
Italy’s Garante, like other privacy regulators, is independent of the government and was also among the first to formally warn Chinese-owned TikTok over breaches of existing European Union privacy rules.
While privacy regulators favour tougher oversight, governments have been more lenient.
Italy’s deputy prime minister criticized the country’s own regulator’s decision as “excessive”, and a German government spokesman said a ban on ChatGPT would not be necessary.
The Italian authority’s move last week was aimed at starting a dialogue with the company to address the issues raised over ChatGPT’s compliance with EU data protection rules, not at banning the tool, a source familiar with the matter said.
OpenAI had not responded to the regulator as of the weekend, the source said. Meanwhile, OpenAI took ChatGPT offline in Italy on Friday. It did not respond to questions about other European regulators looking into potential violations in their countries.
It has no offices in the European Union.
OpenAI, whose artificial intelligence platform took the world by storm after its launch in November, said on Friday it actively works to reduce personal data in training its AI systems.
The Italian investigation into OpenAI was launched after a nine-hour cyber security breach last month led to people being shown excerpts of other users’ ChatGPT conversations and their financial information.
Italy is the first Western country to take action against a chatbot powered by artificial intelligence.
While the Italian regulator has so far singled out only ChatGPT because of its popularity, other AI platforms such as Alphabet Inc’s (GOOGL.O) Google Bard might be questioned too, several experts said.
“Unlike ChatGPT, Google is more likely to have taken that into account already because of its history in Europe and because of the size of the organization,” Savova said.
Reporting by Rachel More in Berlin, Padraic Halpin in Dublin, Tassilo Hummel in Paris, Supantha Mukherjee in Stockholm, Elvira Pollina in Milan and Emma-Victoria Farr in Frankfurt
Editing by Matthias Williams, David Goodman, Kenneth Li and Richard Chang