Italy’s privacy watchdog has temporarily banned the popular artificial intelligence service ChatGPT made by Microsoft-backed OpenAI, as policymakers across the world seek to respond to the rise of AI products.
The nation’s data protection authority on Friday said it would block access to the chatbot in Italy while it examines issues including the US company’s collection of personal data and a recent cyber security breach.
The move comes as AI experts and ethicists sound the alarm about the enormous amounts of data that services such as ChatGPT consume from tens of millions of users around the world, raising concerns that companies may jeopardise users’ privacy and safety.
Italy’s watchdog launched an investigation into OpenAI after a cyber security breach last week led to people being shown excerpts of other users’ ChatGPT conversations and their financial information.
The data exposed for a nine-hour period included first and last names, billing addresses, credit card types, credit card expiration dates and the last four digits of credit card numbers, according to an email sent by OpenAI to an affected customer and seen by the Financial Times.
The Rome-based watchdog ordered OpenAI, led by chief executive Sam Altman, to stop operating ChatGPT in Italy shortly, after which the company will have 20 days to respond with counter-evidence. If OpenAI fails to respond within the deadline, it could face a fine of up to €20mn.
A spokesperson for OpenAI said: “We have disabled ChatGPT for users in Italy at the request of the Italian Garante [the data protection authority]. We are committed to protecting people’s privacy and we believe we comply with GDPR and other privacy laws.”
The regulator has acted against chatbots before, banning Replika.ai in February. That service was known for users generating conversations involving erotic scenarios.
Italy’s move represents the first regulatory action taken against the popular chatbot.
Experts have been concerned about the huge amount of data being hoovered up by the language models behind ChatGPT. ChatGPT had more than 100mn monthly active users two months after its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was being used by more than 1mn people in 169 countries within two weeks of its release in January.
This week the likes of Elon Musk and Yoshua Bengio, one of the founding fathers of modern AI methods, called for a six-month pause in developing systems more powerful than the newly launched GPT-4, citing major risks to society.
Some industry experts and insiders said the call was hypocritical and merely a way to allow AI “laggards to catch up” with OpenAI, at a time when large tech companies are competing aggressively to release AI products such as ChatGPT and Google’s Bard.
The Italian regulator said it launched an investigation after noting the “absence of a legal basis that justifies the mass collection and storage of personal data, for the purpose of ‘training’ the algorithms” underlying ChatGPT.
It also said that, according to its internal analysis, ChatGPT did “not always provide accurate information”, which it said amounted to a misuse of personal data.
OpenAI has previously said it resolved the cyber security issues behind the leak. However, it will be blocked from processing Italian users’ data through ChatGPT while the probe is in progress.
The regulator also criticised OpenAI’s lack of a filter to verify that children under 13 were not using its service. The watchdog claimed underage users were being exposed to content and information inappropriate for their “level of self-consciousness”.
Generative AI technologies fall under the purview of existing data and digital laws such as the GDPR and the Digital Services Act, which cover some aspects of the technology. However, the EU is preparing a regulation that will govern how AI is used in Europe, with companies that violate the bloc’s rules facing fines of up to €30mn, or 6 per cent of global annual turnover, whichever is larger.