Italy Temporarily Blocks ChatGPT Over Privacy Concerns

Italy is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach as it investigates a possible violation of stringent European Union data protection rules, the government’s privacy watchdog said Friday.

The Italian Data Protection Authority said it was taking provisional action “until ChatGPT respects privacy,” including temporarily limiting the company from processing Italian users’ data. 

U.S.-based OpenAI, which developed the chatbot, said late Friday night it has disabled ChatGPT for Italian users at the government’s request. The company said it believes its practices comply with European privacy laws and hopes to make ChatGPT available again soon.

While some public schools and universities around the world have blocked ChatGPT from their local networks over student plagiarism concerns, Italy’s action is “the first nation-scale restriction of a mainstream AI platform by a democracy,” said Alp Toker, director of the advocacy group NetBlocks, which monitors internet access worldwide.

The restriction affects the web version of ChatGPT, popularly used as a writing assistant, but is unlikely to affect software applications from companies that already have licenses with OpenAI to use the same technology driving the chatbot, such as Microsoft’s Bing search engine.

The AI systems that power such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.

The Italian watchdog said OpenAI must report within 20 days what measures it has taken to ensure the privacy of users’ data or face a fine of up to either 20 million euros (nearly $22 million) or 4% of annual global revenue.

The agency’s statement cites the EU’s General Data Protection Regulation and pointed to a recent data breach involving ChatGPT “users’ conversations” and information about subscriber payments.

OpenAI earlier announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles, or subject lines, of other users’ chat history.

“Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user,” the company had said. “We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted.”

Italy’s privacy watchdog, known as the Garante, also questioned whether OpenAI had legal justification for its “massive collection and processing of personal data” used to train the platform’s algorithms. And it said ChatGPT can sometimes generate — and store — false information about individuals.

Finally, it noted there’s no system to verify users’ ages, exposing children to responses “absolutely inappropriate to their age and awareness.”

OpenAI said in response that it works “to reduce personal data in training our AI systems like ChatGPT because we want our AI to learn about the world, not about private individuals.” 

“We also believe that AI regulation is necessary — so we look forward to working closely with the Garante and educating them on how our systems are built and used,” the company said.

The Italian watchdog’s move comes amid growing concerns about the artificial intelligence boom. A group of scientists and tech industry leaders published an open letter Wednesday calling for companies such as OpenAI to pause the development of more powerful AI models for at least six months to give society time to weigh the risks.

The president of Italy’s privacy watchdog agency told Italian state TV Friday evening he was one of those who signed the appeal. Pasquale Stanzione said he did so because “it’s not clear what aims are being pursued” ultimately by those developing AI. 

If AI should “impinge” on a person’s “self-determination” then “this is very dangerous,” Stanzione said. He also described the absence of filters for users younger than 13 as “rather grave.”

San Francisco-based OpenAI’s CEO, Sam Altman, announced this week that he’s embarking on a six-continent trip in May to talk about the technology with users and developers. That includes a stop planned for Brussels, where European Union lawmakers have been negotiating sweeping new rules to limit high-risk AI tools, as well as visits to Madrid, Munich, London and Paris.

European consumer group BEUC called Thursday for EU authorities and the bloc’s 27 member nations to investigate ChatGPT and similar AI chatbots. BEUC said it could be years before the EU’s AI legislation takes effect, so authorities need to act faster to protect consumers from possible risks.

“In only a few months, we have seen a massive take-up of ChatGPT, and this is only the beginning,” BEUC Deputy Director General Ursula Pachl said.

Waiting for the EU’s AI Act “is not good enough as there are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people.”

Related: ChatGPT Data Breach Confirmed as Security Firm Warns of Vulnerable Component Exploitation

Related: ChatGPT Integrated Into Cybersecurity Products as Industry Tests Its Capabilities

Related: ChatGPT and the Growing Threat of Bring Your Own AI to the SOC

Related: ‘Grim’ Criminal Abuse of ChatGPT is Coming, Europol Warns 

Related: Microsoft Invests Billions in ChatGPT-Maker OpenAI