
EU Unveils AI Rules to Tackle Big Brother Fears

European rules on AI


The EU unveiled a plan Wednesday to regulate the sprawling field of artificial intelligence, aimed at helping Europe catch up in the new tech revolution while curbing the threat of Big Brother-like abuses.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” EU competition chief Margrethe Vestager said. 

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”

The European Commission, the bloc’s executive arm, has been preparing the proposal for more than a year and a debate involving the European Parliament and 27 member states is to go on for months more before a definitive text is in force.

The EU is looking to set the terms with its first ever legal package on AI and catch up with the US and China in a sector that spans from voice recognition to insurance and law enforcement.

The bloc is trying to learn the lessons after largely missing out on the internet revolution and failing to produce any major competitors to match the giants of Silicon Valley or their Chinese counterparts.

But the plans have drawn competing concerns from big tech and civil liberties groups, which argue that the EU is either overreaching or not going far enough.

– ‘High-risk’ –


To promote innovation, Brussels wants to provide a clear legal framework for companies across the bloc’s 27 member states.

“Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market,” EU internal market commissioner Thierry Breton said.

The draft regulation lays out a “risk-based approach” that would lead to bans on a very limited number of uses deemed to present an “unacceptable risk” to EU fundamental rights.

This would make “generalized surveillance” of the population off-limits as well as any tech “used to manipulate the behaviour, opinions or decisions” of citizens.

Anything resembling a social rating of individuals based on their behavior or personality would also be prohibited.

On the rung below, the regulation requires companies to get a special authorisation for applications deemed “high-risk” before they reach the market.

These systems would include “remote biometric identification of persons in public places” — including facial recognition — as well as “security elements in critical public infrastructure”.

Special exceptions are envisioned for allowing the use of mass facial recognition systems in cases such as searching for a missing child, averting a terror threat, or tracking down someone suspected of a serious crime.

Military applications of artificial intelligence will not be covered by the rules.

Other uses, not classified as “high risk”, will have no additional regulatory constraints beyond existing ones.

Infringements, depending on their seriousness, may bring heavy fines for companies. 

– Difficult balance –

Google and other tech giants are taking the EU’s AI strategy very seriously as Europe often sets a standard on how tech is regulated around the world.

Last year, Google warned that the EU’s definition of artificial intelligence was too broad and that Brussels must refrain from over-regulating a crucial technology.

Alexandre de Streel, co-director of the Centre on Regulation in Europe think tank, said there is a difficult balance to be struck between protection and innovation.

The text “sets a relatively open framework and everything will depend on how it is interpreted,” he told AFP.

Tech lobbyist Christian Borggreen, from the Computer and Communications Industry Association, welcomed the EU’s risk-based approach, but warned against stifling industry. 

“We hope the proposal will be further clarified and targeted to avoid unnecessary red tape for developers and users,” he said in a statement. 

Civil liberties activists warned ahead of the unveiling that the rules do not go far enough in curbing potential abuses of cutting-edge technologies.

“It still allows some problematic uses, such as mass biometric surveillance,” said Orsolya Reich of umbrella group Liberties. 

“The EU must take a stronger position… and ban indiscriminate surveillance of the population without allowing exceptions.”

Related: Hunting the Snark with Machine Learning, AI, and Cognitive Computing

Related: Privacy Fears Over Artificial Intelligence as Crimestopper

Written By

AFP 2023
