
SecurityWeek


EU Proposes Rules for Artificial Intelligence to Limit Risks

Artificial Intelligence Regulation in Europe

The European Union unveiled proposals Wednesday to regulate artificial intelligence that call for strict rules and safeguards on risky applications of the rapidly developing technology.


The report is part of the bloc’s wider digital strategy aimed at maintaining its position as the global pacesetter on technological standards. Big tech companies seeking to tap Europe’s vast and lucrative market, including those from the U.S. and China, would have to play by any new rules that come into force.

The EU’s executive Commission said it wants to develop a “framework for trustworthy artificial intelligence.” European Commission President Ursula von der Leyen had ordered her top deputies to come up with a coordinated European approach to artificial intelligence and data strategy within 100 days of her taking office in December.

“We will be particularly careful where essential human rights and interests are at stake,” von der Leyen told reporters in Brussels. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights.”

EU leaders, keen on establishing “technological sovereignty,” also released a strategy to unlock data from the continent’s businesses and the public sector so it can be harnessed for further innovation in artificial intelligence. Officials in Europe, which doesn’t have any homegrown tech giants, hope to catch up with the U.S. and China by using the bloc’s vast and growing trove of industrial data for what they anticipate is a coming wave of digital transformation.

They also warned that even more regulation for foreign tech companies is in store with the upcoming “Digital Services Act,” a sweeping overhaul of how the bloc treats digital companies, including potentially holding them liable for illegal content posted on their platforms. A steady stream of Silicon Valley tech bosses, including Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Microsoft President Brad Smith, have visited Brussels in recent weeks as part of apparent lobbying efforts.

“It is not us that need to adapt to today’s platforms. It is the platforms that need to adapt to Europe,” said Thierry Breton, commissioner for the internal market. “That is the message that we delivered to CEOs of these platforms when they come to see us.”

If the tech companies aren’t able to build systems “for our people, then we will regulate, and we are ready to do this in the Digital Services Act at the end of the year,” he said.


The EU’s report said clear rules are needed to address “high-risk AI systems,” such as those in recruitment, healthcare, law enforcement or transport, which should be “transparent, traceable and guarantee human oversight.” Other artificial intelligence systems could come with labels certifying that they are in line with EU standards.

Artificial intelligence uses computers to process large sets of data and make decisions without human input. It is used, for example, to trade stocks in financial markets, or, in some countries, to scan faces in crowds to find criminal suspects.

While it can be used to improve healthcare, make farming more efficient or combat climate change, it also brings risks. It can be unclear what data artificial intelligence systems work off. Facial recognition systems can be biased against certain social groups, for example. There are also concerns about privacy and the use of the technology for criminal purposes, the report said.

Human-centered guidelines for artificial intelligence are essential because “none of the positive things will be achieved if we distrust the technology,” said Margrethe Vestager, the executive vice president overseeing the EU’s digital strategy.

Under the proposals, which are open for public consultation until May 19, EU authorities want to be able to test and certify the data used by the algorithms that power artificial intelligence in the same way they check cosmetics, cars and toys.

It’s important to use unbiased data to train high-risk artificial intelligence systems so they can avoid discrimination, the commission said.

Specifically, AI systems could be required to use data reflecting gender, ethnicity and “other possible grounds of prohibited discrimination.”

Other ideas include preserving data to help trace any problems and having AI systems clearly spell out their capabilities and limitations. Users should be told when they’re interacting with a machine and not a human while humans should be in charge of the system and have the final say on decisions such as rejecting an application for welfare benefits, the report said.

EU leaders said they also wanted to open a debate on when to allow facial recognition in remote identification systems, which are used to scan crowds and check people’s faces against those in a database. It’s considered the “most intrusive form” of the technology and is prohibited in the EU except in special cases.

Related: The Starter Pistol Has Been Fired for AI Regulation in Europe

Related: Privacy Fears Over Artificial Intelligence as Crimestopper
