The Starter Pistol Has Been Fired for Artificial Intelligence Regulation in Europe

Artificial Intelligence Regulation - Is It Needed?

Regulation of Artificial Intelligence Could Potentially be More Complex and Far Reaching Than GDPR

Paul Nemitz is principal advisor in the Directorate-General Justice and Consumers of the European Commission. It was Nemitz who transposed the underlying principles of data privacy into the legal text that ultimately became the European Union’s General Data Protection Regulation (GDPR).

Now Nemitz has fired the starting gun for what may eventually become a European Regulation providing consumer safeguards against abuse from artificial intelligence (AI). In a new paper published in the Philosophical Transactions of the Royal Society, he warns that democracy itself is threatened by unbridled use of AI.

In the paper titled ‘Constitutional democracy and technology in the age of artificial intelligence’, he warns that too much power, including AI research, is concentrated in the hands of what he calls the ‘frightful five’ (a term used by the New York Times in May 2017): Google, Apple, Facebook, Amazon and Microsoft, also known as GAFAM. His concern is that these and other tech companies have always argued that tech should be above the law because the law does not understand tech and cannot keep up with it.

Their argument, he suggests, is epitomized in Google’s argument in the Court of Justice of the European Union (CJEU) disputing the applicability of EU law on data protection to its search engine, “basically claiming that the selection process of its search engine is beyond its control due to automation in the form of an algorithm.”

The implication of this argument is that the working of AI should not be subject to national laws simply because the purveyors of AI don’t understand how its decisions are reached. Nemitz believes this attitude undermines the very principles of democracy itself. While democracy and laws are concerned with the good of the people, big business is concerned almost exclusively with profit.

He gets some support from the UK’s Information Commissioner Elizabeth Denham. In an unrelated blog published November 6, 2018, discussing the ICO’s investigation into the Facebook/Cambridge Analytica issue, she writes, “We are at a crossroads. Trust and confidence in the integrity of our democratic processes risks being disrupted because the average person has little idea of what is going on behind the scenes.”

“It is these powerful internet technology corporations which have already demonstrated that they cannot be trusted to pursue public interest on a grand scale without the hard hand of the law and its rigorous enforcement setting boundaries and even giving directions and orientation for innovation which are in the public interest,” writes Nemitz. He continues, “In fact, some representatives of these corporations may have themselves recently come to this conclusion and called for legislation on AI.”

Here he specifically refers to a Bloomberg article titled ‘Microsoft Says AI Advances Will Require New Laws, Regulations’. But what the article actually says is, “Over the next two years, Microsoft plans to codify the company’s ethics and design rules to govern its AI work, using staff from [Brad] Smith’s legal group and the AI group run by Executive Vice President Harry Shum. The development of laws will come a few years after that, Smith said.”

In other words, Microsoft expects regulation to take account of what it decides to do in AI, not that AI needs regulation before Microsoft codifies what it wants to do. Again, this implies that big business believes — and acts — as if business is more important than government: that profit supersedes democracy.

Nemitz believes that this attitude, indulged during the internet’s early development, has produced a lawless internet. “Avoiding the law or intentionally breaking it, telling half truths to legislators or trying to ridicule them, as we recently saw in the Cambridge Analytica hearings by Mark Zuckerberg of Facebook, became a sport on both sides of the Atlantic in which digital corporations, digital activists and digital engineers and programmers rubbed shoulders.”

He does neither himself nor his argument any favors, however, in warning that the unregulated internet has evolved into a medium for populists to communicate their ideologies in a manner not suited to democratic discourse. “Trump ruling by Tweet is the best example for this.” While he may be accurate in principle, this personalization opens his argument to the criticism of bias. 

Nemitz believes that big business’s long-standing attitude towards privacy and the internet must not be allowed to embed itself into AI. The implication is that this can only be controlled by regulation, and that regulation must be imposed by law rather than reached by consensus among the tech companies.

Business is likely to disagree. The first argument will be that you simply cannot regulate something as nebulous as artificial intelligence, nor should you wish to.

“Is regulatory control necessary over the navigation algorithm in my Roomba vacuum cleaner?” asks Raj Minhas, VP and director of the Interactions and Analytics Lab at PARC (a Xerox company). “Is regulatory control necessary over the algorithm in my camera that automatically determines the exposure settings? Market forces can easily take care of these and many other similar AI systems.”

It should be noted, however, that Nemitz is not calling for the regulation of AI itself, but for regulation over the use of AI and its effect on consumers. Indeed, in this sense, the European Union already has some AI regulation within GDPR: under Article 22, individuals have the right not to be subject to decisions based solely on automated processing, including profiling. So, if AI within a vacuum cleaner collects data on its user, or if AI in a camera collects information on user interests for targeted advertising by cleaning companies or holiday companies, without consent, that is already illegal under GDPR.
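
As a concrete illustration of that last point, the sketch below shows the kind of consent gate a GDPR-aware profiling pipeline might impose before building an interest profile. It is a minimal, hypothetical example: the names `consent_store` and `profile_user` and the event format are inventions for illustration, not anything drawn from GDPR’s text or from Nemitz’s paper.

```python
# Hypothetical sketch: gate profiling behind a recorded, explicit opt-in.
# All names here (consent_store, profile_user, event fields) are invented.
from datetime import datetime, timezone


class ConsentError(Exception):
    """Raised when profiling is attempted without a recorded opt-in."""


def profile_user(user_id: str, events: list[dict], consent_store: dict) -> dict:
    """Build an interest profile only for users who have opted in."""
    consent = consent_store.get(user_id)
    if not consent or not consent.get("profiling_opt_in"):
        # Without a lawful basis, the profile must not be built at all.
        raise ConsentError(f"no profiling consent on record for {user_id}")
    return {
        "user_id": user_id,
        "interests": sorted({e["category"] for e in events}),
        # Timestamp the processing so it can be audited later.
        "processed_at": datetime.now(timezone.utc).isoformat(),
    }


# Example: only the user who opted in may be profiled; "bob" would raise.
store = {"alice": {"profiling_opt_in": True}, "bob": {"profiling_opt_in": False}}
print(profile_user("alice", [{"category": "travel"}], store))
```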

So, it is the abuse of AI driven by big business’s need for profit, rather than AI itself, that concerns him. GDPR does not attempt to regulate targeted advertising — instead it seeks to regulate the abuse of personal privacy used in targeted advertising. Nemitz believes the same principle-based, technology-neutral approach should be the way forward for regulating AI abuses, even though we do not yet know what those future abuses might be.

His first principle is to remove the subjective elements of human illegality, such as ‘intent’ or ‘negligence’. Then, “it will be important to codify in law the principle that an action carried out by AI is illegal if the same action carried out by a human, abstraction made of subjective elements, would be illegal.”

But he believes the foundation for AI regulation could be required impact assessments. For government use of AI, these assessments would need to be made public. They would underpin ‘the public knowledge and understanding’ of AI, which currently lacks ‘transparency’. The standards for such assessments would need to be set in law. “And as in the GDPR, the compliance with the standards for the impact assessment would have to be controlled by public authorities and non compliance should be subject to sufficiently deterrent sanctions.”
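
To make the idea concrete, one might imagine a machine-readable impact-assessment record along the lines of the sketch below. Every field name here is an assumption for illustration; Nemitz proposes the principle of legally standardized, publicly controlled assessments, not a schema.

```python
# Hypothetical sketch of an AI impact-assessment record. The fields are
# invented for illustration, not taken from any statutory template.
from dataclasses import dataclass, field


@dataclass
class AIImpactAssessment:
    system_name: str
    operator: str
    purpose: str                          # what the AI system is used for
    affected_groups: list[str]            # who the deployment touches
    identified_risks: list[str]           # foreseeable harms to individuals
    mitigations: list[str]                # controls adopted against those risks
    public: bool = False                  # government use would have to be public
    reviewed_by: str = ""                 # the supervising public authority


# Example record for a (fictional) government deployment.
assessment = AIImpactAssessment(
    system_name="benefit-eligibility-scorer",
    operator="Example Agency",
    purpose="Triage benefit applications",
    affected_groups=["benefit applicants"],
    identified_risks=["indirect discrimination via proxy variables"],
    mitigations=["periodic bias audit", "human review of denials"],
    public=True,
)
print(assessment)
```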

But perhaps the key requirement he proposes is that those subject to the use of AI should have a right, to be introduced by law, to “an explanation of how the AI functions, what logic it follows, and how its use affects the interests of the individual concerned, thus the individual impacts of the use of AI on a person, even if the AI does not process personal data.”

In other words, the argument put forward by Google that it is not responsible for the automated decisions of its search algorithms should be rejected, and the same rejection applied to all algorithms within AI. This will force responsibility for the effect of AI onto the user of that AI, regardless of how its decisions are reached.
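
What such an ‘explanation’ might look like in practice is an open question, but model-agnostic feature attribution is one candidate. The sketch below uses scikit-learn’s permutation importance to report which inputs most influence a classifier’s decisions; this is one illustrative technique among many, not anything the paper prescribes.

```python
# Illustrative sketch: a crude, human-readable account of a model's "logic"
# via permutation importance (how much accuracy drops when each input is
# shuffled). Uses a public demo dataset; real deployments would differ.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Estimate each feature's contribution to held-out accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential inputs.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```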

Such ideas and proposals can be viewed as the starting gun for GDPR-style legislation for AI. Nemitz is not a European Commissioner, so this is not an official viewpoint. But he is a principal advisor in the most relevant EC office. It would be unrealistic to think these views are unknown or contrary to current early thinking within the EC. The likelihood is that there will be some GDPR-like legislation in the future. It is many years off — but the arguments start now.

One of the biggest problems is that it could be seen as an issue of who governs. Whether Nemitz views it this way or not, it could be claimed that he is asserting the right of an unelected European Commission to rule over citizens who could otherwise impose their will directly, through pure market forces, without the interference of bureaucrats.

It could also be claimed that it is more driven by politico-economic wishes than by altruism. The ‘frightful five’ are all non-EU companies (i.e. U.S. companies) dominating the market and suppressing EU companies by force of their success. In short, it could be claimed that AI regulation is driven by anti-American economic bias.

Such arguments are already being made. Raj Minhas, while accepting that some of Nemitz’s arguments and conclusions are fair, thinks that overall Nemitz is being too simplistic. He points out that the paper makes no mention of the ‘good’ achieved by the internet. “Would even a small fraction of that have been realized if the development of the internet had been shackled?” he asked SecurityWeek.

“He portrays technology companies (e.g. Google, Apple, Facebook, Amazon, and Microsoft) as shady cabals that are working to undermine democracy. Of course, the reality is far more complex,” he said. “The technologies produced by those companies have done more to spread democracy and individual agency than most governments. The fact that they make lots of money should not automatically be considered a nefarious activity.”

These large corporations are described as monoliths that single-mindedly work to undermine democracy. “Again, the reality is far more complex. These companies face immense pressure from their own employees to act in transparent and ethical ways — they push them to give up lucrative military/government contracts because they don’t align with the values of those employees. The fact that all these companies have a code of ethics for AI research is an outcome of those values rather than a diabolical plot to usurp democracy (as alleged by the author).”

The implication is that regulation is best left to self-regulation by the companies and their employees. This is a view echoed by Nathan Wenzler, senior director of cybersecurity at Moss Adams. He accepts that there will inevitably need to be some regulation to “at least define where liability will rest and allow businesses to make sound decisions around whether or not it’s worth it to pursue the course.” He cites the moral and ethical issues around driverless vehicles, where AI might be forced to decide whom to injure in an unavoidable collision.

But as to more general AI regulation, he suggests, “Government regulators aren’t exactly known for responding quickly to changes in technology matters, and as rapidly as AI programs are moving into becoming integrated into nearly everything, we may quickly reach a point where it simply won’t be possible to regulate it all effectively… In the meantime, the best course of action we have presently is for the businesses involved in developing AI-powered tools and services to make the ethical considerations an integral part of their business decisions. It may be the only way we see the advantages of this technology take flight, while avoiding the potentially devastating down sides.”

Kenneth Sanford, analytics architect and U.S. lead at Dataiku, takes a nuanced view. He separates the operation of AI from the environment in which it is made and deployed. AI itself cannot be regulated. “Algorithms such as deep neural networks and ensemble models create an infinite number of possible recommendations that can never be regulated,” he told SecurityWeek.

He doesn’t think that AI-based decision-making is actually changing much. “We have had personalized suggestions and persuasive advertising for years derived from generalizations and business rules. The main difference today is that these rules are codified in more finely determined micro segments and are delivered in a more seamless fashion in a digital world. In short, the main difference between now and 20 years ago is that we are better at it.”

Any scope for regulation, he suggests, lies in the environment of AI. “What data are collected and how these data are used are a more realistic target for guardrails on the industry,” he suggests.

This, however, is already regulated by GDPR. The unsaid implication is that no further AI-specific regulation is necessary or possible. But if the EU politicians take up the call for AI regulation as put forward by Paul Nemitz — and his influence should not be discounted — then there will be AI regulation. That legislation will potentially be more complex and far reaching than GDPR. The bigger question is not whether it will happen, but to what extent GAFAM will be able to shape it to their own will.

Related: The Malicious Use of Artificial Intelligence in Cybersecurity 

Related: The Role of Artificial Intelligence in Cyber Security 

Related: Privacy Fears Over Artificial Intelligence as Crimestopper 

Related: Financial Regulator’s Algorithm Compliance Concerns Relevant to All Businesses 

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
