
UN Urges Moratorium on AI Tech That Threatens Rights

Regulating AI

The UN called Wednesday for a moratorium on artificial intelligence systems like facial recognition technology that threaten human rights until “guardrails” are in place against violations.

UN High Commissioner for Human Rights Michelle Bachelet warned that “AI technologies can have negative, even catastrophic effects if they are used without sufficient regard to how they affect people’s human rights.”

She called for assessments of the risks various AI technologies pose to rights such as privacy and freedom of movement and expression.

She said countries should ban or heavily regulate the ones that pose the greatest threats.

But while such assessments are under way, she said that “states should place moratoriums on the use of potentially high-risk technology”.

Presenting a fresh report on the issue, she pointed to the use of profiling and automated decision-making technologies.

She acknowledged that “the power of AI to serve people is undeniable.”

“But so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility,” she said.


“Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”

– ‘Damage human lives’ –

The report, which was called for by the UN Human Rights Council, looked at how countries and businesses have often hastily implemented AI technologies without properly evaluating how they work and what impact they will have.

The report found that AI systems are used to determine who gets access to public services and who gets recruited for jobs, and that they affect what information people see and can share online, Bachelet said.

Faulty AI tools have led to people being unfairly denied social security benefits, while innocent people have been arrested due to flawed facial recognition.

“The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real,” Bachelet said.

– Discriminatory data –

The report highlighted how AI systems rely on large data sets, with information about people collected, shared, merged and analysed in often opaque ways.

The data sets themselves can be faulty, discriminatory or out of date, and thus contribute to rights violations, it warned.

For instance, they can erroneously flag an individual as a likely terrorist.

The report raised particular concern about the increasing use of AI by law enforcement, including as forecasting tools.

When AI and algorithms use biased historical data, their profiling predictions will reflect that, for instance by ordering increased deployments to communities already identified, rightly or wrongly, as high-crime zones.

Remote real-time facial recognition is also increasingly deployed by authorities across the globe, the report said, potentially allowing the unlimited tracking of individuals.

Such “remote biometric recognition technologies” should not be used in public spaces until authorities can demonstrate that they comply with privacy and data protection standards and that they do not have significant accuracy or discrimination problems, it said.

“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact,” Bachelet said.

Related: EU Proposes Rules for Artificial Intelligence to Limit Risks

Related: Becoming Elon Musk – the Danger of Artificial Intelligence

Written By

AFP
