White House Unveils Artificial Intelligence ‘Bill of Rights’

The Biden administration unveiled a set of far-reaching goals Tuesday aimed at averting harms caused by the rise of artificial intelligence systems, including guidelines for how to protect people’s personal data and limit surveillance.

The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world, officials said.

“This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies,” said Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. “We can and should expect better and demand better from our technologies.”

The office said the white paper represents a major advance in the administration’s agenda to hold technology companies accountable, and highlighted various federal agencies’ commitments to weighing new rules and studying the specific impacts of AI technologies. The document emerged after a year-long consultation with more than two dozen different departments, and also incorporates feedback from civil society groups, technologists, industry researchers and tech companies including Palantir and Microsoft.

[ Read: Bias in Artificial Intelligence: Can AI be Trusted? ]

It puts forward five core principles that the White House says should be built into AI systems to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.

The non-binding principles cite academic research, agency studies and news reports that have documented real-world harms from AI-powered tools, including facial recognition tools that contributed to wrongful arrests and an automated system that discriminated against loan seekers who attended a Historically Black College or University.

The white paper also said parents and social workers alike could benefit from knowing if child welfare agencies were using algorithms to help decide when families should be investigated for maltreatment.

Earlier this year, after the publication of an AP review of an algorithmic tool used in a Pennsylvania child welfare system, OSTP staffers reached out to sources quoted in the article to learn more, according to multiple people who participated in the call. AP’s investigation found that the Allegheny County tool in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children.

In May, sources said Carnegie Mellon University researchers and staffers from the American Civil Liberties Union spoke with OSTP officials about child welfare agencies’ use of algorithms. Nelson said protecting children from technology harms remains an area of concern.

“If a tool or an automated system is disproportionately harming a vulnerable community, there should be, one would hope, that there would be levers and opportunities to address that through some of the specific applications and prescriptive suggestions,” said Nelson, who also serves as deputy assistant to President Joe Biden.

OSTP did not provide additional comment about the May meeting.

Still, because many AI-powered tools are developed, adopted or funded at the state and local level, the federal government has limited oversight over their use. The white paper does not say how the Biden administration could influence policies at the state or local level, but a senior administration official said the administration was exploring how to align federal grants with AI guidance.

The white paper does not have power over the tech companies that develop the tools, nor does it include any new legislative proposals. Nelson said agencies would continue to use existing rules to prevent automated systems from unfairly disadvantaging people.

The white paper also did not specifically address AI-powered technologies funded through the Department of Justice, whose civil rights division separately has been examining algorithmic harms, bias and discrimination, Nelson said.

Tucked between the calls for greater oversight, the white paper also said that, when appropriately implemented, AI systems have the power to bring lasting benefits to society, such as helping farmers grow food more efficiently or identifying diseases.

“Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone. This important progress must not come at the price of civil rights or democratic values,” the document said.

Related: Cyber Insights 2022: Adversarial AI

Related: Ethical AI, Possibility or Pipe Dream?

Related: Are AI and ML Just a Temporary Advantage to Defenders?

Related: The Malicious Use of Artificial Intelligence in Cybersecurity
