
Biden Administration Seeks Input on AI Safety Measures

The Biden administration wants stronger measures to test the safety of artificial intelligence tools such as ChatGPT before they are publicly released.

President Joe Biden’s administration wants stronger measures to test the safety of artificial intelligence tools such as ChatGPT before they are publicly released, though it hasn’t decided if the government will have a role in doing the vetting.

The U.S. Commerce Department on Tuesday said it will spend the next 60 days fielding opinions on the possibility of AI audits, risk assessments and other measures that could ease consumer concerns about these new systems.

“There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly,” said Assistant Commerce Secretary Alan Davidson, administrator of the National Telecommunications and Information Administration.

The NTIA, more of an adviser than a regulator, is seeking feedback about what policies could make commercial AI tools more accountable.

Biden last week said during a meeting with his council of science and technology advisers that tech companies must ensure their products are safe before releasing them to the public.

[ Feature: ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications ]

The Biden administration last year also unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems. That was before the release of ChatGPT, from San Francisco startup OpenAI, and of similar products from Microsoft and Google, which brought wider awareness of the latest AI tools' ability to generate human-like passages of text as well as new images and video.

“These new language models, for example, are really powerful and they do have the potential to generate real harm,” Davidson said in an interview. “We think that these accountability mechanisms could truly help by providing greater trust in the innovation that’s happening.”

The NTIA’s notice leans heavily on requesting comment about “self-regulatory” measures that the companies that build the technology would be likely to lead. That’s a contrast to the European Union, where lawmakers this month are negotiating the passage of new laws that could set strict limits on AI tools depending on how high a risk they pose.

Related: Cyber Insights 2023 | Artificial Intelligence

Related: White House Unveils Artificial Intelligence ‘Bill of Rights’

Related: Bias in Artificial Intelligence: Can AI be Trusted?

Related: The Starter Pistol Has Been Fired for Artificial Intelligence Regulation in Europe
