
Biden Administration Seeks Input on AI Safety Measures

The Biden administration wants stronger measures to test the safety of artificial intelligence tools such as ChatGPT before they are publicly released.

President Joe Biden’s administration wants stronger measures to test the safety of artificial intelligence tools such as ChatGPT before they are publicly released, though it hasn’t decided if the government will have a role in doing the vetting.

The U.S. Commerce Department on Tuesday said it will spend the next 60 days fielding opinions on the possibility of AI audits, risk assessments and other measures that could ease consumer concerns about these new systems.

“There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly,” said Assistant Commerce Secretary Alan Davidson, administrator of the National Telecommunications and Information Administration.

The NTIA, more of an adviser than a regulator, is seeking feedback about what policies could make commercial AI tools more accountable.

Biden said last week, during a meeting with his council of science and technology advisers, that tech companies must ensure their products are safe before releasing them to the public.

{ Feature: ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications }

The Biden administration also unveiled a set of far-reaching goals last year aimed at averting harms caused by the rise of AI systems. But that was before the release of ChatGPT, from San Francisco startup OpenAI, and similar products from Microsoft and Google led to wider awareness of the capabilities of the latest AI tools, which can generate human-like passages of text as well as new images and video.

“These new language models, for example, are really powerful and they do have the potential to generate real harm,” Davidson said in an interview. “We think that these accountability mechanisms could truly help by providing greater trust in the innovation that’s happening.”

The NTIA’s notice leans heavily on requesting comment about “self-regulatory” measures that the companies building the technology would likely lead. That contrasts with the European Union, where lawmakers this month are negotiating new laws that could set strict limits on AI tools depending on how high a risk they pose.

Related: Cyber Insights 2023 | Artificial Intelligence

Related: White House Unveils Artificial Intelligence ‘Bill of Rights’

Related: Bias in Artificial Intelligence: Can AI be Trusted?

Related: The Starter Pistol Has Been Fired for Artificial Intelligence Regulation in Europe
