Artificial Intelligence

Biden Administration Seeks Input on AI Safety Measures

The Biden administration wants stronger measures to test the safety of artificial intelligence tools such as ChatGPT before they are publicly released.

President Joe Biden’s administration wants stronger measures to test the safety of artificial intelligence tools such as ChatGPT before they are publicly released, though it hasn’t decided if the government will have a role in doing the vetting.

The U.S. Commerce Department on Tuesday said it will spend the next 60 days fielding opinions on the possibility of AI audits, risk assessments and other measures that could ease consumer concerns about these new systems.

“There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly,” said Assistant Commerce Secretary Alan Davidson, administrator of the National Telecommunications and Information Administration.

The NTIA, more of an adviser than a regulator, is seeking feedback about what policies could make commercial AI tools more accountable.

Biden last week said during a meeting with his council of science and technology advisers that tech companies must ensure their products are safe before releasing them to the public.


The Biden administration last year also unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems. But that was before the release of ChatGPT, from San Francisco startup OpenAI, and similar products from Microsoft and Google brought wider awareness of the latest AI tools, which can generate human-like passages of text as well as new images and video.

“These new language models, for example, are really powerful and they do have the potential to generate real harm,” Davidson said in an interview. “We think that these accountability mechanisms could truly help by providing greater trust in the innovation that’s happening.”


The NTIA’s notice leans heavily toward “self-regulatory” measures that the companies building the technology would be likely to lead. That’s a contrast to the European Union, where lawmakers this month are negotiating new laws that could set strict limits on AI tools depending on how high a risk they pose.

Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.