
SecurityWeek

Artificial Intelligence

Harris to Meet With CEOs About Artificial Intelligence Risks

The Biden administration plans to announce an investment of $140 million to establish seven new AI research institutes, administration officials said.


Vice President Kamala Harris will meet on Thursday with the CEOs of four major companies developing artificial intelligence as the Biden administration rolls out a set of initiatives meant to ensure the rapidly evolving technology improves lives without putting people’s rights and safety at risk.

The Democratic administration plans to announce an investment of $140 million to establish seven new AI research institutes, administration officials told reporters in previewing the effort.

In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. There will also be an independent commitment by top AI developers to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.

Harris and administration officials on Thursday plan to discuss the risks they see in current AI development with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI. The government leaders’ message to the companies is that they have a role to play in reducing the risks and that they can work together with the government.

President Joe Biden noted last month that AI can help to address disease and climate change but also could harm national security and disrupt the economy in destabilizing ways.

The release of the ChatGPT chatbot this year has intensified debate about AI and the government’s role in overseeing the technology. Because AI can generate human-like writing and convincing fake images, it raises ethical and societal concerns.

OpenAI, which developed ChatGPT, has been secretive about the data its AI systems have been trained upon. That makes it hard for those outside the company to understand why ChatGPT is producing biased or false answers to requests or to address concerns about whether it’s stealing from copyrighted works.

Companies worried about being liable for something in their training data might also not have incentives to properly track it, said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.


“I think it might not be possible for OpenAI to actually detail all of its training data at a level of detail that would be really useful in terms of some of the concerns around consent and privacy and licensing,” Mitchell said in an interview Tuesday. “From what I know of tech culture, that just isn’t done.”

Theoretically, at least, some kind of disclosure law could force AI providers to open up their systems to more third-party scrutiny. But with AI systems being built atop previous models, it won’t be easy for companies to provide greater transparency after the fact.

“I think it’s really going to be up to the governments to decide whether this means that you have to trash all the work you’ve done or not,” Mitchell said. “Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it’s already been done. It would have such massive ramifications if all these companies had to essentially trash all of this work and start over.”
