
Former OpenAI Employees Lead Push to Protect Whistleblowers Flagging Artificial Intelligence Risks

A group of OpenAI’s current and former workers is calling for AI firms to protect whistleblowing employees who flag safety risks about AI technology.

A group of OpenAI’s current and former workers is calling on the ChatGPT-maker and other artificial intelligence companies to protect whistleblowing employees who flag safety risks about AI technology.

An open letter published Tuesday asks tech companies to establish stronger whistleblower protections so researchers can raise concerns about the development of high-performing AI systems internally and with the public without fear of retaliation.

Former OpenAI employee Daniel Kokotajlo, who left the company earlier this year, said in a written statement that tech companies are “disregarding the risks and impact of AI” as they race to develop better-than-human AI systems known as artificial general intelligence.

“I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence,” he wrote. “They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood.”

OpenAI said in a statement responding to the letter that it already has measures for employees to express concerns, including an anonymous integrity hotline.

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” said the company’s statement. “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”

The letter has 13 signatories, most of them former OpenAI employees, along with two who work or worked for Google’s DeepMind. Four are listed as anonymous current OpenAI employees. The letter asks that companies stop requiring workers to sign “non-disparagement” agreements that can punish them for criticizing the company.

Social media outrage over language in OpenAI’s paperwork for departing workers recently led the company to release all its former employees from their non-disparagement agreements.


The open letter has the support of pioneering AI scientists Yoshua Bengio and Geoffrey Hinton, who together won the Turing Award, computer science’s highest honor, as well as Stuart Russell. All three have warned about the risks that future AI systems could pose to humanity’s existence.

The letter comes as OpenAI says it has begun developing the next generation of the AI technology behind ChatGPT and is forming a new safety committee, shortly after losing a group of leaders, including co-founder Ilya Sutskever, who were part of a team focused on safely developing the most powerful AI systems. The broader AI research community has long debated the gravity of AI’s short-term and long-term risks and how to reconcile them with the technology’s commercialization.

Those conflicts contributed to the ouster, and swift return, of OpenAI CEO Sam Altman last year, and continue to fuel distrust in his leadership.

More recently, a new product showcase drew the ire of Hollywood star Scarlett Johansson, who said she was shocked to hear ChatGPT’s voice sounding “eerily similar” to her own despite having previously rejected Altman’s request that she lend her voice to the system.
