SecurityWeek

Musk, Scientists Call for Halt to AI Race Sparked by ChatGPT

A group of computer scientists and tech experts are calling for a 6-month pause to consider the profound risks of AI to society and humanity.


Are tech companies moving too fast in rolling out powerful artificial intelligence technology that could one day outsmart humans?

That’s the conclusion of a group of prominent computer scientists and other tech industry notables, such as Elon Musk and Apple co-founder Steve Wozniak, who are calling for a 6-month pause to consider the risks.

Their petition, published Wednesday, is a response to San Francisco startup OpenAI’s recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT that helped spark a race between tech giants Microsoft and Google to unveil similar applications.

What Do They Say?

The letter warns that AI systems with “human-competitive intelligence can pose profound risks to society and humanity” — from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction.

It says “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Read: ChatGPT Data Breach Confirmed as Security Firm Warns of Vulnerable Component Exploitation

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter says. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”


A number of governments are already working to regulate high-risk AI tools. The United Kingdom released a paper Wednesday outlining its approach, which it said “will avoid heavy-handed legislation which could stifle innovation.” Lawmakers in the 27-nation European Union have been negotiating passage of sweeping AI rules.

Who Signed It?

The petition was organized by the nonprofit Future of Life Institute, which says confirmed signatories include the Turing Award-winning AI pioneer Yoshua Bengio and other leading AI researchers such as Stuart Russell and Gary Marcus. Others who joined include Wozniak, former U.S. presidential candidate Andrew Yang and Rachel Bronson, president of the Bulletin of the Atomic Scientists, a science-oriented advocacy group known for its warnings against humanity-ending nuclear war.

Musk, who runs Tesla, Twitter and SpaceX and was an OpenAI co-founder and early investor, has long expressed concerns about AI’s existential risks. A more surprising inclusion is Emad Mostaque, CEO of Stability AI, maker of the AI image generator Stable Diffusion that partners with Amazon and competes with OpenAI’s similar generator known as DALL-E.

What’s the Response?

OpenAI, Microsoft and Google didn’t respond to requests for comment Wednesday, but the letter already has plenty of skeptics.

“A pause is a good idea, but the letter is vague and doesn’t take the regulatory problems seriously,” says James Grimmelmann, a Cornell University professor of digital and information law. “It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars.”

Is This AI Hysteria?

While the letter raises the specter of nefarious AI far more intelligent than anything that actually exists, it’s not “superhuman” AI that some of the signatories are worried about. While impressive, a tool such as ChatGPT is simply a text generator that predicts which words are most likely to answer a given prompt, based on patterns learned from ingesting huge troves of written works.
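That next-word prediction can be sketched with a toy bigram model — a drastic simplification of the large transformer networks behind ChatGPT, with a corpus and function names invented purely for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration only (not OpenAI's architecture): a bigram model
# that predicts the most likely next word from counts in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Real chatbots work over sub-word tokens with billions of learned parameters rather than raw counts, but the core operation is the same: given the text so far, emit the statistically likeliest continuation.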

Gary Marcus, a New York University professor emeritus who signed the letter, said in a blog post that he disagrees with others who are worried about the near-term prospect of intelligent machines so smart they can improve themselves beyond humanity’s control. What he’s more worried about is “mediocre AI” that’s widely deployed, including by criminals or terrorists to trick people or spread dangerous misinformation.

“Current technology already poses enormous risks that we are ill-prepared for,” Marcus wrote. “With future technology, things could well get worse.”

Related: ChatGPT Integrated Into Cybersecurity Products as Industry Tests Its Capabilities

Related: Microsoft Puts ChatGPT to Work on Automating Cybersecurity

Related: ChatGPT and the Growing Threat of Bring Your Own AI to the SOC

Related: ‘Grim’ Criminal Abuse of ChatGPT is Coming, Europol Warns 
