Elon Musk Says He’ll Create ‘TruthGPT’ to Counter AI ‘Bias’

Elon Musk plans to create an alternative to the popular AI chatbot ChatGPT that he is calling “TruthGPT,” which will be a “maximum truth-seeking AI that tries to understand the nature of the universe.”

Billionaire Twitter owner Elon Musk is again sounding warning bells on the dangers of artificial intelligence to humanity — and claiming that a popular chatbot has a liberal bias that he plans to counter with his own AI creation.

Musk told Fox News host Tucker Carlson in a segment aired Monday night that he plans to create an alternative to the popular AI chatbot ChatGPT that he is calling “TruthGPT,” which will be a “maximum truth-seeking AI that tries to understand the nature of the universe.”

The idea, Musk said, is that an AI that wants to understand humanity is less likely to destroy it.

Musk also said he’s worried that ChatGPT “is being trained to be politically correct.”

In the first of a two-part interview with Carlson, Musk also advocated for the regulation of artificial intelligence, saying he’s a “big fan.” He called AI “more dangerous” than cars or rockets and said it has the potential to destroy humanity.

Separately, Musk has incorporated a new business called X.AI Corp., according to a Nevada business filing. The website of the Nevada secretary of state’s office says the business was formed on March 9 and lists Musk as its director and his longtime adviser, Jared Birchall, as secretary.

Musk was an early investor in OpenAI — the startup behind ChatGPT — and co-chaired its board upon its 2015 founding as a nonprofit AI research lab. But Musk only lasted there for a few years, resigning from the board in early 2018 in a move that the San Francisco startup tied to Tesla’s work on building automated driving systems. “As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon,” OpenAI said in a February 2018 blog post.

“I came up with the name and the concept,” Musk told Carlson, lamenting that OpenAI is now closely allied with Microsoft and is no longer a nonprofit.

Musk elaborated on his departure in 2019, saying it was also related to his need to focus on engineering problems at Tesla and some differences of opinion with OpenAI’s leaders. It was “just better to part ways on good terms,” he said.

“Tesla was competing for some of same people as OpenAI & I didn’t agree with some of what OpenAI team wanted to do,” Musk tweeted, without specifying.

But there have been questions surrounding the quality of Tesla’s AI systems. U.S. safety regulators last month announced an investigation into a fatal crash involving a Tesla suspected of using an automated driving system when it ran into a parked firetruck in California.

The firetruck probe is part of a larger investigation by the National Highway Traffic Safety Administration (NHTSA) into multiple instances of Teslas using the automaker’s Autopilot system crashing into parked emergency vehicles that are tending to other crashes. NHTSA has become more aggressive in pursuing safety problems with Teslas in the past year, announcing multiple recalls and investigations.

In the year after Musk resigned from the board, OpenAI was still far from working on ChatGPT, but it publicly unveiled the first generation of its GPT system, on which ChatGPT is built, and began a major shift toward incorporating itself as a for-profit business.

By 2020, Musk was tweeting that “OpenAI should be more open” while noting that he had “no control & only very limited insight” into it.

At times, he has been complimentary. In the days after the Nov. 30 release of ChatGPT, Musk tweeted to OpenAI CEO Sam Altman that it is “scary good” and complained that news media wasn’t widely covering it because “ChatGPT is not a far left cause.”

Since then, however, Musk has repeatedly highlighted examples that he says show left-wing bias or censorship. Like other chatbots, ChatGPT has filters that try to prevent it from spewing out toxic or offensive answers.

Related: ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications

Related: Microsoft Puts ChatGPT to Work on Automating Cybersecurity

Related: ChatGPT and the Growing Threat of Bring Your Own AI to the SOC

Related: Microsoft Invests Billions in ChatGPT-Maker OpenAI
