Artificial Intelligence

Elon Musk Says He’ll Create ‘TruthGPT’ to Counter AI ‘Bias’

Elon Musk plans to create an alternative to the popular AI chatbot ChatGPT that he is calling “TruthGPT,” which will be a “maximum truth-seeking AI that tries to understand the nature of the universe.”

Billionaire Twitter owner Elon Musk is again sounding warning bells on the dangers of artificial intelligence to humanity — and claiming that a popular chatbot has a liberal bias that he plans to counter with his own AI creation.

Musk told Fox News host Tucker Carlson in a segment aired Monday night that he plans to create an alternative to the popular AI chatbot ChatGPT that he is calling “TruthGPT,” which will be a “maximum truth-seeking AI that tries to understand the nature of the universe.”

The idea, Musk said, is that an AI that wants to understand humanity is less likely to destroy it.

Musk also said he’s worried that ChatGPT “is being trained to be politically correct.”

In the first of a two-part interview with Carlson, Musk also advocated for the regulation of artificial intelligence, saying he’s a “big fan.” He called AI “more dangerous” than cars or rockets and said it has the potential to destroy humanity.

Separately, Musk has incorporated a new business called X.AI Corp., according to a Nevada business filing. The website of the Nevada secretary of state’s office says the business was formed on March 9 and lists Musk as its director and his longtime adviser, Jared Birchall, as secretary.

Musk was an early investor in OpenAI — the startup behind ChatGPT — and co-chaired its board upon its 2015 founding as a nonprofit AI research lab. But Musk only lasted there for a few years, resigning from the board in early 2018 in a move that the San Francisco startup tied to Tesla’s work on building automated driving systems. “As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon,” OpenAI said in a February 2018 blog post.

“I came up with the name and the concept,” Musk told Carlson, lamenting that OpenAI is now closely allied with Microsoft and is no longer a nonprofit.


Musk elaborated on his departure in 2019, saying it was also related to his need to focus on engineering problems at Tesla and some differences of opinion with OpenAI’s leaders. It was “just better to part ways on good terms,” he said.

“Tesla was competing for some of same people as OpenAI & I didn’t agree with some of what OpenAI team wanted to do,” Musk tweeted, without specifying.

But there have been questions surrounding the quality of Tesla’s AI systems. U.S. safety regulators last month announced an investigation into a fatal crash involving a Tesla suspected of using an automated driving system when it ran into a parked firetruck in California.

The firetruck probe is part of a larger investigation by the National Highway Traffic Safety Administration (NHTSA) into multiple instances of Teslas using the automaker’s Autopilot system crashing into parked emergency vehicles that were tending to other crashes. NHTSA has become more aggressive in pursuing safety problems with Teslas in the past year, announcing multiple recalls and investigations.

In the year after Musk resigned from the board, OpenAI, still years away from ChatGPT, publicly unveiled the first generation of the GPT system on which ChatGPT is built and began a major shift to incorporate itself as a for-profit business.

By 2020, Musk was tweeting that “OpenAI should be more open” while noting that he had “no control & only very limited insight” into it.

At times, he has been complimentary. In the days after the Nov. 30 release of ChatGPT, Musk tweeted to OpenAI CEO Sam Altman that it is “scary good” and complained that news media wasn’t widely covering it because “ChatGPT is not a far left cause.”

Since then, however, Musk has repeatedly highlighted examples that he says show left-wing bias or censorship. Like other chatbots, ChatGPT has filters that try to prevent it from spewing out toxic or offensive answers.

Related: ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications

Related: Microsoft Puts ChatGPT to Work on Automating Cybersecurity

Related: ChatGPT and the Growing Threat of Bring Your Own AI to the SOC

Related: Microsoft Invests Billions in ChatGPT-Maker OpenAI

Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
