
Vulnerability Could Have Been Exploited for ‘Unlimited’ Free Credit on OpenAI Accounts

A vulnerability in OpenAI’s account validation allowed anyone to obtain virtually unlimited free credit by registering new accounts with the same phone number.

A vulnerability in OpenAI’s account validation process allowed anyone to obtain virtually unlimited free credit for the company’s services by registering new accounts using the same phone number, application security firm Checkmarx says.

OpenAI, an artificial intelligence company, has been in the news over the past several months, largely due to its ChatGPT project.

When users sign up for a new account, OpenAI provides them with free credit as part of a trial period.

To prevent abuse, the company implemented an email and phone number validation mechanism, where attempting to use the same email or phone number to register multiple accounts would no longer result in users receiving the free credit.

The registration process begins with providing an email to which an activation link is sent. Clicking on the link then requires providing a phone number to which a validation code is sent via SMS.

“Both email and phone number must be unique, otherwise, the user would be informed that the account already exists, and no free credits would be granted,” Checkmarx notes.

According to the security firm, the validation mechanism could be bypassed by using a catch-all email account on a private domain and by abusing a vulnerability in the phone number verification process OpenAI had implemented.


By intercepting and modifying the OpenAI API request, Checkmarx discovered that an attacker could bypass the validation simply by supplying variations of the same phone number, while still receiving free credit for each new account.

The issue, they explain, was that the user-supplied phone number was first checked by one component against previously registered numbers, to ensure it had not been used before. The number was then passed to a different component that sanitized it before using it for validation.

Because of that, an attacker could prepend zeros or insert non-numeric characters into the same number to bypass the first check, since these permutations were not identical to any previously registered value, yet still have the normalized number used for validation, potentially for an unlimited number of new accounts.

“This late-stage normalization can cause a massive, if not infinite, set of different values (e.g., 0123, 00123, 12\u000a3, 001\u000a\u000b2\u000b3 etc.) that are treated as unique identifiers to collapse into a single value (123) upon use, which allows bypassing the initial validation mechanism altogether,” Checkmarx explains.
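The flaw described above can be sketched as follows. This is an illustrative reconstruction, not OpenAI's actual code: the component names, the `normalize` rules, and the in-memory set are all assumptions made to show how a raw-string uniqueness check followed by late normalization collapses many distinct inputs onto one number.

```python
# Hypothetical sketch of the flawed two-component flow: the uniqueness
# check sees the raw string, but validation uses a normalized form.
registered_numbers = set()  # raw inputs already used to claim free credit

def normalize(phone: str) -> str:
    """Late-stage normalization: keep digits only, drop leading zeros."""
    digits = "".join(ch for ch in phone if ch.isdigit())
    return digits.lstrip("0")

def register(phone: str) -> str:
    # Component 1: uniqueness check runs on the RAW input...
    if phone in registered_numbers:
        return "account exists, no free credit"
    registered_numbers.add(phone)
    # Component 2: ...but the SMS code goes to the NORMALIZED number.
    target = normalize(phone)
    return f"credit granted, SMS sent to {target}"

# Each permutation passes the raw-string check, yet every SMS code
# is delivered to the same phone, "123":
for variant in ["123", "0123", "00123", "12\u000a3"]:
    print(register(variant))
```

Because the attacker controls the same handset behind every normalized value, each "unique" registration yields another batch of free credit.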

The issue, the security firm notes, could have been resolved by running the normalization before processing the value, ensuring both components operate on the same canonical number.

Checkmarx reported the vulnerability to OpenAI in December 2022 and was informed in March 2023 that it had been resolved.

Related: OpenAI: ChatGPT Back in Italy After Meeting Watchdog Demands

Related: Insider Q&A: OpenAI CTO Mira Murati on Shepherding ChatGPT

Related: ChatGPT Creator OpenAI Ready to Pay Hackers for Security Flaws

Written By

Ionut Arghire is an international correspondent for SecurityWeek.


