ChatGPT Creator OpenAI Ready to Pay Hackers for Security Flaws

OpenAI announced a bug bounty program that will pay hackers up to $20,000 for security vulnerabilities found in ChatGPT and other products and OpenAI corporate assets.


OpenAI, the company behind the wildly popular ChatGPT artificial-intelligence (AI) chatbot, on Tuesday launched a bug bounty program offering up to $20,000 for advance notice on security vulnerabilities found by hackers.

The rollout of the new bug bounty program comes on the heels of OpenAI patching account takeover vulnerabilities in ChatGPT that were being exploited in the wild.

The Microsoft-backed AI company plans to offer bounties for bugs in its flagship ChatGPT, along with APIs, API keys, third-party corporate targets and assets belonging to the OpenAI research organization.

The company is specifically looking for security defects in the ChatGPT chatbot, including ChatGPT Plus, logins, subscriptions, OpenAI-created plugins and third-party plugins.

The program, which is being managed by Bugcrowd, also covers security issues in a target group that includes confidential OpenAI corporate information that may be exposed through third parties. 

Examples of vendors that would qualify in this category include Google Workspace, Asana, Trello, Jira, Monday.com, Zendesk, Salesforce and Stripe, the company said.

OpenAI said the program will offer cash rewards based on the severity and impact of the reported issues. 

“Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries,” the company said without elaborating on the types of vulnerabilities that would qualify for top-end rewards.


Late last month, OpenAI experienced a data breach caused by a bug in an open source library that resulted in ChatGPT users being shown chat data belonging to others.

The company also patched severe vulnerabilities in late March that could have allowed attackers to take over user accounts and view chat histories.

Related: ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications

Related: Microsoft Puts ChatGPT to Work on Automating Cybersecurity

Related: ChatGPT and the Growing Threat of Bring Your Own AI to the SOC

Related: Microsoft Invests Billions in ChatGPT-Maker OpenAI

Written By

Ryan Naraine is Editor-at-Large at SecurityWeek and host of the popular Security Conversations podcast series. He is a security community engagement expert who has built programs at major global brands, including Intel Corp., Bishop Fox and GReAT. Ryan is a founding director of the Security Tinkerers non-profit, an advisor to early-stage entrepreneurs, and a regular speaker at security conferences around the world.
