Vulnerabilities

ChatGPT Creator OpenAI Ready to Pay Hackers for Security Flaws

OpenAI announced a bug bounty program that will pay hackers up to $20,000 for security vulnerabilities found in ChatGPT, other OpenAI products, and the company's corporate assets.

OpenAI, the company behind the wildly popular ChatGPT artificial-intelligence (AI) chatbot, on Tuesday launched a bug bounty program offering up to $20,000 for advance notice on security vulnerabilities found by hackers.

The rollout of the new bug bounty program comes on the heels of OpenAI patching account takeover vulnerabilities in ChatGPT that were being exploited in the wild.

The Microsoft-backed AI company plans to offer bounties for bugs in its flagship ChatGPT, along with APIs, API keys, third-party corporate targets and assets belonging to the OpenAI research organization.

The company is specifically looking for security defects in the ChatGPT chatbot, including ChatGPT Plus, logins, subscriptions, OpenAI-created plugins and third-party plugins.

The program, which is being managed by Bugcrowd, is also looking for security issues in a target group that includes confidential OpenAI corporate information that may be exposed through third parties.

Examples of vendors that would qualify in this category include Google Workspace, Asana, Trello, Jira, Monday.com, Zendesk, Salesforce and Stripe, the company said.

OpenAI said the program will offer cash rewards based on the severity and impact of the reported issues. 

“Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries,” the company said without elaborating on the types of vulnerabilities that would qualify for top-end rewards.


Late last month, OpenAI experienced a data breach caused by a bug in an open source library that resulted in ChatGPT users being shown chat data belonging to others.

The company also patched severe vulnerabilities in late March that could have allowed attackers to take over user accounts and view chat histories.

Related: ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications

Related: Microsoft Puts ChatGPT to Work on Automating Cybersecurity

Related: ChatGPT and the Growing Threat of Bring Your Own AI to the SOC

Related: Microsoft Invests Billions in ChatGPT-Maker OpenAI


Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
