Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

OpenAI Launches Bug Bounty Program for Abuse and Safety Risks

Through the new program, OpenAI will reward reports covering design or implementation issues leading to material harm.

OpenAI

OpenAI has announced a new public safety bug bounty program focused on AI-specific abuse and safety risks in its products.

The new program complements OpenAI’s existing security bug bounty program and is open to issues that do not meet the criteria for a security vulnerability.

“Submissions will be triaged by OpenAI’s Safety and Security Bug Bounty teams and may be rerouted between the two programs depending on scope and ownership,” OpenAI says.

AI-specific safety scenarios covered by the new program include third-party prompt injection and data exfiltration attacks, disallowed actions performed by agentic OpenAI products on the company’s website at scale, and other harmful actions performed by the products.

The program also accepts submissions regarding issues that lead to the exposure of OpenAI’s proprietary information, as well as weaknesses in account and platform integrity.

“If researchers identify flaws that facilitate direct paths to user harm and actionable, discrete remediation steps, these may be considered in scope for rewards on a case-by-case basis,” OpenAI notes.

Advertisement. Scroll to continue reading.

The program runs on Bugcrowd and follows the same rules as the company’s security bug bounty program, with several additions.

Per the rules, design and implementation issues in OpenAI products that could lead to material harm are within the scope of the program, including flaws resulting in abuse protection bypasses.

Researchers are encouraged to identify abuse risks in agentic OpenAI products that perform actions on behalf of the user or access data as the user, including Atlas Browser, Codex, Operator, Connectors, and other ChatGPT tools.

Vulnerabilities in connectors and MCP integrators that can be abused to cause material harm are also accepted.

Researchers may earn up to $7,500 for reports that detail consistently reproducible issues of high severity, and which include a clear set of recommended steps or mitigations. However, OpenAI says reward decisions and amounts are up to its discretion.

Related: Google Paid Out $17 Million in Bug Bounty Rewards in 2025

Related: OpenAI Rolls Out Codex Security Vulnerability Scanner

Related: Microsoft Bug Bounty Program Expanded to Third-Party Code

Related: From Open Source to OpenAI: The Evolution of Third-Party Risk

Written By

Ionut Arghire is an international correspondent for SecurityWeek.

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

In cyber-physical systems (CPS), just one hour of downtime can outweigh an entire annual security budget. Learn how to master the Return on Security Investment (ROSI) to align security goals with the bottom-line priorities.

Register

Delve into big-picture strategies to reduce attack surfaces, improve patch management, conduct post-incident forensics, and tools and tricks needed in a modern organization.

Register

People on the Move

Malwarebytes has named Chung Ip as Chief Financial Officer.

Semperis has appointed John Podboy as Chief Information Security Officer.

Randy Menon has become Chief Product and Marketing Officer at One Identity.

More People On The Move

Expert Insights

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest cybersecurity news, threats, and expert insights. Unsubscribe at any time.