
OpenAI Offering $100K Bounties for Critical Vulnerabilities

OpenAI has raised its maximum bug bounty payout to $100,000 (up from $20,000) for high-impact flaws in its infrastructure and products.

Artificial intelligence tech giant OpenAI has raised its maximum bug bounty payout to $100,000 (up from $20,000) as part of an effort to incentivize the discovery of critical, high-impact vulnerabilities in its infrastructure and products.

The new bounty program is part of a broader set of security initiatives from OpenAI that includes funding for security research projects, continuous adversarial red teaming, and engagements with open-source software communities.

In addition to the higher payouts for critical security findings, OpenAI said it will offer limited-time bonus promotions for qualifying reports.

The company also announced an expansion of the Cybersecurity Grant Program that has already funded 28 research initiatives since its rollout in 2023.

OpenAI said the funded projects have addressed issues such as prompt injection, secure code generation, and the development of autonomous cybersecurity defenses. 

The program is now inviting researchers to propose projects on software patching, model privacy, threat detection and response, security integration, and resilience against sophisticated attacks.

OpenAI said the program is also introducing microgrants in the form of API credits to help researchers rapidly prototype creative security solutions.

In parallel, OpenAI said it is collaborating with experts from academic, government, and commercial labs to benchmark skills gaps and improve its models' ability to identify and patch vulnerabilities.

The company is also partnering with venture-backed startup SpecterOps to conduct continuous adversarial red teaming across corporate, cloud, and production environments.

The company said the simulated attacks are aimed at finding potential weaknesses before they can be exploited by malicious actors.  

Related: Can AI Early Warning Systems Reboot the Threat Intel Industry?

Related: ChatGPT Creator OpenAI Ready to Pay Hackers for Security Flaws

Related: Microsoft Catches APTs Using ChatGPT for Malware Scripting

Related: OpenAI Unveils Million-Dollar Cybersecurity Grant Program

Written By

Ryan Naraine is Editor-at-Large at SecurityWeek and host of the popular Security Conversations podcast series. He is a security community engagement expert who has built programs at major global brands, including Intel Corp., Bishop Fox and GReAT. Ryan is a founding-director of the Security Tinkerers non-profit, an advisor to early-stage entrepreneurs, and a regular speaker at security conferences around the world.
