WhiteRabbitNeo: High-Powered Potential of Uncensored AI Pentesting for Attackers and Defenders

Version 2.5 of WhiteRabbitNeo is designed to think like a seasoned red team expert, capable of identifying and exploiting vulnerabilities with remarkable speed and precision.

Hacking with AI using WhiteRabbitNeo

The latest release of WhiteRabbitNeo (version 2.5) represents a leap forward in cybersecurity and DevSecOps. This generative AI tool—now available on Hugging Face—is designed to think like a seasoned red team expert, capable of identifying and exploiting vulnerabilities with remarkable speed and precision.

WhiteRabbitNeo has been trained to find vulnerabilities and test their exploitability by writing exploit code. We’ve known since the availability of ChatGPT in late 2022 that this was coming, but have yet to see much evidence of AI-generated malware in the wild (although see HP Wolf’s report for a possible exception).
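
Because the model is distributed as open weights on Hugging Face, a security team can load and query it locally with the standard transformers tooling. The sketch below is illustrative only; the repository name, prompt wording, and generation settings are assumptions, so check the actual model card for the correct identifier, license, and chat template.

```python
# Minimal sketch of loading an open-weights model from Hugging Face and asking a
# pentesting question. The repository name is hypothetical; consult the
# WhiteRabbitNeo model card for the real identifier, license, and prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "WhiteRabbitNeo/whiterabbitneo-2.5"  # assumed repo name, not verified

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = (
    "You are assisting an authorized internal penetration test.\n"
    "Target: an internal web app running an outdated REST framework.\n"
    "List the vulnerability classes most likely to be present and how to verify each."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```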

One of the reasons is the censorship (guardrails) placed around the major LLMs. A paper published by OpenAI in October 2024 discussed how several adversarial groups had attempted to use ChatGPT to assist their operations. OpenAI noted, however, “These interactions did not provide CyberAv3ngers with any novel capability, resource, or information, and only offered limited, incremental capabilities that are already achievable with publicly available, non-AI powered tools.”

Most people applaud this censorship of the very large general purpose LLMs. But not everyone, always. “It also means you can’t ask security questions about your own infrastructure,” Andy Manoske, VP of Product at Kindo, told SecurityWeek. “LLMs have the capability to assist your own pentesting, but the major LLMs are not allowed to do so.” Kindo is a firm focused on the orchestration of gen-AI solutions. It is also the primary sponsor of WhiteRabbitNeo. We talked to Manoske for a better understanding of WhiteRabbitNeo, its purposes and its capabilities. 

“WhiteRabbitNeo is an offensive security gen-AI model,” he told SecurityWeek. “Its purpose is to allow security teams to examine their infrastructures, detect vulnerabilities that can be exploited (by developing exploits for the vulnerabilities), and provide remediations for those vulnerabilities. To achieve this, it must be uncensored – and that makes it a dual use tool. It can equally be used by adversaries to detect vulnerabilities and automatically develop exploits.”

The dual use nature of WhiteRabbitNeo and its ease of use were demonstrated to SecurityWeek by Len Noe, a whitehat hacker and technical evangelist at CyberArk.

But for such capabilities to be useful, the gen-AI output must be accurate; that is, free from bias and hallucinations. The first is down to the developers and requires, for most models available from Hugging Face, a leap of faith, albeit one tempered by their open source nature. The second is down to the model’s training data, which must be accurate, complete, and timely.

Back to Manoske. “For training we use things like MITRE’s vulnerability databases, and open source threat intelligence gathered from systems like Open Threat Exchange and ThreatConnect,” he said. Timeliness and completeness (and the elimination of hallucinations) are ensured by the use of retrieval-augmented generation (RAG). The training data is effectively continually updated from its sources.
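
Kindo has not published its pipeline, but the mechanism Manoske describes is standard retrieval-augmented generation: embed the threat-intelligence records, retrieve those relevant to a question, and prepend them to the prompt. A minimal, generic sketch, with invented record contents and field names:

```python
# Generic RAG sketch over a local threat-intel store; not Kindo's actual pipeline.
# Record contents and field names are invented for illustration.
from sentence_transformers import SentenceTransformer, util

# Imagine these were ingested from CVE feeds, Open Threat Exchange, ThreatConnect, etc.
intel = [
    {"id": "CVE-2024-0001", "text": "Buffer overflow in ExampleHTTPd 2.4 request parser allows RCE."},
    {"id": "CVE-2024-0002", "text": "SQL injection in ExampleCMS 3.x login endpoint."},
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = embedder.encode([r["text"] for r in intel], convert_to_tensor=True)

question = "We run ExampleCMS 3.1; which known CVEs apply, and how do we remediate?"
query_emb = embedder.encode(question, convert_to_tensor=True)

# Retrieve the most relevant advisories and prepend them to the model prompt.
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
context = "\n".join(intel[hit["corpus_id"]]["text"] for hit in hits)
prompt = (
    f"Context:\n{context}\n\nQuestion: {question}\n"
    "Answer with an exploitability assessment and remediation steps."
)
# 'prompt' would then be sent to the model as in the earlier loading sketch.
```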

MITRE’s CVEs are a good example. “When you introduce new CVEs,” he continued, “WhiteRabbitNeo knows that this is a CVE, and it understands the format. It can connect the dots and provide an updated answer: do you have this vulnerability, can it be exploited within your infrastructure, and how can you remediate the vulnerability?”
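
Those three questions map naturally onto a structured prompt. A hypothetical illustration follows; the helper function, CVE identifier, and asset names are mine, not the product’s.

```python
# Hypothetical helper that turns a newly published CVE into the three questions
# Manoske describes: do we have it, can it be exploited here, how do we fix it.
def cve_triage_prompt(cve_id: str, cve_summary: str, inventory: list[str]) -> str:
    assets = "\n".join(f"- {asset}" for asset in inventory)
    return (
        f"New advisory {cve_id}: {cve_summary}\n"
        f"Our asset inventory:\n{assets}\n\n"
        "1. Is any listed asset affected by this CVE?\n"
        "2. Could it be exploited from our network perimeter? Outline how.\n"
        "3. What is the fastest remediation or mitigation?"
    )

# Placeholder CVE and assets, for illustration only.
print(cve_triage_prompt(
    "CVE-2024-99999",
    "Remote code execution in ExampleProxy 5.x via a crafted Host header.",
    ["ExampleProxy 5.2 on edge-gw-01", "PostgreSQL 15 on db-03"],
))
```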

But WhiteRabbitNeo remains fundamentally a double-edged sword – something that Manoske understands. “There is a balance in creating a dual use tool. When we created WhiteRabbitNeo we tried to walk that balance by thinking of it as an open source AI version of Metasploit. Yes, we can provide instructions on how you investigate things in an uncensored way. So, we’re not stopping potentially dubious questions that would go into this, but at the same time we’re immediately resolving these and focusing on that resolution aspect. We’re trying to build a model that allows human beings to understand highly technical information and then derive value from that information, including education.”

The comparison with Metasploit is apt. When Metasploit was first developed by HD Moore, there was considerable antipathy toward its potential use by adversaries. Over time, it became clear that these adversaries could produce their own exploits as fast as, and sometimes faster than, Metasploit – but its existence became a boon for red teamers and pentesters as a rapid method of finding vulnerabilities in their own systems. WhiteRabbitNeo is like Metasploit on legs, but with the additional advantage of providing almost real-time remediation.

Apart from the offensive capabilities, added Manoske, “it is also trained on infrastructure choices. So, it’s trained on common APIs; it knows how to write firewall rules; it knows how to use Terraform; it knows how to use the identity and access management systems from all major cloud providers, and how to encode an IAM profile and such things.”
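
In practice, those infrastructure skills translate into ordinary prompts. A few illustrative examples; the wording is mine rather than Kindo’s documentation, and any generated configuration would need human review before deployment.

```python
# Illustrative defensive prompts a team might send to the model; the phrasing is
# hypothetical and any generated config should be reviewed before use.
infra_prompts = [
    "Write an nftables ruleset that allows inbound 443/tcp to 10.0.2.10 only, "
    "drops everything else, and logs the drops.",
    "Generate a Terraform aws_iam_policy resource granting read-only access to "
    "a single S3 bucket named example-audit-logs.",
    "Review this IAM profile for privilege-escalation paths: <paste JSON here>.",
]
for p in infra_prompts:
    print(p)  # each prompt would be sent to the model as in the first sketch
```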

In short, WhiteRabbitNeo is a find and fix vulnerability tool that operates at AI speed. But it remains a double-edged tool. Here Manoske comments, “WhiteRabbitNeo attempts to make its exploit code easily recognizable by security controls by being difficult to obfuscate. Genuine pentesters don’t need obfuscation.”

He adds the deciding factor behind developing WhiteRabbitNeo: the idea that our adversaries are not already developing, or have not already developed, a purely offensive malware-generating AI is a fool’s paradise. But WhiteRabbitNeo also adds the remediation. It is likely, then, that after raising a few eyebrows, WhiteRabbitNeo will gain rapid acceptance by existing security leaders. We asked a few.

Jason Soroko, senior fellow at Sectigo, responded, “WhiteRabbitNeo’s new release is an important asset for offensive cybersecurity simulating adversarial tactics. Thinking like an adversary is difficult, and this tool will help to provide this missing gap.”

Amit Zimerman, co-founder and CPO at Oasis Security, welcomes the offensive security potential of WhiteRabbitNeo, but cautions about its dual use nature. “Offensive cybersecurity involves proactive measures where security teams mimic the tactics of real-world attackers to uncover vulnerabilities within a system. It’s not about waiting for an attack but rather about simulating one to strengthen the overall security posture of an organization. AI bolsters offensive cybersecurity,” he said.

But he also warned, “The power of AI in offensive operations could be misused, potentially leading to ethical dilemmas if AI-driven tools fall into the wrong hands. This is particularly critical in cybersecurity, where tools intended to protect could be repurposed for malicious attacks. It’s crucial that organizations adopt strict governance and ethical guidelines when deploying AI in these contexts.”

Mayuresh Dani, manager of security research at the Qualys Threat Research Unit, expanded on this: “WhiteRabbitNeo looks like the next progression of AI-aided offensive security by using uncensored training data sources. What’s even better is that these offensive cybersecurity AI models are open source. Since often used sources, such as NVD for vulnerability intelligence and IOCs for threat research, are also being consumed, I opine this would be a formidable resource for all security teams – Red, Blue and Purple. Last I checked, their LLaMA-3 fine tunes were impressive. The only thing I think we should be cautious about is that since the data is uncensored, everything that the models give out should be correctly vetted prior to its use.”

Gen-AI driven offensive cybersecurity with built-in remediation is already being accepted (albeit with some initial concerns). If this follows the same path as Metasploit, products like WhiteRabbitNeo will become mainstream cybersecurity tools very rapidly.

Related: Researchers Bypass AI Safeguards Using Hexadecimal Encoding and Emojis

Related: Insider Q&A: CIA’s Chief Technologist’s Cautious Embrace of Generative AI

Related: Secrets Exposed in Hugging Face Hack

Related: AI Models in Cybersecurity: From Misuse to Abuse

Related: Security Firm Shows How Threat Actors Could Abuse Google’s Gemini AI Assistant

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security, and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
