Exploitation attempts targeting a recent authentication bypass vulnerability in PraisonAI began less than four hours after public disclosure, cloud security firm Sysdig warns.
PraisonAI is a multi-agent framework that allows organizations to deploy autonomous AI agents for the execution of complex tasks.
Tracked as CVE-2026-44338, the newly disclosed security defect exists because PraisonAI versions 2.5.6 to 4.6.33 shipped with a legacy Flask API server that had authentication disabled by default.
“When that server is used, any caller that can reach it can access /agents and trigger the configured agents.yaml workflow through /chat without providing a token,” a NIST advisory reads.
With authentication disabled, /agents returns the configured agent metadata, while /chat accepts any JSON body containing a message key and executes the agents.yaml workflow regardless of the message's value.
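Since the affected range is well defined, administrators can check whether a deployed version falls inside it. A minimal sketch, assuming the 2.5.6 through 4.6.33 range from the advisory (the `parse_version` helper and the comparison logic are illustrative and not part of PraisonAI):

```python
# Check whether a PraisonAI version string falls in the affected
# range (2.5.6 through 4.6.33, per the advisory).
def parse_version(v: str) -> tuple:
    # Turn "4.6.33" into (4, 6, 33) so tuples compare numerically.
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(version: str) -> bool:
    v = parse_version(version)
    return parse_version("2.5.6") <= v <= parse_version("4.6.33")
```

For example, `is_vulnerable("4.6.33")` returns True, while the patched `"4.6.34"` returns False.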
“Within three hours and 44 minutes of the advisory becoming public, a scanner identifying itself as CVE-Detector/1.0 was probing the exact vulnerable endpoint on internet-exposed instances,” Sysdig says.
The cybersecurity firm assesses that the observed activity was associated with a scanner, not interactive exploitation.
“Two passes ran eight minutes apart, each pushing approximately 70 requests in roughly 50 seconds. The first pass swept generic disclosure paths (/.env, /admin, /users/sign_in, /eval, /calculate, /Gemfile.lock). The second pass narrowed to AI-agent surfaces,” the company says.
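The probing described above leaves recognizable traces in web server access logs. A hedged sketch of a log filter, assuming a combined-format log line (the log format, IP addresses, and timestamps below are hypothetical; only the CVE-Detector/1.0 user agent and the /agents path come from Sysdig's report):

```python
import re

# Flag access-log lines matching the observed probing pattern:
# either the CVE-Detector/1.0 user agent, or a hit on the /agents
# endpoint of the legacy Flask API server.
SCANNER_UA = "CVE-Detector/1.0"
PROBE_PATH = re.compile(r'"(?:GET|POST) /agents\b')

def is_probe(line: str) -> bool:
    return SCANNER_UA in line or bool(PROBE_PATH.search(line))

# Hypothetical sample lines for illustration only.
logs = [
    '203.0.113.7 - - [12/Feb/2026:10:01:00 +0000] "GET /agents HTTP/1.1" 200 512 "-" "CVE-Detector/1.0"',
    '198.51.100.2 - - [12/Feb/2026:10:01:05 +0000] "GET /index.html HTTP/1.1" 200 1024 "-" "Mozilla/5.0"',
]
hits = [line for line in logs if is_probe(line)]
```

In practice the filter would run over the real access log of any internet-exposed instance, and a single matching line warrants checking whether the workflow behind /chat was also reachable.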
The activity targeted only /agents and did not send requests to /chat, suggesting that the attempt was focused on reconnaissance and validation rather than exploitation.
“Enumerate the agent list, confirm the auth bypass works, log the host as exploitable, and move on. Follow-on tooling is typically separate,” Sysdig notes.
Achieving remote code execution (RCE) through this vulnerability is not straightforward, Sysdig explains, as an unauthenticated attacker can only trigger whatever agents.yaml is configured to do.
In production environments, the workflow typically calls external LLM providers (such as Anthropic, Bedrock, and OpenAI), grants access to tools (including code interpreters, shells, and file I/O), or simply returns the agent file name and agent list.
“The bypass itself is not arbitrary code execution. But because it removes authentication from a workflow trigger that an operator deliberately exposed to do something useful, the impact ceiling is whatever that workflow is allowed to do,” Sysdig notes.
The vulnerability was resolved in PraisonAI version 4.6.34. Organizations should update their deployments as soon as possible.
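Assuming the framework was installed from PyPI under the distribution name praisonai (an assumption; verify against your deployment method), the upgrade would look like:

```shell
# Upgrade to the first patched release (package name assumed to be
# the PyPI distribution "praisonai"; adjust for your install method)
pip install --upgrade "praisonai>=4.6.34"
```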
“AI-assisted tooling is enabling attackers to move from an advisory publication to a working exploit in timeframes that simply did not exist before. Consequently, the timeframe that organizations have to patch and mitigate, or even detect active probing, has shrunk. Rapid exploitation following disclosure is no longer an edge case reserved for zero-days. It is becoming a baseline,” Black Duck AI research engineer Vineeta Sangaraju said.
“The assumptions of traditional risk models about attacker sophistication and time to exploit no longer hold. Organizations need to build the capability to detect and respond within hours, not days, of a high-severity advisory affecting their stack. In the post-AI era, the mere definition of AppSec terms like vulnerability likelihood, script kiddies, etc., needs to be redefined,” Sangaraju added.
Related: New ‘Dirty Frag’ Linux Vulnerability Possibly Exploited in Attacks
Related: Ivanti Patches EPMM Zero-Day Exploited in Targeted Attacks
Related: Palo Alto Zero-Day Exploited in Campaign Bearing Hallmarks of Chinese State Hacking
Related: Critical cPanel & WHM Vulnerability Exploited as Zero-Day for Months
