Google has analyzed AI indirect prompt injection attempts involving sites on the public web and noticed an increase in malicious attacks over the past months, but the tech giant’s researchers say their sophistication is relatively low.
Direct prompt injection is a ‘jailbreak’ where a user interacts with the AI to bypass its rules, whereas indirect prompt injection is a ‘hidden trap’ where the AI is tricked by malicious instructions found in external data.
Cybersecurity researchers have discovered many indirect prompt injection methods in recent years, using specially crafted prompts planted on websites, in emails, and developer resources to trick Gemini, Copilot, ChatGPT, and other gen-AI tools into bypassing security and facilitating data theft.
While many theoretical attack methods exist, threat intelligence experts at Google recently set out to determine the extent to which these AI vulnerabilities are being exploited in the wild.
Specifically, their research focused on indirect prompt injection attempts set up on websites on the public internet. They scanned the website snapshots saved by Common Crawl for known prompt injection patterns and used Gemini and human reviews to weed out false positives.
An analysis of the identified prompt injections found harmless pranks, attempts to deter AI agents, search engine optimization, and helpful guidance, as well as some malicious attacks.
Prank prompt injections can, for instance, instruct visiting AI assistants to change their behavior (eg, act like a baby bird and tweet like a bird).
Some website owners place helpful instructions for AI tasked with summarizing a site, but others add prompts designed to prevent assistants from crawling the website, including by telling the AI that the content is dangerous and sensitive.
Google researchers have also come across websites whose administrators attempt to boost SEO by instructing AI assistants to claim their company is the best.
The most important, however, from a security standpoint are the malicious prompt injection attempts. The researchers uncovered two types of such attacks: exfiltration and destruction.
Some websites contained prompts instructing AI to collect data, including IPs and credentials, and send it to an attacker-specified email address.
“However, for this class of attacks, sophistication seemed much lower,” the Google researchers said, adding, “We did not observe significant amounts of advanced attacks (eg, using known exfiltration prompts published by security researchers in 2025). This seems to indicate that attackers have yet not productionized this research at scale.”

In the destruction category, some prompts attempted to trick AI into deleting all files on the user’s machine, but the researchers noted that such attacks are unlikely to succeed.
While they did not see any particularly sophisticated attacks, the Google experts pointed out that they did see a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. They warned that both the scale and sophistication of prompt injection attacks are expected to increase in the near future.
“Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.
Related: Why Cybersecurity Must Rethink Defense in the Age of Autonomous Agents
Related: Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US
