GenAI is here to stay. We’re now at the stage where firms like Shopify are even mandating AI use and stipulating that teams must prove AI cannot do a task before requesting headcount.
At the same time, recent research shows just how fast adoption is happening: the average enterprise is using 254 distinct AI-enabled apps.
However, while usage is surging, security and governance are being left behind.
Chinese GenAI apps swell in use
Of the 254 AI-enabled apps in use, 7% have been developed by companies based in China.
DeepSeek grabbed headlines in January and sparked a wave of experimentation – so much so that the Pentagon scrambled to block the app after discovering staff using it, and government lawmakers moved to restrict it too.
But this is merely a precursor of what could be to come. Manus, Ernie Bot, Kimi (Moonshot), Qwen Chat and Baidu Chat are all gaining traction, several of them backed by the massive resources of Alibaba and Baidu. The Chinese government can likely request access to any data shared with these apps, so it is safest to presume that anything submitted to them is accessible to the Chinese state – indeed, a US House panel has already deemed DeepSeek a national security risk. Together, these tools signal a shift in global AI competition, as China’s ecosystem matures rapidly, rivaling Western counterparts in both consumer applications and enterprise adoption.
It’s the ability of these apps to launch seemingly from ‘nowhere’ and amass huge user bases almost overnight that should give us pause. Once they become available, it’s perfectly rational that tech-minded employees wanting to get ahead in their roles will experiment with the latest and greatest.
Employees can be oblivious to the consequences of their actions
Employees are often oblivious to, or simply unconcerned by, the consequences of their actions. Such is the appeal of ChatGPT, for instance, that research from Fishbowl found 68% of users hide it from their bosses, and nearly half would refuse to stop even if it were banned. Quite simply, AI tools are too appealing for employees to ignore, and they will go to great lengths to get their hands on them – even without approved licenses. This is a particular problem given that of the 176,000 prompts we studied between January and March, some 6.7% potentially disclosed company data.
Don’t use free tiers
Firms should also consider which core AI tools they want to subscribe to and bring under enterprise licenses. We found that nearly half (45.4%) of sensitive data submissions came from employees using a personal email address to log into a GenAI tool. This means sensitive content (everything from legal documents to source code) is being routed through accounts that sit entirely outside corporate control.
This isn’t just a hypothetical risk. In fact, 21% of all sensitive data captured in the study was submitted to ChatGPT’s free tier, where prompts can be retained and used for training purposes. And while companies may assume their internal AI policy has things locked down, it’s clear that employees are finding workarounds because they want the productivity benefits of AI, and many don’t realize the security implications of how they’re accessing it.
Even if an organization is trying to control what gets sent to public LLMs, the moment the interaction moves to a personal account, there is no oversight. There’s no logging, no data retention management, and no real way to know what was shared.
It’s likely that in most cases employees are not trying to be reckless. They’re just trying to get work done, and if the official tools aren’t available, they’ll use whatever else is at their disposal. The result, however, is that organizations are facing an enormous governance gap.
Blocking does not make sense
Trying to block AI tools outright is a losing strategy. SaaS and AI are increasingly inseparable, and AI isn’t limited to tools like ChatGPT or Copilot anymore. Nearly every SaaS product now has AI baked in, so broad blocks quickly become impractical.
More importantly, employees won’t sit on the sidelines. If access is restricted, they’ll find workarounds. They’ll switch to personal devices, hop off the VPN, or use mobile apps. The result? Less visibility, more risk, and no real control.
Time for a proactive approach
This problem isn’t going to magically solve itself. It needs a mindset shift so that security teams proactively understand what AI tools employees are using and work with them to enable secure use.
This does not mean blocking AI. Instead, organizations need to implement sensible controls that act as guardrails for secure AI use.
Herein lies a massive opportunity for the security team: they can move away from being the “department of no” and become an internal champion of AI, with wide-ranging business benefits.
