Is AI Use in the Workplace Out of Control?

Trying to block AI tools outright is a losing strategy. SaaS and AI are increasingly inseparable, and AI isn’t limited to tools like ChatGPT or Copilot anymore.

GenAI is here to stay. We’re now at the stage where firms like Shopify are even mandating AI use and stipulating that teams must prove AI cannot do a task before requesting headcount.

At the same time, recent research shows just how fast adoption is happening: the average enterprise is using 254 distinct AI-enabled apps.

However, while usage is surging, security and governance are being left behind.

Chinese GenAI apps swell in use

Of the 254 AI-enabled apps in use, 7% have been developed by companies based in China.

DeepSeek grabbed a lot of headlines in January and encouraged a high degree of experimentation, so much so that even the Pentagon and government lawmakers scrambled to block the app after staff were found using it.

But this is merely a precursor of what could be to come. Manus, Ernie Bot, Kimi Moonshot, Qwen Chat and Baidu Chat are all gaining traction, some backed by the massive resources of Alibaba and Baidu. The Chinese government can likely request access to any data shared with these apps, so it is safest to presume that such data should be treated as the property of the Chinese Communist Party; indeed, DeepSeek has already been deemed a national security risk by a US House panel. Together, these tools signal a shift in global AI competition, as China’s ecosystem matures rapidly and rivals Western counterparts in both consumer applications and enterprise adoption.

It’s the ability of these apps to launch seemingly from nowhere and gain massive numbers of users very quickly that should give us pause. Once they become available, it’s perfectly rational that tech-minded employees wanting to get ahead in their roles will experiment with the latest and greatest.

Employees can be oblivious to the consequences of their actions

Employees are often oblivious to, or unconcerned by, the consequences; such is the appeal of ChatGPT, for instance, that research from Fishbowl found that 68% of users hide it from their bosses and nearly half would refuse to stop if banned. Quite simply, AI tools are too appealing for employees to resist, and they will go to extreme lengths to get their hands on them, even without approved licenses. This is a particular problem given that of the 176,000 prompts we studied between January and March, some 6.7% potentially disclosed company data.

Don’t use free tiers

Firms should also consider which core AI tools they want to subscribe to and bring under enterprise licenses. We found that nearly half (45.4%) of sensitive data submissions came from employees using a personal email address to log into a GenAI tool. This means sensitive content (everything from legal documents to source code) is being routed through accounts that sit entirely outside corporate control.

This isn’t just a hypothetical risk. In fact, on average, 21% of all sensitive data captured in the study was submitted to ChatGPT’s free tier, where prompts can be retained and used for training purposes. And while companies may assume their internal AI policy has things locked down, it’s clear that employees are finding workarounds because they want the productivity benefits of AI, and many don’t realize the security implications of how they’re accessing it.

Even if an organization is trying to control what gets sent to public LLMs, the moment the interaction moves to a personal account, there is no oversight. There’s no logging, no data retention management, and no real way to know what was shared.

In most cases, employees are likely not trying to be reckless. They’re just trying to get work done, and if the official tools aren’t available, they’ll use whatever else is at their disposal. The result, however, is that organizations face an enormous governance gap.

Blocking does not make sense

Trying to block AI tools outright is a losing strategy. SaaS and AI are increasingly inseparable, and AI isn’t limited to tools like ChatGPT or Copilot anymore. Nearly every SaaS product now has AI baked in, so broad blocks quickly become impractical.

More importantly, employees won’t sit on the sidelines. If access is restricted, they’ll find workarounds. They’ll switch to personal devices, hop off the VPN, or use mobile apps. The result? Less visibility, more risk, and no real control.

Time for a proactive approach

This problem isn’t going to magically solve itself. It needs a mindset shift so that security teams proactively understand what AI tools employees are using and work with them to enable secure use.

This does not mean blocking AI. Instead, organizations need to implement sensible controls that act as guardrails for secure AI use.
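To make this concrete, here is a minimal sketch of what one such guardrail could look like: a simple check that flags likely sensitive content in a prompt before it is sent to a public GenAI tool. The patterns, function names, and the decision to block rather than warn are illustrative assumptions for the sketch, not a description of any specific product or of Harmonic Security's approach.

```python
import re

# Illustrative patterns for content that should not leave the organization.
# Real deployments would rely on much richer detection (classifiers,
# exact-match against document stores, etc.); these regexes are assumptions.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


def guardrail(prompt: str) -> bool:
    """Allow the prompt through only if nothing sensitive was detected.

    A production guardrail would typically log the event and coach the user
    rather than silently blocking, so employees learn the policy instead of
    working around it.
    """
    findings = check_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True


if __name__ == "__main__":
    guardrail("Summarize this contract marked CONFIDENTIAL for me")   # blocked
    guardrail("Draft a polite follow-up note about our meeting")      # allowed
```

The point of a control like this is not perfect detection; it is to give the security team visibility into what is being shared and a chance to steer employees toward sanctioned, enterprise-licensed tools.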

Herein lies a massive opportunity for the security team: it can move away from being the “department of no” and become an internal champion of AI, with wide-ranging business benefits.

Written By

Alastair Paterson is the CEO and co-founder of Harmonic Security, enabling companies to adopt Generative AI without risk to their sensitive data. Prior to this, he co-founded and was CEO of the cybersecurity company Digital Shadows from its inception in 2011 until its acquisition by ReliaQuest/KKR for $160m in July 2022. Alastair led the company to become an international, industry-recognized leader in threat intelligence and digital risk protection.
