When Vendors Overstep – Identifying the AI You Don’t Need

AI models are nothing without vast data sets to train them, and vendors will be increasingly tempted to harvest as much data as they can and answer questions later.

A year so far of data policy fails

Microsoft recently announced an AI-powered feature, ‘Windows Recall’, which effectively captures everything a user has ever done on a PC. While Microsoft promised heavy encryption of the captured data and assured users it would never leave the device, security professionals were less than impressed. Former Microsoft exec Kevin Beaumont remarked: “In essence, a keylogger is being baked into Windows as a feature.”

But it’s far from an isolated incident. In March, DocuSign updated its FAQ to state that “if you have given contractual consent, we may use your data to train DocuSign’s in-house, proprietary AI models.” 

Then, in May, a similar AI scare had Slack users up in arms, with many criticizing vague data policies after it emerged that, by default, their data, including messages, content, and files, was being used to train Slack’s global AI models. As one report put it, ‘it felt like there was no benefit to opting in for anyone but Slack.’

After conducting our own research, we found that such cases are just the tip of the iceberg: even the likes of LinkedIn, X, Pinterest, Grammarly, and Yelp have potentially risky content-training declarations.

Vendors are increasingly tempted to harvest data now and respond to complaints later

We are likely to see more and more instances of vendors pushing boundaries as the global arms race to get ahead in AI accelerates. AI models are nothing without vast data sets to train them, and vendors will be increasingly tempted to harvest as much data as they can and answer questions later. This could take the form of feature updates that are, in effect, a ‘data grab’ delivering little value to the end user, or of deliberately confusing policies that businesses unwittingly sign up to. So how can businesses guard against the AI features they don’t need?

Check the T&Cs… even if the details are buried

The first, and perhaps most obvious, answer is to check the T&Cs of the applications most commonly used in your organization. However, this can be complicated: few services are forthcoming about exactly what types of data are used to train their models. The details are often buried and sometimes aren’t referred to at all. Overarching privacy policies are often the best guide for decision-making, but they can change frequently, so they need to be reviewed regularly.

Providers that have a trust center, where the most important security and privacy information is easy to access, are generally easier to deal with; however, only larger vendors tend to have them. Even then, some of the examples above involve firms with trust centers, so it’s not clear-cut.

With this information, you can then create more detailed policies that guide users on what information to avoid uploading into each app, based on its content declaration policy.
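One way to make such a policy actionable is to encode each sanctioned app’s training-data stance in a simple machine-readable form that onboarding docs or tooling can reference. The Python sketch below is illustrative only: the app names, data classifications, and flags are hypothetical placeholders to be filled in after reviewing each vendor’s actual terms.

```python
# Illustrative sketch: record what each sanctioned app's terms say about
# model training, and which data classes users may upload to it.
# All entries are hypothetical placeholders, not statements about any vendor.
from dataclasses import dataclass, field

@dataclass
class AppPolicy:
    trains_on_customer_data: bool   # per the vendor's current terms
    opt_out_available: bool         # can an admin disable training use?
    allowed_data: set = field(default_factory=set)

POLICIES = {
    "example-chat-app": AppPolicy(
        trains_on_customer_data=True,
        opt_out_available=True,
        allowed_data={"public"},    # nothing sensitive until opted out
    ),
    "example-esign-app": AppPolicy(
        trains_on_customer_data=False,
        opt_out_available=True,
        allowed_data={"public", "internal", "confidential"},
    ),
}

def may_upload(app: str, data_class: str) -> bool:
    """Return True if org policy permits this data class in this app."""
    policy = POLICIES.get(app)
    return policy is not None and data_class in policy.allowed_data

# Example: confidential material is blocked from the chat app by default.
print(may_upload("example-chat-app", "confidential"))  # False
```

Keeping a mapping like this under version control also gives you a change history to consult each time a vendor revises its terms.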

Focus on the apps you need

The challenge of understanding and tracking these T&Cs is exacerbated by the huge surge in shadow AI. Given the attention to detail required to oversee each app, it helps to have fewer apps to focus on. That means having honest conversations with employees to understand which services they are using and why, promoting transparency and an understanding of the risks posed by rogue AI adoption.

By conducting periodic surveys and interviews, organizations can gain ongoing insight into the unauthorized AI applications being used across departments, enabling leaders to take informed action while fostering a culture of accountability. It also means that, where necessary, employees can be redirected toward a safer alternative that still lets them achieve what they want from their AI use.

Identify Shadow AI

On the technical side, traditional cybersecurity tools like internet gateways and next-generation firewalls can also provide data for manually identifying potential shadow AI instances. For companies using identity providers like Google, tracking “Sign in with Google” activity can reveal unauthorized app usage. Specialized third-party solutions designed specifically to detect both shadow IT and shadow AI can significantly improve an organization’s ability to pinpoint and mitigate these threats.
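To illustrate the identity-provider angle: Google Workspace’s Admin SDK Reports API exposes OAuth token audit events, which record the third-party apps users have authorized with their Google accounts. The sketch below is a minimal example, assuming the google-api-python-client library and a service account delegated the admin.reports.audit.readonly scope; the keyword filter and account names are hypothetical placeholders, not a vetted list of AI services.

```python
# Minimal sketch: surface possible shadow-AI apps from Google Workspace
# OAuth token audit events ("Sign in with Google" authorizations).
# Assumes google-api-python-client and a delegated service account.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]
AI_KEYWORDS = ("ai", "gpt", "copilot", "chat")  # hypothetical heuristic

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")  # placeholder admin to impersonate

reports = build("admin", "reports_v1", credentials=creds)

# The "token" application logs third-party app authorizations.
request = reports.activities().list(userKey="all", applicationName="token")
while request is not None:
    response = request.execute()
    for activity in response.get("items", []):
        user = activity.get("actor", {}).get("email", "unknown")
        for event in activity.get("events", []):
            params = {
                p["name"]: p.get("value") or p.get("multiValue")
                for p in event.get("parameters", [])
            }
            app_name = params.get("app_name") or ""
            if any(k in app_name.lower() for k in AI_KEYWORDS):
                print(f"{user} authorized {app_name!r} "
                      f"(scopes: {params.get('scope')})")
    request = reports.activities().list_next(request, response)
```

The same audit data can feed a SIEM, but even a standalone report like this is often enough to start the employee conversations described above.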

Ultimately, a dual approach combining technical controls with active employee engagement is crucial. Encouraging open dialogue, raising awareness of risks, and incentivizing employees to self-report fosters an environment of mutual trust rather than an accusatory one. At the same time, diverse monitoring tools provide the insights needed for timely intervention against unchecked shadow AI proliferation. Adopting this balanced strategy allows organizations to harness AI’s immense potential while safeguarding data, operations, and reputations from the inherent dangers of unsanctioned AI use.

Written By

Alastair Paterson is the CEO and co-founder of Harmonic Security, enabling companies to adopt Generative AI without risk to their sensitive data. Prior to this he co-founded and was CEO of the cybersecurity company Digital Shadows from its inception in 2011 until its acquisition by ReliaQuest/KKR for $160m in July 2022. Alastair led the company to become an international, industry-recognized leader in threat intelligence and digital risk protection.
