Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

Shadow AI – Should I be Worried?

Overzealous policies and blanket bans on AI tools risk forcing users underground to use unknown tools with unknown consequences.

AI security

Since OpenAI’s release of ChatGPT in November 2022, the number of products using Generative AI has skyrocketed. Right now there are some 12,000 AI tools available promising to help with over 16,000 job tasks and we’re seeing this number grow at around 1,000 every month.

The growth of these tools is fast outpacing the capability of employers to control them. In the UK alone Deloitte has found that 1m employees (or 8% of the adult workforce) have used GenAI tools for work. And only 23% of this number believe their employer would have approved of this use. This indicates that either their employer doesn’t have a policy about using AI safely or they’re just ignoring it in the hope that the perceived productivity gain is worth the risk. As we have seen with ‘Shadow IT’ – if employees believe there are productivity gains to be made by using their own devices or third party services then they’ll do it….unless firms come up with pragmatic policies and safeguards to work with new areas of technology.

Most GenAI apps are thin veneers of ChatGPT – minus the safeguards

Employers are entitled to be cautious. Most of the 12,000 tools listed above are in the main thin veneers over ChatGPT with clever prompt engineering designed to make them appear differentiated and aligned to help with a specific job function. However, unlike ChatGPT which at least has some safeguards against data protection, these GPTs offer no defence about where company data will ultimately end up – it can still be sent to any number of spurious third party websites with unknown security controls.

We analyzed the most popular GPTs and found that forty percent of the apps can involve uploading content, code, files, or spreadsheets for AI assistance. This raises some data leakage questions if employees are uploading corporate files of those types, such as:

  • How do we know that the data does not contain any sensitive information or PII?
  • Will this cause compliance issues?

As this ecosystem grows, the answers to these questions become more complex. Whilst the launch of the ‘official’ GPTstore might help vet some of these apps we still do not know enough about the review process and the security controls they have in place, if any. The privacy policy states that GPTs do not have access to chat history, but little else. For example, files uploaded into these GPTs could be seen by third parties. It’s unlikely that a carefully curated and largely secure ‘App Store’ such as we have seen for mobile apps will evolve. Instead it is likely that at best the GPT store could become a user’s first port of call but if they can’t find what they need, alternatives are readily available elsewhere.

Digging a little deeper into key concerns

Broadley speaking, security concerns are based around the following:

Privacy and Data Retention Policies – every GenAI app will have different privacy and data retention policies that employees are unlikely to assess before proceeding. Worse still, these policies shift and change over time, making it difficult to understand a firm’s risk exposure at any point. Leaving this up to your employees’ discretion will likely lead to compliance headaches later down the line, so these apps must be considered as part of a robust third party risk program. Some applications explicitly give themselves permission to train future models on the data uploaded to them for example.

Advertisement. Scroll to continue reading.

Prompt Injection – AI tools built on LLMs are inherently prone to prompt-injection attacks which can cause them to behave in ways they were not designed – such as potentially revealing previously uploaded, sensitive information. This is particularly concerning as we give AI more autonomy and agency to take actions in our environments.  An AI email application with inbox access could start sending out confidential emails, or forward password reset emails to provide a route in for an attacker, for example.

Account Takeover – what happens if an attacker gains access to an employee’s account with full access to chat history, file uploads, code reviews, and more? Many provide social logins (login via Google, etc), but the option to sign up with an email and password was an option. Of these, very few apps we analyzed required MFA default. Given how frequently passwords are exposed, this raises the potential for accounts to be taken over.  While obtaining one single prompt may not be that interesting, the aggregation of a lot of prompts from a senior employee could give a more comprehensive view of a company’s plans or IP.

What to do about it?

Many firms have chosen to block AI outright, but this approach is ineffective at best and counterproductive at worst. Overzealous policies and blanket bans on AI tools risk forcing users underground to use unknown tools with unknown consequences. It can also mean that firms miss out on the huge productivity gains that are made possible by using GenAI apps safely. Instead, the emphasis needs to be on educating and guiding users regarding responsible AI use whereby security teams truly understand the needs of end users and what they are looking to achieve. Of course, there will be legitimate reasons for banning specific GenAI tools, but this should only be done after considering their data and security policies and how users plan on interacting with them as well as the expected business benefit. Over time we’ll likely see the GenAI space mature and security offerings will become available to help manage the risk more effectively but right now both vigilance and pragmatism is needed.

Related: SecurityWeek to Host AI Risk Summit June 25-26 at the Ritz-Carlton, Half Moon Bay CA

Written By

Alastair Paterson is the CEO and co-founder of Harmonic Security, enabling companies to adopt Generative AI without risk to their sensitive data. Prior to this he co-founded and was CEO of the cyber security company Digital Shadows from its inception in 2011 until its acquisition by ReliaQuest/KKR for $160m in July 2022. Alastair led the company to become an international, industry-recognised leader in threat intelligence and digital risk protection.

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

Join the session as we discuss the challenges and best practices for cybersecurity leaders managing cloud identities.

Register

SecurityWeek’s Ransomware Resilience and Recovery Summit helps businesses to plan, prepare, and recover from a ransomware incident.

Register

People on the Move

Mike Dube has joined cloud security company Aqua Security as CRO.

Cody Barrow has been appointed as CEO of threat intelligence company EclecticIQ.

Shay Mowlem has been named CMO of runtime and application security company Contrast Security.

More People On The Move

Expert Insights

Related Content

Artificial Intelligence

The CRYSTALS-Kyber public-key encryption and key encapsulation mechanism recommended by NIST for post-quantum cryptography has been broken using AI combined with side channel attacks.

Artificial Intelligence

The release of OpenAI’s ChatGPT in late 2022 has demonstrated the potential of AI for both good and bad.

Artificial Intelligence

ChatGPT is increasingly integrated into cybersecurity products and services as the industry is testing its capabilities and limitations.

Artificial Intelligence

The degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool...

Artificial Intelligence

Two of humanity’s greatest drivers, greed and curiosity, will push AI development forward. Our only hope is that we can control it.

Artificial Intelligence

Microsoft and Mitre release Arsenal plugin to help cybersecurity professionals emulate attacks on machine learning (ML) systems.

Application Security

Thinking through the good, the bad, and the ugly now is a process that affords us “the negative focus to survive, but a positive...

Artificial Intelligence

Exposed data includes backup of employees workstations, secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages.