Artificial Intelligence

Shadow AI – Should I be Worried?

Overzealous policies and blanket bans on AI tools risk forcing users underground to use unknown tools with unknown consequences.

AI security

Since OpenAI’s release of ChatGPT in November 2022, the number of products using Generative AI has skyrocketed. Right now there are some 12,000 AI tools available promising to help with over 16,000 job tasks and we’re seeing this number grow at around 1,000 every month.

The growth of these tools is fast outpacing the capability of employers to control them. In the UK alone Deloitte has found that 1m employees (or 8% of the adult workforce) have used GenAI tools for work. And only 23% of this number believe their employer would have approved of this use. This indicates that either their employer doesn’t have a policy about using AI safely or they’re just ignoring it in the hope that the perceived productivity gain is worth the risk. As we have seen with ‘Shadow IT’ – if employees believe there are productivity gains to be made by using their own devices or third party services then they’ll do it….unless firms come up with pragmatic policies and safeguards to work with new areas of technology.

Most GenAI apps are thin veneers of ChatGPT – minus the safeguards

Employers are entitled to be cautious. Most of the 12,000 tools listed above are in the main thin veneers over ChatGPT with clever prompt engineering designed to make them appear differentiated and aligned to help with a specific job function. However, unlike ChatGPT which at least has some safeguards against data protection, these GPTs offer no defence about where company data will ultimately end up – it can still be sent to any number of spurious third party websites with unknown security controls.

We analyzed the most popular GPTs and found that forty percent of the apps can involve uploading content, code, files, or spreadsheets for AI assistance. This raises some data leakage questions if employees are uploading corporate files of those types, such as:

  • How do we know that the data does not contain any sensitive information or PII?
  • Will this cause compliance issues?

As this ecosystem grows, the answers to these questions become more complex. Whilst the launch of the ‘official’ GPTstore might help vet some of these apps we still do not know enough about the review process and the security controls they have in place, if any. The privacy policy states that GPTs do not have access to chat history, but little else. For example, files uploaded into these GPTs could be seen by third parties. It’s unlikely that a carefully curated and largely secure ‘App Store’ such as we have seen for mobile apps will evolve. Instead it is likely that at best the GPT store could become a user’s first port of call but if they can’t find what they need, alternatives are readily available elsewhere.

Digging a little deeper into key concerns

Broadley speaking, security concerns are based around the following:

Privacy and Data Retention Policies – every GenAI app will have different privacy and data retention policies that employees are unlikely to assess before proceeding. Worse still, these policies shift and change over time, making it difficult to understand a firm’s risk exposure at any point. Leaving this up to your employees’ discretion will likely lead to compliance headaches later down the line, so these apps must be considered as part of a robust third party risk program. Some applications explicitly give themselves permission to train future models on the data uploaded to them for example.

Advertisement. Scroll to continue reading.

Prompt Injection – AI tools built on LLMs are inherently prone to prompt-injection attacks which can cause them to behave in ways they were not designed – such as potentially revealing previously uploaded, sensitive information. This is particularly concerning as we give AI more autonomy and agency to take actions in our environments.  An AI email application with inbox access could start sending out confidential emails, or forward password reset emails to provide a route in for an attacker, for example.

Account Takeover – what happens if an attacker gains access to an employee’s account with full access to chat history, file uploads, code reviews, and more? Many provide social logins (login via Google, etc), but the option to sign up with an email and password was an option. Of these, very few apps we analyzed required MFA default. Given how frequently passwords are exposed, this raises the potential for accounts to be taken over.  While obtaining one single prompt may not be that interesting, the aggregation of a lot of prompts from a senior employee could give a more comprehensive view of a company’s plans or IP.

What to do about it?

Many firms have chosen to block AI outright, but this approach is ineffective at best and counterproductive at worst. Overzealous policies and blanket bans on AI tools risk forcing users underground to use unknown tools with unknown consequences. It can also mean that firms miss out on the huge productivity gains that are made possible by using GenAI apps safely. Instead, the emphasis needs to be on educating and guiding users regarding responsible AI use whereby security teams truly understand the needs of end users and what they are looking to achieve. Of course, there will be legitimate reasons for banning specific GenAI tools, but this should only be done after considering their data and security policies and how users plan on interacting with them as well as the expected business benefit. Over time we’ll likely see the GenAI space mature and security offerings will become available to help manage the risk more effectively but right now both vigilance and pragmatism is needed.

Related: SecurityWeek to Host AI Risk Summit June 25-26 at the Ritz-Carlton, Half Moon Bay CA

Related Content

Artificial Intelligence

Israeli AI security firm Apex has received $7 million in seed funding for its detection, investigation, and response platform.

Artificial Intelligence

Japan's Prime Minister unveiled an international framework for regulation and use of generative AI, adding to global efforts on governance for the rapidly advancing...

Artificial Intelligence

AI-Native Trust, Risk, and Security Management (TRiSM) startup DeepKeep raises $10 million in seed funding.

Artificial Intelligence

Microsoft provides an easy and logical first step into GenAI for many organizations, but beware of the pitfalls.

Artificial Intelligence

CEOs of major tech companies are joining a new artificial intelligence safety board to advise the federal government on how to protect the nation’s...

Artificial Intelligence

New CISA guidelines categorize AI risks into three significant types and pushes a four-part mitigation strategy.

Artificial Intelligence

Pope Francis has called for an international treaty to ensure AI is developed and used ethically, devoting his annual peace message this year to...

Artificial Intelligence

While over 400 AI-related bills are being debated this year in statehouses nationwide, most target one industry or just a piece of the technology...

Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.

Exit mobile version