SecurityWeek


Applying the OODA Loop to Solve the Shadow AI Problem

By acting promptly, organizations can prevent risky shadow AI use and channel it constructively where possible.

Shadow AI and GenAI

With AI introducing efficiency, automation, and reduced operational costs, organizations are embracing AI tools and technology with open arms. At the user level, more employees turn to personal AI tools to save time, work smarter, and increase productivity. According to an October 2024 study, 75% of knowledge workers currently use AI, and 46% say they would not give it up even if their organization did not approve of its use. As a result, organizations confront the challenge of shadow AI: employees using unauthorized AI tools without company consent, creating risks around data exposure, compliance, and operations.

Applying the OODA Loop to the Shadow AI Dilemma

The OODA loop is a decision-making framework developed by the U.S. military that stands for Observe, Orient, Decide, and Act. The four-step model gathers available data and puts it in context to enable rapid decisions about the course of action that achieves the best outcome. It is not a procedure run once; it is a continuous loop in which decisions and actions are revised as new feedback and data arrive.

Here’s how the OODA loop can be applied to prevent and mitigate shadow AI:

Observe: Detecting Shadow AI

Organizations should have complete visibility of their AI model inventory. Inconsistent network visibility arising from siloed networks, a lack of communication between security and IT teams, and point solutions encourages shadow AI. Complete network visibility must therefore become the priority for organizations to clearly see the extent and nature of shadow AI in their systems, thus promoting compliance, reducing risk, and promoting responsible AI use without hindering innovation.

Routine audits can also uncover trends in employee use of AI, pointing to gaps in current systems. Repeated unauthorized use of a specific tool may signal a gap in an organization's sanctioned offerings that needs addressing. AI-driven behavioral analytics can also hint at shadow AI use, for example through massive unexplained data uploads to services like OpenAI, or anomalous system usage and data access patterns.

Through effective monitoring, organizations can clearly see the scope and nature of shadow AI within their environment.
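The kind of monitoring described above can be sketched in a few lines. The snippet below is a minimal illustration, assuming hypothetical proxy-log records (`user`, `dest_domain`, `bytes_out`) and a hypothetical list of GenAI service domains; a real deployment would source both from the organization's proxy or CASB and from maintained threat-intelligence feeds.

```python
from collections import defaultdict

# Hypothetical inventory of GenAI service domains; in practice this list
# would be maintained from CASB and threat-intelligence feeds.
GENAI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def detect_shadow_ai(proxy_logs, upload_threshold_bytes=50_000_000):
    """Flag users whose total upload volume to GenAI services exceeds
    a threshold. proxy_logs: iterable of dicts with 'user',
    'dest_domain', and 'bytes_out' keys (assumed log schema)."""
    uploads = defaultdict(int)
    for rec in proxy_logs:
        if rec["dest_domain"] in GENAI_DOMAINS:
            uploads[rec["user"]] += rec["bytes_out"]
    return {u: b for u, b in uploads.items() if b > upload_threshold_bytes}

logs = [
    {"user": "alice", "dest_domain": "api.openai.com", "bytes_out": 60_000_000},
    {"user": "bob", "dest_domain": "example.com", "bytes_out": 99_000_000},
]
print(detect_shadow_ai(logs))  # only alice exceeds the GenAI upload threshold
```

The threshold here is arbitrary; tuning it against a baseline of sanctioned AI traffic is what separates useful signals from alert noise.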


Orient: Understanding Context and Impact

With “zero-knowledge threat actors” using AI to conduct attacks on businesses, the barrier to entry for AI-driven cybercrime has been significantly lowered. Combine this with shadow AI that has less oversight and vetted security measures, and it’s a security free fall for organizations. Unsanctioned AI tools make organizations vulnerable to attacks such as data breaches, injecting buggy code into business workflows, or compliance and NDA breaches by inadvertently exposing sensitive information to third-party AI platforms.

Organizations need to identify the effect of shadow AI once it has been discovered. This includes identifying the risks and advantages of such shadow software. One way to achieve this is by assessing whether the identified AI tools adhere to the organization’s security policies, such as data anonymization and role-based user access.

Not all shadow AI tools are equally risky. Organizations must quantify their risk tolerance to operational, ethical and legal, and reputational risks. Transparency regarding the organization’s risk appetite can help them rank AI tools by risk and business impact, determining where strict controls are needed to contain exposure and where greater flexibility can be allowed to enable innovation.
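The ranking step above can be expressed as a simple weighted score. This is an illustrative sketch, not an established methodology: the category weights and the 1-5 risk ratings are hypothetical values that each organization would calibrate to its own risk appetite.

```python
# Hypothetical weights reflecting one organization's risk appetite across
# the operational, legal/ethical, and reputational categories.
WEIGHTS = {"operational": 0.3, "legal": 0.5, "reputational": 0.2}

def risk_score(tool):
    """tool: dict with a 'risks' mapping of category -> rating (1-5)."""
    return sum(WEIGHTS[c] * tool["risks"][c] for c in WEIGHTS)

tools = [
    {"name": "UnvettedChatbot", "risks": {"operational": 4, "legal": 5, "reputational": 4}},
    {"name": "GrammarHelper",   "risks": {"operational": 2, "legal": 1, "reputational": 1}},
]

# Rank discovered tools so the riskiest get strict controls first,
# while low scorers become candidates for flexible, sanctioned use.
for tool in sorted(tools, key=risk_score, reverse=True):
    print(tool["name"], round(risk_score(tool), 2))
```

Making the weights explicit is the point: it turns an implicit risk appetite into something the security team and the business can argue about and revise.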

Orientation enables organizations to prepare a response by understanding the threats and opportunities shadow AI presents.

Decide: Defining Policies

Organizations must set clearly defined yet flexible policies regarding the acceptable use of AI to enable employees to use AI responsibly. Such policies need to allow granular control from binary approval (approve/not approve AI tools) to more sophisticated levels like providing access based on users’ role and responsibility, limiting or enabling certain functionalities within an AI tool, or specifying data-level approvals where sensitive data can be processed only in approved environments. The policies should adapt to unfolding opportunities and threats and align with the organization’s needs and security priorities.
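The graduated controls described above, from binary approval through role-based and data-level restrictions, can be modeled as a small policy table. This is a minimal sketch under assumed names (the `POLICY` table, tool and role names are all hypothetical); real policies would live in a governance or SSE platform, not in application code.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    ALLOW_NO_SENSITIVE = "allow_no_sensitive_data"
    DENY = "deny"

# Hypothetical policy table: tool -> permitted roles and whether the tool
# is approved for processing sensitive data.
POLICY = {
    "ApprovedAssistant": {"roles": {"engineer", "analyst"}, "sensitive_data": False},
}

def evaluate(tool, role, wants_sensitive_data):
    """Binary approval first, then role-based access, then data-level
    restriction, mirroring increasingly granular policy controls."""
    rule = POLICY.get(tool)
    if rule is None or role not in rule["roles"]:
        return Decision.DENY          # unapproved tool or unauthorized role
    if wants_sensitive_data and not rule["sensitive_data"]:
        return Decision.ALLOW_NO_SENSITIVE  # tool allowed, sensitive data not
    return Decision.ALLOW
```

The middle outcome is the interesting one: rather than a flat deny, it lets employees keep a productive tool while keeping regulated data out of it.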

Familiarity with the organization's parameters for analyzing and approving new AI software and projects will keep employees on the right path, using authorized AI tools or finding approved substitutes that fulfill the same function. Organizations can encourage employees to report unauthorized tools by fostering a culture of openness, and can incentivize ideas for improving existing systems, reducing the appeal of shadow AI.

Act: Enforcing Policies and Monitoring

The final step involves applying the defined policies, monitoring them, and refining them repeatedly based on outcomes and feedback. Effective enforcement must be uniform and centralized, ensuring that all users, networks, and devices adhere to AI governance principles without gaps.

Organizations must evaluate shadow AI tools that offer substantial value and formally incorporate them so they are used in secure, compliant environments. Access controls need to be tightened to prevent unapproved installations; zero trust and privilege management policies can assist here. AI-driven monitoring systems should be deployed to provide continuous oversight, and the real-time feedback loops they create can help organizations fine-tune their response mechanisms.
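The feedback loop mentioned above can be as simple as letting enforcement outcomes adjust detection sensitivity. The sketch below is purely illustrative, with made-up constants: when violations keep exceeding a target, the alerting threshold tightens; when the environment is quiet, it relaxes slowly to limit alert fatigue.

```python
def refine_threshold(threshold, violations_last_period, target=5):
    """Tighten the upload-alerting threshold when violations exceed the
    target for a period, relax it slightly otherwise. The 0.8 and 1.05
    factors are arbitrary illustrative values."""
    if violations_last_period > target:
        return threshold * 0.8    # tighten: alert on smaller uploads
    return threshold * 1.05       # relax gradually to reduce alert fatigue

t = 50_000_000  # starting byte threshold, matching the detection sketch
for violations in [12, 9, 3]:  # simulated feedback from three periods
    t = refine_threshold(t, violations)
print(round(t))
```

This is the OODA loop in miniature: the Act step feeds measurements back into Observe, so the controls adapt instead of ossifying.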

By acting promptly, organizations can prevent risky shadow AI use and channel it constructively where possible. Centralized governance, reinforced by automated monitoring and adaptive security policies, allows organizations to reduce exposure while maximizing AI's utility. Shadow AI is most often a byproduct of employees trying to solve problems quickly and boost productivity, yet it has emerged as a significant threat to data privacy and compliance. Through the OODA loop's systematic observation, contextual understanding, strategy formulation, and prompt action, organizations can respond effectively to shadow AI challenges, reducing risk while cultivating a spirit of innovation and accountability.

Written By

Etay Maor is the chief security strategist at Cato Networks, a founding member of Cato CTRL, and an industry-recognized cybersecurity researcher. Prior to joining Cato in 2021, Etay was the chief security officer for IntSights (acquired by Rapid7), where he led strategic cybersecurity research and security services. Etay has also held senior security positions at Trusteer (acquired by IBM) and RSA Security’s Cyber Threats Research Labs. Etay is an adjunct professor at Boston College and is part of the Call for Paper (CFP) committees for the RSA Conference and Qubits Conference. Etay holds a Master’s degree in Counterterrorism and Cyber-Terrorism and a Bachelor's degree in Computer Science from IDC Herzliya.
