Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

Mitigating AI Threats: Bridging the Gap Between AI and Legacy Security

Adopting a layered defense strategy that includes human-centric tools and updating security components.

The quantum leap in artificial intelligence is transforming sectors at an unparalleled pace, with large language models (LLMs) and agentic systems becoming critical to modern workflows. This rapid deployment has unveiled gaping vulnerabilities, as legacy tools such as firewalls, EDR, and SIEM are struggling to keep pace with AI-specific threats, including adaptive threat patterns, and covert prompt engineering.

Besides technical threats, human-centric threats are at the center of existing cybersecurity concerns, with easily generated hyper-personalized phish baits using generative AI that are difficult to detect. The 2025 Verizon Data Breach Investigations Report (DBIR) categorically advises that 60% of breaches involve human factors, underscoring the important role of Security Awareness Training (SAT) and Human Risk Management (HRM) in mitigating AI-driven threats. With security development falling behind AI integration, organizations need to rethink their strategy and counter rapidly evolving AI threats by implementing layered defenses.

AI vs. Legacy Security: Understanding the Mismatch

AI systems, particularly those with adaptive or agentic capabilities, evolve dynamically, unlike static legacy tools built for deterministic environments. This inconsistency renders systems vulnerable to AI-focused attacks, such as data poisoning, prompt injection, model theft, and agentic subversion—attacks that often evade traditional defenses. Legacy tools struggle to detect these attacks because they don’t followpredictable patterns, requiring more adaptive, AI-specific security solutions.

Human flaws and behavior only worsen these weaknesses; insider attacks, social engineering, and insecure interactions with AI systems leave organizations vulnerable to exploitation. As AI transforms cybersecurity, traditional security solutions must adapt to address the new challenges it presents.

Adopting a Holistic Approach for AI Security

A ground-up security approach for AI ensures that AI systems are designed with security incorporated throughout the machine learning security operations (MLSecOps) lifecycle, from scoping and training to deployment and ongoing monitoring. Confidentiality, Integrity, and Availability – the C.I.A. triad —is a widely accepted framework for understanding and addressing the security challenges in AI systems. ‘Confidentiality’ requires strong safeguards for training data and model parameters to avoid leakage or theft. ‘Integrity’ protects against adversarial attacks that can manipulate the model, providing trustworthy outputs. ‘Availability’ protects against resource-exhaustion attacks that can stall operations. In addition, SAT and HRM should be integrated early, so that policies and education align with the workflow of AI to anticipate vulnerabilities before they materialize.

A Layered Defense: Merging Technology and Human-Centric Tools

Advertisement. Scroll to continue reading.

Combining AI-specific security measures with human awareness ensures resilience against evolving threats through adaptive protections and informed user practices. Here are a few tools that organizations must focus on:

  • Model scanning (proactively checking AI for hidden risks) offers a safety check-up for AI systems. It involves using specialized tools to automatically search for hidden problems within the AI itself, such as biases, illegal or offensive outputs, and revealing sensitive data. Some model scanners can inspect the AI’s core design and code, while others actively try to compromise it by simulating attacks during operation. A best practice is to combine scanning with red teaming —where ethical experts deliberately try to hack or trick the AI to uncover complex vulnerabilities that automated tools might miss.
  • AI-specific monitoring tools analyze input-output streams for anomalies like adversarial prompts or data poisoning attempts, feeding insights into threat intelligence platforms.
  • AI-aware authorization mechanismsprovide safe interactions with vector databases and unstructured data, preventing unauthorized queries and manipulations. By enforcing granular permissions, monitoring access patterns, and applying AI-driven authentication mechanisms, organizations can protect sensitive datasets, avoiding risks such as data leakage, adversarial manipulation, and prompt-based exploits in AI ecosystems.
  • Model stability analysis tracks behavioral anomalies and changes in decision paths in agentic AI systems. Through real-time examination of deviations from intended performance, organizations can engage behavioral anomaly detection to track deviations in AI decision patterns to identify adversarial manipulation or unintended actions. 
  • AI firewalls supporting automated compliance management are capable of flagging and blocking policy-violating inputs and outputs, promoting conformity with security and ethical guidelines. Such systems analyze AI interactions in real-time, blocking unauthorized queries, offensive content generation, and adversarial manipulations, and bolstering automated governance to maintain integrity in AI-driven environments.
  • Human risk management helps prevent AI-related threats via phishing simulations for employees, role-based access, and creating a security-first culture to mitigate insider threats. SAT educates employees on how to detect malicious AI prompts, learn safe data handling, and report anomalies. Companies should establish precise policies regarding AI interactions, where users adhere to clear guidelines.

Regulatory Frameworks for Secure AI Implementation

Deploying effective AI security frameworks becomes crucial to countering upcoming threats. The OWASP Top 10 for LLMs focuses on critical vulnerabilities, such as prompt injections, where security awareness training teaches users how to spot and avoid exploitative prompts.

MITRE ATT&CK addresses social engineering in broader cybersecurity contexts, while MITRE ATLAS specifically maps adversarial techniques targeting AI, such as model evasion or data poisoning.

AI security frameworks like NIST’s AI Risk Management Framework incorporate human risk management to ensure that AI security practices align with organizational policies. Also modeled on the fundamental C.I.A. triad, the “manage” phase specifically includes employee training to uphold AI security principles across teams.

For effective use of these frameworks, cross-departmental coordination is required. There needs to be collaboration among security staff, data scientists, and human resource practitioners to formulate plans that ensure AI systems are protected while encouraging their responsible and ethical use.

In short, AI-centric tools enable real-time monitoring, dynamic access controls, and automated policy enforcement, facilitating effective AI security. Strategic investment in SAT programs (e.g., phishing simulations, AI prompt safety training) and HRM frameworks fosters a security-aware culture for safe AI adoption. As AI systems become increasingly complex, companies must continually refresh their security components to ensure that infrastructure security and employee training remain top priorities.

Learn More at The AI Risk Summit | Ritz-Carlton, Half Moon Bay

Written By

Stu Sjouwerman (pronounced “shower-man”) is Founder and Executive Chairman of KnowBe4. A serial entrepreneur and data security expert with 30 years in the IT industry, he was co-founder of Sunbelt Software, the anti-malware software company that was acquired in 2010. He is the author of four books, including “Cyberheist: The Biggest Financial Threat Facing American Businesses.”

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

Learn how the LOtL threat landscape has evolved, why traditional endpoint hardening methods fall short, and how adaptive, user-aware approaches can reduce risk.

Watch Now

Join the summit to explore critical threats to public cloud infrastructure, APIs, and identity systems through discussions, case studies, and insights into emerging technologies like AI and LLMs.

Register

People on the Move

Coro, a provider of cybersecurity solutions for SMBs, has appointed Joe Sykora as CEO.

SonicWall has hired Rajnish Mishra as Senior Vice President and Chief Development Officer.

Kenna Security co-founder Ed Bellis has joined Empirical Security as Chief Executive Officer.

More People On The Move

Expert Insights

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest cybersecurity news, threats, and expert insights. Unsubscribe at any time.