Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

Webinar Today: Breaking AI – Inside the Art of LLM Pen Testing

Join the webinar as we reveal a new model for AI pen testing – one grounded in social engineering, behavioral manipulation, and even therapeutic dialogue.

Development software vulnerability

Live Webinar | Thursday, September 11 at 2PM ET – Register

Large Language Models (LLMs) are reshaping enterprise technology and redefining what it means to secure software. But here’s the problem: most penetration testers are using the wrong tools for the job. Traditional techniques focus on exploits and payloads, assuming the AI is just another application. But it’s not.

This session makes the case that effective LLM security testing is more about persuasion than payloads. Drawing on hands-on research and real-world client engagements, we reveal a new model for AI pen testing – one grounded in social engineering, behavioral manipulation, and even therapeutic dialogue.

You’ll explore Adversarial Prompt Exploitation (APE), a methodology that targets trust boundaries and decision pathways using psychological levers like emotional preloading, narrative control, and language nesting. This is not Prompt Injection 101 — it’s adversarial cognition at scale – using real-world case studies to demonstrate success.

This virtual session tracks key operational challenges: the limitations of static payloads and automation, the complexity of reproducibility, and how to communicate findings to executive and technical leadership.

Join Bishop Fox and SecurityWeek for the live webinar to learn:

Advertisement. Scroll to continue reading.
  • Why conventional penetration testing methodologies fail on LLMs
  • How attackers exploit psychological and linguistic patterns, not code
  • Practical adversarial techniques: emotional preloading, narrative leading, and more
  • Frameworks for simulating real-world threats to LLM-based systems
  • How to think like a social engineer to secure AI

Who Should Watch:

This session is perfect for anyone securing, testing, or building AI systems, especially those using LLMs. Pen testers and red teamers will explore a new adversarial framework focused on behavioral manipulation over payloads. AI/ML security pros and researchers will gain insight into psychological attack techniques like emotional preloading and narrative control. Developers will see real-world examples of how attackers engage with models, and CISOs/tech leads will benefit from guidance on operational challenges like reproducibility and communicating findings.

Written By

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

With "Shadow AI" usage becoming prevalent in organizations, learn how to balance the need for rapid experimentation with the rigorous controls required for enterprise-grade deployment.

Register

Delve into big-picture strategies to reduce attack surfaces, improve patch management, conduct post-incident forensics, and tools and tricks needed in a modern organization.

Register

People on the Move

MongoDB has appointed Doug Bowers as Chief Information Security Officer.

Ben Wilkens has been promoted to Director of Cybersecurity at NMFTA.

Cato Networks has appointed Meital Koren as Chief Legal Officer.

More People On The Move

Expert Insights

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest cybersecurity news, threats, and expert insights. Unsubscribe at any time.