Connect with us

Hi, what are you looking for?


Artificial Intelligence

Microsoft Releases Red Teaming Tool for Generative AI

Microsoft releases PyRIT red teaming tool to help identify risks in generative AI through automation.

Microsoft on Thursday announced the release of PyRIT, an open access red teaming tool designed to help security professionals and ML engineers identify risks in generative AI.

PyRIT, Microsoft says, increases audit efficiency by automating tasks and flagging areas that require further investigation, essentially augmenting manual red teaming.

Red teaming generative AI, the tech giant notes, is different from probing classical AI systems or traditional systems, mainly because it requires identifying both security risks and responsible AI risks, generative AI is more probabilistic, and due to the wide variations in generative AI system architectures.

Generative AI could produce ungrounded or inaccurate content and its output is influenced even by small input variations, and red teaming these systems needs to consider these risks as well.

Furthermore, generative AI systems may vary from stand-alone applications to integrations, and their output may vary greatly as well, Microsoft notes.

PyRIT (Python Risk Identification Toolkit for generative AI), which started in 2022 as a set of scripts for red teaming generative AI, has already proven its efficiency in red teaming various systems, including Copilot.

“PyRIT is not a replacement for manual red teaming of generative AI systems. Instead, it augments an AI red teamer’s existing domain expertise and automates the tedious tasks for them. PyRIT shines light on the hot spots of where the risk could be, which the security professional can incisively explore,” Microsoft explains.

The tool provides the user with control over the strategy and execution of the AI red team operation, can generate additional harmful prompts based on the set it was fed with, and changes tactics based on the responses received from the generative AI system.

Advertisement. Scroll to continue reading.

PyRIT includes support for various generative AI target formulations, can be fed a dynamic prompt template or a static set of malicious prompts, provides two options for scoring the target system’s outputs, supports two styles of attack strategy, and can save intermediate input and output interactions for follow-up analysis.

“PyRIT was created in response to our belief that the sharing of AI red teaming resources across the industry raises all boats. We encourage our peers across the industry to spend time with the toolkit and see how it can be adopted for red teaming your own generative AI application,” Microsoft notes.

PyRIT is available on GitHub.

Related: Google Open Sources AI-Aided Fuzzing Framework

Related: Critical Vulnerabilities Found in Open Source AI/ML Platforms

Related: AI’s Future Could Be Open-Source or Closed. Tech Giants Are Divided as They Lobby Regulators

Written By

Ionut Arghire is an international correspondent for SecurityWeek.


Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

Join the session as we discuss the challenges and best practices for cybersecurity leaders managing cloud identities.


SecurityWeek’s Ransomware Resilience and Recovery Summit helps businesses to plan, prepare, and recover from a ransomware incident.


People on the Move

Kim Larsen is new Chief Information Security Officer at Keepit

Professional services company Slalom has appointed Christopher Burger as its first CISO.

Allied Universal announced that Deanna Steele has joined the company as CIO for North America.

More People On The Move

Expert Insights

Related Content

Artificial Intelligence

The CRYSTALS-Kyber public-key encryption and key encapsulation mechanism recommended by NIST for post-quantum cryptography has been broken using AI combined with side channel attacks.

Artificial Intelligence

The release of OpenAI’s ChatGPT in late 2022 has demonstrated the potential of AI for both good and bad.

Artificial Intelligence

ChatGPT is increasingly integrated into cybersecurity products and services as the industry is testing its capabilities and limitations.

Artificial Intelligence

The degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool...

Artificial Intelligence

Two of humanity’s greatest drivers, greed and curiosity, will push AI development forward. Our only hope is that we can control it.

Artificial Intelligence

Microsoft and Mitre release Arsenal plugin to help cybersecurity professionals emulate attacks on machine learning (ML) systems.

Application Security

Thinking through the good, the bad, and the ugly now is a process that affords us “the negative focus to survive, but a positive...

Artificial Intelligence

Exposed data includes backup of employees workstations, secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages.