Microsoft Shares Guidance and Resources for AI Red Teams

Microsoft has shared guidance and resources from its AI Red Team program to help organizations and individuals with AI security.

Microsoft on Monday published a summary of its artificial intelligence (AI) red teaming efforts, and shared guidance and resources that can help make AI safer and more secure.

The tech giant said its AI red teaming journey started more than two decades ago, but it launched a dedicated AI Red Team in 2018. It has since been working on developing AI security resources that can be used by the whole industry.

The company has now shared five key lessons learned from its red teaming efforts. The first is that AI red teaming is now an umbrella term for probing security, as well as responsible AI (RAI) outcomes. In the case of security, it can include finding vulnerabilities and securing the underlying model, while in the case of RAI outcomes the Red Team’s focus is on identifying harmful content and fairness issues, such as stereotyping. 

Microsoft also pointed out that AI red teaming focuses not only on potential threats from malicious actors, but also on how AI can generate harmful and other problematic content when users interact with it. 

AI systems evolve and change constantly, and at a faster pace than traditional software systems, which is why it is important to conduct multiple rounds of red teaming and to automate the measurement and monitoring of the system.

This is also needed because AI systems are probabilistic — the same input can generate different outputs. Conducting multiple red teaming rounds in the same operation can reveal issues that a single attempt may not identify.

Lastly, Microsoft highlighted that — just like in the case of traditional security — the mitigation of AI failures requires a defense-in-depth approach that can include the use of classifiers for flagging harmful content, leveraging metaprompt to guide behavior, and limiting conversational drift. 
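To make the last three lessons concrete (probabilistic outputs call for repeated rounds with automated measurement, and mitigation works in layers), here is an added Python example. It is not Microsoft's code and not one of the resources the article describes. The model is a seeded stub with a made-up outcome table, the "[UNSAFE]" tag stands in for content an output classifier would flag, and the metaprompt text, the halving effect it has on the stub, and the four-turn drift limit are all assumed values.

```python
import random
from collections import Counter

# Constructed outcome table for a stand-in model (hypothetical, not a real model).
# Each prompt maps to (response, probability) pairs. "[UNSAFE]" is a placeholder
# tag for content an output classifier should flag; no real harmful text is modelled.
OUTCOME_TABLE = {
    "benign question": [("helpful answer", 1.0)],
    "probe prompt": [("helpful answer", 0.7), ("[UNSAFE] answer", 0.3)],
}

METAPROMPT = "You are a careful assistant. Decline unsafe requests."  # assumed text
METAPROMPT_FACTOR = 0.5   # assumed: a metaprompt halves the unsafe probability in the stub
MAX_HISTORY_TURNS = 4     # assumed drift limit on retained conversation turns


class StubModel:
    """Seeded stand-in for a sampled model: the same prompt can yield different outputs."""

    def __init__(self, seed: int) -> None:
        self.rng = random.Random(seed)

    def generate(self, prompt: str, system: str = "") -> str:
        outcomes = OUTCOME_TABLE.get(prompt, [("helpful answer", 1.0)])
        draw = self.rng.random()
        cumulative = 0.0
        for text, prob in outcomes:
            # Constructed behaviour: a metaprompt lowers, but does not remove,
            # the chance of the tagged outcome.
            if system and "[UNSAFE]" in text:
                prob *= METAPROMPT_FACTOR
            cumulative += prob
            if draw < cumulative:
                return text
        return outcomes[0][0]  # leftover probability mass falls to the first (safe) answer


def output_classifier(text: str) -> bool:
    """Placeholder classifier: flag responses carrying the constructed tag."""
    return "[UNSAFE]" in text


def red_team_campaign(prompts, rounds: int, seed: int) -> dict:
    """Sample each prompt `rounds` times and count flagged responses per prompt.

    One round is one sample of the model; repeating the prompt exposes
    outcomes that a single attempt may miss.
    """
    model = StubModel(seed)
    flagged = Counter()
    for prompt in prompts:
        for _ in range(rounds):
            if output_classifier(model.generate(prompt)):
                flagged[prompt] += 1
    return {prompt: flagged[prompt] for prompt in prompts}


def rounds_until_first_flag(prompt: str, seed: int, max_rounds: int) -> int:
    """Attempts needed before the first flagged output (0 if none within max_rounds)."""
    model = StubModel(seed)
    for attempt in range(1, max_rounds + 1):
        if output_classifier(model.generate(prompt)):
            return attempt
    return 0


def guarded_generate(model: StubModel, history: list, prompt: str):
    """Defense-in-depth wrapper: metaprompt, drift limit, then output classifier."""
    history = history[-MAX_HISTORY_TURNS:]            # limit conversational drift
    raw = model.generate(prompt, system=METAPROMPT)   # metaprompt guides behaviour
    if output_classifier(raw):                        # classifier is the last layer
        raw = "I can't help with that request."
    history = (history + [(prompt, raw)])[-MAX_HISTORY_TURNS:]
    return raw, history


def guarded_campaign(prompts, rounds: int, seed: int) -> dict:
    """Re-run the same campaign through the layered pipeline and report what leaks."""
    model = StubModel(seed)
    history, leaked, longest = [], 0, 0
    for prompt in prompts:
        for _ in range(rounds):
            response, history = guarded_generate(model, history, prompt)
            longest = max(longest, len(history))
            if output_classifier(response):
                leaked += 1
    return {"leaked": leaked, "longest_history": longest}


if __name__ == "__main__":
    prompts = ["benign question", "probe prompt"]
    print("unguarded, 40 rounds:", red_team_campaign(prompts, 40, seed=7))
    print("attempts to first flag:", rounds_until_first_flag("probe prompt", 7, 40))
    print("guarded, 40 rounds:", guarded_campaign(prompts, 40, seed=7))
```

The unguarded campaign shows the measurement side: the flagged count for the probe prompt lands somewhere between zero and the number of rounds, and the first flag often arrives only after several attempts, which is why one pass is not a measurement. The guarded campaign shows the mitigation side: the metaprompt reduces how often the tagged outcome is sampled, the history cap bounds drift, and the classifier catches whatever still gets through, so the number that leaks to the user is zero even though the underlying model is unchanged.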


Microsoft has shared several resources that could be useful to various groups of individuals interested in AI security. These resources include a guide to help Azure OpenAI model application developers create an AI red team, a bug bar for triaging attacks on machine learning (ML) systems for incident responders, and an AI risk assessment checklist for ML engineers. 

The resources also include threat modeling guidance for developers, ML failure mode documentation for policymakers and engineers, and enterprise security and governance guidance for Azure ML customers.

Microsoft shared guidance and resources just a few weeks after Google introduced its AI Red Team, which is tasked with carrying out complex technical attacks on artificial intelligence systems.

Related: Now’s the Time for a Pragmatic Approach to New Technology Adoption

Related: ChatGPT Hallucinations Can Be Exploited to Distribute Malicious Code Packages

Related: AntChain, Intel Create New Privacy-Preserving Computing Platform for AI Training

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.
