Artificial Intelligence

Microsoft Shares Guidance and Resources for AI Red Teams

Microsoft has shared guidance and resources from its AI Red Team program to help organizations and individuals with AI security.

Microsoft has shared guidance and resources from its AI Red Team program to help organizations and individuals with AI security.

Microsoft on Monday published a summary of its artificial intelligence (AI) red teaming efforts, and shared guidance and resources that can help make AI safer and more secure.

The tech giant said its AI red teaming journey started more than two decades ago, but it launched a dedicated AI Red Team in 2018. It has since been working on developing AI security resources that can be used by the whole industry.

The company has now shared five key lessons learned from its red teaming efforts. The first is that AI red teaming is now an umbrella term for probing security, as well as responsible AI (RAI) outcomes. In the case of security, it can include finding vulnerabilities and securing the underlying model, while in the case of RAI outcomes the Red Team’s focus is on identifying harmful content and fairness issues, such as stereotyping. 

Microsoft also pointed out that AI red teaming focuses not only on potential threats from malicious actors, but also on how AI can generate harmful and other problematic content when users interact with it. 

AI systems are constantly evolving and changing, at a faster pace compared to traditional software systems, which is why it’s important to conduct multiple rounds of red teaming and automate measurements and monitoring of the system.

This is also needed because AI systems are probabilistic — the same input can generate different outputs. Conducting multiple red teaming rounds in the same operation can reveal issues that a single attempt may not identify.

Lastly, Microsoft highlighted that — just like in the case of traditional security — the mitigation of AI failures requires a defense-in-depth approach that can include the use of classifiers for flagging harmful content, leveraging metaprompt to guide behavior, and limiting conversational drift. 

Microsoft has shared several resources that could be useful to various groups of individuals interested in AI security. These resources include a guide to help Azure OpenAI model application developers create an AI red team, a bug bar for triaging attacks on machine learning (ML) systems for incident responders, and an AI risk assessment checklist for ML engineers. 

Advertisement. Scroll to continue reading.

The resources also include threat modeling guidance for developers, ML failure mode documentation for policymakers and engineers, and enterprise security and governance guidance for Azure ML customers.

Microsoft shared guidance and resources just a few weeks after Google introduced its AI Red Team, which is tasked with carrying out complex technical attacks on artificial intelligence systems.

Related: Now’s the Time for a Pragmatic Approach to New Technology Adoption

Related: ChatGPT Hallucinations Can Be Exploited to Distribute Malicious Code Packages

Related: AntChain, Intel Create New Privacy-Preserving Computing Platform for AI Training

Related Content

Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.

Exit mobile version