Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

Microsoft Shares Guidance and Resources for AI Red Teams

Microsoft has shared guidance and resources from its AI Red Team program to help organizations and individuals with AI security.

Microsoft on Monday published a summary of its artificial intelligence (AI) red teaming efforts, and shared guidance and resources that can help make AI safer and more secure.

The tech giant said its AI red teaming journey started more than two decades ago, but it launched a dedicated AI Red Team in 2018. It has since been working on developing AI security resources that can be used by the whole industry.

The company has now shared five key lessons learned from its red teaming efforts. The first is that AI red teaming is now an umbrella term for probing security, as well as responsible AI (RAI) outcomes. In the case of security, it can include finding vulnerabilities and securing the underlying model, while in the case of RAI outcomes the Red Team’s focus is on identifying harmful content and fairness issues, such as stereotyping. 

Microsoft also pointed out that AI red teaming focuses not only on potential threats from malicious actors, but also on how AI can generate harmful and other problematic content when users interact with it. 

AI systems are constantly evolving and changing, at a faster pace compared to traditional software systems, which is why it’s important to conduct multiple rounds of red teaming and automate measurements and monitoring of the system.

This is also needed because AI systems are probabilistic — the same input can generate different outputs. Conducting multiple red teaming rounds in the same operation can reveal issues that a single attempt may not identify.

Lastly, Microsoft highlighted that — just like in the case of traditional security — the mitigation of AI failures requires a defense-in-depth approach that can include the use of classifiers for flagging harmful content, leveraging metaprompt to guide behavior, and limiting conversational drift. 

Microsoft has shared several resources that could be useful to various groups of individuals interested in AI security. These resources include a guide to help Azure OpenAI model application developers create an AI red team, a bug bar for triaging attacks on machine learning (ML) systems for incident responders, and an AI risk assessment checklist for ML engineers. 

Advertisement. Scroll to continue reading.

The resources also include threat modeling guidance for developers, ML failure mode documentation for policymakers and engineers, and enterprise security and governance guidance for Azure ML customers.

Microsoft shared guidance and resources just a few weeks after Google introduced its AI Red Team, which is tasked with carrying out complex technical attacks on artificial intelligence systems.

Related: Now’s the Time for a Pragmatic Approach to New Technology Adoption

Related: ChatGPT Hallucinations Can Be Exploited to Distribute Malicious Code Packages

Related: AntChain, Intel Create New Privacy-Preserving Computing Platform for AI Training

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.

Click to comment

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

Join the session as we discuss the challenges and best practices for cybersecurity leaders managing cloud identities.

Register

SecurityWeek’s Ransomware Resilience and Recovery Summit helps businesses to plan, prepare, and recover from a ransomware incident.

Register

People on the Move

Passwordless authentication firm Hawcx has appointed Lakshmi Sharma as Chief Product Officer.

Matt Hartley has been named Chief Revenue Officer at autonomous security solutions provider Horizon3.ai.

Trustwave has announced the appointment of Keith Ibarguen as Senior Vice President of Engineering.

More People On The Move

Expert Insights

Related Content

Artificial Intelligence

The CRYSTALS-Kyber public-key encryption and key encapsulation mechanism recommended by NIST for post-quantum cryptography has been broken using AI combined with side channel attacks.

Artificial Intelligence

The release of OpenAI’s ChatGPT in late 2022 has demonstrated the potential of AI for both good and bad.

Artificial Intelligence

ChatGPT is increasingly integrated into cybersecurity products and services as the industry is testing its capabilities and limitations.

Artificial Intelligence

The degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool...

Artificial Intelligence

Two of humanity’s greatest drivers, greed and curiosity, will push AI development forward. Our only hope is that we can control it.

Artificial Intelligence

Microsoft and Mitre release Arsenal plugin to help cybersecurity professionals emulate attacks on machine learning (ML) systems.

Application Security

Thinking through the good, the bad, and the ugly now is a process that affords us “the negative focus to survive, but a positive...

Artificial Intelligence

Exposed data includes backup of employees workstations, secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages.