
US, UK Cybersecurity Agencies Publish AI Development Guidance

New guidance from US and UK cybersecurity agencies provides recommendations for secure AI system development.


The US and UK cybersecurity agencies CISA and NCSC have published security-focused guidance for the developers of systems that leverage AI.

The document, titled Guidelines for Secure AI System Development (PDF), promotes the implementation of secure-by-design principles, as well as transparency and accountability, and urges providers to take ownership of security outcomes for their customers.

The guidelines, the two agencies note, apply to all types of AI/ML systems, whether built from scratch or on top of third-party resources, and address issues related to AI, cybersecurity, and critical infrastructure.

Developed in collaboration with over 20 domestic and international cybersecurity organizations, the document has been broken down into four sections, covering different stages of the AI system development lifecycle, namely design, development, deployment, and operation and maintenance.

Meant to be applied in conjunction with established cybersecurity, incident response, and risk management best practices, the recommendations call for investment in features, mechanisms, and tools that protect customer data at all layers, throughout the entire system lifecycle, CISA and NCSC say.

“Providers should implement security controls and mitigations where possible within their models, pipelines and/or systems, and where settings are used, implement the most secure option as default,” the two agencies note.
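The secure-by-default principle quoted above can be illustrated with a minimal sketch. The settings and names below are hypothetical, chosen for illustration rather than taken from the guidance: every option defaults to its most restrictive value, so relaxing security requires an explicit, deliberate override by the operator.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceServiceSettings:
    """Hypothetical deployment settings for an AI inference service.

    Each field defaults to the most secure option, per the
    secure-by-default principle in the CISA/NCSC guidance.
    """
    require_authentication: bool = True   # no anonymous access
    require_tls: bool = True              # encrypt traffic in transit
    log_prompts: bool = False             # do not retain user inputs
    allow_model_export: bool = False      # protect model weights
    rate_limit_per_minute: int = 60       # throttle abusive clients

# Default construction yields the most secure configuration;
# weaker settings must be requested explicitly.
defaults = InferenceServiceSettings()
assert defaults.require_authentication and defaults.require_tls
assert not defaults.allow_model_export and not defaults.log_prompts
```

The design choice here is that an operator who does nothing gets the safe configuration, while any weakening of it leaves a visible trace in the deployment code.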

CISA and NCSC also say that providers are responsible for informing users of risks that cannot be mitigated and for advising them on how to use their systems securely, and that they should treat all cybersecurity risks as critical.

Providers are advised to assess the threats to their systems; to prioritize security alongside functionality and performance during the design stage; to secure their supply chain, assets, and infrastructure; to protect their models continuously; to implement incident response; to monitor system behavior and inputs; and to take a secure-by-design approach to updates.


The guidelines, the two agencies say, are primarily aimed at providers of AI systems, either hosted by an organization or accessed via external APIs. However, all stakeholders, “including data scientists, developers, managers, decision-makers, and risk owners”, are encouraged to read the document “to make informed decisions about the design, development, deployment and operation of their AI systems,” the two agencies note.

Related: Pentagon’s AI Initiatives Accelerate Hard Decisions on Lethal Autonomous Weapons

Related: The $64k Question: How Does AI Phishing Stack Up Against Human Social Engineers?

Related: White House Unveils New Efforts to Guide Federal Research of AI

Written By

Ionut Arghire is an international correspondent for SecurityWeek.


