Google Introduces SAIF, a Framework for Secure AI Development and Use

The Google SAIF (Secure AI Framework) is designed to provide a security framework or ecosystem for the development, use and protection of AI systems.

All new technologies bring new opportunities, threats, and risks. As business concentrates on harnessing opportunities, threats and risks can be overlooked. With AI, this could be disastrous for business, business customers, and people in general. SAIF offers six core elements to ensure maximum security in AI.

Expand strong security foundations to the AI ecosystem
Many existing security controls can be expanded and/or focused on AI risks. A simple example is protection against injection techniques, such as SQL injection. “Organizations can adapt mitigations, such as input sanitization and limiting, to help better defend against prompt injection style attacks,” suggests SAIF.
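As a rough illustration of what input sanitization and limiting can look like in front of a language model, the sketch below uses a hypothetical character cap and deny-list of patterns. It is not Google's implementation, and a production guardrail would be considerably more sophisticated:

```python
import re

# Illustrative, hypothetical guardrail values; not taken from SAIF itself.
MAX_INPUT_CHARS = 2000
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def sanitize_user_input(text: str) -> str:
    """Apply length limiting and simple pattern screening to untrusted input."""
    text = text[:MAX_INPUT_CHARS]  # limiting: cap how much text reaches the model
    for pattern in SUSPECT_PATTERNS:
        if pattern.search(text):
            raise ValueError("input rejected: possible prompt injection attempt")
    # strip control characters that could hide instructions from human reviewers
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)

print(sanitize_user_input("Summarize this quarterly report."))
```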

Traditional security controls will often be relevant to AI defense but may need to be strengthened or expanded. Data governance and protection become critical to preserving the integrity of the data used to train AI systems. The old principle of ‘rubbish in, rubbish out’ is magnified manyfold by AI, and becomes critical where decisions about the business and about people are based on that rubbish.
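One concrete data-governance control is verifying that training data has not been silently altered. Below is a minimal sketch, assuming a signed-off manifest of SHA-256 hashes; the manifest format and paths are hypothetical:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(data_dir: str, manifest_path: str) -> list[str]:
    """Compare training files against an approved manifest; return anomalies."""
    manifest = json.loads(Path(manifest_path).read_text())  # {"file.csv": "<sha256>"}
    problems = []
    for name, expected in manifest.items():
        file_path = Path(data_dir) / name
        if not file_path.exists():
            problems.append(f"missing: {name}")
        elif sha256_of(file_path) != expected:
            problems.append(f"modified: {name}")
    return problems
```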

Extend detection and response to bring AI into an organization’s threat universe
Threat intelligence must now also include an understanding and awareness of threats relevant to organizations’ own AI usage, including the consequences of a breach. If a data pool is poisoned without knowledge of that poisoning, AI outputs will be adversely and possibly invisibly affected. 

It will be necessary to monitor AI output to detect algorithmic errors and adversarial input. “Organizations that use AI systems must have a plan for detecting and responding to security incidents and mitigate the risks of AI systems making harmful or biased decisions,” says Google.
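A minimal sketch of what such output monitoring could look like, assuming the model exposes a confidence score; the threshold and policy terms below are purely illustrative:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-output-monitor")

CONFIDENCE_FLOOR = 0.6                # assumed threshold; tune per use case
BLOCKED_TERMS = {"ssn", "password"}   # illustrative policy list

@dataclass
class ModelOutput:
    text: str
    confidence: float

def review_output(output: ModelOutput) -> bool:
    """Return True if the output can be released, False if it needs human review."""
    if output.confidence < CONFIDENCE_FLOOR:
        log.warning("Low-confidence output routed to human review")
        return False
    if any(term in output.text.lower() for term in BLOCKED_TERMS):
        log.warning("Policy term detected in output; escalating")
        return False
    return True
```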

Automate defenses to keep pace with existing and new threats
This is the most common advice used in the face of AI-based attacks – automate defenses with AI to counter the increasing speed and magnitude of adversarial AI-based attacks. But Google warns that humans must be kept in the loop for important decisions, such as determining what constitutes a threat and how to respond to it.

The human element is important for both detection and response. “This is because AI systems can be biased or make mistakes, and human oversight is necessary to ensure that AI systems are used ethically and responsibly,” says Google.

AI-based automation goes beyond the automated detection of threats and can also be used to decrease the workload and increase the efficiency of the security team. Secure scripts could be generated through no-code systems to control and automate security processes. Reverse engineering a malicious binary could be automated, with a YARA rule then generated automatically to look for evidence of related activity.
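As a simplified illustration of that last step, the sketch below emits a basic YARA rule from strings recovered during automated analysis. The sample hash, string selection, and rule logic are placeholders rather than a real triage pipeline:

```python
from textwrap import indent

def build_yara_rule(rule_name: str, sample_sha256: str, strings: list[str]) -> str:
    """Emit a basic YARA rule from strings recovered during automated analysis."""
    string_defs = "\n".join(f'$s{i} = "{s}"' for i, s in enumerate(strings))
    return f"""rule {rule_name}
{{
    meta:
        source_sample = "{sample_sha256}"
    strings:
{indent(string_defs, " " * 8)}
    condition:
        2 of ($s*)
}}"""

print(build_yara_rule("SuspectedLoader", "placeholder_sha256",
                      ["evil-c2.example", "RunPE"]))
```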

Harmonize platform level controls to ensure consistent security across the organization
As the use of AI grows, it is important to have periodic reviews to identify and mitigate associated risks. This should include the AI models used and the data used to train them, together with the security measures implemented, and the AI security risk awareness and training for all employees.

Reduce overlapping frameworks for security and compliance controls to help reduce fragmentation. Fragmentation increases complexity, costs, and inefficiencies. Reducing fragmentation will, suggests Google, “provide a ‘right fit’ approach to controls to mitigate risk.”

Adapt controls to adjust mitigations and create faster feedback loops for AI deployment
This involves continuously testing and evolving systems in use, including techniques such as reinforcement learning based on incidents and user feedback. The training data needs to be monitored and updated as necessary, and the models fine-tuned to respond to attacks.

It involves staying continuously aware of new prompt injection, data poisoning, and evasion attacks. “By staying up to date on the latest attack methods, organizations can take steps to mitigate these risks,” says Google. Red teaming can also help organizations identify and mitigate security risks before they can be exploited by malicious actors.

An effective feedback loop is required to ensure that everything learned is put to good use, whether that is to improve defenses or improve the AI model itself.
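A minimal sketch of such a loop, assuming confirmed incidents are labeled by analysts and handed to a retraining pipeline once enough have accumulated; the threshold and fields are illustrative:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Incident:
    prompt: str
    model_output: str
    label: str  # e.g. "prompt_injection", "data_poisoning", "benign"

class FeedbackLoop:
    """Collect confirmed incidents until there are enough to trigger retraining."""

    def __init__(self, retrain_threshold: int = 100):
        self.retrain_threshold = retrain_threshold
        self.queue: deque[Incident] = deque()

    def record(self, incident: Incident) -> None:
        self.queue.append(incident)

    def ready_to_retrain(self) -> bool:
        return len(self.queue) >= self.retrain_threshold

    def export_batch(self) -> list[Incident]:
        """Hand the accumulated incidents to the training pipeline and reset."""
        batch = list(self.queue)
        self.queue.clear()
        return batch
```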

Contextualize AI system risks in surrounding business processes
This involves a thorough understanding of how AI will be used within business processes, and requires a complete inventory of the AI models in use. Their risk profiles should then be assessed based on the specific use cases, the sensitivity of the data involved, and the shared responsibility that comes with leveraging third-party solutions and services.

“Implement data privacy, cyber risk, and third-party risk policies, protocols and controls throughout the ML model lifecycle to guide the model development, implementation, monitoring, and validation,” says Google.
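To make the inventory idea concrete, here is a minimal sketch of what a model inventory record and a crude risk score could look like; the fields, enum values, and scoring are assumptions for illustration, not part of SAIF:

```python
from dataclasses import dataclass, field
from enum import Enum

class DataSensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    REGULATED = 4

@dataclass
class ModelInventoryEntry:
    name: str
    business_use_case: str
    data_sensitivity: DataSensitivity
    third_party_provider: str | None = None      # None for in-house models
    mitigations: list[str] = field(default_factory=list)

    def risk_score(self) -> int:
        """Crude illustrative score: data sensitivity, external dependency, missing controls."""
        score = self.data_sensitivity.value
        if self.third_party_provider:
            score += 1
        if not self.mitigations:
            score += 2
        return score

fraud_model = ModelInventoryEntry(
    name="fraud-scoring-v3",
    business_use_case="payment fraud triage",
    data_sensitivity=DataSensitivity.REGULATED,
    third_party_provider="vendor-x",
    mitigations=["access logging", "quarterly validation"],
)
print(fraud_model.risk_score())
```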

Throughout this, it is important to assemble a strong AI security team. “AI systems are often complex and opaque, have a large number of moving parts, rely on large amounts of data, are resource intensive, can be used to apply judgment-based decisions, and can generate novel content that may be offensive, harmful, or can perpetuate stereotypes and social biases,” warns Google.

For many organizations this will expand the expertise required beyond the traditional security team, to include business use case owners, security, cloud engineering, risk and audit teams, privacy, legal, data science, development, and responsible AI and ethics.

Google has based SAIF on 10 years of experience in the development and use of AI in its own products. The company hopes that making that experience public will lay the groundwork for secure AI, just as its BeyondCorp access model led to the zero trust principles that are industry standard today.

Related: Insider Q&A: Artificial Intelligence and Cybersecurity In Military Tech

Related: ChatGPT’s Chief Testifies Before Congress, Calls for New Agency to Regulate Artificial Intelligence

Related: Harris to Meet With CEOs About Artificial Intelligence Risks

Related: Cyber Insights 2023 | Artificial Intelligence

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security, and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
