Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

Google Announces Bug Bounty Program and Other Initiatives to Secure AI

Google announces a bug bounty program and other initiatives for increasing the safety and security of AI.

Google today announced several initiatives meant to improve the safety and security of AI, including a bug bounty program and a $10 million fund.

The new vulnerability reporting program (VRP), Google says, will reward researchers for finding vulnerabilities in generative AI, to address concerns such as the potential for unfair bias, hallucinations, and model manipulation.

The internet giant is looking for reports detailing attack scenarios leading to prompt injections, data leaks, tampering with the behavior of a model, misclassifications in security controls, and the extraction of exact architecture or weights of a confidential/proprietary model.

According to Google, security researchers may also be rewarded for other vulnerabilities or behaviors in AI-powered tools that could lead to a valid security or abuse issue.

“Our scope aims to facilitate testing for traditional security vulnerabilities as well as risks specific to AI systems. It is important to note that reward amounts are dependent on severity of the attack scenario and the type of target affected,” the internet giant explains.

To improve the security of AI, Google has introduced a Secure AI Framework (SAIF), which aims to secure the “critical supply chain components that enable machine learning (ML)”.

Today, the company is announcing the first prototypes for model signing and attestation verification, which leverage Sigstore and SLSA to verify the identity of software and improve supply chain resilience.

The initiative focuses on increasing the transparency of the ML supply chain throughout the development lifecycle, to help mitigate the recent increase in supply chain attacks.

Advertisement. Scroll to continue reading.

“Based on the similarities in development lifecycle and threat vectors, we propose applying the same supply chain solutions from SLSA and Sigstore to ML models to similarly protect them against supply chain attacks,” Google notes.

Additionally, together with Anthropic, Microsoft, and OpenAI, Google today announced the creation of a $10 million AI Safety Fund, focused on promoting research in the field of AI safety.

Related: Google Expands Bug Bounty Program With Chrome, Cloud CTF Events

Related: Google Workspace Introduces New AI-Powered Security Controls

Related: Hacker Conversations: Natalie Silvanovich From Google’s Project Zero

Written By

Ionut Arghire is an international correspondent for SecurityWeek.

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

Join the session as we discuss the challenges and best practices for cybersecurity leaders managing cloud identities.

Register

SecurityWeek’s Ransomware Resilience and Recovery Summit helps businesses to plan, prepare, and recover from a ransomware incident.

Register

People on the Move

Mike Dube has joined cloud security company Aqua Security as CRO.

Cody Barrow has been appointed as CEO of threat intelligence company EclecticIQ.

Shay Mowlem has been named CMO of runtime and application security company Contrast Security.

More People On The Move

Expert Insights

Related Content

Artificial Intelligence

The CRYSTALS-Kyber public-key encryption and key encapsulation mechanism recommended by NIST for post-quantum cryptography has been broken using AI combined with side channel attacks.

Artificial Intelligence

The release of OpenAI’s ChatGPT in late 2022 has demonstrated the potential of AI for both good and bad.

Artificial Intelligence

ChatGPT is increasingly integrated into cybersecurity products and services as the industry is testing its capabilities and limitations.

Artificial Intelligence

The degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool...

Artificial Intelligence

Two of humanity’s greatest drivers, greed and curiosity, will push AI development forward. Our only hope is that we can control it.

Artificial Intelligence

Microsoft and Mitre release Arsenal plugin to help cybersecurity professionals emulate attacks on machine learning (ML) systems.

Application Security

Thinking through the good, the bad, and the ugly now is a process that affords us “the negative focus to survive, but a positive...

Artificial Intelligence

Exposed data includes backup of employees workstations, secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages.