Google Won’t Use Artificial Intelligence for Weapons

Google announced Thursday it would not use artificial intelligence for weapons or to “cause or directly facilitate injury to people,” as it unveiled a set of principles for these technologies.

Chief executive Sundar Pichai, in a blog post outlining the company’s artificial intelligence policies, noted that even though Google won’t use AI for weapons, “we will continue our work with governments and the military in many other areas” including cybersecurity, training, and search and rescue.

The news comes as Google faces pressure from employees and others over a contract with the US military, which the California tech giant said last week would not be renewed.

Pichai set out seven principles for Google’s application of artificial intelligence, or advanced computing that can simulate intelligent human behavior.

He said Google is using AI “to help people tackle urgent problems” such as predicting wildfires, helping farmers, diagnosing disease or preventing blindness.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in the blog.

“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

The chief executive said Google’s AI programs would be designed for applications that are “socially beneficial” and “avoid creating or reinforcing unfair bias.”

He said the principles also called for AI applications to be “built and tested for safety,” to be “accountable to people” and to “incorporate privacy design principles.”

Google will avoid the use of any technologies “that cause or are likely to cause overall harm,” Pichai wrote.

That means steering clear of “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” and systems “that gather or use information for surveillance violating internationally accepted norms.”

The move comes amid growing concerns that automated or robotic systems could be misused and spin out of control, leading to chaos.

Several technology firms have already agreed to the general principles of using artificial intelligence for good, but Google appeared to offer a more precise set of standards.

The company is already a member of the Partnership on Artificial Intelligence, a group of dozens of tech firms committed to AI principles. It had nonetheless faced criticism over its Pentagon contract for Project Maven, which uses machine learning and engineering talent to distinguish people and objects in drone videos.

Faced with a petition signed by thousands of employees and criticism outside the company, Google indicated the $10 million contract would not be renewed, according to media reports.

Written by AFP
