
Pentagon Adopts New Ethical Principles for Using AI in War

The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield.

The new principles call for people to “exercise appropriate levels of judgment and care” when deploying and using AI systems, such as those that scan aerial imagery to look for targets.

They also say decisions made by automated systems should be “traceable” and “governable,” which means “there has to be a way to disengage or deactivate” them if they are demonstrating unintended behavior, said Air Force Lt. Gen. Jack Shanahan, director of the Pentagon’s Joint Artificial Intelligence Center.

The Pentagon’s push to speed up its AI capabilities has fueled a fight between tech companies over a $10 billion cloud computing contract known as the Joint Enterprise Defense Infrastructure, or JEDI. Microsoft won the contract in October but hasn’t been able to get started on the 10-year project because Amazon sued the Pentagon, arguing that President Donald Trump’s antipathy toward Amazon and its CEO Jeff Bezos hurt the company’s chances at winning the bid.

An existing 2012 military directive requires humans to be in control of automated weapons but doesn’t address broader uses of AI. The new U.S. principles are meant to guide both combat and non-combat applications, from intelligence-gathering and surveillance operations to predicting maintenance problems in planes or ships.

The approach outlined Monday follows recommendations made last year by the Defense Innovation Board, a group led by former Google CEO Eric Schmidt.

While the Pentagon acknowledged that AI “raises new ethical ambiguities and risks,” the new principles fall short of stronger restrictions favored by arms control advocates.

“I worry that the principles are a bit of an ethics-washing project,” said Lucy Suchman, an anthropologist who studies the role of AI in warfare. “The word ‘appropriate’ is open to a lot of interpretations.”

Shanahan said the principles are intentionally broad to avoid handcuffing the U.S. military with specific restrictions that could become outdated.

“Tech adapts. Tech evolves,” he said.

The Pentagon hit a roadblock in its AI efforts in 2018 after internal protests at Google led the tech company to drop out of the military’s Project Maven, which uses algorithms to interpret aerial images from conflict zones. Other companies have since filled the vacuum. Shanahan said the new principles are helping to regain support from the tech industry, where “there was a thirst for having this discussion.”

“Sometimes I think the angst is a little hyped, but we do have people who have serious concerns about working with the Department of Defense,” he said.

Shanahan said the guidance also helps secure American technological advantage as China and Russia pursue military AI with little attention paid to ethical concerns.

University of Richmond law professor Rebecca Crootof said adopting principles is a good first step, but the military will need to show it can critically evaluate the huge data troves used by AI systems, as well as their cybersecurity risks.

Crootof said she also hopes the U.S. action helps establish international norms around the military use of AI.

“If the U.S. is seen to be taking AI ethical norms seriously, by default they become a more serious topic,” she said.

Related: Amazon, Microsoft May Be Putting World at Risk of Killer AI, Says Report

Related: The Starter Pistol Has Been Fired for Artificial Intelligence Regulation in Europe

Related: The Malicious Use of Artificial Intelligence in Cybersecurity

Related: The Role of Artificial Intelligence in Cyber Security

Related: Privacy Fears Over Artificial Intelligence as Crimestopper
