
Amazon, Microsoft May Be Putting World at Risk of Killer AI, Says Report

Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.


Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and whether they had committed to abstaining from such work in the future.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

The use of AI to allow weapon systems to autonomously select and attack targets has sparked ethical debates in recent years, with critics warning that such weapons would jeopardize international security and herald a third revolution in warfare after gunpowder and the atomic bomb.

A panel of government experts debated policy options regarding lethal autonomous weapons at a meeting of the United Nations Convention on Certain Conventional Weapons in Geneva on Wednesday. 

Google, which last year published guiding principles eschewing AI for use in weapons systems, was among seven companies found to be engaging in “best practice” in the analysis that spanned 12 countries, as was Japan’s Softbank, best known for its humanoid Pepper robot.

Twenty-two companies were of “medium concern,” while 21 fell into a “high concern” category, notably Amazon and Microsoft, which are both bidding for a $10 billion Pentagon contract to provide the cloud infrastructure for the US military.

Others in the “high concern” group include Palantir, a company with roots in a CIA-backed venture capital organization that was awarded an $800 million contract to develop an AI system “that can help soldiers analyse a combat zone in real time.”


“Autonomous weapons will inevitably become scalable weapons of mass destruction, because if the human is not in the loop, then a single person can launch a million weapons or a hundred million weapons,” Stuart Russell, a computer science professor at the University of California, Berkeley, told AFP on Wednesday.

“The fact is that autonomous weapons are going to be developed by corporations, and in terms of a campaign to prevent autonomous weapons from becoming widespread, they can play a very big role,” he added.

The development of AI for military purposes has triggered debate and protests within the industry: last year Google declined to renew a Pentagon contract called Project Maven, which used machine learning to distinguish people and objects in drone videos.

It also dropped out of the running for Joint Enterprise Defense Infrastructure (JEDI), the cloud contract that Amazon and Microsoft are hoping to bag.

The report noted that Microsoft employees had also voiced their opposition to a US Army contract for an augmented reality headset, HoloLens, that aims at “increasing lethality” on the battlefield.

– What they might look like –

According to Russell, “anything that’s currently a weapon, people are working on autonomous versions, whether it’s tanks, fighter aircraft, or submarines.”

Israel’s Harpy is an autonomous drone that already exists, “loitering” in a target area and selecting sites to hit.

More worrying still are new categories of autonomous weapons that don’t yet exist — these could include armed mini-drones like those featured in the 2017 short film “Slaughterbots.”

“With that type of weapon, you could send a million of them in a container or cargo aircraft — so they have destructive capacity of a nuclear bomb but leave all the buildings behind,” said Russell.

Using facial recognition technology, the drones could “wipe out one ethnic group or one gender, or using social media information you could wipe out all people with a political view.”

The European Union in April published guidelines for how companies and governments should develop AI, including the need for human oversight, working towards societal and environmental wellbeing in a non-discriminatory way, and respecting privacy.

Russell argued it was essential to take the next step in the form of an international ban on lethal AI, which could be summarized as “machines that can decide to kill humans shall not be developed, deployed, or used.”

Related: Dismantling the Myths Surrounding Facial Recognition

Written By AFP
