Siri, Alexa, Google Now Vulnerable to Ultrasound Attacks

A team of researchers from Zhejiang University in China has demonstrated how several popular speech recognition systems can be controlled using ultrasound via an attack method they have dubbed “DolphinAttack.”

The experts tested Apple’s Siri, Google Now, Samsung’s S Voice, Huawei’s HiVoice, Microsoft’s Cortana, Amazon’s Alexa and the speech recognition system in an Audi Q3 vehicle. They modulated various voice commands onto ultrasonic carriers at frequencies of 20,000 Hz or higher, making them inaudible to humans.
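The core of the technique is amplitude modulation: the audible command becomes the envelope of an inaudible carrier, and nonlinearities in a device’s microphone circuitry demodulate it back into the voice band. The Python sketch below shows what that modulation step might look like; the input file name, carrier frequency, modulation depth and output sample rate are illustrative assumptions, not values from the paper, and playing the result back requires speaker hardware capable of ultrasonic output.

```python
# Illustrative sketch: amplitude-modulate a recorded voice command onto an
# ultrasonic carrier. File name, carrier frequency, modulation depth and
# sample rate are assumptions for the example, not values from the paper.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000   # above the ~20 kHz upper limit of human hearing
OUT_RATE = 192_000    # output rate high enough to represent the carrier

# Load a voice command (hypothetical file); keep one channel, normalize.
rate, voice = wavfile.read("command.wav")
if voice.ndim > 1:
    voice = voice[:, 0]
voice = voice.astype(np.float64)
voice /= np.max(np.abs(voice))

# Resample the baseband audio up to the high output rate.
n_out = int(len(voice) * OUT_RATE / rate)
voice_hi = np.interp(
    np.linspace(0, len(voice) - 1, n_out),
    np.arange(len(voice)),
    voice,
)

# Classic AM: carrier * (1 + m * baseband). Nonlinearity in the target
# microphone's front end recovers the audible baseband from this signal.
t = np.arange(n_out) / OUT_RATE
m = 0.8  # modulation depth
am = np.cos(2 * np.pi * CARRIER_HZ * t) * (1.0 + m * voice_hi)
am /= np.max(np.abs(am))

wavfile.write("command_ultrasonic.wav", OUT_RATE, (am * 32767).astype(np.int16))
```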

The goal was to determine if these systems can be activated using ultrasound and if they can be controlled once they have been activated. The activation commands they tested included “Hey Siri,” “OK Google,” “Hi Galaxy” and “Alexa,” while recognition commands included “Call 1234567890,” “Open dolphinattack.com,” “turn on airplane mode” and “open the back door.”

The experiments, carried out on 16 devices with 7 different speech recognition systems, were successful in all cases from various distances. The DolphinAttack method was the most effective against Siri on an iPhone 4s and Alexa on Amazon’s Echo personal assistant device. In both cases, the attack worked over a distance of nearly 2 meters (6.5 feet).

The tests showed that the language used does not influence the attack’s effectiveness, but the type of command does matter. For example, the researchers determined that commands such as “call/facetime 1234567890,” “turn on airplane mode” or “how’s the weather today” are recognized far more reliably than “open dolphinattack.com.”

Background noise also has an impact, with recognition rates for the “turn on airplane mode” command decreasing to 30% on the street compared to 100% in an office and 80% in a cafe.

The researchers have also proposed a series of hardware- and software-based defenses against the DolphinAttack method.
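On the software side, one plausible style of defense, sketched below as an illustration rather than taken from the paper, is to screen incoming audio for spectral artifacts before it reaches the recognizer: a demodulated ultrasonic command can leave residual energy near the top of the recorded band that genuine speech rarely carries. The band edge and threshold here are illustrative assumptions.

```python
# Sketch of a software-side guard: flag audio whose spectrum carries
# unusually strong energy near the top of the recorded band, a possible
# footprint of a demodulated ultrasonic carrier. The guard band and
# threshold are illustrative assumptions, not values from the paper.
import numpy as np

def looks_like_ultrasound_injection(samples: np.ndarray, rate: int,
                                    guard_band_hz: float = 16_000.0,
                                    max_ratio: float = 0.1) -> bool:
    """Return True if energy above guard_band_hz exceeds max_ratio of total."""
    spectrum = np.abs(np.fft.rfft(samples.astype(np.float64))) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    high_energy = spectrum[freqs >= guard_band_hz].sum()
    total_energy = spectrum.sum() + 1e-12  # guard against division by zero
    return (high_energy / total_energy) > max_ratio

# A voice assistant pipeline could drop flagged buffers before they ever
# reach the speech recognizer, at the cost of some false positives.
```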

“The recently discovered DolphinAttack design flaw in IoT devices is another example of the importance in secure manufacturing. The flaw has introduced a relatively new attack vector – audio,” said Tim Jarrett, Sr. Director of Enterprise Security Strategy at Veracode.

“It is likely that audio and voice-based security controls will evolve as security researchers and hackers begin to explore vulnerabilities. Building in security by design and the ability to adapt to new threats will help IoT manufacturers leverage security as a competitive advantage,” Jarrett added. “IoT device manufacturers should consider this a wake-up call — manipulating audio for vulnerability injections is a serious area for concern. This recent news isn’t just an issue for the enterprise, but one for the millions of consumers that are using these IoT devices day in and day out.”

Related Reading: Barclays Unveils Voice Authentication for Phone Banking
