Light Commands: hacking voice assistants via laser beam

Researchers from the University of Michigan and the University of Electro-Communications in Tokyo demonstrated that it is possible to hack smart voice assistants such as Siri, Alexa, and Google Assistant by using a laser beam to send them inaudible commands.

This new technique, dubbed Light Commands, exploits a design flaw in the smart assistants' MEMS microphones.
MEMS (micro-electro-mechanical systems) microphones convert voice commands into electrical signals, but the researchers demonstrated that they also react to laser light beams aimed at them:

By shining the laser through the window at microphones inside smart speakers, tablets, or phones, a far away attacker can remotely send inaudible and potentially invisible commands which are then acted upon by Alexa, Portal, Google assistant or Siri.
Making things worse, once an attacker has gained control over a voice assistant, the attacker can use it to break other systems.
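In practice, the attack encodes the spoken command as amplitude modulation of the laser's intensity, which the microphone's diaphragm picks up as if it were sound. The Python sketch below only illustrates this principle; the bias and modulation currents and the WAV file name are illustrative assumptions, not the researchers' actual hardware setup.

```python
# Minimal sketch of the signal-injection principle behind Light Commands:
# the voice command is turned into amplitude modulation of the laser's
# intensity (a DC bias plus the audio waveform), so the MEMS microphone
# responds to the light as if it were sound.
# Bias/modulation values and the file name are illustrative assumptions.

import numpy as np
from scipy.io import wavfile

BIAS_MA = 200.0        # assumed laser-diode DC bias current (mA)
MOD_DEPTH_MA = 150.0   # assumed peak modulation amplitude (mA)

def audio_to_laser_current(wav_path: str) -> tuple[int, np.ndarray]:
    """Map a recorded voice command to a laser-diode current waveform."""
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                          # collapse stereo to mono
        samples = samples.mean(axis=1)
    audio = samples / np.max(np.abs(samples))     # normalize to [-1, 1]
    # Amplitude modulation: intensity follows the audio around the bias point.
    current = BIAS_MA + MOD_DEPTH_MA * audio
    return rate, current

rate, current = audio_to_laser_current("ok_google_open_garage.wav")  # hypothetical file
print(f"{len(current)} samples at {rate} Hz, "
      f"current range {current.min():.0f}-{current.max():.0f} mA")
```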

[Image: the Light Commands research team]

Which devices are vulnerable to Light Commands?

According to the technical paper, the attack has been successfully tested against the most popular voice recognition systems, including Amazon Alexa, Apple Siri, Facebook Portal, and Google Assistant.

Below is a table reporting, for each device, the minimum laser power needed and the maximum distances reached during the tests:

Light can easily travel long distances, limiting the attacker only in the ability to focus and aim the laser beam. We have demonstrated the attack in a 110 meter hallway, which is the longest hallway available to us at the time of writing.

| Device | Voice Recognition System | Minimum Laser Power at 30 cm [mW] | Max Distance at 60 mW [m]* | Max Distance at 5 mW [m]** |
|---|---|---|---|---|
| Google Home | Google Assistant | 0.5 | 50+ | 110+ |
| Google Home mini | Google Assistant | 16 | 20 | — |
| Google NEST Cam IQ | Google Assistant | 9 | 50+ | — |
| Echo Plus 1st Generation | Amazon Alexa | 2.4 | 50+ | 110+ |
| Echo Plus 2nd Generation | Amazon Alexa | 2.9 | 50+ | 50 |
| Echo | Amazon Alexa | 25 | 50+ | — |
| Echo Dot 2nd Generation | Amazon Alexa | 7 | 50+ | — |
| Echo Dot 3rd Generation | Amazon Alexa | 9 | 50+ | — |
| Echo Show 5 | Amazon Alexa | 17 | 50+ | — |
| Echo Spot | Amazon Alexa | 29 | 50+ | — |
| Facebook Portal Mini | Alexa + Portal | 18 | 5 | — |
| Fire Cube TV | Amazon Alexa | 13 | 20 | — |
| EchoBee 4 | Amazon Alexa | 1.7 | 50+ | 70 |
| iPhone XR | Siri | 21 | 10 | — |
| iPad 6th Gen | Siri | 27 | 20 | — |
| Samsung Galaxy S9 | Google Assistant | 60 | 5 | — |
| Google Pixel 2 | Google Assistant | 46 | 5 | — |

Is there a mitigation?

Countermeasures include implementing an additional layer of authentication, using sensor fusion techniques, or adding a cover on top of the microphone to prevent light from hitting it:

An additional layer of authentication can be effective at somewhat mitigating the attack. Alternatively, in case the attacker cannot eavesdrop on the device’s response, having the device ask the user a simple randomized question before command execution can be an effective way at preventing the attacker from obtaining successful command execution.
Manufacturers can also attempt to use sensor fusion techniques, such as acquiring audio from multiple microphones. When the attacker uses a single laser, only a single microphone receives a signal while the others receive nothing. Thus, manufacturers can attempt to detect such anomalies, ignoring the injected commands.
Another approach consists of reducing the amount of light reaching the microphone's diaphragm, using a barrier that physically blocks straight light beams to eliminate the line of sight to the diaphragm, or implementing a non-transparent cover on top of the microphone hole to attenuate the amount of light hitting the microphone. However, we note that such physical barriers are only effective to a certain point, as an attacker can always increase the laser power in an attempt to compensate for the cover-induced attenuation or to burn through the barriers, creating a new light path.
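To illustrate the sensor-fusion countermeasure described above (a single laser drives only one microphone, while real speech reaches them all), here is a rough Python sketch. The dominance threshold and function name are assumptions made for illustration, not any vendor's actual detection logic.

```python
# Rough sketch of a multi-microphone "sensor fusion" check: a laser hits
# only one microphone, so a command whose energy is concentrated in a
# single channel is suspicious. The threshold is an illustrative assumption.

import numpy as np

DOMINANCE_RATIO = 20.0   # assumed: one mic carrying 20x the others' energy is anomalous

def looks_like_light_injection(channels: np.ndarray) -> bool:
    """channels: array of shape (n_mics, n_samples) captured during a command."""
    energy = np.sum(channels.astype(float) ** 2, axis=1)   # per-microphone energy
    loudest = energy.max()
    others = np.sort(energy)[:-1]                          # all but the loudest mic
    # Real sound reaches every microphone; a single-laser attack does not.
    return loudest > DOMINANCE_RATIO * max(others.mean(), 1e-12)

# Example: mic 0 sees a strong signal, the other three see near silence.
rng = np.random.default_rng(0)
mics = rng.normal(0, 0.001, size=(4, 16000))
mics[0] += np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
print(looks_like_light_injection(mics))   # True -> ignore the command
```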

References

https://lightcommands.com/
