When it comes to digital assistants, users' main concern is often privacy. To respond to commands, the device must be listening all the time, and despite companies' assurances that user data is safe, this always-on listening creates a natural wariness.
However, Kaspersky says users should also be worried about the voices of others – even if they can't hear them.
“In an article published in 2017, researchers from Zhejiang University presented a technique for taking covert control of voice assistants, named DolphinAttack,” explains Kaspersky's Igor Kuksov. “The research team converted voice commands into ultrasonic waves, with frequencies too high to be picked up by humans, but still recognizable by microphones in modern devices.”
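The core of the technique is amplitude modulation: the audible voice command becomes the envelope of an inaudible ultrasonic carrier, which nonlinearities in a device's microphone can demodulate back into the audible band. The sketch below illustrates the modulation step only; the sample rate, carrier frequency, and the stand-in tone for a voice command are illustrative assumptions, not details from the DolphinAttack paper.

```python
import numpy as np

SAMPLE_RATE = 96_000   # assumed: high enough to represent a ~25 kHz carrier
CARRIER_HZ = 25_000    # assumed carrier: inaudible to humans, yet within
                       # the sensitivity of many device microphones

def modulate_ultrasonic(voice: np.ndarray, sample_rate: int = SAMPLE_RATE,
                        carrier_hz: float = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband voice signal onto an ultrasonic carrier.

    The transmitted signal sits above human hearing, but a microphone's
    nonlinearity can recover the envelope (the original voice) from it.
    """
    t = np.arange(len(voice)) / sample_rate
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Classic AM with carrier: the envelope follows (1 + voice)
    return (1.0 + voice) * carrier

# Toy "voice command": a 400 Hz tone standing in for real speech
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
voice = 0.5 * np.sin(2 * np.pi * 400 * t)
transmitted = modulate_ultrasonic(voice)

# Nearly all transmitted energy sits around 25 kHz, well above the
# ~20 kHz upper limit of human hearing.
spectrum = np.abs(np.fft.rfft(transmitted))
freqs = np.fft.rfftfreq(len(transmitted), d=1 / SAMPLE_RATE)
peak_hz = freqs[np.argmax(spectrum)]
print(round(peak_hz))  # dominant frequency: 25000
```

The spectral peak lands at the carrier frequency, with the voice content folded into sidebands around it; a human hears nothing, while a nonlinear microphone effectively strips the carrier away.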
For now, this isn't a massive concern. Ultrasound has a very short range unless, like researchers at the University of Illinois, you use an array of more than 60 speakers to extend it. That could change in the future, though, and until then, attackers could use different techniques.
Kaspersky also references work by researchers at the University of California, who hid voice assistant commands in classical music and other ordinary audio files. They managed to trick Mozilla's DeepSpeech speech-to-text engine into hearing a command to navigate to evil.com, concealed as noise in a recording of the sentence “without the data set the article is useless.”
Such doctored files are essentially indistinguishable from ordinary audio, so users could play them inadvertently, silently directing their devices to malicious sites or triggering other actions. Though these specific examples didn't work over the air, a combination of different techniques could produce a practical exploit.
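What makes these files so hard to spot is how little the waveform changes. In the real attack, the perturbation is optimized by gradient descent against the target model so that it transcribes a chosen command; the toy sketch below uses random noise as a stand-in for that optimized perturbation, purely to show the scale of the change involved. The sample rate, amplitude bound, and signals are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

SAMPLE_RATE = 16_000  # assumed: speech models commonly take 16 kHz mono audio

# Stand-in for a benign recording (a plain tone, not real speech)
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
original = 0.8 * np.sin(2 * np.pi * 220 * t)

# Real attacks optimize this perturbation against the model's loss;
# here random noise merely stands in for it to show its magnitude.
EPSILON = 0.005  # assumed per-sample bound, roughly -44 dB of full scale
perturbation = rng.uniform(-EPSILON, EPSILON, size=original.shape)
adversarial = original + perturbation

# Each sample moves by at most EPSILON: far too little for a listener
# to notice, yet enough (when optimized) to change a transcription.
max_diff = np.max(np.abs(adversarial - original))
print(max_diff <= EPSILON)  # True
```

The point of the sketch is the bound itself: a perturbation held this far below the signal level is inaudible in practice, which is why the doctored file passes as a regular recording.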
Before such attacks reach the mainstream, companies will have to come up with solutions. Theoretically, mitigations should be possible, but it's likely the attack landscape for smart speakers will continue to evolve, just like for PCs.