Devices like Siri and Alexa have been eavesdropping on every word we say. Even when we think they can't hear us, they pick up our secrets. How do they do this? They pick up sound through their hidden microphones and then convert it into high-quality audio files stored on a computer.
But now, Mia Chiquier, a graduate student at Columbia University, has come up with a way to stop this. She has made a device that allows you to confuse these clever listening devices. Chiquier explains that these devices use automated speech recognition, or ASR, to translate sound waves into text. Her new program fools them using a tactic called "voice camouflage," which makes the eavesdropping devices hear different words than what was actually said.
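The camouflage idea can be sketched in miniature. The toy below is purely illustrative (it is not Chiquier's actual program, and the "recognizer" is just a nearest-template matcher standing in for a real ASR system): it shows how an added camouflage signal can steer a recognizer toward the wrong word, even though the speaker said something else.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two word "templates" the toy recognizer knows
# (stand-ins for a real acoustic model's vocabulary).
templates = {
    "yes": np.sin(np.linspace(0, 4 * np.pi, 100)),
    "no":  np.cos(np.linspace(0, 4 * np.pi, 100)),
}

def toy_asr(clip):
    """Transcribe a clip as the template word with the closest waveform."""
    return min(templates, key=lambda w: np.linalg.norm(clip - templates[w]))

# The speaker actually says "yes" (template plus slight room noise).
spoken = templates["yes"] + 0.05 * rng.standard_normal(100)

# "Camouflage": an added signal that nudges the clip toward the wrong template.
camouflage = 0.6 * (templates["no"] - templates["yes"])
heard = spoken + camouflage

print(toy_asr(spoken))  # what was really said: "yes"
print(toy_asr(heard))   # what the recognizer transcribes: "no"
```

In this toy the camouflage is fairly loud relative to the speech; the point of the real research is achieving the same misdirection with a far subtler, predictively generated signal.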
This helps a lot, since many people do not want information about their family or company going to strangers (or to a big company that uses their data). Devices like Siri and Alexa can pick up sensitive, private information such as passwords. Privacy is important, after all.
In the visual above, the sounds that were spoken and correctly transcribed by the ASR appear in green. The red words are sounds that were not actually spoken but were mistakenly transcribed as a result of the voice camouflage. Finally, white noise played over the speakers at twice the volume of the voice camouflage did not work as well.
Sources: New Audio System to Confuse Eavesdropping Devices