In a new era of behavioral biometrics, the way we walk, talk, swipe our fingers across our phones, and otherwise innocently interact with the world around us has become a means of identifying who we are and what we are doing at any given time. The privacy implications are obvious – while these quantifying mechanisms were originally designed to protect us, they now give away more about us than we realize or feel comfortable sharing.
Now, sound is being used to identify not only people but also their actions at work and at home. Smartphone apps can track you using ultrasonic tones that the human ear can’t detect. According to ZDNet, some of these work “by emitting high-frequency tones in advertisements and billboards, web pages, and across brick-and-mortar retail outlets or sports stadiums. Apps with access to your phone’s microphone can pick up these tones and build up a profile about what you’ve seen, where, and in some cases even the websites you’ve visited.” And that’s to say nothing of the apps that simply record through your microphone outright.
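To make the mechanism concrete, here is an illustrative Python sketch of how an app might listen for such a beacon. The Goertzel algorithm measures signal power at a single target frequency, so a listener can cheaply check whether an inaudible ~18 kHz tone is present in a microphone buffer. The beacon frequency, threshold, and demo tone below are invented for illustration, not taken from any real tracking SDK.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Single-bin DFT power at target_hz via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)       # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def beacon_present(samples, sample_rate, beacon_hz=18_000.0, threshold=1e5):
    # threshold is an illustrative value, tuned only for this demo
    return goertzel_power(samples, sample_rate, beacon_hz) > threshold

# Demo: a short buffer with (and without) an inaudible 18 kHz tone.
RATE = 44_100
N = 4_096
tone = [0.5 * math.sin(2 * math.pi * 18_000 * t / RATE) for t in range(N)]
silence = [0.0] * N
print(beacon_present(tone, RATE))     # True: beacon detected
print(beacon_present(silence, RATE))  # False: nothing there
```

Real beacon SDKs run this kind of check continuously on live microphone input, which is why the permission alone is enough to enable tracking.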
Because people rarely appreciate how much their personal data is worth to companies, this practice will only grow in popularity. In an undated paper (published in 2017 at the latest), researchers from the Technische Universität Braunschweig in Germany found that 234 Android apps already had the ability to listen for ultrasonic tones without the user’s knowledge.
But an individual’s privacy, while deeply important, is likely not as valuable a target as the work of scientists in academic, industrial, and government labs.
Implications for espionage
Research from computer scientists and engineers at the University of California, Riverside is giving us new insight into the types of sound signals that machines in laboratories produce and to what extent those sounds can be hacked to leak valuable information for the purpose of espionage.
The scientists say that by simply recording the sounds produced by a lab instrument, they managed to reconstruct what the researcher was doing and the results they were getting.
According to Philip Brisk, a UC Riverside associate professor of computer science who worked on the project:
Any active machine emits a trace of some form: physical residue, electromagnetic radiation, acoustic noise, etc. The amount of information in these traces is immense, and we have only hit the tip of the iceberg in terms of what we can learn and reverse engineer about the machine that generated them.
And the risks are all too real. If our phones can easily use the sounds we make for marketing purposes, there’s even more at stake when lab sounds are recorded (or devices are hacked to record them). Other researchers could theoretically “scoop” a lab’s research, trade secrets could be revealed, or, in the case of confidential research, the information could even put national security at risk.
This particular experiment targeted DNA synthesizers: automated systems with both a cyber and a physical domain, which leaves them open to security breaches. The engineers developed an attack methodology that could not only break into the machine and steal the DNA sequences, but also predict those sequences with 88.07% accuracy from the sounds the machine emitted, captured by a microphone up to 0.7 meters away, even in the presence of background noise. Attackers would need biomedical knowledge to make use of the information, but one can easily imagine it being hacked and sold to those with the appropriate expertise.
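The researchers’ actual pipeline is far more sophisticated, but the core idea of acoustic fingerprinting can be sketched in a few lines: extract crude features from a sound clip and match them against labelled reference recordings with a nearest-neighbour lookup. The machine operations, frequencies, and feature choices below are invented for illustration and bear no relation to a real DNA synthesizer.

```python
import math

def features(samples):
    """Crude acoustic fingerprint: RMS level and zero-crossing rate."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return (rms, zcr)

def classify(clip, profiles):
    """Nearest-neighbour match against labelled reference fingerprints."""
    f = features(clip)
    return min(profiles, key=lambda label: math.dist(f, profiles[label]))

RATE = 44_100

def machine_sound(freq_hz, amp, n=2_048):
    # Hypothetical stand-in for a recording of one machine operation.
    return [amp * math.sin(2 * math.pi * freq_hz * t / RATE) for t in range(n)]

# Reference recordings of two (made-up) operations, then an unknown clip.
profiles = {
    "dispense_base_A": features(machine_sound(900, 0.8)),
    "dispense_base_T": features(machine_sound(2_400, 0.3)),
}
unknown = machine_sound(2_390, 0.31)
print(classify(unknown, profiles))  # dispense_base_T
```

Chain enough of these per-operation classifications together and the eavesdropper reconstructs the whole sequence of actions — which is essentially what makes the reported 88.07% sequence-prediction accuracy so alarming.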
What are the risks?
There are many ways that DNA can be used maliciously, most notably in the creation of artificial pathogens for the purpose of bioterrorism. Labs that work on pathogens such as Ebola, which has a relatively simple genome, could potentially be hacked by a bad actor looking to alter the virus to become more virulent, for example. According to the researchers, “in 2010 the US Department of Health and Human Services issued a statement to commercial DNA synthesis companies, warning them to be on the lookout for customers ordering ‘sequences of concern,’ or snippets of DNA from the genomes of anthrax, Ebola, smallpox, and several other deadly pathogens.” The availability of second-hand DNA synthesizers on auction sites such as eBay moves this scenario from the stuff of nightmares toward genuine feasibility.
If potential bioterrorists were able to eavesdrop on a lab, they could have the most cutting-edge information available to work with. On the other hand, the tech can be used to enhance national security as well. Eavesdropping on a suspected terrorist’s machine could give law enforcement officers insight into how it is being used and allow them to intervene.
The risk is not limited to those who work with DNA. The researchers also cited studies in which sound waves revealed what objects a 3-D printer was producing, allowed hackers to carry out physical attacks on magnetic hard disks, and even exposed electroencephalography (EEG) signals from a brain-computer interface, leaking private user information.
These signals can even be used to collect user information, including login details, from smartphones.
Luckily, there are various defense mechanisms under development. But the first step is understanding that the risks exist and just how much information we produce without realizing it. This is important in both the home and the lab.
Ways of preventing these acoustic attacks include: using anti-vibration pads to reduce noise; creating artificial noise that masks revealing signals without disturbing workers; asking manufacturers to design components that make fewer sounds; adding redundant steps to confuse anyone listening; performing other noisy tasks at the same time the machine makes the relevant sounds; and securing laboratories against microphones that can be hacked. Of course, banning every electronic device with recording capabilities from the lab is no easy task.
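The masking-noise defense works by driving the eavesdropper’s signal-to-noise ratio below the point where the leak is recoverable. A minimal sketch of that idea, with invented amplitudes for the acoustic leak, the room noise, and the injected masker:

```python
import math
import random

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels (ratio of average powers)."""
    p_sig = sum(x * x for x in signal) / len(signal)
    p_noise = sum(x * x for x in noise) / len(noise)
    return 10.0 * math.log10(p_sig / p_noise)

random.seed(0)  # deterministic demo
RATE, N = 44_100, 4_096

# Hypothetical acoustic leak from a machine, quiet room noise, and a
# deliberately injected broadband masking signal (all amplitudes invented).
leak = [0.05 * math.sin(2 * math.pi * 1_200 * t / RATE) for t in range(N)]
room = [random.uniform(-0.01, 0.01) for _ in range(N)]
masker = [random.uniform(-0.5, 0.5) for _ in range(N)]
masked = [r + m for r, m in zip(room, masker)]

print(snr_db(leak, room) > 0)    # True: leak stands above room noise
print(snr_db(leak, masked) > 0)  # False: leak buried under the masker
```

The practical difficulty, as the researchers note, is generating a masker loud enough in the revealing frequency bands without disturbing the people working next to the machine.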
And, of course, none of this protects individuals from acoustic snooping by their own devices, though the stakes there aren’t nearly as high.
For the rest of us, it’s worth looking through apps on our devices and turning off microphone access to any app that shouldn’t need it (and being careful when downloading an app that asks for access). Unfortunately, there’s usually no way to know when an app is accessing your microphone or sending information to companies.
There’s no reason a company should know what conversations you’re having or what other noises you, your family, or your appliances make in your home solely for the sake of advertising to you. Turning off microphone access for apps that don’t need to make recordings should not interfere with their operation, and it’s wise to be suspicious of any app that breaks when the permission is revoked.
Now that we know just how pervasive microphone hacking is, we can begin asking more intelligent questions about who is using these techniques and how we can protect ourselves at home and at work. It’s also worth taking stock of which devices in your home have microphones, including smartphones, tablets, computers, televisions, baby monitors, and smart assistants such as Alexa.
Companies such as Facebook and Apple have strongly denied spying on people in their homes, but if we’ve learned anything recently, it’s that companies don’t always tell the truth and customers have very little recourse.