Listen, do you want to know a password?


Researchers at British universities have demonstrated a technique that allows an AI model to work out what you’re typing simply by listening to the keystrokes.
Known as an acoustic side channel attack (ASCA), it involves recording the sound of a keyboard, either with a nearby smartphone or via a remote conferencing session such as Zoom. The researchers used a standard iPhone 13 to record the keyboard of an Apple MacBook Pro 16-inch laptop at a standard 44.1kHz sampling rate.
Although the difference may not be detectable to the human ear, each key makes a slightly different sound, which a trained AI model can learn to tell apart.
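To make the idea concrete, here is a minimal, illustrative sketch of how such a keystroke classifier might be put together: isolated keystroke recordings are turned into mel-spectrogram features and fed to a simple classifier. The directory layout, feature sizes, and the librosa/scikit-learn pipeline below are assumptions for illustration only, not the researchers' actual implementation, which the article describes simply as a trained AI model.

```python
# Illustrative sketch only: classify isolated keystroke clips by key label.
# Assumed layout: clips/<key-label>/<recording>.wav
import glob
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def keystroke_features(path, sr=44100, n_mels=64, width=32):
    """Load one short keystroke clip and flatten its log-mel spectrogram."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    # Pad or trim to a fixed width so every clip yields the same feature size.
    if log_mel.shape[1] < width:
        log_mel = np.pad(log_mel, ((0, 0), (0, width - log_mel.shape[1])))
    return log_mel[:, :width].ravel()

paths = glob.glob("clips/*/*.wav")
X = np.array([keystroke_features(p) for p in paths])
labels = np.array([p.split("/")[-2] for p in paths])  # key label from folder name

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Even a basic pipeline like this illustrates why the attack works: the per-key acoustic differences show up clearly in the spectrogram features, and a classifier only needs enough labeled examples of each key to exploit them.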
The team, including Durham University graduate Joshua Harrison, University of Surrey software security lecturer Ehsan Toreini, and Royal Holloway University of London's Dr Maryam Mehrnezhad, was able to determine which keys were pressed with 95 percent accuracy when the typing was recorded using a smartphone, and with 93 percent accuracy when it was recorded over Zoom.
"The ubiquity of keyboard acoustic emanations makes them not only a readily available attack vector," the researchers say. "But also prompts victims to underestimate (and therefore not try to hide) their output."
Biometrics are recommended as a way of guarding against the threat. The researchers also point out that touch typing reduces recognition accuracy considerably. Mixed-case passwords help too: the AI model can pick up the sound of the Shift key being pressed, but cannot detect when it is released amid the noise of the other keys.
Commenting on the research, Eduardo Azanza, CEO at Veridas, says:
Passwords have continuously failed to keep users fully protected, and now, with the rise of AI-driven cyberattacks, they’ve become even weaker. The tests conducted by Durham University demonstrate just how easy it is for cyber criminals to steal passwords.
It's encouraging to see that scientists at Durham University have recommended the use of biometrics to mitigate the risk of such cyber threats. Unlike passwords, users' biometrics cannot be lost or stolen and used by cyber criminals to gain access to systems or commit other crimes such as fraud and identity theft.
However, instead of using fingerprint or face scanning as recommended by Durham University, we strongly advise that organizations implement stronger forms of biometric authentication such as voice recognition and full-facial scanning. Voice and full-facial biometrics are significantly more accurate at verifying a user's identity, are unaffected by external factors such as lighting, and are capable of countering more sophisticated threats like deepfakes.
You can read the full research paper here.
Photo Credit: Brian A Jackson/Shutterstock