Researchers in the UK claim to have translated the sound of laptop keystrokes into their corresponding letters with 95 percent accuracy in some cases.

That 95 percent figure was achieved with nothing but a nearby iPhone. Remote methods are just as dangerous: over Zoom, the accuracy of recorded keystrokes only dropped to 93 percent, while Skype calls were still 91.7 percent accurate.

In other words, this is a side-channel attack with considerable accuracy, minimal technical requirements, and a ubiquitous data exfiltration point: microphones, which are everywhere, from our laptops to our wrists to the very rooms we work in.

  • rhandyrhoads@lemmy.world
    11 months ago

    A pretty simple deep learning approach would be to take a large sample of audio and first isolate the individual key sounds. From there, it could start associating the most common sounds with the most common letters, swapping assignments around until dictionary words begin to come out. Once it can identify individual keys, you could even brute force the remaining mapping in a pretty reasonable timeframe. The keyboard layout is the least important part, because the sound of each key is going to vary keyboard by keyboard, and potentially even user by user. A password with no dictionary words, typed on a keyboard layout used exclusively for entering that password, would likely defeat this sort of attack.
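The frequency-matching step the comment describes (pair the most common key sounds with the most common English letters, then refine) can be sketched roughly as below. This is a hypothetical illustration, not the researchers' method: `cluster_ids` stands in for keystroke sounds already grouped into clusters by some earlier audio step, and the letter-frequency order is a standard English table.

```python
from collections import Counter

# English letters in rough descending order of frequency in typical text
# (assumption: a standard frequency table; a real attack would refine
# the mapping iteratively against a dictionary).
ENGLISH_BY_FREQ = "etaoinshrdlcumwfgypbvkjxqz"

def frequency_guess(cluster_ids):
    """First-pass decoding: rank key-sound clusters by how often they
    occur and assign the most common cluster to 'e', the next to 't',
    and so on. The comment's approach would then permute this mapping
    until dictionary words start appearing."""
    ranked = [cluster for cluster, _ in Counter(cluster_ids).most_common()]
    mapping = {cluster: ENGLISH_BY_FREQ[i] for i, cluster in enumerate(ranked)}
    return "".join(mapping[c] for c in cluster_ids)

# Example: cluster 0 heard three times, cluster 1 twice, cluster 2 once,
# so they map to 'e', 't', and 'a' respectively.
print(frequency_guess([0, 0, 0, 1, 1, 2]))  # → eeetta
```

Note that the mapping is per-keyboard (and possibly per-typist), which is exactly why the comment argues the layout itself matters less than the acoustic clustering.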