Remember May 2022? At the IEEE Symposium on Security and Privacy in San Francisco, four researchers, Boucher, Shumailov, Anderson, and Papernot, dropped a bomb that is still shaking the foundations of AI security.
The paper's title itself contained 1,000 invisible characters. You couldn't see them; neither could the conference reviewers.
But the machines could. They proved their point before anyone even
read the first sentence. That is top-tier, Hall-of-Fame level academic trolling.
Breaking the World with 3 Keystrokes
The researchers exposed a glitch in the matrix. They asked a simple question: What happens in the gap between what humans see (pixels) and what machines see (bytes)?
They found chaos.
They took down Google Translate, Microsoft Azure, and IBM's classifiers. They didn't use supercomputers or complex algorithms. They used three invisible characters.
Three.
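To get a feel for the mechanism, here is a minimal sketch (not the authors' actual payloads, which are detailed in the paper): two strings that render identically on screen but are different byte sequences, because one contains invisible Unicode code points such as the zero-width space.

```python
# Two strings that look identical to a human but differ at the byte level.
# U+200B (ZERO WIDTH SPACE) and U+200C (ZERO WIDTH NON-JOINER) render as
# nothing, yet a tokenizer or classifier sees entirely different input.

visible = "Send money to Alice"
poisoned = "Send mo\u200bney to Al\u200cice"  # two invisible characters inserted

print(visible == poisoned)           # False: the byte sequences differ
print(len(visible), len(poisoned))   # the poisoned string is two code points longer
```

This is the whole trick: the human reviews the pixels, the model consumes the bytes, and the two no longer agree.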
This team was the Avengers of security research.
- Boucher: The implementation wizard.
- Shumailov: The model breaker.
- Anderson: The legend who saw this coming decades ago.
- Papernot: The adversarial ML expert.
They realized that while the rest of the world was rushing to build bigger, faster models, we had forgotten the golden rule of the web: Never. Trust. User. Input.
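What does "never trust user input" mean in practice here? A minimal defensive sketch, assuming a Python pipeline, is to strip Unicode "format" characters (category Cf), which covers zero-width spaces, joiners, and bidi controls. Real sanitizers need more nuance than this, but it shows the idea:

```python
import unicodedata

def strip_invisible(text: str) -> str:
    """Remove Unicode format characters (category Cf): zero-width
    spaces, joiners, and bidirectional control characters."""
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

# U+200B is a zero-width space, U+202E a right-to-left override.
print(strip_invisible("pay\u200bload\u202e"))  # prints "payload"
```

Stripping is only one option; the paper also discusses rendering-based and rejection-based defenses, since silently dropping characters can itself change meaning in legitimate multilingual text.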
The Legacy
Most security papers get cited and forgotten. This one birthed an entire industry. It validated the need for tools like Bad Character Scanner and proved that you can't patch your way out of an architectural problem.
Here's to the brilliance of asking the question no one else thought to ask.
Read the full paper: Bad Characters: Imperceptible NLP Attacks
Published: 2022 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP46214.2022.9833641
Authors: Nicholas Boucher, Ilia Shumailov, Ross Anderson, Nicolas Papernot
This article is part of the Bad Character Scanner blog series exploring the landscape of invisible character vulnerabilities in modern software systems.