New Method Lets Hackers Fool Facial Recognition Systems and Impersonate Others
Researchers from the Israeli company Adversa AI, which specializes in artificial intelligence (AI) technologies, have introduced a new method for tricking facial recognition systems by adding so-called “noise” to photographs. This noise consists of tiny pixel-level changes that are invisible to the naked eye but sufficient to make a facial recognition system believe the photo shows a different person. For example, the researchers demonstrated how they made the PimEyes facial recognition system mistake Adversa AI's CEO, Alex Polyakov, for Elon Musk.
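Adversa AI has not revealed how Adversarial Octopus computes its noise, but the general principle of imperceptible adversarial perturbations is well documented. The sketch below uses the classic fast gradient sign method (FGSM) against a stock image classifier purely as an illustration; the model, the input file name, and the target class are placeholders, and nothing here reflects Adversa AI's actual technique.

```python
# Illustrative only: a textbook FGSM perturbation, NOT Adversa AI's unpublished method.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A stock image classifier stands in for a face recognition model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # "photo.jpg" is a placeholder path
image.requires_grad_(True)

# Pick an arbitrary target class and step the image toward it.
target = torch.tensor([1])
loss = F.cross_entropy(model(image), target)
loss.backward()

epsilon = 2 / 255  # a per-pixel change this small is invisible to the eye
adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction: ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```

With a perturbation budget of only 2/255 per pixel, the altered photo looks identical to the original to a human, yet the model's prediction can change.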
Adversarial attacks are improving every year, as are the methods for defending against them. However, the attack presented by Adversa AI, called “Adversarial Octopus,” stands out for several reasons.
- First, Adversarial Octopus does not merely hide or mask the person in the photo; it makes the recognition system identify them as someone else entirely, as in the Polyakov-to-Musk demonstration.
- Second, instead of adding noise to the image data used to train AI models (a so-called poisoning attack), the new method perturbs the image that is submitted to the facial recognition system at recognition time. It does not require any internal knowledge of how the system was trained; a generic sketch of such a black-box attack follows below.
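Since Adversa AI has not published its algorithm, the toy loop below only illustrates the black-box, inference-time idea described above: the attacker repeatedly queries the system and keeps changes that raise its match score, without ever touching training data or model internals. The `query_system` function, the target array, and all parameters are stand-ins for a real facial recognition API.

```python
# Illustrative only: a random-search, query-only evasion loop.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a real facial recognition API: it scores similarity to a fixed
# random "target" array. In a real black-box setting this would be a network
# call returning the system's match score; no weights or training data needed.
_target = rng.random((64, 64, 3))

def query_system(image: np.ndarray) -> float:
    return -float(np.linalg.norm(image - _target))

def black_box_evasion(image: np.ndarray, steps: int = 200, epsilon: float = 2 / 255) -> np.ndarray:
    """Keep only small perturbations that raise the system's match score."""
    best, best_score = image.copy(), query_system(image)
    for _ in range(steps):
        candidate = np.clip(best + rng.uniform(-epsilon, epsilon, image.shape), 0.0, 1.0)
        score = query_system(candidate)
        if score > best_score:  # candidate looks more like the target identity
            best, best_score = candidate, score
    return best

photo = rng.random((64, 64, 3))  # placeholder for a real photograph
adversarial = black_box_evasion(photo)
print("score before:", query_system(photo), "after:", query_system(adversarial))
```

The point of the sketch is that every step relies only on querying the deployed system, which matches the claim that no internal knowledge of its training is required.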
The creators of Adversarial Octopus have not yet published a scientific paper with a full explanation of the attack. The researchers plan to share more details only after completing the responsible disclosure process with facial recognition system developers.