Self-Driving Cars Can Be Fooled by Special Images

Researchers Discover Vulnerability in Self-Driving Cars

Researchers from the Max Planck Institute for Intelligent Systems and the University of Tübingen have studied the safety of self-driving cars, testing how reliably these vehicles recognize human figures. They found that the recognition system can malfunction completely, causing the car to veer out of its lane or brake unexpectedly. This is somewhat similar to how strobe lights of certain frequencies can trigger epileptic seizures in some people: an artificial visual stimulus causes a system failure. In this sense, human brains and autonomous vehicles have something in common.

How Special Patterns Can Trick Autonomous Vehicles

According to the researchers, it took them only about four hours to generate color patterns that reliably throw a self-driving car's vision system into a panic-like state, which poses a real safety threat. Such patterns can easily be printed on T-shirts, turned into stickers for road signs, or placed on shopping bags. The researchers warn that hackers could exploit this vulnerability as well.
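The article does not describe the researchers' exact procedure, but attacks of this kind are usually built by gradient-based optimization of a small "adversarial patch": the pattern's pixels are adjusted until the network's output breaks down. The sketch below is a minimal illustration of that general idea only; the stand-in classifier, patch size, and loss are assumptions, not the study's actual code.

```python
# Minimal sketch of a gradient-based adversarial patch attack.
# The ResNet-18 classifier stands in for a car's perception network;
# patch size, placement, and loss are illustrative assumptions.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(1, 3, 50, 50, requires_grad=True)   # the printable pattern
optimizer = torch.optim.Adam([patch], lr=0.05)

scene = torch.rand(1, 3, 224, 224)                      # stand-in for a camera frame
clean_label = model(scene).argmax(dim=1)                # prediction without the patch

for step in range(200):
    attacked = scene.clone()
    attacked[:, :, 80:130, 80:130] = patch.clamp(0, 1)  # paste the patch into the frame
    logits = model(attacked)
    # Gradient *ascent* on the error for the clean prediction:
    # the patch is tuned to make the network disagree with itself.
    loss = -torch.nn.functional.cross_entropy(logits, clean_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("with patch:", model(attacked).argmax(dim=1).item(),
      "without patch:", clean_label.item())
```

In a real attack the optimized pattern would then be printed and placed in the camera's field of view, which is why T-shirts, stickers, and shopping bags are enough to carry it.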

The Root of the Problem: Imperfect Artificial Intelligence

The root of the problem is the imperfection of artificial intelligence, particularly in image recognition. The driving software relies on a built-in camera to observe the environment, such as the road ahead, and to detect obstacles. If recognition fails, the robot car will, at best, stop for safety reasons.
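In rough terms, that fallback logic, distrust the camera and stop rather than act on bad data, might look like the hypothetical sketch below. The confidence threshold, labels, and interface are assumptions for illustration, not any manufacturer's actual code.

```python
# Hypothetical sketch of the fallback described above: if the detector's
# output looks unreliable, the planner chooses a safe stop instead of
# acting on bad data. Threshold and labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float          # 0.0 .. 1.0

MIN_CONFIDENCE = 0.6           # assumed threshold for trusting a detection

def plan_action(detections: list[Detection]) -> str:
    """Decide what the vehicle should do based on the camera's detections."""
    if not detections:
        return "continue"                                   # nothing recognized in the frame
    if any(d.confidence < MIN_CONFIDENCE for d in detections):
        return "safe_stop"                                  # recognition failed: stop, at best
    if any(d.label == "pedestrian" for d in detections):
        return "brake"
    return "continue"

print(plan_action([Detection("pedestrian", 0.35)]))          # -> "safe_stop"
```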

Unpredictable Behavior and the Need for Solutions

The authors of the study emphasized that the failure occurs only a few percent of the time, but that is enough to make a self-driving car behave unpredictably. The experiment showed that even when the car's camera sees the same spot several times, the system may react differently each time.

Of course, scientists and programmers will eventually solve this problem, but for now it remains open. The researchers believe it is up to car manufacturers to train their systems to be resistant to such attacks.
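The article does not say which defense manufacturers would use, but one common way to harden a vision model is adversarial training: attacked images are generated during training and the network learns from them as well. The toy loop below is only a sketch of that idea; the tiny model, random data, and FGSM-style perturbation are illustrative assumptions.

```python
# Minimal sketch of adversarial training as a possible defense.
# Model, data, and epsilon are illustrative assumptions only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.03                              # assumed perturbation budget

def fgsm(images, labels):
    """Craft adversarial examples with the fast gradient sign method."""
    images = images.clone().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

for step in range(100):                     # toy training loop on random data
    images = torch.rand(16, 3, 32, 32)
    labels = torch.randint(0, 10, (16,))
    adv_images = fgsm(images, labels)       # attack the current model ...
    loss = loss_fn(model(adv_images), labels)  # ... then train on the attacked images
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```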
