Hold the Wheel Tight! How Hackers Trick Smart Cars
Not long ago, headlines were buzzing: disaster struck, a Tesla went off course! How can we trust “smart” cars after that? To understand the real risks to this promising technology, let’s look at the system architecture, the history, key vulnerabilities, attack vectors, and real-world cases.
System Architecture: CAN Bus, ECU Blocks, and More
The onboard computer in a modern car isn’t a single unit. Instead, it’s a network of electronic control units (ECUs) connected via a CAN bus—a standard since the late 1980s. All ECUs communicate over twisted pair wires, sending messages in a unified format.
It’s more complex in practice: there can be multiple buses for different priorities, with “bridges” between them. For example, Volkswagen’s MEB electric platform is moving away from CAN to onboard Ethernet and a unified Android-based OS.
No matter how “smart” a car is, the CAN bus remains at its core—and with it, a fundamental vulnerability: anyone who gains access to the bus (through the diagnostic port or by physically tapping the wiring) can sniff all transmitted data. Worse, if they can send commands to an ECU, it will execute them—whether it’s the air conditioner (not so bad) or the brakes or engine (potentially catastrophic).
Manufacturers do add protections: an ECU may reject commands that lack the right checksum, or only accept them under certain conditions (e.g., parking assist only engages in reverse, below 5 km/h). Also, CAN messages are low-level, like machine code, so making sense of them requires deep technical documentation or hands-on experimentation.
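To make this concrete, here is a minimal sketch of what bus access looks like in practice, assuming the open-source python-can library and a Linux SocketCAN interface; the channel name, arbitration ID, and payload are hypothetical placeholders rather than any real ECU’s commands.

    # A minimal sketch assuming the python-can library and a Linux SocketCAN
    # interface; the channel name, arbitration ID, and payload are hypothetical
    # placeholders, not any real ECU's commands.
    import can

    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    # Sniffing: every node on the bus sees every frame.
    msg = bus.recv(timeout=1.0)
    if msg is not None:
        print(f"ID=0x{msg.arbitration_id:03X} data={msg.data.hex()}")

    # Injecting: a frame with a plausible ID and checksum is simply trusted.
    fake = can.Message(arbitration_id=0x123,           # hypothetical ECU address
                       data=[0x01, 0x00, 0x00, 0x00],  # hypothetical command
                       is_extended_id=False)
    bus.send(fake)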
Researchers Enter the Scene
Theoretically, hacking a car is simple; practically, it takes meticulous prep. For years, only criminals seeking to steal cars, not control their electronics, explored these weaknesses. That changed in 2010, when computer engineers from UC San Diego and the University of Washington presented a landmark paper at an IEEE security symposium.
They highlighted that while the auto industry focuses on safety in normal and emergency use, it pays little attention to targeted attacks on vehicle electronics. For example, to ensure doors unlock after a crash, the low-priority network (central locking) is bridged to the high-priority network (vehicle sensors and driver-assist systems). Advanced telematics collect sensor data and send it via cellular to service centers, or even call 911 in a crash. Anti-theft systems can remotely disable the engine.
Modern cars also have an OBD-II diagnostic port with a set of standardized diagnostic modes, accessible to any mechanic—essentially “root login, password: password.” Many hacks start here.
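To illustrate just how open that door is, here is a rough sketch assuming the python-obd library and a cheap ELM327 adapter; it only queries standardized, read-only parameters, and the adapter setup is assumed rather than shown.

    # A rough sketch assuming the python-obd library and a cheap ELM327
    # adapter: standardized, read-only queries with no authentication at all.
    import obd

    connection = obd.OBD()                  # auto-detects the adapter's port
    speed = connection.query(obd.commands.SPEED)
    rpm = connection.query(obd.commands.RPM)
    print(speed.value, rpm.value)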
The researchers created CARSHARK, a tool for analyzing and injecting CAN messages. Some ECUs had authentication, but with only a 16-bit key—brute-forcible in days. They could reflash ECUs or launch DoS attacks, making systems unresponsive. For dramatic effect, they wrote a “self-destruct demo virus” that counted down on the speedometer, flashed lights, honked, then killed the engine and locked the doors, displaying “PWNED.”
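For a sense of scale, here is a hedged sketch of why a 16-bit key offers no real protection: the entire keyspace fits in a single loop. The try_key callback below is hypothetical; in practice the bottleneck is how quickly the ECU answers, not the arithmetic.

    # The whole 16-bit keyspace is only 65,536 values. The try_key callback is
    # hypothetical (e.g. send a seed/key response and check for an ACK); the
    # real limit is the ECU's response rate, not the math.
    def brute_force_16bit(try_key):
        for key in range(0x10000):   # 2**16 = 65,536 candidates
            if try_key(key):
                return key
        return None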
They also loaded malicious code into the telematics system’s RAM (running Unix), triggered by events like reaching a certain speed, then self-erased after execution—an “ideal crime.”
How Hackers Gain Access
In 2011, the same team explored how attackers could gain access. Real-world vectors included service computers (often running Windows and connected to the internet) and, theoretically, “smart” EV chargers at charging stations, which transmit both power and data.
They described attacks like a WMA music file that plays normally on a PC but sends malicious packets to the CAN bus when played in a car, or attacks via Bluetooth or cellular-connected telematics. All these were demonstrated in the lab, not just theorized.
The Legendary Duo: Chris Valasek and Charlie Miller
After the university team, two independent hackers, Chris Valasek and Charlie Miller, repeated and expanded on the research. They studied dozens of car models, focusing on network architecture and remote attack vectors, especially “cyber-physical” components like cruise control and lane-keeping assist (LKA)—prime hacker targets and key steps toward true self-driving cars.
They found that while automakers use different components and architectures, many infotainment systems use off-the-shelf consumer electronics, even web browsers.
The Jeep Cherokee and the Infotainment System Hack
Their main breakthrough was a vulnerability in the 2014 Jeep Cherokee’s Harman Uconnect infotainment system—a computer with a 32-bit ARM processor and QNX OS, combining audio, radio, navigation, apps, Wi-Fi, and a 3G modem (specifically for Sprint, a major US carrier).
Scanning revealed port 6667 open for D-Bus interprocess communication, with no password required. A simple Python script could open a remote root shell. Even more alarming, port 6667 was reachable over any network interface, including 3G. Using Sprint femtocells bought on eBay, they could connect to any car within range. Worse, any two devices on Sprint’s network could reach each other nationwide. By scanning the IP range assigned to cars, they found multiple models with open, unauthenticated ports—potentially hundreds of thousands of vehicles. After the research was published, Chrysler recalled 1.4 million vehicles to patch the vulnerability.
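The scan itself needs nothing exotic. Below is a hedged sketch of that kind of sweep in Python; the address range is a placeholder, not Sprint’s real allocation, and an open port is of course only the first step of the attack the researchers described.

    # A hedged sketch of the sweep: probe an address range for hosts answering
    # on port 6667. The 10.0.0.0/24 range is a placeholder, not Sprint's real
    # allocation.
    import socket
    import ipaddress

    for ip in ipaddress.ip_network("10.0.0.0/24").hosts():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            if s.connect_ex((str(ip), 6667)) == 0:
                print(f"{ip}: port 6667 open -- unauthenticated D-Bus?")
        finally:
            s.close()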
Security in Design and Use
The Jeep hack was a wake-up call for the auto industry. Valasek and Miller convinced manufacturers that cars need the same security principles as other electronics: secure design, collaboration with security experts, and rapid, centralized patching. For example, a lazy Jeep owner might ignore recall notices, leaving their car vulnerable, while Tesla owners get over-the-air security updates like smartphones.
They also noted a worrying trend: as cars gain more “cyber-physical” components, more control is concentrated in single ECUs—prime, high-priority targets. Once inside the car’s network, hackers can focus on the “autopilot” ECU and trick it directly.
As cars get smarter—self-parking, lane-keeping, adaptive cruise—they approach true autonomy. But when does a car become truly self-driving? And how much can we trust it? Automakers tout new features but still urge drivers to keep hands on the wheel and eyes on the road. Unfortunately, many drivers ignore this advice.
Who’s Driving the Tesla?
Whenever a Tesla crashes, the media focuses on autopilot or batteries, rarely on what the driver was doing. In several real cases, the answer was “eating bagels,” “watching a movie,” or “barely touching the wheel for an hour.” A common scenario: in the left lane, the car ahead swerves right to avoid an obstacle, and Tesla accelerates, thinking the lane is clear—with predictable results. In August 2019, this happened in Moscow; fortunately, everyone survived.
This happens because Tesla doesn’t use lidar or preloaded maps—Elon Musk considers lidar too expensive and maps too inflexible. Instead, Tesla relies on radar and cameras, whose data is fused by a neural network. But radar tends to ignore stationary objects (otherwise every overpass and roadside sign would trigger braking), and when radar and cameras disagree, the conflict-resolution logic can pick the wrong answer. If the driver isn’t ready to intervene, disaster can strike.
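A toy model helps show where this can go wrong. The sketch below is emphatically not Tesla’s actual logic, just an illustration of the failure mode: radar drops stationary returns, and a naive fusion rule that waits for both sensors to agree never reacts to the stopped obstacle.

    # A toy model (not any real vendor's logic) of the failure mode: radar
    # discards returns with near-zero speed, and a naive fusion rule that
    # requires both sensors to agree never sees the stopped car.
    def obstacle_ahead(radar_targets, camera_sees_obstacle):
        # Keep only radar returns that are actually moving.
        moving = [t for t in radar_targets if abs(t["speed_mps"]) > 1.0]
        radar_sees = len(moving) > 0
        # Naive conflict resolution: brake only when both sensors agree.
        return radar_sees and camera_sees_obstacle

    # A stopped truck suddenly revealed: radar has filtered it out and the
    # camera hasn't classified it yet, so the check returns False (no braking).
    print(obstacle_ahead([{"speed_mps": 0.0}], camera_sees_obstacle=False))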
Dangerous Lifehacks
Some experts argue that the problem isn’t purely technical: Elon Musk markets Tesla’s driver assistance as a full autopilot. The manual says drivers must always be ready to take control, but Musk himself often ignores this in promotional appearances, setting a bad example. As a result, many Tesla owners believe they own a fully autonomous car and try to trick the system—taping a water bottle or wedging an orange into the steering wheel to fool the hands-on-wheel detection. That, too, is a form of hacking: bypassing a safety feature.
Sensors: How Robots See the World
The “smartness” of a car depends on both the number and quality of its sensors and the system processing their signals. But every sensor is a data input—and a potential vulnerability. Most development focuses on everyday use, not on defending against targeted interference or attacks. Conflict-resolution algorithms often aren’t designed to handle deliberate deception—just like in classic sci-fi, robots are smart but trusting.
At DEF CON 21 in 2013, hacker Zoz Brooks gave a detailed talk on attacks against autonomous vehicles, drones, and even underwater robots. He outlined two main attack paths for all sensors: jamming (blocking signals) and spoofing (faking signals).
He cited, for example, Arabic-language manuals (likely from banned groups) advising to fool lidars with reflectors, or tricking wheel rotation sensors by swapping wheels of different sizes. He also discussed the weaknesses of both preloaded and real-time maps.
GPS: “Hello, Where Am I?”
Brooks also discussed GPS spoofing—a hot topic since Iran allegedly used it to capture a US RQ-170 Sentinel stealth drone. Today, GPS spoofing is widespread. In 2017, 20 ships in the Black Sea near Gelendzhik saw GPS errors of over 25 nautical miles. Similar anomalies are reported by drivers near VIP motorcades. With cheap SDR (software-defined radio) tech, these attacks are now within reach of individuals.
Recently, Regulus, a company specializing in GPS security, demonstrated that for $600, you can trick a Tesla Model 3’s autopilot into veering off course. The fake signal makes the car think it’s time to exit the highway, sending it onto the shoulder or into oncoming traffic. Any GPS-reliant device—drones, inattentive drivers, even Pokemon GO—can be affected.
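On the defensive side, even a crude plausibility check can flag the grossest spoofing. The sketch below is a generic illustration, not any vendor’s actual mitigation: it rejects a new GPS fix that implies a physically impossible jump since the previous one.

    # A generic plausibility check (a hedged sketch, not any vendor's actual
    # mitigation): reject a new GPS fix if it implies a physically impossible
    # jump since the previous one.
    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance between two fixes, in meters.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 6371000 * 2 * asin(sqrt(a))

    def fix_is_plausible(prev_fix, new_fix, dt_s, max_speed_mps=70.0):
        # prev_fix / new_fix are (latitude, longitude) pairs in degrees.
        jump_m = haversine_m(prev_fix[0], prev_fix[1], new_fix[0], new_fix[1])
        return jump_m <= max_speed_mps * dt_s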
The “Chinese Room”: What Does the Autopilot Think?
But what about the part of the smart car that processes sensor data? Can it be tricked? Chinese hackers at Tencent’s Keen Security Lab have led the way here. Unlike Miller and Valasek, who studied many brands, Keen focused solely on Tesla. (Fun fact: Tencent is both a Tesla shareholder and Keen’s sponsor.)
Their reports are highly technical, but the key takeaway is that many vulnerabilities they exploited were similar to those found by Miller and Valasek—showing these issues persist. Keen’s team are masters of reverse engineering. In 2016, to remotely open a Tesla’s trunk or fold its mirrors, they used a rogue Wi-Fi hotspot and a browser vulnerability in the infotainment system. The real breakthrough came when they reprogrammed the Gateway component (the bridge between car networks), gaining access to the CAN bus. In 2018, they achieved root access to the autopilot, bypassing improved security, and in 2019, they published a detailed analysis of how it processes camera and sensor data.
They found that several tasks—object tracking, mapping, even rain detection—are handled by a single neural network. By studying the images used for rain detection, they applied “adversarial examples”—adding noise to camera images that looks insignificant to humans but fools the neural net into thinking it’s raining, triggering the wipers. They achieved this not with a custom neural net, but with Tesla’s commercial product. Later, they managed to trigger the effect by showing a noisy image on a TV in the camera’s view, and then applied similar tricks to lane detection.
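The underlying recipe is the fast gradient sign method (FGSM). The sketch below shows it against a generic off-the-shelf image classifier, which stands in here for Tesla’s proprietary network; the epsilon value and the choice of resnet18 are illustrative assumptions.

    # The basic FGSM recipe in PyTorch; a stock resnet18 stands in for Tesla's
    # proprietary network, and epsilon is an illustrative value. `image` is
    # assumed to be a (1, 3, H, W) float tensor in [0, 1] and `true_label` a
    # (1,)-shaped tensor holding the correct class index.
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1").eval()

    def fgsm(image, true_label, epsilon=0.01):
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Nudge every pixel slightly in the direction that increases the loss:
        # imperceptible to a human, but enough to flip the model's prediction.
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()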
They discovered that stickers on the road can confuse the autopilot, making it miss lane lines—even though the stickers are visible to the naked eye. Conversely, small stickers could make the neural net “see” lane lines where none exist, potentially steering the car into oncoming traffic at an intersection. The researchers believe these tricks would work even on unmodified cars.
Human, All Too Human: Technology’s Limits
Most self-driving car projects rely on detailed maps and are tested in specific cities or test tracks, because maps make autopilot’s job much easier. For now, navigating truly unknown terrain is the domain of DARPA challenge winners and ambitious startups.
Also, except for a few demo models, most self-driving cars still have a safety driver behind the wheel. Manufacturers rarely disclose how often human intervention is needed—because it’s more frequent than they’d like.
Levels of Autonomy
SAE International (formerly the Society of Automotive Engineers) defines levels of autonomy from 0 to 5. Tesla sits at Level 2: the automation controls both speed and steering, but the driver must be ready to intervene at any moment. (A parking assistant that both steers and moves the car already counts as Level 2.) At Level 3 the car drives itself, but only in limited conditions such as highways, and the driver must take over when the system requests it. Level 4 needs no human driver at all, but only within specific areas (like Waymo’s cars). Level 5—full autonomy in any conditions—remains out of reach for now.
How to Stop Worrying and Love Your Smart Car
So what’s the bottom line? Like with many “Internet of Things” devices, the more complex the system, the more potential vulnerabilities. The most secure elements can be bypassed by attacking weaker ones. Borrowing solutions from consumer electronics can introduce new problems. And human factors—complacency after repeated success—mean no one is ready for failure in a critical moment.
Be vigilant and cautious—both on the road and online. Don’t blame the robots—they’re still learning—but don’t trust them 100% with life-and-death decisions.
If this article makes you want to dive deeper—not ditch your modern car for an old Lada, but experiment yourself—check out George Hotz’s comma.ai project: an open-source autopilot with affordable hardware for many modern cars. It’s much cheaper than a Tesla and has a vibrant community, wiki, and developer blog. You won’t even need to keep your hands on the wheel—but you’ll still need to watch the road and think about what you’re doing.