The Hazard of Tesla’s Approach to Driverless Cars
A fatal crash calls into question the car company’s approach to building autonomous vehicles—and underscores the stark contrast between its strategy and Google’s.
In the aftermath of a crash that killed the driver of a Tesla Model S while a new feature called Autopilot was engaged, much of the focus has been on the driver himself.
It’s not known whether he was speeding—or perhaps even watching a movie—at the time of the collision, as some reports have suggested. More details will surely emerge as a federal investigation moves forward.
A clearer understanding of the driver’s role in the crash may offer crucial context for an incident that could shape the future of autonomous driving. Though the Tesla vehicle isn’t technically a driverless car, Autopilot is arguably the most sophisticated partially autonomous system on the road.
But the possibility of a technical failure isn’t all that’s at stake. Even if the driver is deemed to be at fault, the man’s death highlights how sharply Tesla’s approach to driverlessness differs from Google’s, and it raises a question Tesla may be asking itself: Did it make a strategic mistake?
We already know humans are not reliable drivers. This is an uncontroversial fact, and one of the main reasons the developers of self-driving vehicles believe the technology could save so many lives. People make dangerous mistakes on the roads all the time, and more than 1.25 million people die in traffic accidents around the world every year as a result.
Even when humans are required to stay completely engaged with the task of driving, many of them don’t. Many people don’t keep their foot hovering above the brake when cruise control is on, for instance. Or they try to multitask while driving.
Google has one of the best examples proving this point—and it’s as funny as it is terrifying: Its test drivers once spotted a person driving a car while playing a trumpet. They’ve also seen people reading books and, of course, text messaging.
“Lots of people aren’t paying attention to the road,” Chris Urmson, the head of Google’s Self-Driving Car Project, wrote in a blog post last year. “In any given daylight moment in America, there are 660,000 people behind the wheel who are checking their devices instead of watching the road.”
Google and Tesla both know this, but the two companies have dramatically different approaches to building autonomous vehicles. Tesla’s strategy is incremental. The idea is this: Add one sophisticated assistive-driving feature at a time, and eventually you’ll end up with a fully autonomous vehicle. (Its Autopilot feature, Tesla has emphasized repeatedly, requires a person to stay completely focused behind the wheel, even as the car does much of the driving.)
Google, on the other hand, is designing its vehicles for full autonomy from the start—a “level 4” system, as it’s known in the driverless world—in which the car does all of the driving, with no human intervention required.
“It’s not to say that either of them is right or wrong, it’s just different,” Urmson told me last fall. “From our perspective, we look at the challenges involved in getting to a self-driving car, and we don’t see it as an incremental task.”
Google didn’t always see it this way, though. It wasn’t until the company realized just how quickly people trust technology to work perfectly that it decided it had to build a car that can “shoulder the entire burden of driving,” as Urmson once put it.
“Our experience has been that when we’ve had people come and ride in our vehicles—even those who think this is smoke and mirrors, or who fundamentally don’t believe in the technology—after trying it out for as little as 10 or 15 minutes, they get it,” he told me. “And their attitudes change dramatically.”
That transformation is a good thing for Google and for the future of self-driving cars more broadly, Urmson says, because it suggests that even skeptics will eventually accept the technology. But it also poses a danger: people may come to trust that technology too much.
Tesla’s Autopilot is exactly the sort of feature that encourages this kind of dynamic—no matter how many times the company emphasizes it requires human attention, the fact that Autopilot can do so much on its own ends up sending a dangerous mixed message.
The feature is in beta mode, and the drivers who test it on public roads are required to acknowledge the risks involved. But the question remains whether the risks posed by partially autonomous systems (and their human drivers) are, in fact, justifiable.
That’s a question Tesla is confronting again now, and how it ultimately answers it may have a profound effect on the future of driving.