Uber’s Self-Driving Car Didn’t Malfunction, It Was Just Bad
There were no software glitches or sensor breakdowns that led to a fatal crash, merely poor object recognition, emergency planning, system design, testing methodology, and human operation.
On March 18, at 9:58 p.m., a self-driving Uber car killed Elaine Herzberg. The vehicle was driving itself down an uncomplicated road in suburban Tempe, Arizona, when it hit her. Herzberg, who was walking across the mostly empty street, was the first pedestrian killed by an autonomous vehicle.
The preliminary National Transportation Safety Board report on the incident, released on Thursday, shows that Herzberg died because of a cascading series of human and machine errors that presents a damning portrait of Uber’s self-driving testing practices at the time.
Perhaps the worst part of the report is that Uber’s system functioned as designed. There were no software glitches or sensor malfunctions. It just didn’t work very well.
According to the report, the object-detection system misclassified Herzberg when its sensors first detected her “as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path.” Because the classification kept changing, the planning software made poor predictions of her speed and direction, and consequently poor choices about the car’s own speed and path.
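The report’s phrase “varying expectations of future travel path” points at why flip-flopping classifications matter: each object class typically carries its own assumptions about how the object moves. The following is a minimal illustrative sketch, not Uber’s actual code; the function name and the per-class motion assumptions are hypothetical, chosen only to show how a reclassification can abruptly change a predicted trajectory.

```python
# Illustrative sketch only -- hypothetical names and motion priors,
# not Uber's software. The point: if each object class implies a
# different motion model, every reclassification changes the planner's
# prediction of whether the object will cross into the car's path.

def predict_crossing_speed(obj_class: str, measured_speed: float) -> float:
    """Hypothetical per-class assumption about lateral (road-crossing) motion, in m/s."""
    priors = {
        "pedestrian": measured_speed,  # pedestrians may cross the road
        "bicycle": 0.0,                # bicycles assumed to ride along it
        "vehicle": 0.0,                # vehicles assumed to stay in lane
        "unknown": 0.0,                # no motion model at all
    }
    return priors.get(obj_class, 0.0)

# As the classifier cycles through "unknown", "vehicle", "bicycle" --
# the sequence the NTSB report describes -- the predicted crossing
# speed stays at zero, so the planner never anticipates her path.
for cls in ["unknown", "vehicle", "bicycle"]:
    print(cls, predict_crossing_speed(cls, 1.4))
```

Under these (assumed) priors, none of the three classes the system actually assigned would have predicted a person walking across the road.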
About 1.3 seconds before impact, the self-driving computer determined that it needed to make an emergency-braking maneuver to avoid a collision. But it did not. Why? Uber’s software prevented the system from braking if that action was expected to cause a deceleration of greater than 6.5 meters per second squared. That is to say, in an emergency, the computer could not brake.
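The logic the report describes can be sketched in a few lines. This is a hedged illustration, not Uber’s code: the function names and the stopping-distance formula are my own, and only the 6.5 m/s² threshold comes from the NTSB report.

```python
# Minimal sketch of the braking logic the NTSB report describes.
# Hypothetical names; only the 6.5 m/s^2 threshold is from the report.

EMERGENCY_DECEL_LIMIT = 6.5  # m/s^2, per the NTSB preliminary report

def required_deceleration(speed_mps: float, distance_m: float) -> float:
    """Constant deceleration needed to stop within distance_m: v^2 / (2d)."""
    return speed_mps ** 2 / (2 * distance_m)

def plan_braking(speed_mps: float, distance_m: float) -> str:
    decel = required_deceleration(speed_mps, distance_m)
    if decel > EMERGENCY_DECEL_LIMIT:
        # Emergency braking is disabled under computer control,
        # and no alert is issued to the operator.
        return "no_action"
    return "brake"
```

With round numbers: a car at 20 m/s needing to stop in 25 m requires 8 m/s² of deceleration, which exceeds the limit, so this logic takes no action at the exact moment braking is most needed.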
“According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” the report says.
Instead, the system relied on the driver to take control in an emergency, but “the system is not designed to alert the operator.”
The driver, for her part, took control of the car less than 1 second before the crash by grabbing the steering wheel. It wasn’t until after impact that she began braking.
In video footage of the interior of the car leading up to the crash, the driver is repeatedly seen looking down toward the center console of the car. Many commentators assumed that she was looking at a phone, but she told the NTSB investigators “she had been monitoring the self-driving system interface.” In fact, Uber’s testing method tasked operators with “monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review.”
Other self-driving companies’ testing protocols involve two people: one to drive and one to monitor the system’s outputs and annotate events in real time. Uber itself did this too until late 2017, when the company decided that the second operator’s job could be done by looking at logs back at the office. “We decided to make this transition because after testing, we felt we could accomplish the task of the second person—annotating each intervention with information about what was happening around the car—by looking at our logs after the vehicle had returned to base, rather than in real time,” an Uber spokeswoman told CityLab earlier this year.
It’s unclear what penalties Uber could face for this failure. The company has already settled a court case with Herzberg’s family. It has also scaled back its autonomous-testing efforts.
“Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program,” Uber said in an emailed statement. “We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.”