Self-Driving Cars Still Don't Know How to See
An Uber autonomous SUV killed a pedestrian. What does that say about the promise of self-driving technology?
On Sunday, the inevitable happened: An autonomous vehicle struck and killed someone. In Arizona, a woman police identified as Elaine Herzberg was crossing the street with her bicycle when a self-driving Uber SUV smashed into her.
Tempe police reported in their preliminary investigation that the vehicle was traveling at 40 miles per hour. Uber has suspended its self-driving car program in response.
This is the second death in the United States caused by a self-driving car, and it’s believed to be the first to involve a pedestrian. It’s not the first accident this year, nor is it the first time a self-driving Uber has been involved in a major crash in Tempe: In March 2017, a self-driving Uber SUV collided with two other cars and flipped over on the highway. As the National Transportation Safety Board opens an inquiry into the latest crash, it’s a good time for a critical review of the technical literature on self-driving cars. That literature reveals that autonomous vehicles don’t work as well as their creators might like the public to believe.
A self-driving car is like a regular car, but with sensors on the outside and a few powerful laptops hidden inside. The sensors (GPS receivers, lidar units, and cameras) transmit information to the car’s computer system. The best way to imagine the perspective of a self-driving car is to picture yourself driving in a 1980s-style first-person driving video game. The world is a 3-D grid with x, y, and z coordinates. The car moves through the grid from point A to point B, using highly precise GPS measurements derived from satellite signals. Several other systems operate at the same time. The car’s lidar sends out pulses of laser light and measures how long the reflections take to return, building a “picture” of what is outside.
It’s similar to the way bats use echolocation to avoid obstacles. The cameras photograph the lines on the road and send data that helps the car steer so that it stays between the solid line at the edge of the road and the dotted line that marks the edge of the lane.
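To make that time-of-flight idea concrete, here is a minimal Python sketch of how a lidar-style range estimate works in principle. The constant, the 200-nanosecond example, and the function name are illustrative assumptions, not details of Uber’s software.

```python
# Rough sketch of the time-of-flight idea behind lidar ranging.
# Every number and name here is illustrative, not taken from any
# real autonomous-driving codebase.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def lidar_distance_m(round_trip_time_s: float) -> float:
    """Distance to an obstacle, given how long a laser pulse took to
    bounce off it and come back (divide by 2 for the one-way trip)."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# A pulse that returns after roughly 200 nanoseconds implies an
# obstacle about 30 meters away.
print(round(lidar_distance_m(200e-9), 1))  # ~30.0
```

A real lidar unit fires many thousands of such pulses every second and stitches the resulting distances into a 3-D point cloud.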
It is a masterfully designed, intricate computational system. However, there are dangers.
Cars don’t see well
Autonomous cars don’t track the center line of the street well on ill-maintained roads. They can’t operate on streets where the lane markings are worn away, as on many streets in New York. These cars also don’t operate in snow and other bad weather because they can’t “see” in those conditions. A lidar guidance system doesn’t work well in rain, snow, or dust because its beams bounce off the particles in the air instead of off obstacles like bicyclists.
Image recognition is flawed
Autonomous vehicles generally use deep neural networks to “recognize” images. The car’s lidar detects an obstacle, the camera takes a picture of it, and the computer uses a deep neural network to identify the image as, say, a stop sign. The car’s control system is programmed to slow down and brake when a stop sign is a set distance ahead. Alternatively, the car’s internal map can be programmed to stop at a particular set of GPS coordinates where a human engineer has noticed that the car fails to recognize a stop sign.
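In very rough terms, that recognize-then-brake logic can be sketched in Python like this. The detection fields, the confidence cutoff, and the 30-meter braking distance below are assumptions made for illustration, not details of any real vehicle’s code.

```python
# Simplified sketch of the perception-to-braking logic described above.
# The classifier is assumed to run elsewhere; the fields, the confidence
# cutoff, and the 30-meter threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # what the neural network thinks the object is
    confidence: float   # how sure the network is, from 0.0 to 1.0
    distance_m: float   # range to the object, reported by lidar

BRAKING_DISTANCE_M = 30.0

def should_brake(detections: list[Detection]) -> bool:
    """Brake if a stop sign is recognized with high confidence
    within the braking distance."""
    return any(
        d.label == "stop_sign"
        and d.confidence > 0.9
        and d.distance_m < BRAKING_DISTANCE_M
        for d in detections
    )

# The deep neural network's output (not shown) drives the decision:
print(should_brake([Detection("stop_sign", 0.97, 25.0)]))  # True
print(should_brake([Detection("stop_sign", 0.55, 25.0)]))  # False: low confidence
```

Notice that everything hinges on the label and confidence the network produces, which is exactly where the trouble starts.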
However, deep neural networks are the same type of image-recognition algorithm that misidentified photos of Black people as gorillas. Laboratory tests show that deep neural networks are easily confused by minor changes. Something as simple as putting a sparkly unicorn sticker on a stop sign can cause the image recognition to fail. Disrupting image recognition would result in a self-driving car failing to stop at a stop sign, which is likely to cause a crash or further pedestrian injuries.
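A tiny toy model can show why such small changes matter. The sketch below uses a made-up linear classifier rather than a real stop-sign detector; its weights, pixel values, and perturbation size are invented, but the underlying fragility is the same kind researchers exploit against deep neural networks.

```python
# Toy illustration of how a small, targeted change to an image can flip
# a classifier's decision. This is a deliberately tiny linear model with
# made-up numbers, not a real stop-sign detector; the point is the
# fragility, not the model.

def classify(pixels, weights, bias):
    """Return 'stop_sign' if the weighted sum of pixel values is positive."""
    score = sum(p * w for p, w in zip(pixels, weights)) + bias
    return "stop_sign" if score > 0 else "not_a_stop_sign"

weights = [0.9, -0.4, 0.7, -0.8]   # invented "learned" weights
bias = -0.2
image = [0.6, 0.2, 0.5, 0.3]       # clean stop-sign image (invented pixels)

# A sticker-sized perturbation: nudge each pixel by 0.15 in whichever
# direction pushes the classifier's score down.
epsilon = 0.15
perturbed = [p - epsilon if w > 0 else p + epsilon
             for p, w in zip(image, weights)]

print(classify(image, weights, bias))      # stop_sign
print(classify(perturbed, weights, bias))  # not_a_stop_sign
```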
GPS is vulnerable
GPS hacking is a very real danger for autonomous vehicles. Pocket-size GPS jammers are illegal, but they are easy to order online for about $50, as the journalist Kashmir Hill demonstrated in a recent Gizmodo article. She reported that commercial truckers commonly use jammers in order to pass for free through GPS-enabled tollbooths.
Self-driving cars navigate by GPS. What happens if a self-driving school bus is speeding down the highway and loses its navigation system at 75 mph because of a jammer in the next lane?
Autonomous cars can’t react quickly
Let’s say there is a huge red fire truck idling at the side of the road. An autonomous car may not react in time if the fire truck suddenly pulls out into traffic. Why? Autonomous cars have a stop-and-start problem. They don’t have unlimited processing power, so they save computation by predicting the future locations only of objects that appear to be in motion. Because they calculate trajectories only for moving objects (like pedestrians or bicyclists), not for the stationary fire truck, they are slow to register a previously stationary object as an object in motion.
If a giant red fire truck is parked and it suddenly swings into traffic, the autonomous car can’t react in time. Though it runs its calculations in microseconds, it treats every object the same. So far, it isn’t able to recognize a fire truck as a class of objects that might move unexpectedly. A human, by contrast, gets a jolt of adrenaline in a crisis that lets the brain react lightning-fast. A human brain sees a fire truck and registers that the truck might move, so the driver is primed to act.
More importantly for accident avoidance, the car computer lacks the human instinct for self-preservation that permits people to act with uncanny speed or strength in times of life-threatening peril. The autonomous car just ... runs into the truck. Or runs over the innocent person.
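As a rough sketch of that stop-and-start shortcut, the following Python snippet selects trajectory predictions only for objects already observed moving, so a parked fire truck gets no prediction until after it has begun to move. The speed threshold and the names are assumptions for illustration, not anyone’s actual planner.

```python
# Sketch of the "stop and start" shortcut described above: to save
# computation, this planner predicts trajectories only for objects it
# has already seen moving. The threshold and names are illustrative.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    name: str
    speed_m_per_s: float   # speed measured over recent sensor frames

MOTION_THRESHOLD_M_PER_S = 0.5  # below this, treat the object as parked

def objects_to_predict(objects: list[TrackedObject]) -> list[TrackedObject]:
    """Only 'moving' objects get trajectory predictions; a parked fire
    truck is skipped, so the first moments of its pull-out are spent
    merely reclassifying it as a moving object."""
    return [o for o in objects if o.speed_m_per_s > MOTION_THRESHOLD_M_PER_S]

scene = [
    TrackedObject("bicyclist", 4.0),
    TrackedObject("fire_truck", 0.0),  # parked ... until it isn't
]
print([o.name for o in objects_to_predict(scene)])  # ['bicyclist']
```

Only once the fire truck clears the speed threshold does the car begin predicting its path, and those lost moments are where the danger lies.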
At their worst, autonomous cars might be murder machines
Many people claim that autonomous cars could save lives. But an overwhelming number of tech people (and investors) seem to want self-driving cars so badly that they are willing to ignore evidence suggesting that self-driving cars could cause as much harm as good. This blind optimism about technology, the assumption that tech is always the right answer, is a bias I call technochauvinism. Google Glass and Snapchat Spectacles are other good examples of technochauvinism guiding unwise ideas. There, however, the stakes were lower. Nobody died because people wore dopey-looking camera-enabled glasses.
With driving, the stakes are much higher. In a self-driving car, death is an unavoidable feature, not a bug. By this point, many people know about the trolley problem as an example of an ethical decision that has to be programmed into a self-driving car. It’s too soon to tell whether the Uber crash was a case of the car being programmed to save its occupants and kill the bystander, a software malfunction, or something totally unexpected. If the car was programmed to save its occupants at the expense of pedestrians, the autonomous-car industry is facing its first public moment of moral reckoning.
But imagine the opposite scenario: The car is programmed to sacrifice the driver and the occupants to preserve the lives of bystanders. Would you get into that car with your child? Would you let anyone in your family ride in it? Do you want to be on the road, or on the sidewalk, or on a bicycle, next to cars that have no drivers and have unreliable software that is designed to kill you or the driver? Do you trust the unknown programmers who are making these decisions on your behalf?
“The current model for real-life testing of autonomous vehicles does not ensure everyone’s safety,” Linda Bailey, the executive director of the National Association of City Transportation Officials (NACTO), said in a statement in response to the Arizona crash. NACTO is launching its own investigation into the incident, alongside the National Transportation Safety Board’s inquiry.
Nobody needs a self-driving car to avoid traffic
Plenty of people want self-driving cars to make their lives easier, but self-driving cars aren’t the only way to fix America’s traffic problems. One straightforward solution would be to invest more in public transportation. In the Bay Area of California, public transportation is woefully underfunded. The last time I tried to take a subway at rush hour in San Francisco, I had to wait for three trains to pass before I could squeeze into a jam-packed car. On the roads, the situation is even worse.
Public-transportation funding is a complex issue that requires massive, collaborative effort over a period of years. It involves government bureaucracy. This is exactly the kind of project that tech people tend to avoid taking on, because it takes a really long time and the fixes are complicated.
Plenty of people, including technologists, are sounding warnings about self-driving cars: They attempt very hard problems that haven’t yet been solved, and the likely future they lead to is neither safe, nor ethical, nor in service of the greater good. Still, the idea that self-driving cars are nifty and coming soon is often the accepted wisdom, and there’s a tendency to forget that technologists have been saying “coming soon” for decades now.
To date, all self-driving car “experiments” have required a driver and an engineer to be on board at all times. Now, even with that safety measure, a pedestrian has died.