These Robots Act Differently When You're Around
The machines of the future will tailor their behavior to humans—and even individual personalities.
Self-driving cars are famously cautious. This is, of course, by design. And it’s part of why Google’s fleet of driverless cars has caused only one (minor) accident over the course of six years and more than 1.4 million miles of autonomous driving.
But this fastidiousness is also what fuels skepticism among those who say self-driving cars aren’t ready to share roads with human drivers. Because human drivers are, well, in addition to being sort of terrible at driving, unpredictable.
To navigate this uncertainty, people rely on all sorts of human signals. They make eye contact at stop signs, and wave one another along (and occasionally use less productive gestures to communicate with other drivers).
Self-driving cars, lacking eyes and hands, don’t do any of this. So instead, they often err on the side of caution. A classic example of where this extra tentativeness can go wrong is the four-way stop. Humans, even when they know to wait their turn, sometimes roll through the stop a bit more than they should, or stop, inch forward at the wrong moment, then stop again. This kind of behavior is common, and yet may appear so erratic to a robot as to render it immobile.
That could be a huge problem for self-driving cars—or maybe not.
There’s more to the development of driverless cars than the work that computer scientists and engineers are doing to make them perceive where they are and figure out how to get from Point A to Point B.
“Most of robotics is focused on how to get robots to achieve the task, and obviously this is really, really important,” said Anca Dragan, a roboticist at Berkeley, and the head of the Interactive Autonomy and Collaborative Technologies lab. “But what we are doing in my lab, is we focus on how these algorithms need to change when robots are actually out there in the real world. How they need to coexist with people, direct people, and so on.”
In other words, robots are learning to tailor their behaviors to the presence of humans. Which is difficult, even for humans, because that kind of customization is based on a vast overlay of experience and guesswork.
“How does the robot decide which actions to take?” Dragan said. “How do you figure out the state of the world—when that world also contains people and their internal states, their plans, their intentions, and their beliefs about the robots?”
For starters, the robots rely on models of human behavior, which are built on approximations of how people tend to prize convenience and efficiency as they interact with their environment. Such models come from actual observation of human driving, and can factor in that while humans prioritize getting somewhere quickly, they also routinely take action to avoid collisions, like moving out of the way if another car is veering into theirs.
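To make that concrete, here is a minimal sketch of what such a model might look like. The features, weights, and numbers are all invented for illustration; Dragan’s lab fits models like this to recorded human driving (typically via inverse reinforcement learning), and the sketch captures only the basic shape: a reward function that trades progress against collision risk.

```python
import numpy as np

# A minimal sketch of a reward-based human-driver model. The features,
# weights, and numbers are invented for illustration, not taken from
# Dragan's lab; real weights would be fit to observed human driving.

def features(gap_ahead_m, speed_mps, speed_limit_mps):
    """Things the modeled human cares about: making progress,
    keeping distance from other cars, staying near the speed limit."""
    progress = speed_mps                           # value getting somewhere
    collision_risk = np.exp(-gap_ahead_m / 10.0)   # risk grows as the gap shrinks
    speeding = abs(speed_mps - speed_limit_mps)
    return np.array([progress, collision_risk, speeding])

# Hypothetical trade-off: progress is good, collision risk is very bad.
weights = np.array([1.0, -8.0, -0.5])

def predicted_human_action(gap_ahead_m, speed_mps, speed_limit_mps=29.0):
    """Pick the acceleration (m/s^2) the modeled human most prefers."""
    def reward(accel):
        new_speed = max(0.0, speed_mps + accel)
        new_gap = gap_ahead_m - new_speed          # crude one-step lookahead
        return weights @ features(new_gap, new_speed, speed_limit_mps)
    return max([-2.0, 0.0, 2.0], key=reward)

# The model predicts humans speed up into open space and brake when boxed in:
print(predicted_human_action(gap_ahead_m=80.0, speed_mps=25.0))  # ->  2.0
print(predicted_human_action(gap_ahead_m=5.0, speed_mps=25.0))   # -> -2.0
```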
“Much like the robot, you are also plotting what to do, and actively thinking about the road system and the actions you take,” Dragan told me. “By learning how humans act on a highway, the robot is indirectly learning how to behave itself.”
This sort of learning is precisely what might help solve the stalled-forever-at-a-four-way-stop dilemma. In one experiment, for example, Dragan and her colleagues trained an algorithm by having it observe human drivers in a highway setting, then tested how the algorithm would apply what it had learned in other scenarios. At a four-way stop, it didn’t just sit there and wait for other cars to go first. To the surprise of the researchers, the robot figured out a way to signal its intentions to the human driver.
“Our robot does something really cool and a bit counterintuitive,” Dragan told me. “What it decides to do, is it decides to slightly back up a little. And by backing up a little, it prompts the person to go, because the robot is clearly not moving forward. So the danger of colliding with the robot is very, very low—compared even with the robot just sitting there.”
The lesson the robot learned from highway driving is that people often speed up when there is more space between their car and other vehicles. So, the robot figured, one way to encourage another car to move is to create more space between itself and that car.
“It was able to transfer the model of that [human behavior] to the four-way stop,” Dragan said. “On the highway, the robot wouldn’t back up. But in a four-way stop, the right thing to do was to back up.”
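Here is a toy version of that planning step, with invented numbers rather than the lab’s actual algorithm. The robot scores each of its own candidate actions at the stop by what the learned human model predicts the person will do in response, and backing up wins because it widens the gap:

```python
import math

# A toy illustration of why backing up wins at the four-way stop. All
# numbers here are hypothetical; the point is the structure of the decision.

def p_human_goes(gap_m):
    """The highway lesson, reused: the larger the gap a human sees,
    the likelier they are to accelerate into it (logistic shape assumed)."""
    return 1.0 / (1.0 + math.exp(-(gap_m - 6.0)))

# How each robot action changes the gap between the cars (meters),
# and the rough collision risk the action itself creates.
actions = {
    "inch_forward": {"gap_change": -1.0, "risk": 0.30},
    "wait":         {"gap_change":  0.0, "risk": 0.05},
    "back_up":      {"gap_change": +1.5, "risk": 0.02},
}

def score(action, gap_m=6.0):
    """Reward resolving the standoff (the human goes), penalize risk."""
    effect = actions[action]
    return p_human_goes(gap_m + effect["gap_change"]) - 2.0 * effect["risk"]

print(max(actions, key=score))  # -> back_up: widening the gap signals "you first"
```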
That wouldn’t always be the case, though. A sophisticated enough model would know that backing up might not be necessary if a self-driving car were faced with another self-driving car; perhaps then, the machines could communicate their intentions by linking up wirelessly. Backing up might also be unnecessary if the self-driving car could predict, based on other behaviors, that the other vehicle at the intersection was driven by a person who didn’t need an extra nudge before going first. Or, of course, it might be the wrong move if backing up posed a collision risk of its own.
The larger implication of this kind of technology is that humans don’t actually have to predict every absurd scenario on the road and program a robot accordingly—a task that would be impossible anyway. (Just consider some of the strange things Google’s human test drivers have observed in their adventures: a person playing a trumpet while driving; a woman armed with a broom and chasing a turkey. What’s a robot to think—and more importantly, what’s a robot to do—when it encounters such drivers and pedestrians?)
Robots that learn from experience save engineers from having to hand-code algorithms for situations that no one could ever predict. But machine learning also means figuring out that not every human driver acts the same way—and knowing what sorts of micro-actions signal whether a person is likely to act one way (like speeding up when you try to pass them) or another (like moving to the right lane to let the car behind you go ahead). This level of sophistication could play a crucial role in achieving widespread social acceptance of self-driving vehicles.
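To picture what that driver-typing might look like, here is one hypothetical sketch: compare a driver’s observed micro-actions against a couple of invented style profiles and plan around the closer match. A real system would infer this continuously, from far richer signals.

```python
import math

# A hypothetical sketch of inferring driver style from micro-actions.
# The two profiles and all numbers are made up for illustration.

# (average speed-up when being passed in m/s^2, rate of yielding the lane)
profiles = {
    "aggressive": (1.5, 0.1),   # speeds up when passed, rarely yields
    "courteous":  (0.0, 0.8),   # holds speed, usually moves over
}

def classify(speedup_when_passed, yield_rate):
    """Return the style profile nearest the observed behavior."""
    def dist(profile):
        a, y = profiles[profile]
        return math.hypot(speedup_when_passed - a, yield_rate - y)
    return min(profiles, key=dist)

# A driver who accelerated as we tried to pass and rarely yielded looks
# aggressive, so the planner might choose a wider, earlier overtake.
print(classify(1.2, 0.2))  # -> aggressive
```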
“I can’t remember who said this to me many years ago,” said Julie Shah, the head of the Interactive Robotics group at MIT. “But someone made the joke: Once artificial intelligence works, you don’t call it artificial intelligence anymore.”
In other words, once driverless cars work, they’re just cars. (Or drivers.)
More broadly, there’s a paradox in all this that reflects the overarching direction of contemporary robotics. As machines become more and more general-purpose, they’re also going to become much better at tailoring their behavior to different kinds of people, and eventually even to different individuals.
Already, SoftBank’s Pepper robot, a humanoid designed to interact with people, is billed as the first machine able to read human emotions. For people to accept robots as they increasingly work their way into various areas of our lives, robots will have to develop a fairly sophisticated understanding of individual human needs.
“If an assistive robot tries to help you, how much help you want really depends on your personality and the situation,” Dragan said. That’s also why robots are in some cases changing form—some of the machines designed to care for humans, for example, will have soft, cuddly bodies rather than just hard metal exoskeletons.
“We’re going to have more and more capable robots,” Dragan told me. Which means when machines interact with people, we’ll be able to customize them depending on who’s around, or whether humans are around at all.