Robots are great at dealing with predictable environments, but human pedestrian behavior can be difficult to anticipate. That's especially true in the frenzy to catch the D train at rush hour. A group of MIT researchers is on the case and adding to a growing body of academic work aiming to give robots some of the tools we (at least those of us living in overcrowded cities) take for granted: Street intuition.
In a paper entitled "Deep sequential models for sampling-based planning," the researchers outline a method of robot navigation that combines traditional path planning algorithms, which analyze a number of options in real time and select the optimal choice, with a neural network that learns over time by observing and interacting with people.
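The general idea of pairing a sampling-based planner with a learned model can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`sample_paths`, `score_path`, `plan`) and the placeholder cost model are hypothetical, and a real system would feed observed pedestrian behavior into a trained neural network rather than a stub.

```python
import random

def sample_paths(start, goal, n=50):
    """Sample n candidate paths as lists of waypoints (random midpoints)."""
    paths = []
    for _ in range(n):
        mid = (start[0] + random.uniform(-1.0, 1.0),
               start[1] + random.uniform(-1.0, 1.0))
        paths.append([start, mid, goal])
    return paths

def score_path(path, model):
    """Combine a geometric cost with a learned penalty; lower is better."""
    # Geometric cost: total path length.
    cost = 0.0
    for a, b in zip(path, path[1:]):
        cost += ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    # A learned model would predict how likely this path is to conflict
    # with nearby people; here it is just a callable that adds a penalty.
    return cost + model(path)

def plan(start, goal, model):
    """Pick the best-scoring candidate instead of exploring blindly."""
    candidates = sample_paths(start, goal)
    return min(candidates, key=lambda p: score_path(p, model))

# Usage with a trivial "model" that adds no penalty:
best = plan((0.0, 0.0), (5.0, 0.0), model=lambda path: 0.0)
```

The point of the learned term is that the planner no longer treats the thousandth pass through a crowd like the first: candidates that the model has learned lead to conflicts get scored down before they are ever executed.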
The addition of a neural network remedies a problem with traditional path planning, which relies on a branching decision tree that evaluates environmental conditions. Paper co-author Andrei Barbu, a researcher at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), explains why that's less than ideal:
"Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents. The thousandth time they go through the same crowd is as complicated as the first time. They're always exploring, rarely observing, and never using what's happened in the past."
Last year I wrote about a group of Stanford researchers working on a socially aware wheeled robot capable of navigating a busy college campus. The Stanford project similarly employs traditional path planning algorithms but augments them with machine learning to enable robots to deduce patterns in the seemingly random movements of humans zipping through crowds.
"We're not planning an entire path to the goal--it doesn't make sense to do that anymore, especially if you're assuming the world is changing," said Michael Everett, a researcher on that project, said last year. "We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again."
The MIT team specifically tested its model in cases where a robot will have to navigate an environment populated by multiple agents. The simulation drew on a scenario dreaded by many human drivers: The roundabout.
"Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on," Barbu says. "You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with."
The choice of a road scenario was no accident. While the research demonstrates a strategy that can help robots navigate a number of unpredictable environments, the acceleration of autonomous vehicle development in the past couple of years heightens the need for next-generation route planning.
"Not everybody behaves the same way, but people are very stereotypical. There are people who are shy, people who are aggressive. The model recognizes that quickly and that's why it can plan efficiently," Barbu says.