
MIT proposes a robot valet that can safely touch a human

Scientists adapt a form of machine learning, reinforcement learning, that allows a robot arm more extensive movement as long as the impact force on a person is expected not to be harmful.
Written by Tiernan Ray, Senior Contributing Writer

It has been said that robotics is the most challenging area of machine learning: even simple things, such as moving a robotic arm a small distance, are an incredibly complex engineering challenge. 

You can imagine, then, that it's a big feat to apply machine learning to make a robotic arm help a human put on their jacket.

Researchers at the Massachusetts Institute of Technology on Monday published the details of a study in which they demonstrated a robot arm helping a human, and explained why, they claim, the procedure is provably safe for people. 

In the demonstration, a robotic arm grips a vest with the human's right arm through the armhole, then slowly tugs the vest upward to the shoulder. A video of the demo posted on YouTube compares how much faster the arm moves than with a traditionally engineered approach. 

The work, detailed in a paper titled "Provably Safe and Efficient Motion Planning under Uncertainty for Human-Robot Collaboration," by MIT PhD student Shen Li, lead author, along with Nadia Figueroa, Ankit Shah, and Julie A. Shah, is expected to be presented at the 2021 Robotics: Science and Systems conference.

The research of Li and team builds upon an algorithmic approach based on reinforcement learning, a form of machine learning, developed in 2019 by Torsten Koller and colleagues at the University of Freiburg, in Germany, and peers at ETH Zürich.

The problem of robot motion can, in a sense, be summarized as a tension between two objectives: one immediate, one long-term. 

The immediate objective is to avoid harming a human. A robot has to be careful at every single moment to avoid collisions with a person, or to minimize any harmful effects of such collisions. 

On a longer time frame, a robot has a task to achieve. It must get some task accomplished, in this case, helping a person get dressed. 

Balancing these two objectives is the challenge the MIT group set out to address. 
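To make the tension concrete, here is a minimal, hypothetical sketch of how a planner might score a candidate move by weighing progress toward the task against proximity to the person. The function names, weights, and positions are purely illustrative; they are not taken from the MIT paper.

```python
import numpy as np

def score_move(robot_next, goal, human_pos, w_task=1.0, w_safety=5.0):
    """Toy cost: reward progress toward the goal, penalize closing in on the human.

    robot_next, goal, human_pos: 3-D positions as numpy arrays (illustrative only).
    """
    task_cost = np.linalg.norm(robot_next - goal)        # long-term objective
    clearance = np.linalg.norm(robot_next - human_pos)   # immediate objective
    safety_cost = 1.0 / (clearance + 1e-6)               # grows as the arm gets close
    return w_task * task_cost + w_safety * safety_cost

# A cautious planner picks the candidate position with the lowest combined cost.
candidates = [np.array([0.2, 0.1, 0.5]), np.array([0.3, 0.0, 0.5])]
goal, human = np.array([0.6, 0.0, 0.5]), np.array([0.35, 0.05, 0.5])
best = min(candidates, key=lambda p: score_move(p, goal, human))
```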

Also: Way beyond AlphaZero: Berkeley and Google work shows robotics may be the deepest machine learning of all

In the work by Koller et al. in 2019, the scientists developed what is called a learning-based model predictive control algorithm, an "LBMPC," in which the goal is for a robot to avoid collisions with a human being while simultaneously finding the most efficient path to a given task. 
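The general flavor of such a controller can be sketched in a few lines: plan a short sequence of moves against a prediction of where the human will be, throw out any plan that comes within a hard safety margin, execute only the first move, then replan. The toy code below is my own illustration of that receding-horizon idea under simplified 2-D assumptions, not Koller et al.'s implementation.

```python
import itertools
import numpy as np

def plan_step(robot, goal, human_pred, step=0.05, margin=0.15, horizon=3):
    """Toy receding-horizon planner: try short action sequences, reject any
    that violate the collision-avoidance margin, keep the best first action.

    human_pred: list of predicted human positions, one per future timestep.
    """
    moves = [np.array(d) * step for d in itertools.product((-1, 0, 1), repeat=2)]
    best_first, best_cost = None, np.inf
    for seq in itertools.product(moves, repeat=horizon):
        pos, ok = robot.copy(), True
        for t, m in enumerate(seq):
            pos = pos + m
            if np.linalg.norm(pos - human_pred[t]) < margin:  # hard constraint
                ok = False
                break
        if ok and (cost := np.linalg.norm(pos - goal)) < best_cost:
            best_first, best_cost = seq[0], cost
    return best_first  # execute one step, then replan with fresh observations
```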

Building upon what Koller and colleagues did, Li and team asked whether it is possible, instead of entirely avoiding collisions, to have the robot arm proceed where a collision with a human would be a gentle tap, not a harmful blow. By tolerating a so-called safe impact, the robotic arm could proceed without being so cautious. 

As the authors put it, they are engineering a system in which, rather than avoiding all touch, the robotic arm balances its goal objective with the safety objective:

In order to reduce system conservativeness while maintaining safety, we propose a safe planner for integrating the predictive and reactive approach jointly within a framework. Our goal is to make motion planners aware of low-level compliant controllers and leverage the fact that a small impact might not be harmful to the human, allowing planners to potentially produce more efficient motions without sacrificing safety. 

Taking the LBMPC developed by Koller et al., Li et al. add in a model of the position and velocity of human and robot, to calculate the force of impact between the two in the event of a collision. The authors didn't make this up themselves; they are borrowing from work in the literature on what kinds of friction and what velocity would be safe between two colliding bodies. 

In particular, work in 2003 by two researchers at the Australian National University, Jochen Heinzmann and Alexander Zelinsky, developed a model of impact to define what's safe. "These safety restrictions limit the potential impact force of the robot in the case of a collision with a person," as Heinzmann and Zelinsky put it.
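A rough way to picture such a limit is to estimate the force of a potential collision from the approach velocity and an effective mass, and compare it to a tolerable threshold. The sketch below is a crude impulse-based stand-in with made-up numbers, not the model Heinzmann and Zelinsky or the MIT team actually use.

```python
def impact_force(relative_velocity, effective_mass, contact_time=0.01):
    """Crude impulse-based estimate: F ~ m * dv / dt (illustrative only)."""
    return effective_mass * abs(relative_velocity) / contact_time

def impact_is_safe(relative_velocity, effective_mass, max_force=50.0):
    """Compare the estimated collision force against a tolerable limit (value is made up)."""
    return impact_force(relative_velocity, effective_mass) <= max_force

# Example: a slow approach with a small effective mass stays under the threshold.
print(impact_is_safe(relative_velocity=0.1, effective_mass=2.0))  # True
```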

With a definition of what's safe in hand, Li et al. are able to revise Koller et al.'s LBMPC so that it no longer plans solely to avoid collisions, but can also choose moves even where they might result in safe impacts. 
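In the same toy terms as the planner sketch above, the revision amounts to relaxing the hard constraint: a step is acceptable either if it keeps clear of the person or if any contact it could produce would fall under the safe-impact threshold. Again, this is only a schematic of the idea, not the authors' code.

```python
import numpy as np

def step_is_allowed(robot_pos, human_pos, approach_speed,
                    margin=0.15, effective_mass=2.0, max_force=50.0,
                    contact_time=0.01):
    """Allow a step if it keeps clear of the human OR any contact would be gentle."""
    clearance_ok = np.linalg.norm(robot_pos - human_pos) >= margin
    gentle_contact = effective_mass * abs(approach_speed) / contact_time <= max_force
    return clearance_ok or gentle_contact
```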

"To the best of our knowledge, this is the first work to provide a probabilistic safety guarantee regarding human dynamics models with epistemic uncertainty for human-robot systems," as Li and team write. 

It is important to linger for a moment on the word "probabilistic" to understand what is being offered and what is not. 

The algorithm being developed, the modified LBMPC, is a probability model of what can happen between human and machine. As such, it is not a certainty; it is a prediction that, in interactions calculated by the algorithm, some outcomes "will hold with a high probability," as the authors put it. 

What is provably safe, then, is a measure of the uncertainty in which a robot may move with respect to a person. In that way, the realm of tolerable uncertainty has been expanded a bit by allowing some collisions that could be harmless.
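One way to read "will hold with a high probability" is as a chance constraint: the planner accepts a move only if, under its model of how the human might move, the estimated probability of an unsafe outcome stays below a small tolerance. The Monte Carlo sketch below is an illustration of that reading, with invented numbers, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def probably_safe(robot_pos, human_mean, human_std, margin=0.15,
                  delta=0.05, n_samples=2000):
    """Chance-constraint check: estimate P(collision) by sampling plausible
    human positions; accept the move only if it is below the tolerance delta."""
    samples = rng.normal(human_mean, human_std, size=(n_samples, len(human_mean)))
    dists = np.linalg.norm(samples - robot_pos, axis=1)
    p_collision = np.mean(dists < margin)
    return p_collision <= delta
```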

It's important to keep in mind that tolerable uncertainty is within the narrow constraints of a controlled experiment, in which a number of assumptions are made. As the authors describe it:

Our implementation depends on the following assumptions, that we have made to circumvent the challenges in geometry and computer vision: (1) the human shoulder position is known and fixed; (2) the human elbow never bends throughout this task. With these assumptions, we only need to track the human hand position and can interpolate the arm as a line segment between the hand and shoulder position. We further assume that the human hand stays perfectly observable throughout the operation and use PhaseSpace Motion Capture system [65] to track it.
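Under those assumptions, tracking the arm reduces to simple geometry: with the shoulder position fixed and the hand tracked by motion capture, any point on the arm can be read off a straight line between the two. A tiny sketch of that interpolation, with illustrative coordinates:

```python
import numpy as np

def arm_as_segment(shoulder, hand, n_points=5):
    """Interpolate the arm as a straight line segment between shoulder and hand
    (valid only under the fixed-shoulder, straight-elbow assumptions above)."""
    ts = np.linspace(0.0, 1.0, n_points)
    return [(1 - t) * shoulder + t * hand for t in ts]

shoulder = np.array([0.0, 0.0, 1.4])   # fixed, known position (illustrative)
hand = np.array([0.4, 0.1, 1.0])       # tracked by motion capture
points = arm_as_segment(shoulder, hand)
```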

And the work makes one more important assumption, "We assume that the human won't move during the robot planning time."

All of these caveats suggest there will have to be more modeling of what happens when a human is more dynamic, so to speak, with fewer constraints on interaction. 

A tantalizing parting thought from Li et al. is that by making their system safer, they may also be easing humans' acceptance of it.

"It is interesting that our subject pointed out that she felt more comfortable when the robot is running our algorithm than the one doing just collision avoidance," the authors write. 

"One explanation is that our algorithm optimizes for velocity and safe impact when collision avoidance cannot be strictly avoided. The safe impact formulation could potentially improve the psychological side of [human-robot interactions] HRI, besides the safety guarantee."
