
Startup Kindred brings sliver of hope for AI in robotics

San Francisco-based startup Kindred outlines the challenges of using deep learning, a form of artificial intelligence, to train robots in the real world. The study offers hope that, by setting benchmarks, machine learning really can train robots at some point.
Written by Tiernan Ray, Senior Contributing Writer

Training robots to do simple tasks with so-called deep learning has met with limited success, but a San Francisco startup offers a glimmer of hope for future work.

Kindred, a three-year-old startup, on Thursday presented a research paper at the 2nd Conference on Robot Learning in Zürich, Switzerland.

The thrust of the paper is that roboticists need to establish some basic benchmarks for how machine learning, and particularly deep learning, performs before real-world progress can be made.

The paper doesn't prove machine learning can teach a robot to move; rather, it suggests there are ways to systematically identify the challenges to doing so, as a basis for future work.


In the report, "Benchmarking Reinforcement Learning Algorithms on Real-World Robots," posted on arXiv on September 20th, the authors, A. Rupam Mahmood, Dmytro Korenkevych, Gautham Vasan, William Ma, and James Bergstra, took three commercially available robots and had them move in space to a target location.

Reinforcement learning, a form of machine learning in which a system improves its behavior by seeking a "reward" signal, was employed in four different flavors. The point was to see how the three robots did with four different algorithms on multiple versions of these basic tests of motor function.
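The reward-maximization idea behind reinforcement learning can be illustrated with a toy sketch. The following is not from the paper, which used deep RL on physical robots; it is a minimal tabular Q-learning example on a made-up one-dimensional "reach the target" task, where an agent learns to move toward a goal purely from a reward signal.

```python
import random

def train_reach_target(n_states=10, episodes=3000, alpha=0.5,
                       gamma=0.9, epsilon=0.3, seed=0):
    """Tabular Q-learning on a toy 1-D 'reach the target' task.

    Positions run 0..n_states-1; the target is the last cell. The agent
    can step left or right and receives a reward of 1 only for stepping
    onto the target, so it must learn purely from that reward signal.
    """
    target = n_states - 1
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]; 0=left, 1=right
    for _ in range(episodes):
        s = rng.randrange(n_states)            # random start position
        for _ in range(2 * n_states):          # cap steps per episode
            if rng.random() < epsilon:         # explore a random action
                a = rng.randrange(2)
            elif q[s][0] != q[s][1]:           # exploit current estimates
                a = 0 if q[s][0] > q[s][1] else 1
            else:                              # break ties randomly
                a = rng.randrange(2)
            s2 = max(0, min(target, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == target else 0.0
            # standard Q-learning update toward reward + discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if r > 0:                          # episode ends at the target
                break
    # greedy policy: 1 means "move right" at that position
    return [0 if q[s][0] > q[s][1] else 1 for s in range(n_states)]
```

After training, the greedy policy moves right from every non-target position. The gap between this toy and a physical arm is exactly what the paper probes: on real hardware, every trial costs wall-clock time, and sensors and actuators add noise the simulation above doesn't have.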

As the authors note, studies to date have mostly simulated robots inside a software program rather than testing real robotic movement. For example, a 2016 study by Duan et al., at the University of California at Berkeley's department of electrical engineering and computer sciences, sought to establish benchmarks for deep learning as simulated by computer-generated automatons moving in a kind of video-game environment.

As Mahmood and colleagues write in the current paper, "Reinforcement learning research with real-world robots is yet to fully embrace and engage the purest and simplest form of the reinforcement learning problem statement: an agent maximizing its rewards by learning from its first-hand experience of the world."

The study conducted 450 experiments with robots across more than 950 hours.


The robots they tested were the "UR5," a Universal Robots "collaborative arm," an armature that can bend and move through space; the "MX-64AT Dynamixel," from Robotis, an "actuator" that's popular for controlling a number of different robots; and the "iRobot Create2," a kind of stripped-down version of iRobot's "Roomba" vacuum cleaner.

A major finding is that deep learning lags way behind training of robots in the conventional manner, with scripts. "Overall, RL solutions were outperformed by scripted solutions, by a large margin in some tasks, where such solutions were well established or easy to script."

And the report notes that "hyper-parameters," variables of the machine learning model being used, have to be tuned very, very carefully. In fact, deep learning models weren't able to accomplish much of anything without some substantial work tweaking these variables.

"The performance of all algorithms was highly sensitive to their hyper-parameter values, requiring re-tuning on new tasks for the best performance," the authors write.
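The re-tuning the authors describe is often done as a plain grid search: evaluate each candidate hyper-parameter setting on the new task and keep the best. The sketch below illustrates the idea with a made-up stand-in for an RL task (momentum gradient descent on a quadratic, scored like a return); the task, candidate values, and helper names are all illustrative, not the paper's protocol.

```python
from itertools import product

def evaluate(task, step_size, momentum, steps=50):
    """Toy score for one hyper-parameter setting: run momentum gradient
    descent on a quadratic a*x^2 and report how close the result is to
    the minimum (higher is better, like an RL return)."""
    a = task["curvature"]
    x, v = 5.0, 0.0
    for _ in range(steps):
        grad = 2 * a * x
        v = momentum * v - step_size * grad
        x += v
    return -abs(x)  # 0 is the best possible score

def tune(task, grid):
    """Grid search: score every candidate setting on the task and keep
    the one with the highest score."""
    return max(grid, key=lambda hp: evaluate(task, *hp))

# Candidate hyper-parameter settings (step size x momentum).
grid = list(product([0.001, 0.01, 0.1, 0.5], [0.0, 0.5, 0.9]))
task_a = {"curvature": 1.0}
task_b = {"curvature": 20.0}   # a "new task" with different dynamics
best_a = tune(task_a, grid)
best_b = tune(task_b, grid)
```

Here the setting that wins on `task_a` performs badly when carried over to `task_b`, which is the sensitivity the quote describes; the paper's more encouraging finding, discussed below, is that in their robot experiments a carried-over configuration was often still a usable baseline.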

That sounds discouraging, but the authors note that using the same hyper-parameters across different tasks led to results that were not wildly different, which gives some hope that deep learning can contribute something eventually.

As the authors put it, "a good configuration [of hyper-parameters] based on one task can still provide a good baseline performance for another." Hence, they conclude that the reinforcement style of deep learning is "viable" for research "based on real-world experiments" with robots.

There are some humorous details here, too, about the real-world problems that crop up with physical robots. Some of the Dynamixel units experienced overheating, which led them to fail when left in experiments overnight. And the Create2 systems from iRobot ran into problems when left overnight because their cables got tangled up.
