How Google's machine learning is turning robots into grasping humans

Google's robotic arm can now grasp objects on a flat surface 82 percent of the time.

For humans, grasping objects is literally child's play. Babies are born with a reflex to grasp a finger, which over time and with the aid of vision, develops into the capacity to clasp objects of various shapes with higher precision.

Google is now using machine learning to teach a group of robotic arms to grasp household objects by mimicking the feedback processes that humans rely on for hand-eye coordination.

These processes allow us to make tiny motor adjustments to do things such as serving a tennis ball or washing dishes.

Thanks to that feedback mechanism, Google says its robotic arm can now "observe its own gripper" and correct its actions when grasping an object.


Researchers led by Google scientist Sergey Levine noted that today's robots usually observe a scene, create a model, devise a plan, and then execute. That method breaks down when the robot confronts real-world clutter.

The more human-like feedback system that Google's researchers devised relies on a row of 14 separate robots with cameras mounted on their arms. Each robot shares data about its failures and successes with the others.

The robots' experiences are then used to train the feedback system, which is a convolutional neural network (CNN), a type of deep-learning model widely used for image recognition.

The CNN is fed the robots' experiences on a daily basis, which enhances each robot's ability to judge its chances of a successful grasp, based on camera images and the motion of its gripper. It can also adjust its movements to maximize those chances.
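The paper's actual architecture is far larger, but the core idea is a network that maps a camera image plus a candidate gripper motion to a success probability. A minimal sketch in PyTorch, with purely illustrative layer sizes and an assumed 4-number motion vector (none of which come from the paper), might look like this:

```python
import torch
import torch.nn as nn

class GraspSuccessCNN(nn.Module):
    """Toy sketch: predict the probability that a candidate gripper
    motion, given the current camera image, leads to a successful
    grasp. Layer sizes are illustrative, not Google's architecture."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (batch, 32, 1, 1)
        )
        # The candidate motion is a small vector; we assume 4 numbers
        # (e.g. a 3-D translation plus a rotation) for illustration.
        self.head = nn.Sequential(
            nn.Linear(32 + 4, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image, motion):
        feats = self.conv(image).flatten(1)    # image features
        x = torch.cat([feats, motion], dim=1)  # fuse image + motion
        return torch.sigmoid(self.head(x))     # success probability

net = GraspSuccessCNN()
img = torch.randn(1, 3, 64, 64)   # one camera frame
motion = torch.randn(1, 4)        # one candidate gripper motion
p = net(img, motion)
print(p.shape)  # torch.Size([1, 1]); the value lies in (0, 1)
```

Training then reduces to supervised learning: each of the 800,000 recorded attempts provides an image, a motion, and a success/failure label, which is exactly the data this kind of classifier needs.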

The training data consisted of 800,000 grasp attempts collected over two months.

One of the key achievements noted by Levine at Google Research is that the scientists didn't have to program the robots to optimize their movements.

"The result is continuous feedback: what we might call hand-eye coordination," wrote Levine.

He continued: "The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group. All these behaviors emerged naturally from learning, rather than being programmed into the system."
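That "observe and correct" loop can be sketched as a simple sampling-based controller: propose several candidate motions, score each with the learned success predictor, and execute the best one. This is a simplification of the paper's approach, and the scorer below is a stand-in function, not the trained network:

```python
import random

def score_motion(image, motion):
    """Stand-in for the trained CNN: returns a made-up success
    probability. In the real system this is the learned network
    evaluated on the current camera image."""
    # Pretend smaller motions score better, purely for demonstration.
    return 1.0 / (1.0 + sum(m * m for m in motion))

def servo_step(image, n_candidates=16):
    """One iteration of the feedback loop: sample candidate gripper
    motions, score them against the current image, and return the
    most promising one to execute."""
    candidates = [
        tuple(random.uniform(-1.0, 1.0) for _ in range(3))  # dx, dy, dz
        for _ in range(n_candidates)
    ]
    return max(candidates, key=lambda m: score_motion(image, m))

random.seed(0)
best = servo_step(image=None)
print(best)  # the highest-scoring candidate motion
```

Running `servo_step` repeatedly, re-reading the camera each time, is what turns a one-shot grasp plan into continuous hand-eye coordination: each new frame lets the robot re-score its options and correct course.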

One sign of success is that Google's method nearly halved the failure rate of comparable previous methods, from 34 percent to 18 percent.

The research paper notes that robots are ill-suited for grasping on non-flat surfaces and narrow spaces.

However, the researchers were pleased by the robots' tendency to grasp soft objects differently from hard ones: they pinched soft objects, such as paper tissues and sponges, while clamping harder ones.
