
This small robot lures hackers away from other robots

Created by researchers at Georgia Tech, HoneyBot was inspired by IT security.
Written by Greg Nichols, Contributing Writer

A group of researchers at Georgia Tech created a novel tool to delay and potentially expose would-be hackers of industrial automation.

Called the HoneyBot, the small robot is designed to fool perpetrators of cyberattacks into thinking it's a vulnerable robot performing important industrial automation tasks.


When it detects a successful breach, HoneyBot sounds the alarm and aids IT security professionals in thwarting further attacks.

It takes its name from decoy computer systems used in IT security, known as honeypots.

With industrial robots more connected than ever, robot security is a growing concern.


As industries as disparate as food service, transportation, and light manufacturing flock to automation, there's growing concern that hackers could get in and disrupt operations, ransom robots -- or worse, cause physical harm to humans.

Last year, a group of ethical hackers turned a friendly humanoid into a murderous killbot. So far, though, there haven't been any big headline breaches of the type we see nearly every day in IT security.

The open-source architecture used by many robotics developers may be playing an important role here, allowing developer communities to root out potential security risks before they're exploited.

But the threats are real, and in the dawning robotic age it's only a matter of time before automation becomes a cyber target.

"A lot of cyberattacks go unanswered or unpunished because there's this level of anonymity afforded to malicious actors on the internet, and it's hard for companies to say who is responsible," says Celine Irvene, a Georgia Tech graduate student who worked with the team behind HoneyBot.

"Honeypots give security professionals the ability to study the attackers, determine what methods they are using, and figure out where they are or potentially even who they are," she adds.

In order for a honeypot decoy to work, an attacker has to believe they've accessed an authentic system. In the case of a robot, that means a savvy hacker would inspect the incoming sensor data to verify they really have control.
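The article doesn't detail HoneyBot's implementation, but the idea can be illustrated with a purely hypothetical sketch: a decoy controller accepts an attacker's drive commands, never actually moves, and instead returns simulated sensor readings consistent with the commanded motion while alerting defenders. All class and field names below are invented for illustration.

```python
import random

class HoneyBotDecoy:
    """Illustrative decoy controller: accepts drive commands but
    fabricates sensor readings matching the commanded motion."""

    def __init__(self):
        self.sim_position = 0.0  # simulated odometry, in metres
        self.breached = False

    def handle_command(self, speed, duration):
        # First command from an intruder marks the session as a breach.
        if not self.breached:
            self.breached = True
            self.alert_defenders()
        # The wheels never turn; only the simulated state is updated.
        self.sim_position += speed * duration
        return self.fake_sensors()

    def fake_sensors(self):
        # Plausible readings: odometry tracking the commanded motion,
        # plus small noise so the data doesn't look synthetic.
        return {
            "odometry_m": round(self.sim_position + random.gauss(0, 0.01), 3),
            "wheel_rpm": round(60 + random.gauss(0, 1.5), 1),
        }

    def alert_defenders(self):
        print("ALERT: intrusion detected; logging attacker activity")

decoy = HoneyBotDecoy()
# Attacker commands 0.5 m/s for 2 s; the fake odometry reads ~1 m.
reading = decoy.handle_command(speed=0.5, duration=2.0)
```

The key design point is that the fake readings must stay physically consistent with whatever the attacker commands; any mismatch between input and apparent output is exactly what a careful intruder would check for.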

In tests, the Georgia Tech researchers successfully fooled a group of volunteers, an important affirmation that the decoy ruse holds up when extended to robotics.

In practice, a HoneyBot application running on a networked system, one able to simulate real sensor data in the event of an attack, would likely be more practical than a small physical bot.
