At the threshold of the autonomous revolution, one fact has remained stubbornly consistent for the last half-century: Controlling robots is a pain. Virtual reality, it turns out, could be the key to changing that, possibly opening up new frontiers for humans and machines alike.
I've written about early efforts to use virtual reality to program autonomous robots. Now a company called SE4 has rolled out a promising new operating system for remotely controlling robots, such as those increasingly used in construction or disaster recovery. One of the big looming use cases for the technology might also be in space, where robots will soon explore and even build structures on other planets in advance of human colonization.
But back to robots being a pain to control. The people who work on human-machine interfaces for robotics are fairly aligned in their overriding goal: They want users to be able to control complex robots in as simple and straightforward a way as possible without limiting the functionality of the device. In other words, they want more and more users to be able to use complex machines without needing a Ph.D. in robotics.
Until recently, end users have needed to code robots using a programming language. That approach has downsides -- namely, the person who bought a robot to work in their business probably doesn't know those programming languages in the first place.
More recently, there have been big strides that have bridged that gap. A lot of industrial robots can now be programmed on a tablet with block-based languages in which commands are stacked up visually. There's also something called programming-by-demonstration, where a user physically moves a robot through the motions of a task.
Virtual reality offers a very compelling evolution of how robots are controlled. It's easy to understand how a player moves an avatar in VR to accomplish a task. What if that avatar represented a physical robot in the real world? The user organically knows how to move their body to accomplish a virtual task. The virtual reality system captures the movements and translates them into commands. In a video game, the commands power the avatar, but in the new realm of robotics controls, the commands can be encoded to move a robot.
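To make that translation step concrete, here's a minimal sketch of how a tracked VR controller pose might be converted into a relative end-effector command for a robot. The `Pose` class, the field names, and the dictionary command format are all hypothetical illustrations, not SE4's actual interface:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A position (meters) and yaw angle (radians) in the VR tracking space."""
    x: float
    y: float
    z: float
    yaw: float

def pose_to_command(prev: Pose, curr: Pose, scale: float = 1.0) -> dict:
    """Translate the change in a tracked controller pose between two frames
    into a relative movement command for the robot's end effector."""
    return {
        "dx": (curr.x - prev.x) * scale,
        "dy": (curr.y - prev.y) * scale,
        "dz": (curr.z - prev.z) * scale,
        "dyaw": curr.yaw - prev.yaw,
    }

# Example: the user's hand moves 10 cm forward between two tracking frames.
cmd = pose_to_command(Pose(0.0, 0.0, 0.0, 0.0), Pose(0.1, 0.0, 0.0, 0.0))
```

In a game, those deltas would move an avatar; pointed at a robot, the same deltas can drive real hardware (with a `scale` factor to map human-sized motions onto the machine's workspace).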
SE4's technology enables an operator to virtually beam into the pilot's seat, riding along with the robot. Instead of simply remote controlling the robot the way you would a drone or RC car, the technology enables a user to virtually execute complex tasks. The system's AI understands not only the movements being performed but also the goal. A user essentially carries out what they want the robot to do, and the system's AI then works backward to develop specific instructions that are transmitted to the robot.
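One way to picture that "working backward" step is a system that keeps the goal-level events from a demonstration and throws away the incidental motion in between, letting the robot plan its own path. This is a loose sketch under my own assumptions, with hypothetical event names, not SE4's actual AI:

```python
def infer_task(demonstration):
    """Reduce a raw stream of demonstrated events to a goal-level plan,
    rather than replaying every low-level motion verbatim."""
    plan = []
    for event in demonstration:
        if event["type"] == "grasp":
            plan.append(("pick", event["object"]))
        elif event["type"] == "release":
            plan.append(("place", event["at"]))
        # Intermediate hand motion is dropped: the robot plans its own path.
    return plan

# A user demonstrates picking up a beam and placing it on a frame joint.
demo = [
    {"type": "move", "pos": (0.1, 0.2, 0.3)},
    {"type": "grasp", "object": "beam"},
    {"type": "move", "pos": (1.4, 0.2, 0.9)},
    {"type": "release", "at": "frame_joint_3"},
]
print(infer_task(demo))  # [('pick', 'beam'), ('place', 'frame_joint_3')]
```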
One cool advantage: Because the instructions are sent as a package, and not step by step, this approach makes latency a non-issue in many cases. Given that it would take several minutes for instructions sent from Earth to reach Mars or other distant points in space, that's a very important consideration for our plans to send machines into the final frontier.
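The packaging idea itself is simple: bundle the whole task into one message so a single transmission delivers everything the robot needs to execute locally. Here's an illustrative sketch (the message format and operation names are my own assumptions):

```python
import json

def build_instruction_package(steps):
    """Bundle an ordered list of robot instructions into one message, so
    the whole task survives a long one-way transmission delay."""
    return json.dumps({"version": 1, "steps": steps})

# Hypothetical task: collect a sample and stow it in a container.
package = build_instruction_package([
    {"op": "move_to", "target": "sample_site"},
    {"op": "grasp", "object": "sample"},
    {"op": "move_to", "target": "container"},
    {"op": "release"},
])
# One transmission delivers the entire task; the robot executes it locally,
# so the minutes-long Earth-to-Mars delay never enters the control loop.
```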
SE4's VR control approach also lets a single user put multiple robots to work on a single process. That's very compelling in plenty of Earth-bound scenarios like construction, where allowing robots to cooperate significantly expands what they can accomplish.
It's the latest example of how VR is helping unlock the power of automation.