Robots are the perfect tools for exploring unknown and potentially dangerous territories, which is why they are essential to space exploration. But while robots excel at repetitive tasks in industrial settings, they struggle in the outside world. Software could be the missing link that helps robots adapt to unpredictable space scenarios.
"There's still this automation, factory legacy for robotics," says BluHaptics CEO Don Pickering. "But the demands and the challenges that are placed on robotics when you move them into dynamic environments in the field are just different."
BluHaptics is a software company that has specialized in robotic control for underwater environments, but similar technology can also be useful for exploring Mars or other planets. NASA recently awarded BluHaptics grant funding to apply its software to remote robotic operations in space. The company is small -- a team of six with software and electrical engineering backgrounds -- but it's gaining momentum. The $125,000 NASA contract is just part of a total of $1.9 million in grant funding and a $1.3 million investment round that closed in February.
The company's approach is to use GPU-enabled machine learning to give humans better control of remotely operated robots. If robots can be remotely controlled with more precision, they could be used for more tasks in space, such as performing maintenance on satellites or space stations, exploring planets, or eventually building habitats for humans. If video games can deliver low latency and haptic feedback, the same should be achievable for controlling robots in unstructured settings.
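To make the video-game comparison concrete, the core of a teleoperation system is a tight loop: send the operator's command to the robot, get the robot's measured forces back, and render them to the operator's hand controller before the latency budget expires. The sketch below is purely illustrative -- it is not BluHaptics code, and the loop rate, stiffness, and scaling values are invented for the example.

```python
import time

# Illustrative teleoperation loop with haptic feedback (hypothetical values).
LOOP_HZ = 500                  # haptic loops often run near 1 kHz; 500 Hz here
LATENCY_BUDGET_S = 1.0 / LOOP_HZ

def simulated_robot(command):
    """Stand-in for the remote robot: returns a contact force
    proportional to how far the command pushes past a virtual wall."""
    wall = 0.5                 # wall position, in meters
    penetration = max(0.0, command - wall)
    stiffness = 200.0          # N/m, virtual wall stiffness
    return stiffness * penetration

def teleop_step(operator_position):
    """One cycle: command the robot, read back the contact force,
    and scale it down for the operator's hand controller."""
    start = time.perf_counter()
    force = simulated_robot(operator_position)
    feedback = 0.1 * force     # force-scaling factor, chosen arbitrarily
    elapsed = time.perf_counter() - start
    on_time = elapsed < LATENCY_BUDGET_S
    return feedback, on_time

feedback, on_time = teleop_step(0.6)   # push 0.1 m into the virtual wall
```

In a real system the robot is across a network link, so the round trip, not the computation, dominates the latency budget; that is what makes remote operation in dynamic environments hard.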
"We're doing GPU processing, so it's very rapid processing of high resolution data in near-realtime," Pickering explains.
Whenever we talk about automation, the goal is to take the most dangerous and routine tasks away from humans so that human intelligence can be reserved for more creative work. Better robots could help astronauts focus on overcoming obstacles and solving the kinds of problems that can't be predicted.
"There's this gap that exists between manual control of robots and automation," Pickering says. "We're seeing better sensors and better hardware come along, but computer vision and control have always been sort of disparate. By connecting what a robot sees to what it does, we can achieve complex tasks more easily, help users achieve those tasks more easily, and achieve a level of safety and efficiency that wasn't possible before."
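Connecting "what a robot sees to what it does" can be sketched as a perception step that reduces raw sensor data to a target, feeding a control step that moves toward it. This is a toy illustration under invented assumptions, not a description of BluHaptics' software: the target is just the centroid of some simulated sensor points, and the controller is a simple proportional step.

```python
# Toy perception-to-control pipeline (all names and numbers are invented).

def perceive_target(points):
    """Perception: reduce raw (x, y, z) sensor points to one target,
    here simply their centroid."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def control_step(current, target, gain=0.5):
    """Control: take a proportional step from the current end-effector
    position toward the perceived target."""
    return tuple(c + gain * (t - c) for c, t in zip(current, target))

# Simulated sensor points clustered around an object at roughly (1.1, 0, 0.2).
points = [(1.0, 0.0, 0.2), (1.2, 0.2, 0.2), (1.1, -0.2, 0.2)]
target = perceive_target(points)
pose = control_step((0.0, 0.0, 0.0), target)
```

Re-running both steps each cycle closes the loop, so the commanded motion tracks what the sensors currently see rather than a fixed, pre-programmed path -- the difference between field robotics and factory automation.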