Nvidia researchers create AI, deep-learning system to enable robots to learn from human demonstration

Nvidia researchers have created a deep-learning system that can teach a robot simply by observing a human's actions.
According to Nvidia, the deep learning and artificial intelligence method is designed to improve communication between robots and humans and allow them to work side by side. The paper will be presented at a conference in Brisbane, Australia.
Researchers trained a series of neural networks powered by Nvidia's Titan X GPUs. The networks handle perception, program generation, and execution. Simply put, a human can demonstrate a real-world task and the robot learns to perform it.
The robot sees a task through a camera and infers the positions and relationships of the objects in the scene. A second neural network then generates a plan describing how to recreate those perceptions, and the execution network carries the task out.
The method's flow chart runs from perception to program generation to execution.
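The perception-to-execution flow can be sketched in code. The sketch below is a minimal illustration, assuming a simple stacking task; all function names and data structures are hypothetical stand-ins, not Nvidia's actual networks or API.

```python
# Hypothetical sketch of the three-stage pipeline: perception, program
# generation, and execution. All names here are illustrative assumptions.

def perception_network(image):
    """Infer object positions and relationships from a camera image."""
    # Stand-in output: pretend two cubes and one spatial relation were detected.
    return {"objects": ["red_cube", "blue_cube"],
            "relations": [("red_cube", "on_top_of", "blue_cube")]}

def program_generation_network(scene):
    """Produce a step-by-step plan that would recreate the perceived scene."""
    plan = []
    for subject, relation, target in scene["relations"]:
        if relation == "on_top_of":
            plan.append(("pick_up", subject))
            plan.append(("place_on", target))
    return plan

def execution_network(plan):
    """Carry out each step of the plan (here, just report the actions)."""
    return [f"executing {action} {arg}" for action, arg in plan]

if __name__ == "__main__":
    scene = perception_network(image=None)  # a real system would pass camera frames
    plan = program_generation_network(scene)
    for step in execution_network(plan):
        print(step)
```

In the real system each stage is a trained neural network; here the stages are plain functions so the data flow between them is easy to follow.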
Nvidia said its method is the first to combine synthetic training data with an image-centric approach on a robot.
A video highlighted how the neural networks enabled a robot to observe a task and then recreate it.