In a paper documenting the research (.PDF), MIT says the AI uses a neural network to teach wireless devices to sense the posture and movement of people, "even from the other side of a wall."
This kind of X-ray vision may seem far-fetched, but the team, led by Professor Dina Katabi, says that the neural network is able to analyze the radio signals which bounce off bodies in order to create a digital, dynamic figure of where the individual is -- and what pose they are striking.
To demonstrate the AI's capabilities, MIT programmed RF-Pose to generate and control a stick figure that sits, stands, and moves its limbs in step with the test subject.
The majority of today's neural networks are trained on data labeled by hand. Radio signal-based training, however, posed more of a challenge.
To teach the AI how to interpret these signals, the scientists collected examples from a wireless device and a camera, gathering thousands of samples of people performing everyday activities.
Images extracted from the camera footage were shown to the network alongside the corresponding radio signals.
"This combination of examples enabled the system to learn the association between the radio signal and the stick figures of the people in the scene," the team says.
RF-Pose was then able to estimate posture and movement without the help of cameras -- but its ability to do so through a wall surprised MIT, which had not anticipated that the system could generalize to through-wall movements.
According to MIT, the invention has practical uses beyond fulfilling sci-fi dreams. For example, the AI could discreetly watch over the elderly, allowing them to live independently while monitoring for falls or accidents.
It may also serve a deeper purpose in the medical field: RF-Pose could prove valuable for studying and monitoring the progression of diseases such as Parkinson's, multiple sclerosis (MS), and muscular dystrophy.
"We've seen that monitoring patients' walking speed and ability to do basic activities on their own gives healthcare providers a window into their lives that they didn't have before, which could be meaningful for a whole range of diseases," says Katabi. "A key advantage of our approach is that patients do not have to wear sensors or remember to charge their devices."
The team is currently working to upgrade the 2D stick-figure output to 3D representations, and is collaborating with medical professionals to explore the technology's applications in healthcare.
The research will be presented later this month at the Conference on Computer Vision and Pattern Recognition in Salt Lake City, Utah.
"By using this combination of visual data and AI to see through walls, we can enable better scene understanding and smarter environments to live safer, more productive lives," says co-lead author Mingmin Zhao.