There has been an explosion of new user interfaces, from virtual reality to augmented reality, as well as some truly novel approaches -- such as a wristband from CTRL-Labs that can understand what a user intends to do with their hand without any movement at all.
I recently met with Thomas Reardon, CEO and co-founder at CTRL-Labs, and Josh Duyan, chief strategy officer at CTRL-Labs, to discuss their technology and the release of their first development kit.
Reardon is a neuroscientist and a veteran software developer whose credits include leading the development of the Microsoft Internet Explorer web browser.
His Manhattan-based company is about to release CTRL-kit, a VR/AR, productivity, and robotics-focused development kit that will open up its unique interface technologies to anyone who can find a commercial use for it.
The kit is described as a non-invasive surface electromyographic (EMG) neural interface that lets developers integrate the company's intention-capture technology into a broad range of applications.
EMG enables the capture of electrical signals created by muscles. The effect was first observed in 1666, when Francesco Redi's studies showed that a specialized muscle of the electric ray fish generates electricity.
Using the latest in machine learning and artificial intelligence, CTRL-Labs has created a fascinating technology that allows a user to control a digital device through a series of electrical pulses picked up by a wristband full of sensors. For example, a user could type at an imaginary keyboard -- or type without moving any muscles at all.
EMG systems will pick up electrical signals even when no muscle actually moves -- so-called isometric activity. A user can hold their hand motionless on a desk and still type text on a screen by imperceptibly triggering the nerves to the muscles in the fingers.
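The basic idea -- summarizing a window of multichannel EMG activity and mapping it to an intended action -- can be sketched in a few lines. This is a purely illustrative toy, not CTRL-Labs' actual pipeline: the synthetic data, the RMS features, and the nearest-centroid classifier are all assumptions standing in for the company's far more sophisticated learned models.

```python
import math
import random

def rms_features(window):
    """Root-mean-square amplitude per sensor channel -- a common
    first step for summarizing surface-EMG activity."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

def nearest_centroid(features, centroids):
    """Map a feature vector to the label whose stored centroid is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

def synth_window(amps, n=64, seed=0):
    """Synthetic 2-channel 'EMG' window: per-channel amplitudes stand
    in for faint isometric activity picked up at the wrist."""
    rng = random.Random(seed)
    return [[a * rng.uniform(0.8, 1.2) for _ in range(n)] for a in amps]

# Hypothetical per-gesture centroids learned during calibration.
centroids = {"rest": [0.05, 0.05], "key_j": [0.05, 0.9], "key_f": [0.9, 0.05]}

window = synth_window([0.05, 0.9])  # strong activity on channel 2 only
print(nearest_centroid(rms_features(window), centroids))  # -> key_j
```

Even with the hand motionless, the nerve impulses produce measurable channel activity, which is why a scheme like this can register "keystrokes" from isometric activity alone.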
Many researchers have tried to map the brain's functions and directly capture brain signals. But there's a lot going on in the brain, and it's tough to sort out the signals you want from all the other signals. To get more accurate data would require implanting sensors -- which is understandably unappealing.
The field of brain-machine interfaces (BMI) doesn't require brain surgery if you know where to look for signals. Neurons extend from the cortex down to the spinal cord, where motor neurons reside, and then out to the muscles. Capturing those electrical signatures and mapping them to an intended action is the key to the CTRL-Labs system.
"By focusing on the wrist we don't have to deal with trying to find those signals in the brain, which is a noisy place. When the signal reaches the wrist we are essentially capturing the user's intent," said Reardon.
The CTRL-Labs kit provides high-resolution monitoring of very faint electric fields generated by nerves passing through the wrist and into the fingers. This information is assembled into a model of a virtual hand.
A demonstration showed a human hand going through a wide variety of motions in three-dimensional space, while a computer-animated image of a hand on a display followed it in synchronization, with seemingly little or no lag.
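The final step of such a pipeline -- turning inferred per-finger activation levels into a pose for a virtual hand model -- can be pictured with a minimal sketch. The function name, the linear activation-to-angle mapping, and the single flexion angle per finger are all simplifying assumptions; a real system drives many joints per finger from learned EMG models.

```python
def activations_to_pose(activations, max_flexion_deg=90.0):
    """Map per-finger activation levels (0..1) to flexion angles for a
    skeletal hand model. Illustrative only: real systems infer
    activations from multichannel EMG and animate full joint chains."""
    fingers = ["thumb", "index", "middle", "ring", "pinky"]
    return {f: round(a * max_flexion_deg, 1) for f, a in zip(fingers, activations)}

# Index finger fully curled, middle finger half curled, rest extended.
print(activations_to_pose([0.0, 1.0, 0.5, 0.0, 0.0]))
# -> {'thumb': 0.0, 'index': 90.0, 'middle': 45.0, 'ring': 0.0, 'pinky': 0.0}
```

Rendering such a pose each frame, as activations stream in from the wristband, is what makes the on-screen hand appear to mirror the real one.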
Many new interfaces
There are many types of user interface technologies available to developers, such as speech, which has been growing in popularity because of personal digital assistants from Apple, Amazon, and Google. Speech recognition has improved tremendously over the past decade.
"Speech is not that good of a user interface for many things. Try controlling a cursor with words," Reardon said.
He explained that the computational overhead of the system is relatively small, which makes it easy to integrate into virtually any application that requires hand-based controls.
And there's no reason to stick with five fingers per hand. With the CTRL-Labs system, Reardon said people can train themselves to use six or more fingers: "You can create and control up to 20 fingers. The system will distinguish between the individual nerve impulses."
Each user needs to fine-tune the system to their physiology and also learn the isometric techniques of not moving a muscle -- a process that is quickly learned through a game-like training module.
The CTRL-kit device and software stack will be available in 2018. It features the company's Intention Capture neural interface technology.
The development kit marks the first time outsiders can get access to the work of one of the best teams of computational neuroscientists and machine learning experts in the US. CTRL-Labs is one of the reasons New York has developed a reputation as a world-class center for machine learning expertise.
Wrist user interface
There are many potential applications for a CTRL-Labs interface: virtual reality systems that track real hand movements rather than a controller; maintenance of complex industrial and aviation systems; surgical procedures performed by specialists in remote locations; and even more imaginative uses that might emerge, such as a pianist playing a virtual keyboard with 20 fingers.
Developers can sign up for the CTRL-kit waitlist at www.ctrl-labs.com on May 2.