News flash: Cell phone use by drivers is dangerous. We all know that. A 2014 survey conducted by the National Highway Traffic Safety Administration (NHTSA) found that 398 drivers were killed and 33,000 were injured in accidents attributed to cell phone use while driving.
Some apps, such as Waze and Spotify, restrict their functionality while the car is in motion. The intentions behind this feature are certainly good, but apps like these typically allow the driver to easily opt out of driver mode. Apps that sense vehicle motion are also clunky for passengers, whose access shouldn't be limited.
Researchers at Carnegie Mellon University have developed a novel solution: An app that automatically detects whether the user is the driver or a passenger.
"A practical approach to such a problem needs to be lightweight, real-time, and immediately deployable," write researchers Rushi Khurana and Mayank Goel in a recent paper. "So, we built a software-only approach that phone makers can potentially push as a simple update."
I previously covered another project of Khurana and Goel's: a stationary camera paired with a machine vision algorithm that proved better at tracking gym exercises than standard wearables. Now the pair from CMU's Human-Computer Interaction Institute (HCII) and Institute for Software Research is back with another pragmatic solution to an everyday challenge -- one that, in this case, could save lives.
The app uses the phone's cameras to continuously detect the user's context. It relies on contextual clues that are standard across nearly all cars to determine whether the phone is positioned for use by the driver or a passenger. For example, the overhead grab handles (the so-called chicken handles), the placement of the windows, the orientation of a sunroof -- taken together, these clues let the app's algorithm accurately tell who's behind the wheel.
"Regardless of the placement of the phone in the car, we are at least able to discern between the driver and the passenger with 90% accuracy," the researchers report.
During testing, the researchers used continuous video recording to obtain a large dataset of images; however, their algorithm does not need continuous video to detect phone usage. In a real-world scenario, a photo can be taken opportunistically based on event triggers, such as in-vehicle detection or user touch. The data can be featurized and processed in real time without storing any sensitive information.
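To make the pipeline concrete, here is a minimal sketch of the trigger-capture-featurize-classify flow described above. This is not the researchers' actual code: the `Frame` landmark fields, the averaging rule, and the 0.5 threshold are all hypothetical stand-ins, and a real system would run computer vision on camera pixels rather than receive pre-detected landmark positions.

```python
# Illustrative sketch (hypothetical, not the CMU implementation):
# event-triggered capture, featurization, and driver/passenger
# classification, assuming a left-hand-drive car.

from dataclasses import dataclass

@dataclass
class Frame:
    """Stand-in for one camera image; real code would hold pixels and
    run detectors. Fields are hypothetical pre-detected landmarks:
    horizontal positions normalized to [0, 1], 0 = image's left edge."""
    handle_x: float   # grab ("chicken") handle position
    window_x: float   # side-window edge position

def featurize(frame: Frame) -> dict:
    """Reduce a frame to a few scalars; no raw pixels are retained,
    mirroring the paper's point about not storing sensitive data."""
    return {"handle_x": frame.handle_x, "window_x": frame.window_x}

def classify(features: dict) -> str:
    """Toy rule: in a left-hand-drive car, a phone held by the driver
    tends to see the side window and grab handle toward one side of
    the frame. The averaging and threshold are illustrative only."""
    score = (features["handle_x"] + features["window_x"]) / 2
    return "driver" if score > 0.5 else "passenger"

def on_trigger(frame: Frame) -> str:
    """Entry point fired by an event trigger (e.g. in-vehicle
    detection or a screen touch), not by continuous video."""
    return classify(featurize(frame))

print(on_trigger(Frame(handle_x=0.8, window_x=0.9)))  # landmarks right of center
print(on_trigger(Frame(handle_x=0.2, window_x=0.1)))  # landmarks left of center
```

The point of the sketch is the shape of the system: a cheap, opportunistic snapshot is reduced to a handful of features the moment it is taken, so nothing sensitive needs to persist.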
A fully automated, lightweight, software-only solution like this one, which leverages the smartphone's on-board camera, could conceivably ship standard on handsets if manufacturers were on board, opening the technology to app developers.
The research is a step in the right direction toward cutting down on distracted driving.