
Using your face for remote control

Written by Roland Piquepaille

A UC San Diego computer scientist has turned his face into a remote control. One of his goals is to 'use automated facial expression recognition to make robots more effective teachers.' In fact, this project is 'at the intersection of facial expression recognition research and automated tutoring systems.' Adjusting the pace of automated lessons delivered to remote students could make a huge difference in learning. If the robotic teacher goes too fast for you, you need a way to tell it to slow down, and you can do so simply by smiling or frowning at the webcam built into your laptop. But read more...

Using your face as remote control

You can see above how a "UC San Diego computer science Ph.D. student can turn his face into a remote control that speeds and slows video playback." (Credit: UC San Diego Jacobs School of Engineering; link to a slightly larger version) On the left side of this image, the largest rectangle shows how he controls the speed of the video. When he smiles, the black bars rise and the video in front of him plays at high speed. When he frowns, the bars drop and the playback rate slows.
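
To make that mapping concrete, here is a minimal Python sketch of the idea: an expression recognizer produces a smile-intensity score between 0 and 1, and that score is mapped linearly onto a playback-rate multiplier. The function name, score range, and rate bounds are illustrative assumptions on my part, not details of the UCSD system.

```python
# Hypothetical sketch of the smile-to-playback-speed mapping described above.
# The score range and rate bounds are assumptions, not the UCSD system's values.

def expression_to_rate(smile_intensity: float,
                       min_rate: float = 0.5,
                       max_rate: float = 2.0) -> float:
    """Map a smile intensity in [0, 1] to a playback-rate multiplier.

    0.0 (frown / no smile) -> min_rate (slow the lecture down)
    1.0 (broad smile)      -> max_rate (speed it up)
    """
    smile_intensity = max(0.0, min(1.0, smile_intensity))  # clamp the input
    return min_rate + smile_intensity * (max_rate - min_rate)


if __name__ == "__main__":
    for intensity in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"smile={intensity:.2f} -> rate={expression_to_rate(intensity):.2f}x")
```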

This research project was led by Jacob Whitehill, a computer science Ph.D. student at the UC San Diego Jacobs School of Engineering who works in the Machine Perception Laboratory (MPLab). If you want to see more visual explanations from Whitehill, here are two links to short videos: the first lasts 55 seconds, while the second runs 3 minutes and 40 seconds.

But what exactly is the status of this research project? "In the pilot study, the facial movements people made when they perceived the lecture to be difficult varied widely from person to person. Most of the 8 test subjects, however, blinked less frequently during difficult parts of the lecture than during easier portions of the lecture, which is supported by findings in psychology. One of the next steps for this project is to determine what facial movements one person naturally makes when they are exposed to difficult or easy lecture material. From here, Whitehill could then train a user-specific model that predicts when a lecture should be sped up or slowed down based on the spontaneous facial expressions a person makes, explained Whitehill."
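
As a hedged sketch of what such a user-specific model might look like, the snippet below fits a ridge regression from per-frame facial-movement features (smile, brow furrow, blink rate, all made-up stand-ins) to a preferred playback speed. The features, synthetic data, and model choice are my assumptions for illustration; the paper does not specify this exact pipeline.

```python
# Sketch of the per-user model idea: regress preferred playback speed
# onto facial-movement features. Data and feature names are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for ~2 minutes of per-frame features sampled at 1 Hz:
# columns = [smile, brow_furrow, blink_rate], each scaled to [0, 1].
X = rng.uniform(0.0, 1.0, size=(120, 3))
# Synthetic preferred-speed labels, e.g. from a dial the subject turned
# while watching: smiling pushes speed up, a furrowed brow pulls it down.
y = 1.0 + 0.6 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.05, 120)

model = Ridge(alpha=1.0).fit(X, y)   # one model trained per user
print(np.round(model.predict(X[:5]), 2))
```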

This research project will be presented on June 25 at the Intelligent Tutoring Systems 2008 conference (ITS 2008) held in Montreal, Canada, on June 23-27, 2008. The subject of the presentation is "Measuring the Perceived Difficulty of a Lecture Using Automatic Facial Expression Recognition." Here is the abstract of the short version of this paper (PDF format, 3 pages, 84 KB). "We show how automatic real-time facial expression recognition can be effectively used to estimate the level of difficulty, as perceived by an individual student, of a delivered lecture. We also show that facial expression data are predictive of an individual student's preferred rate of curriculum presentation at each moment in time. On a recorded video lecture viewing task, training on less than two minutes of recorded facial expression data, our system predicted the subjects' self-reported difficulty scores with mean accuracy of 42% (Pearson correlation) and the subjects' preferred viewing speed with mean accuracy of 29%. Our techniques are fully automatic and have potential applications for both intelligent tutoring systems and standard classroom environments."
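
Note that the 42% and 29% figures are Pearson correlations between the system's predictions and the subjects' self-reports, not classification accuracies. The snippet below, using made-up numbers rather than the study's data, shows how such a score is computed with SciPy:

```python
# Illustration of the Pearson-correlation metric the paper reports.
# The ratings below are invented for the example, not the study's data.
from scipy.stats import pearsonr

self_reported = [3.0, 4.5, 2.0, 5.0, 3.5, 4.0]   # per-segment difficulty ratings
predicted     = [2.8, 4.0, 2.5, 4.6, 3.0, 4.2]   # model estimates for the same segments

r, p_value = pearsonr(self_reported, predicted)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```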

And here is the conclusion of a longer version of this paper (PDF format, 11 pages, 706 KB). "Our empirical results indicate that facial expression is a valuable input signal for two concrete tasks important to intelligent tutoring systems: estimating how difficult the student finds a lesson to be, and estimating how fast or slow the student would prefer to watch a lecture. Currently available automatic expression recognition systems can already be used to improve the quality of interactive tutoring programs. As facial expression recognition technology improves in accuracy, the range of its application will grow, both in ITS and beyond. One particular application we are currently developing is a 'smart video player' which modulates the video speed in real-time based on the user's facial expression so that the rate of lesson presentation is optimal for the current user."
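
A practical concern for such a 'smart video player' is that raw per-frame predictions jitter with every flicker of expression. One plausible design choice (my assumption, not something the paper describes) is to exponentially smooth the predicted rate before applying it to playback:

```python
# Sketch of smoothing for the 'smart video player' idea: damp noisy
# per-frame rate predictions so playback speed changes gradually.
# The predictor and player interfaces are assumed, not from the paper.

def smooth_rates(raw_rates, alpha: float = 0.1):
    """Exponentially smooth a stream of predicted playback rates.

    Smaller alpha means slower, steadier adaptation to the user.
    """
    smoothed = None
    for rate in raw_rates:
        smoothed = rate if smoothed is None else alpha * rate + (1 - alpha) * smoothed
        yield smoothed


if __name__ == "__main__":
    noisy = [1.0, 1.8, 0.6, 1.9, 1.0, 1.7]   # jittery per-frame predictions
    for r in smooth_rates(noisy):
        print(f"playback rate -> {r:.2f}x")
```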

This work will also be presented during a workshop on Human Communicative Behavior Analysis at the 2008 IEEE Computer Vision and Pattern Recognition conference (CVPR 2008) held in Anchorage, Alaska, on June 28, 2008. The name of the presentation will be "Automatic Facial Expression Recognition for Intelligent Tutoring Systems" and here is a link to the full paper (PDF format, 6 pages, 639 KB).

Here is the beginning of the abstract. "This project explores the idea of facial expression for automated feedback in teaching. We show how automatic realtime facial expression recognition can be effectively used to estimate the difficulty level, as perceived by an individual student, of a delivered lecture. We also show that facial expression is predictive of an individual student’s preferred rate of curriculum presentation at each moment in time."

Sources: University of California at San Diego, June 25, 2008; and various websites

