Sign language over cell phones in the U.S.

Summary: Thanks to University of Washington (UW) computer scientists, hearing-impaired users might soon be able to use sign language over a mobile phone, as they already can in Japan or Sweden. The research team received a grant from the U.S. National Science Foundation to start a 20-person field project next year in Seattle. Of course, deaf people can already communicate via text messages. But as the lead researcher said, 'the point is you want to be able to communicate in your native language. For deaf people that's American Sign Language (ASL).' The researchers now have to convince a commercial cell phone manufacturer to integrate their MobileASL software before the service becomes widely available. Read on for more details...

TOPICS: CXO, Mobility

MobileASL visualization techniques

Some of the visualization techniques used for this project are shown above. (Credit: UW) These images were extracted from a technical paper titled "Activity Detection in Conversational Sign Language Video for Mobile Telecommunication" (PDF format, 12 pages, 430 KB). The paper will appear in the Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2008), to be held in September 2008 in Amsterdam, the Netherlands. Please read it to learn more about these visualization techniques.

The principal investigator (PI) of this project is Eve Riskin, a UW professor of electrical engineering. The MobileASL team also includes Richard Ladner, a UW professor of computer science and engineering; Sheila Hemami, a professor of electrical engineering at Cornell University; and Jacob Wobbrock, an assistant professor in the UW's Information School. Many students were involved as well.

Now, let's see why two-way real-time video communication is not really possible today in the U.S. "Low data transmission rates on U.S. cellular networks, combined with limited processing power on mobile devices, have so far prevented real-time video transmission with enough frames per second that it could be used to transmit sign language. Communication rates on United States cellular networks allow about one-tenth of the data rates common in places such as Europe and Asia."
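To put that bandwidth constraint in perspective, here is a rough back-of-the-envelope calculation. The bitrate and frame-size numbers below are illustrative placeholders, not figures from the paper:

```python
def max_frame_rate(bitrate_bps: float, bits_per_frame: float) -> float:
    """Upper bound on achievable frames per second for a given
    network bitrate and average compressed-frame size."""
    return bitrate_bps / bits_per_frame

# Illustrative numbers only: a ~30 kbit/s cellular link and
# ~3,000-bit compressed frames cap the stream at ~10 fps.
print(max_frame_rate(30_000, 3_000))  # → 10.0
```

Halve the available bitrate, as on a slower U.S. network, and the achievable frame rate halves with it, which is why naive video quickly becomes too choppy for signing.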

So the team designed its software to work even with this bandwidth problem. "The team tried different ways to get comprehensible sign language on low-resolution video. They discovered that the most important part of the image to transmit in high resolution is around the face. This is not surprising, since eye-tracking studies have already shown that people spend the most time looking at a person's face while they are signing."
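The face-priority idea can be sketched as a region-of-interest quantization map: macroblocks covering the face get a lower quantization parameter (finer quality), while the rest of the frame is coded more coarsely. This is a minimal sketch of the general technique, not the actual MobileASL encoder; the block size, QP values, and face box are placeholder assumptions:

```python
import numpy as np

def roi_qp_map(height: int, width: int, face_box: tuple,
               qp_face: int = 24, qp_bg: int = 40, block: int = 16) -> np.ndarray:
    """Build a per-macroblock quantization map: lower QP (better
    quality) for blocks overlapping the face region, coarser QP
    elsewhere. face_box = (top, left, bottom, right) in pixels."""
    rows, cols = height // block, width // block
    qp = np.full((rows, cols), qp_bg, dtype=int)
    t, l, b, r = face_box
    # -(-x // block) is ceiling division, so partially covered blocks count.
    qp[t // block : -(-b // block), l // block : -(-r // block)] = qp_face
    return qp

# A 320x240 frame with a hypothetical face box around the signer's head.
qp = roi_qp_map(240, 320, (32, 96, 128, 224))
```

An encoder consuming this map would spend most of the scarce bits where eye-tracking says viewers actually look.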

For more details, please visit the MobileASL project home page. "The current version of MobileASL uses a standard video compression tool to stay within the data transmission limit. Future versions will incorporate custom tools to get better quality. The team developed a scheme to transmit the person's face and hands in high resolution, and the background in lower resolution. Now they are working on another feature that identifies when people are moving their hands, to reduce battery consumption and processing power when the person is not signing."
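The hand-motion feature can be approximated with simple frame differencing: when consecutive frames barely change, the person is probably not signing, and the encoder can drop to a lower frame rate. This is a minimal sketch under that assumption, not the team's actual activity classifier, and the threshold and frame rates are made-up values:

```python
import numpy as np

def is_signing(prev_frame: np.ndarray, frame: np.ndarray,
               threshold: float = 8.0) -> bool:
    """Crude activity detector: mean absolute pixel difference
    between consecutive frames; above the threshold, the hands
    are likely moving."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return bool(diff.mean() > threshold)

def target_fps(signing: bool, full_fps: int = 10, idle_fps: int = 1) -> int:
    """Encode at full rate while signing, a trickle rate otherwise,
    saving battery and processing power during pauses."""
    return full_fps if signing else idle_fps
```

A real system would restrict the difference computation to the hand regions and smooth the decision over several frames to avoid flickering between rates.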

If you're interested in this project, you should look at the long list of MobileASL publications. Here is an excerpt of the long abstract of the paper mentioned above. "The goal of the MobileASL project is to increase accessibility by making the mobile telecommunications network available to the signing Deaf community. Video cell phones enable Deaf users to communicate in their native language, American Sign Language (ASL)."

The researchers also describe how they were able to reduce the impact of encoding and transmission of real-time video on cell phones batteries. "By recognizing activity in the conversational video, we can drop the frame rate during less important segments without significantly harming intelligibility, thus reducing the computational burden. [...] In this work, we quantify the power savings from dropping the frame rate during less important segments of the conversation."
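The power argument reduces to simple arithmetic: if only a fraction of the conversation is active signing, and idle segments are encoded at a much lower frame rate, the average number of frames encoded per second, and hence roughly the encoding workload, drops accordingly. A sketch with made-up numbers, assuming encoding cost scales with frames encoded:

```python
def encode_savings(active_fraction: float,
                   full_fps: float = 10.0, idle_fps: float = 1.0) -> float:
    """Fraction of encoding work saved when idle segments run at
    idle_fps instead of full_fps (assumes cost ~ frames encoded)."""
    avg_fps = active_fraction * full_fps + (1 - active_fraction) * idle_fps
    return 1 - avg_fps / full_fps

# If half the conversation is idle, roughly 45% fewer frames
# need to be encoded under these illustrative rates.
savings = encode_savings(0.5)
```

The paper quantifies the real savings on actual hardware; this only illustrates why dropping the frame rate during pauses pays off.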

Finally, you should take a look at a short video (1 minute and 30 seconds) available from the MobileASL project home page or directly on YouTube.

Sources: Hannah Hickey, University of Washington News, August 21, 2008; and various websites
