Helping Asia's disabled move forward with cognitive technology

Cognitive assistance plays a key role in supplementing abilities others may be missing, but technology still needs to catch up to help them move beyond basic functions, says a visually impaired IBM Fellow.
Written by Eileen Yu, Senior Contributing Editor

Chieko Asakawa (Photo credit: IBM)

Cognitive technology and machine-learning capabilities are essential to help the disabled stand on their own, but further advancements are needed to help them beyond the basics.

The visually impaired today can perform far more tasks than they could decades ago, before the internet and mobile technology surfaced, said Chieko Asakawa, who in 2009 became the first Japanese woman to be named an IBM Fellow. She lost her sight at the age of 14, after an accident in a swimming pool damaged her optic nerve, and had to abandon her dream of becoming an Olympic athlete.

Asakawa, who joined IBM as a researcher in 1985, currently works with Carnegie Mellon University in Pittsburgh, USA, to identify ways accessibility technologies can help more people participate in society. Much of her work now centres on cognitive technology.

She explained that the visually impaired faced two primary difficulties in life: access to information and mobility, the first of which had changed dramatically over the past few decades.

Previously, without personal computers and the internet, she was unable to read newspapers, magazines, or books without help from someone else. While the emergence of audio and Braille books helped, copies were limited and she would have to wait, sometimes for months, before the Braille library was able to send a copy to her.

The most significant development came when Braille went digital and web accessibility became pervasive, she said. Asakawa's research had supported various initiatives in this field, which included developing a word processor to create Braille documents and building a digital library for Braille literature.

More notably, she helped build a browser plugin that converted text on webpages to speech, enabling visually impaired users to navigate the web using a numeric keypad. Developed in 1997, the IBM Home Page Reader supported multiple languages, including French, German, and Japanese, and was widely adopted across the globe.

There had also been some advancement in the area of mobility, thanks to technologies such as GPS and beacons, as well as mobile devices with voice command capabilities. Progress, though, remained inadequate, and more improvements would be needed to help the blind attain true independence.

Asakawa's research here looked at how GPS could be used to guide the visually impaired, but the technology's accuracy, especially indoors, was still not up to par. Its potential, though, was promising.

In fact, IBM and Carnegie Mellon University developed a mobile app, called NavCog, which operated as a voice navigation system using sensors, or beacons, as well as cognitive technology to identify the user's location and direction. It then would send voice commands via the smartphone to guide users towards their destination.

IBM Research this month kicked off a pilot, alongside Japanese civil engineering firm Shimizu and real estate developer Mitsui Fudosan, to assess the NavCog system across three Coredo Muromachi shopping mall buildings located in the downtown district of Nihonbashi-Muromachi.

Some 220 beacons were installed to cover an area spanning 21,000 square metres. This encompassed an underground pedestrian walkway, which connected the three buildings, as well as several shops and restaurants and a movie theatre. The beacons were installed on ceilings and in existing gaps, so no changes to existing infrastructure were required.

A probabilistic model was created using machine-learning algorithms, which linked radio wave signals with likely pedestrian locations to facilitate navigation. The system used the smartphone's various sensors, such as the accelerometer, gyroscope, and barometer, to improve navigation.
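The article does not detail IBM's model, but linking radio signals to likely locations is commonly done with fingerprint matching: signal strengths recorded at known spots are compared against live readings. The sketch below is a minimal, hypothetical illustration of that idea; the beacon IDs, coordinates, and noise figure are invented for the example, not taken from NavCog.

```python
import math

# Hypothetical fingerprint map: known (x, y) locations in metres, each with
# the mean RSSI (dBm) previously recorded there for each beacon ID.
FINGERPRINTS = {
    (0.0, 0.0): {"b1": -50, "b2": -70, "b3": -80},
    (5.0, 0.0): {"b1": -70, "b2": -50, "b3": -75},
    (5.0, 5.0): {"b1": -80, "b2": -60, "b3": -55},
}

SIGMA = 6.0  # assumed RSSI noise per reading, in dB


def log_likelihood(observed, fingerprint):
    """Gaussian log-likelihood of an observed RSSI vector given a fingerprint."""
    ll = 0.0
    for beacon, rssi in observed.items():
        mu = fingerprint.get(beacon, -100)  # treat unseen beacons as very weak
        ll += -((rssi - mu) ** 2) / (2 * SIGMA ** 2)
    return ll


def most_likely_location(observed):
    """Return the fingerprinted location that best explains the live readings."""
    return max(FINGERPRINTS, key=lambda loc: log_likelihood(observed, FINGERPRINTS[loc]))


print(most_likely_location({"b1": -52, "b2": -68, "b3": -79}))  # → (0.0, 0.0)
```

A production system would fuse this estimate with the accelerometer, gyroscope, and barometer readings the article mentions, typically via a particle or Kalman filter, rather than trusting a single radio snapshot.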

After a destination had been provided, the system would compute the shortest route while avoiding obstructions such as escalators and confusing turns. It would provide additional information to caution users about nearby obstacles or when they were about to reach a fork in the passageway.
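Finding the shortest route while avoiding certain features is typically a shortest-path search over a walkway graph in which disallowed edge types are simply skipped. The sketch below illustrates this with Dijkstra's algorithm; the graph, node names, and distances are invented for the example and are not NavCog's data.

```python
import heapq

# Hypothetical walkway graph: node -> list of (neighbour, distance_m, edge_kind).
GRAPH = {
    "entrance": [("atrium", 30, "corridor")],
    "atrium": [("entrance", 30, "corridor"),
               ("level2", 10, "escalator"),
               ("elevator_hall", 20, "corridor")],
    "elevator_hall": [("atrium", 20, "corridor"), ("level2", 25, "elevator")],
    "level2": [("shop", 15, "corridor")],
    "shop": [],
}


def shortest_route(start, goal, avoid=("escalator",)):
    """Dijkstra over the walkway graph, skipping edge kinds the user must avoid."""
    queue = [(0, start, [start])]  # (distance so far, node, path taken)
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, d, kind in GRAPH.get(node, []):
            if kind not in avoid and nbr not in seen:
                heapq.heappush(queue, (dist + d, nbr, path + [nbr]))
    return None  # goal unreachable under the given constraints


# With escalators excluded, the route detours through the elevator hall.
print(shortest_route("entrance", "shop"))
```

Varying the `avoid` tuple per user is what lets the same map serve wheelchair users (avoid stairs and escalators) and sighted users (avoid nothing) alike.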

During the pilot, data would be analysed for location accuracy, voice guidance timing, and ease-of-use.

According to Asakawa, the system currently had an accuracy of one to two metres. While this would need to be further improved, she underscored the importance of cognitive technology in enabling the blind to be mobile.

"We call this cognitive assistance, which means to supplement or augment abilities that others may be missing or abilities that are decreasing and weakening, such as those experienced by elderly people," she said.

She explained that IBM categorised cognitive technology into four key areas: localisation; computer vision or object recognition; data or knowledge; and interaction.

Pointing to the localisation component, she said the effectiveness of navigation systems could be significantly improved if the accuracy of the user's location could be narrowed down to an inch.

She added that object recognition also would need to be further developed and properly linked to the required data, such as a map or store details.

No frills until true mobility is attained

Until technology caught up, Asakawa's priority remained addressing the mobility challenge, which continued to be the biggest hurdle for the visually impaired.

Asked what she would like technology to help her regain from her time as a sighted individual, she said it would be "nice to have" the ability to perceive colours again.

"I really enjoy visiting museums and looking at art and paintings, but that information has been lost," she said. "So perhaps we could find a way to describe artistic artefacts, colours, scenery, or portraits through voice. Or we could tap some form of crowdsourcing, in which we ask people to describe and then share what they see."

This, she added, could open up opportunities for video analysis, among others, in the field of vision recognition. She also mooted the idea of a robotic guide dog, which would have vision recognition and machine-learning capabilities while requiring lower maintenance than an actual dog.

For now, however, such ideas remained low on her priorities and would remain so until the visually impaired attained absolute independence in terms of mobility.

"Now I still have to depend on someone, for instance, to tell me where the front desk or the gate is. There are still so many issues to address to be truly mobile," said Asakawa, who today aspires to be able to travel and go for walks alone.

Reiterating the importance of achieving true independence, she noted: "We need to change the mindset that the impaired can't or don't need to shop, just as previously when people didn't think we needed to use the web."

The goal was to reduce the amount of effort needed for the disabled to go about their daily lives, she said, adding that cognitive technology and artificial intelligence played a key role in facilitating this change.
