
Accents render tech lost in translation

Written by Eileen Yu, Senior Contributing Editor

It's gadgets-and-gizmos galore at this week's 2010 Consumer Electronics Show.

Toshiba, for instance, is reportedly working on translation software that will enable mobile phones to interpret between Chinese, English and Japanese. Users speak into the handset in any of these languages, and the phone translates the words and repeats them out loud in another of the three.

It sounds pretty nifty and, if it comes to fruition, will be a handy tool for travelers. But the question remains whether it'll be sufficiently accurate to avoid potential miscommunication.

Years back in Australia, where I attended university, I remember a fascinating conversation with a local hostel-mate who had a really strong Aussie accent. We had just met, and this was the first time we had a tête-à-tête.

"I heard you're a pee-nis?" he asked, quite gleefully.

I was stupefied. Did someone I hardly knew just ask if I was part of the male anatomy?

"Whaaat?" I inquired, quite speechlessly.

Still smiling, he threw his hands in the air and wiggled them sideways: "Pee-nis, you know?"

For a couple of seconds, I remained dumbfounded. Then I finally saw the light: "Ohhh-oh, pee-a-nis! Yes yes, I am a pianist!" Needless to say, I was relieved. As it turned out, he was also a pee-nis, and we happily exchanged notes about our favorite musicians.

It would be interesting to see how the most advanced speech recognition software would have transcribed our little chat, and more interesting still to see whether it could correctly infer that we were talking about an instrument used mainly by musicians--and not by guys who just hit puberty.

Language may indeed be the universal barrier to better social cohesion. Throw accents into the mix, and you have the biggest hurdle the IT industry must clear before it can produce commercial-grade translation tools.

For as long as I've been a tech journalist, several speech-to-text software offerings have attempted to break into the market, but none has ever truly succeeded. In fact, during one of these launches some years back, the product manager was giving a demo in a room filled with eager journalists. It would have been an indispensable tool for reporters, who typically spend hours transcribing interviews.

However, the product manager had a thick Hong Kong accent and struggled to get through the demo. Almost every other paragraph contained a wrongly transcribed word, and even voice-based commands were carried out incorrectly--this despite the fact that she had already spent the required time training the speech recognition software to adapt to her vocal nuances.

Every human is unique. We each have our own distinguishable voice, accent and mannerisms. Give a word to two Singaporeans and you'll hear two different versions of how it sounds.

As long as we remain uniquely different individuals, it'll be really tough to build a software tool intelligent enough to decipher what we mean to say--not that there's anything wrong with a melting pot of varying accents. After all, the world would be pretty dull if we all sounded the same, and perhaps stranger if we all spoke like Arnold Schwarzenegger.
