
When Users Talk, Computers Listen

Written by Todd Spangler, Contributor

Computers, you might say, are growing up to be more like us. Someday, they will be able to process information as quickly as we can, which means they will be able to understand what we're saying. An intelligent electronic companion that anticipates our every informational need isn't very far away.

For the next few years, however, computers will more or less continue to exist as we've grown accustomed to them, industry experts say.

"The three basic elements of computing we have today — the desktop, the laptop and the server — we firmly believe will continue to exist into the next decade," says Pat Gelsinger, Intel's chief technology officer.

But while PCs will still be at the heart of computing, an anticipated profusion of smart phones, handheld computers and other devices will extend access to Internet services and information over multimegabit-per-second wireless networks. Doug Heintzman, manager of strategy and standards for IBM's pervasive computing division, says mobile devices will become much more powerful — with memory capacities of 50 gigabytes within five years — and each one will be embedded with global positioning system technology so it's aware of exactly where in the world it is. "We'll be putting GPS into just about everything, because it will get very cheap," Heintzman says.

Meanwhile, the PCs we'll use to perform most of our information-processing tasks will look and act different. Computers and their components will become even smaller, faster and better-connected to the Internet. Gelsinger's dream machine, which he says will take at least five years to materialize, is a laptop computer he calls the "111a": It weighs 1 pound, is 1 inch thick, has a battery that lasts for one day without needing to be recharged and is always connected to a wireless network. (For a Q&A with Gelsinger about the future of computing, see next page.)

And almost as regularly as seasons change, the processing power of these machines is expected to continue to climb the growth curve it's followed since the early 1970s. "We don't anticipate running into any problems with Moore's law for another 20 years," Heintzman says, referring to Intel co-founder Gordon Moore's maxim, which says the number of transistors per integrated circuit doubles every 18 months.
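
To make that arithmetic concrete, here is a rough back-of-the-envelope projection of an 18-month doubling period in Python. The starting transistor count (roughly a Pentium 4 of the era) and the 20-year horizon are illustrative assumptions, not figures from the story.

    # Rough Moore's-law projection: transistor counts doubling every 18 months.
    # The 42-million starting count (roughly a Pentium 4 of the era) and the
    # 20-year horizon are illustrative assumptions, not figures from the story.
    DOUBLING_PERIOD_YEARS = 1.5
    start_transistors = 42_000_000
    years = 20

    doublings = years / DOUBLING_PERIOD_YEARS            # about 13.3 doublings
    projected = start_transistors * 2 ** doublings
    print(f"roughly {projected:.1e} transistors in {years} years")  # ~4.3e+11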

Today's desktop PCs have the brainpower of lizards, says Paul Horn, director of IBM's research division. "It's almost like dealing with a child," he says. Within the next two decades, though, the lowly desktop computer will evolve to the point where it has a processing capacity roughly equivalent to the human brain. Horn predicts that by 2020 we'll see a computer running as fast as 1 million gigahertz — or a million times faster than today's PCs — able to perform 10^15 calculations per second.
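
Those two figures line up if, for simplicity, you assume one calculation per clock cycle: 1 million gigahertz is 10^15 cycles per second. A quick sanity check of that conversion:

    # Sanity check of the scale Horn describes, assuming (for illustration only)
    # one calculation per clock cycle.
    clock_ghz = 1_000_000               # 1 million gigahertz
    cycles_per_sec = clock_ghz * 1e9    # gigahertz to hertz
    print(f"{cycles_per_sec:.0e} calculations per second")  # 1e+15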

That embarrassment of riches will afford us new ways to interact with computers. Speech recognition will be one of the most anticipated beneficiaries of increased processing power. Making a computer understand people when they talk is a complex computing problem that still hasn't been refined to the point where the technology is suitable for mainstream applications.

Microsoft has been trying to crack the code on speech recognition for several years. One big barrier is that speech recognition is a processing-intensive application, and the hardware — particularly for small, handheld devices — just can't deliver, says Alex Acero, a Microsoft Speech Technology Group senior researcher.

MiPad, short for "my interactive notepad," is Microsoft's 2-and-a-half-year-old project integrating speech recognition into a portable device. To improve its accuracy, MiPad limits the words it has to understand and the number of functions it can perform via speech input. For example, a user sending an e-mail taps the "To:" field with a stylus, then speaks the name of the intended recipient — boosting MiPad's odds of successfully understanding what was said, since it only has to match that with a name in the address book.
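
The story doesn't describe MiPad's internals, but the general idea of shrinking the recognizer's search space can be sketched in a few lines of Python. The address-book names, the match_recipient function and the use of the standard difflib matcher are illustrative assumptions, not Microsoft's code.

    # Illustrative sketch, not MiPad itself: once the user taps the "To:" field,
    # the recognizer only has to match the spoken name against the address book,
    # a far smaller search space than open dictation.
    import difflib

    # Hypothetical address book; the names are placeholders.
    ADDRESS_BOOK = ["Alex Acero", "Pat Gelsinger", "Doug Heintzman", "Paul Horn"]

    def match_recipient(spoken: str) -> str | None:
        """Pick the address-book entry closest to the recognizer's raw output."""
        lowered = {name.lower(): name for name in ADDRESS_BOOK}
        hits = difflib.get_close_matches(spoken.lower(), lowered.keys(), n=1, cutoff=0.6)
        return lowered[hits[0]] if hits else None

    print(match_recipient("pat gelsingr"))  # -> "Pat Gelsinger"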

As for continuous speech recognition, Microsoft has developed an engine that tries to predict what a user will say, learns from experience and infers words from context. Acero says studies show that most people use no more than 5,000 words in spoken English — but one person's 5,000 words may be very different from another person's.
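
Microsoft's engine isn't detailed in the story, but "inferring words from context" can be illustrated with a toy bigram model that simply counts which word tends to follow which in a user's own text. The training sentence and function names below are made up for the example.

    # Toy bigram predictor, only to illustrate learning from a user's own text
    # and guessing the next word from context; a textbook technique, not
    # Microsoft's engine.
    from collections import Counter, defaultdict

    def train(corpus):
        """Count which word tends to follow which in the training text."""
        words = corpus.lower().split()
        following = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
        return following

    def predict_next(model, prev_word):
        """Return the most likely next word, or None if the word is unseen."""
        counts = model.get(prev_word.lower())
        return counts.most_common(1)[0][0] if counts else None

    model = train("please send the report please send the invoice please call me")
    print(predict_next(model, "send"))    # -> "the"
    print(predict_next(model, "please"))  # -> "send"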

"Within five years, I hope speech has gotten to the point where if you take it away from people, they complain," Acero says. "That's my definition of success."


Quick Hit

Ultraconnected

By 2007, more than 60 percent of people ages 15 to 50 in Europe and the U.S. will carry or wear a wireless computing or communications device at least six hours per day.
