
Computers still aren't smart enough

It's 2001, but we're still years away from the likes of the HAL 9000.
Written by Michael Slater, Contributor

COMMENTARY--Despite all the progress in clock speeds and memory capacity, computers have remained remarkably primitive. If the dreams of futurists, science-fiction writers, and artificial-intelligence theorists had been fulfilled, computers would be doing much more to automate tasks and serve as intelligent assistants.

The vast majority of software applications rely on the computer as a manual tool—a very capable manual tool, but a manual one nonetheless. A word processor captures just what you type, helping a bit with formatting, but not in the least with content. Spreadsheets handle countless computations with ease and speed, but they don't help you understand the data. Even in computing's biggest growth area—the Web and e-mail—the computer simply transports information.

All too often, attempts to add "intelligence" to computer software end in frustration. Grammar checkers, for example, have come a long way, but I still find them far too annoying to use. They complain much too often about sentences I like, and miss many instances where I've used the wrong word. Word processors also try hard to correct common mistakes, such as forgetting to capitalize a word at the start of a sentence. Although I sometimes find these features helpful, they're often merely irritating, making it harder to do what I want.

The heart of the problem is that the gap between computing and thinking remains huge, and software's "intelligent agent" features suffer as a result. Intelligence is still the sole province of flesh-and-blood creatures; artificial intelligence remains an oxymoron. When software tries to act intelligently despite this, it often does something automatically that you don't want it to do.

Some argue the coming decades will bridge the chasm between computation and intelligence, as computers approach a level of complexity comparable to a human brain. I don't think there's any prospect for major improvements in the next few years, however, and I'm skeptical that even a decade or two of technology advances will lead us to truly intelligent machines.

Transistors versus neurons
Even if we assume we'll have microprocessors with as many transistors as human brains have neurons, there are huge differences between transistors and neurons. Neurons are far more complex in their behavior, and they still aren't very well understood. Each neuron also has vastly more connections to other neurons.

Finally, and perhaps most significantly, there's the software issue. The most complex computer is nothing but a dumb machine until a program tells it what to do, and humans must write those programs. Despite many advances in understanding the human brain, it remains hugely mysterious, and no software even approaches the kind of activity that the most modest brain handles easily.

Although many will argue with my conclusions about the future prospects for machine intelligence, it's clear that machine intelligence is not here today, nor is it around the corner. Even if the most powerful computers in a decade or two achieve something close to intelligence, by that time our lives will be filled with literally billions of simpler devices—from phones to PDAs—that will lag their desktop cousins by many years, if not decades.

Do wrong by doing 'write'
As we look forward to the next decade of computing evolution, we should put aside any expectation of intelligence. The best that most software will be able to do is provide reasonable, logical assistance. Even this is far more difficult than it seems; every time you automate something, that automation will be wrong for some users and some applications. Microsoft Word, for example, tries to be helpful by automatically capitalizing the letter "I" if it's used alone. If you really want a lower-case "i," you need to dig through the preferences and disable this feature.
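The trouble with this kind of automation can be seen in a toy sketch. The rule below is a hypothetical illustration in the spirit of Word's behavior, not Microsoft's actual implementation: it capitalizes any standalone lowercase "i," which helps one user and annoys another.

```python
import re

def auto_capitalize(text):
    """Naive autocorrect rule: capitalize any standalone lowercase 'i'.
    A toy sketch of Word-style automation, not the real feature."""
    return re.sub(r"\bi\b", "I", text)

# Helpful when the user simply forgot the shift key:
print(auto_capitalize("i think so"))        # I think so
# Wrong when the lowercase 'i' is intentional, e.g. a variable name:
print(auto_capitalize("increment i by 1"))  # increment I by 1
```

The rule has no way to tell intent from text, which is exactly why such features end up buried in a preferences dialog with an off switch.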

There's no doubt today's software can be vastly improved. Increased processor power will make new capabilities possible, including some that simulate intelligence within a very limited domain. The software industry can, and should, do a better job of delivering programs with well-designed user interfaces that reflect a deep understanding of the tasks the user is trying to perform. The big improvements in the near term will come not from intelligent machines, but simply from better software design.


