Each day, it seems, we are asked to depend ever more on the Internet for our professional and personal well-being.
Yet each week seems to introduce a new computer worm capable of boring into and through our networks, clogging pipes, corrupting data and, in the worst cases, destroying months or even years of hard work.
But the very companies that tirelessly tell us that the Internet is fundamental to our future have done almost nothing to protect us from the defects in their products that give hackers free rein. It's high time Microsoft, Sun Microsystems and other developers undertook an all-out commitment to eliminating buffer overflows.
In fact, it borders on criminal that they have not done so already.
There could be no Code Red, nor the dozens of other worms that have plagued the Internet in recent years, were it not for buffer overflows--programming bugs that have been around since the dawn of computing, and that have long been recognized as vulnerabilities hackers can exploit to spread security nightmares through networks.
And yet operating system developers in particular--the very companies to which we entrust the fundamental safety of our systems and data--have refused to invest the programming resources and time required to rid their code of these Achilles' heels.
There's a simple reason for this: It's hard, expensive work sifting through tens of millions of lines of code searching for buffer overflows. It's far cheaper to let hackers find and exploit unprotected buffers, then release a quick patch.
The problem with that kind of reactive solution is that we all pay a heavy price in corrupted data, clogged bandwidth and sheer frustration by the time the problem is repaired. Even worse, the decentralized Internet provides no means of communicating newly discovered dangers to each user of a vulnerable program, so many users never discover they need a patch until it's too late.
Buffer overflows, or overruns, are easily exploited holes in otherwise secure programs. A buffer is a fixed-size chunk of a computer's memory set aside to hold data temporarily. If a user or other source of input shoves more data into the buffer than it can hold, the excess "overflows" into adjacent memory.
This would be a mere nuisance, except that the excess data can overwrite whatever sits next to the buffer--including the information the processor uses to decide which instructions to run next--enabling a hacker to smuggle malicious code into the target software and trick the program into executing it.
Programmers have known for decades how to prevent this kind of bug: check the size of the input against the size of the buffer, then limit or filter what gets copied in. In the grand scheme of things, it takes only a few lines of code to add buffer checking. Yet programmers often neglect this crucial safety check, and the resulting vulnerabilities frequently survive multiple rounds of testing and debugging.
Developers say new code routinely includes checks on all buffers. Hackers, they claim, are typically exploiting unchecked buffers in legacy portions of the code. Even if true, this excuse is ridiculous on its face.
Microsoft and Sun, for example, think nothing of investing hundreds of millions of dollars to develop new bells and whistles for their products, yet they have failed to eliminate a simple bug that's been around for decades. If hackers can find and exploit unchecked buffers to make our lives miserable, clearly these giants must find and fix these buffers to protect us.
It is time for a proactive commitment on the part of the largest developers to eliminate this preventable plague--if for no other reason than it would be a very wise investment. What's at stake is our basic faith in the Internet as a trusted platform for commerce, finance, personal information and entertainment. Only the largest developers can ensure the safe cyberneighborhoods that will attract us all to their products.