And it's not just Microsoft that's having trouble with buffer overflows. Carnegie Mellon's CERT also announced a buffer-overflow vulnerability in Oracle's supposedly unbreakable Oracle 8i and Oracle 9i servers. So what exactly is a buffer overflow, and why haven't programmers gotten rid of it?
FIRST, A FEW SIMPLE definitions. A stack is a data structure in which the most recently added item is the first removed; a buffer is a temporary storage area. If a program sends a string of data to a fixed-size buffer on the stack and the string fits, everything's fine, and the program works as designed. But if an attacker sends a string that is longer than the buffer can hold, the excess data spills past the end of the buffer into adjacent memory. And if the attacker is good at this, he or she can overwrite the saved return address on the stack with another value, so that the attacker's malicious code is executed instead of the original program.
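The vulnerable pattern the paragraph describes can be sketched in a few lines of C. This is an illustrative example, not code from any of the products mentioned; the function name is made up. Note that `strcpy()` does no length checking at all:

```c
#include <stdio.h>
#include <string.h>

/* A minimal sketch of the vulnerable pattern: `name` is a
   fixed-size buffer on the stack, and strcpy() copies into it
   with no length check. Any input longer than 15 characters
   plus the terminating NUL runs past the end of `name` and
   overwrites adjacent stack memory -- eventually the saved
   return address itself. */
size_t greet(const char *input) {
    char name[16];               /* fixed-size stack buffer */
    strcpy(name, input);         /* no bounds check: the bug */
    printf("Hello, %s\n", name);
    return strlen(name);         /* length actually copied */
}
```

With a short, friendly string the function behaves normally, which is exactly why bugs like this survive testing; only a deliberately oversized input triggers the overflow.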
Since the original program enjoys system privileges, a successful attack might allow a remote user to take over your PC--letting the malicious code add, delete, and execute programs without your knowledge. Fortunately, this is not so easy. A fair number of buffer overflows simply crash the affected program or the operating system. The trick is for the malicious user to find not only the right return address to overwrite, but also code that will run correctly once it gains the program's privileges.
One exploit that successfully took advantage of a buffer-overflow vulnerability was the Morris worm. On Nov. 2, 1988, Robert Morris, Jr., found and exploited a buffer overflow in the Unix "finger" service, which read network input without checking its length. His oversized request tricked the service into executing code that copied, linked, and ultimately ran the worm on the target system. From there, the Morris worm took off unchecked, spreading far beyond even its creator's original intent.
If you're really curious about how malicious users create these exploits, the hacker group Cult of the Dead Cow has an online tutorial.
WHAT CAN BE DONE to prevent this type of attack? When writing code that fills a buffer, software developers could decide in advance how the program should respond to oversized input. For example, if an overflow condition occurred, the program could throw away the excess data, halt all operations, or provide the user with a warning message.
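The first of those responses--discarding the excess data--might look like the sketch below. The function and constant names here are illustrative, not from any particular codebase; the key points are that the copy never exceeds the buffer's capacity, the result is always NUL-terminated, and the caller is told when truncation happened so it can warn the user:

```c
#include <string.h>

#define NAME_MAX_LEN 16

/* Copy `src` into a fixed-size buffer, discarding any excess
   data rather than overflowing. Returns 1 if the input fit,
   0 if it had to be truncated (so the caller can warn the
   user or halt, per the strategies described above). */
int copy_name(char dst[NAME_MAX_LEN], const char *src) {
    size_t len = strlen(src);
    if (len >= NAME_MAX_LEN) {
        memcpy(dst, src, NAME_MAX_LEN - 1);
        dst[NAME_MAX_LEN - 1] = '\0';  /* always NUL-terminate */
        return 0;                      /* excess data discarded */
    }
    memcpy(dst, src, len + 1);         /* fits, include the NUL */
    return 1;
}
```

Truncation is the gentlest of the three responses: the program keeps running, but the attacker's extra bytes never reach the stack.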
Perhaps an even better approach would be for the software to automatically check the size of data going into each buffer. Such checks, however, add code and could slow the program down somewhat. Given the consequences of a buffer-overflow vulnerability, a slight performance trade-off might be worth it.
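The standard C library already offers length-checked routines that do exactly this kind of automatic size check. For instance, `snprintf()` takes the destination buffer's capacity and never writes past it, where the older `sprintf()` happily would. A small wrapper (the function name is illustrative) shows the idea:

```c
#include <stdio.h>

/* Format a greeting into a caller-supplied buffer using
   snprintf(), which enforces the capacity `cap` and never
   writes past the end of `dst`. snprintf() returns the length
   the full string *would* have had, so a return value >= cap
   tells the caller that the output was truncated. */
int format_greeting(char *dst, size_t cap, const char *name) {
    return snprintf(dst, cap, "Hello, %s!", name);
}
```

This is the performance trade-off the paragraph describes in miniature: every call pays for a length check, but an oversized `name` can no longer scribble over the stack.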
The trouble is, even if we started imposing stricter programming standards, or began certifying software through some non-partisan software clearinghouse, as some have proposed, a lot of existing systems would remain vulnerable for years to come. The question is, is it too late to stop the tide of buffer overflows? I don't think so. But it will take a long time to defeat this menace.
Would you accept slower performance in exchange for protection against buffer overflows? Do you have any suggestions for how to beat this threat? TalkBack to me!