
Is Microsoft liable for security holes?

TechUpdate's David Berlind discusses Microsoft's potential liability in the light of Code Red and Nimda.
Written by David Berlind

In legal circles, there's a well-known group of law firms that's commonly referred to as the "plaintiffs' bar." The most distinguishing characteristic of the plaintiffs' bar is that its members build entire practices out of finding people and businesses that have been wronged, and filing class action lawsuits on behalf of those plaintiffs. It should come as no surprise that the targets of these suits typically have deep pockets.

With all sorts of alleged harm being inflicted all the time, the plaintiffs' bar moves from one issue to the next, with plenty of opportunities to build high-profile cases. Perhaps most notable of these issues is the action against the tobacco industry. Another case had to do with exposure to the EMF produced by high-tension wires. The plaintiffs' bar is a busy bunch, always on the lookout for the next big harm. When it started to look like the Y2K bug was going to bite some companies on the bottom line, the plaintiffs' bar boned up on its computer literacy.

But the Y2K "harm" went from bug to bust, and there was no one to sue. Left behind in the aftermath of the big hurt that never happened was a battalion of lawyers with a lot of newfound computer knowledge it didn't want to go to waste.

Enter Microsoft.

On the heels of the most recent compromise in security that targeted Microsoft technologies (one of many security lapses), and the omnipresent threat from cyberterrorists (see story), I started wondering just how long it would take the plaintiffs' bar--fairly bursting with computer knowledge--to turn its sights on Microsoft. For most of us non-lawyer types, Microsoft certainly appears to be liable.

Citing the biggest and most successful security intrusions (Melissa, Anna Kournikova, Love Bug, and Code Red), Peter Tippett, CTO of managed security service provider TruSecure, estimates that the total dollar damage incurred as a result of worms and viruses exploiting weaknesses in Microsoft products could be as high as $4 billion. "Compared to Code Red"--which was responsible for three of those four billion--"Nimda will be even more," says Tippett. "Nimda cleaned the clock of Code Red. It generated 100 times the attack-related traffic that Code Red did, and did so in about an hour. Code Red took a few days. Nimda may cause ten times the damage."

With that much "harm" and Microsoft's virtually bottomless pockets, it would appear to be a match made in heaven for the plaintiffs' bar.

So, I called a few lawyers and all fingers pointed to Jane Winn, the author of the leading treatise on electronic commerce law (Law of Electronic Commerce) and professor of law at Southern Methodist University. According to Winn, if there is a case against Microsoft, the plaintiffs will have to prove that the company was negligent. "Currently," says Winn, "we don't have all the elements to prove negligence."

The elements Winn refers to make up a four-point acid test for determining whether a company was negligent. The first point is called "duty of care." In layman's terms, think of a homeowner's responsibility to keep ice off the sidewalk. If you don't, and someone slips and breaks their neck, this first test of duty of care is satisfied because you are responsible for keeping your sidewalk ice-free. To date, no court has said that Microsoft has a duty of care when it comes to incorporating security into its products. Does Microsoft have that duty? Do the disclaimers on its products adequately protect it from liability? These are the first questions the plaintiffs' bar must answer.

Once the duty of care is established, then a breach of that duty must be proven. Here, the test is whether a reasonable programmer would have engineered the environment to be secure. The emphasis is on reasonable. The programmer need not be perfect.

The next test is one of causality. Once Microsoft's duty of care has been established and breach can be demonstrated, someone will have to prove that Microsoft--and Microsoft alone--caused the damage. This test might fail if Microsoft could prove that there were other measures the plaintiff could have taken to prevent the damage.

The fourth and final test is whether damage was actually done. The plaintiff will have to show that they were harmed in some way once the previous three tests have been satisfied.

What's the precedent?
As with many legal questions, the courts look for a precedent. The Y2K bug might have helped set one in the technology business, but it was a bust. According to Winn, however, a couple of cases in the shipping industry may serve as precedents for technology. That connection is echoed in a document that shows how one of those cases might have served as a precedent for Y2K liability.

The precedent is known as the T.J. Hooper case. Basically, the case involved a tugboat that exposed the barge it was towing to a storm. The barge and its cargo sank, and the plaintiffs needed to show that the four conditions of negligence were satisfied. Of the four, the breach-of-duty test was the hardest to pass. The tugboat operator did not have a functional radio (the technology) and therefore had no way of knowing about the approaching storm. The plaintiffs had to prove that a reasonable tugboat operator (not a perfect one) would have a functional radio on his or her boat. This was especially difficult because few tugboat operators had radios on their boats. It came down to a question of what was reasonable--not necessarily what was commonplace.

The judge ruled that the test was passed, saying, "[I]n most cases reasonable prudence is in fact common prudence; but strictly it is never its measure; a whole calling may have unduly lagged in the adoption of new and available devices. It [the industry] never may set its own test, however persuasive be its usages. Courts must in the end say what is required; there are precautions so imperative that even their universal disregard will not excuse their omission."

Back to layman's terms: If all it takes is a $100 radio to protect millions of dollars of assets, then it is reasonable for a tugboat operator to be expected to have that $100 radio, regardless of what the common practice is.

So, is Microsoft liable?
The T.J. Hooper judgment, which some members of the legal community consider harsh, may very well set the precedent for the software industry. Clearly, in today's interconnected world, any reasonable programmer would look to secure the software he or she is engineering. This is evident from all of the configurable security options that are available to us in everything from operating systems to server-based applications to browsers. But whether a reasonable programming effort can make that software 100% bulletproof remains to be seen.

In Microsoft's case, I wonder what the impact of its architectural decisions might be. By design, it built a software infrastructure that thrives on having access to the very system resources many wanted secured. ActiveX exemplifies this, as do Microsoft's extensions to the Java Virtual Machine. Those extensions poked holes in the sandbox that Sun created to keep local and Internet-delivered code from intermixing. Would any reasonable programmer who designed and built such an architecture also guarantee its security? Are even the most perfect programmers capable of guaranteeing that security? Recall that Sun's sandbox had its own imperfections.
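
To make the sandbox idea concrete, here is a minimal sketch--not Sun's or Microsoft's actual code--of how the Java security model of that era was meant to work: the host (say, a browser running an applet) installs a SecurityManager, and untrusted code that tries to touch local resources such as the file system is turned away. The file path below is hypothetical, and the mechanism shown (java.lang.SecurityManager) has since been deprecated in modern Java.

```java
import java.io.FileReader;

// Minimal sketch of the Java "sandbox": with a SecurityManager installed and
// no policy granting file permissions, an attempt by (simulated)
// Internet-delivered code to read a local file is rejected.
// Note: SecurityManager is the era-appropriate mechanism; it is deprecated
// in recent Java releases.
public class SandboxSketch {
    public static void main(String[] args) {
        // A browser hosting an applet would install a security manager like this.
        System.setSecurityManager(new SecurityManager());
        try {
            // Hypothetical local file, standing in for a resource that
            // untrusted code should not be able to touch.
            new FileReader("C:\\secret.txt");
            System.out.println("Sandbox breached: local file was readable.");
        } catch (SecurityException e) {
            System.out.println("Sandbox held: " + e);
        } catch (Exception e) {
            System.out.println("Other error: " + e);
        }
    }
}
```

The point of the sketch is simply that the fence exists in software; as described above, Microsoft's extensions traded some of that fence for functionality.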

While TruSecure's Tippett points out that the successful attacks against Microsoft software, and the resulting damage, far outdistance those waged on Unix or Linux, he says all operating systems have the same fundamental problem. Calling it his "rule of complexity," Tippett says, "The more complex the system, the more vulnerabilities it has. The only way it can be 100% secure is if there are only a couple hundred or thousand lines of code that one programmer can track. These operating systems have millions of lines of code--too much for any one programmer to track--and could potentially have 1,000 times the number of vulnerabilities that have already been exposed."

Maybe the even bigger question is whether Microsoft should have designed and built such an architecture if a reasonable programmer knew it could never be 100% secured. It's like building a sidewalk when you know you can never keep it ice-free. If Microsoft could prove that other operating system vendors did the same thing, perhaps it would be considered a reasonable practice. Then again, most tugboats didn't have working radios.

No doubt, you will have your opinion on whether the four tests are passed. As Winn says, "There are still some missing pieces before Microsoft can be sued. Within our working lifetimes, there's no question that failure to maintain an appropriate level of computer security will be the basis for legal liability, but we're not there yet. We don't have all the elements to prove negligence."

While we wait to see if we get "there," a lot of TechUpdate readers have written to me wondering just what sort of gluttons for punishment we are. Asks one reader, "Just how much damage has to happen before we start considering an alternative like Linux?"

Good question.

Related story: In the wake of Nimda, Gartner is recommending that businesses hit by both Code Red and Nimda investigate less vulnerable alternatives to Microsoft's Internet Information Server.
