CCIA vs Microsoft - the core issues

The first in a three-part series looks at the recent CCIA criticism of Microsoft's software - was the report really the wake-up call that some have claimed?

The Computer & Communications Industry Association (CCIA) has been a long-time Microsoft opponent. The lobbying group filed numerous friend-of-the-court briefs during the antitrust trial in America, and is an active participant in the antitrust investigation being conducted by the European Commission. It is composed of a number of Microsoft's fiercest competitors, among them AOL, Sun Microsystems, Oracle, Intuit and Nokia.

Since the end of the American trial, the CCIA had pretty much fallen off the radar screen. Recently, however, it has managed to generate a bit of noise with "CyberInsecurity: The Cost of Monopoly," which is presented as "a wake up call that government and industry need to hear" regarding security issues in Microsoft's near-ubiquitous operating system. The report has garnered an unusual amount of attention, possibly because Bruce Schneier, author of Applied Cryptography and a generally recognised expert in the realm of cryptography, was included as one of the report's authors.

My respect for Mr Schneier's work, however, doesn't extend to ignoring flaws in reports to which he contributes. This is part one in a three-part series which rebuts the arguments made in the CyberInsecurity report. Today's instalment deals with the core issues, namely, the risks associated with software "monoculture" and complex systems. Part two is a collection of general criticisms relating to the report's content, and details its uncanny ability to put a negative spin on practically everything Microsoft does. Part three is my treatment of the proposed remedies, and closes with some parting thoughts. The columns will be published throughout this week.

Do note that the entire report is available online, so you can read it for yourself.

The risks of a software monoculture
"Protection from cascade failure is instead the province of risk diversification -- that is, using more than one kind of computer or device, more than one brand of operating system, which in turns assures that attacks will be limited in their effectiveness. This fundamental principle assures that, like farmers who grow more than one crop, those of us who depend on computers will not see them all fail when the next blight hits." (Page 11)

In other words, by having a diverse operating system environment, you prevent a virus that targets one platform from bringing down the entire infrastructure. The targeted platform might be laid low, but other platforms will live on to propagate the species...or just continue computing.

It's true that a monoculture has certain costs: shared risk creates a larger pool within which a computer virus might thrive. On the other hand, there are also real costs to the lack of a standardised computing architecture, which is the flip side of the monoculture detailed in the report.

The benefits of standardisation
As I discussed in my Tunney comments, software lacks the inherent standards found in other industries. Software APIs can take practically any shape imaginable, which means that the initial state of a young software market is extreme fragmentation.

This is a tremendous inhibitor to development, as a particular software product can only reach a small, platform-specific market. Such markets attract less developer attention, leading to higher software costs and fewer users. As a result, the market's natural tendency has been to standardise on one provider. That provider might start with only a slight lead over its competitors, but that slight lead will cause more developers to target the favoured platform, leading to greater economies of scale and lower costs, which attracts more customers and gives rise to the virtuous cycle that gave companies like Microsoft, and IBM before it, a dominant share of the marketplace.

With Windows, consumers have the widest hardware and software choice and lower costs due to economies of scale, and are guaranteed compatibility with practically any product on the market. Companies have a large pool from which to draw technical staff, all of whom benefit from the deeper knowledge that comes with the ability to specialise in one platform (Adam Smith would appreciate this). Employers also benefit from the fact that potential employees, if they have computer skills, will have those skills on Windows.

It's not just the Windows market, however, that realises cost savings in this fashion. Increasingly, the Unix market is organising around an open-source operating system named Linux. It is my opinion that this consolidation will continue, making Linux THE standard for the Unix development domain. Few would call this consolidation a negative thing, or suggest that government use its influence to stop it on the basis of national security concerns.

Similarly, Java's "Write Once, Run Anywhere" (WORA) promise is based on the ability to run the same executable on any platform that has a Java runtime installed. Though Java is certainly safer from a security standpoint than native development (no more buffer overflows), Java programs can still have coding flaws that have security implications. Such a flaw would exist, therefore, on every platform the application is run on. Sun Microsystems certainly hopes to make Java the de facto standard for application development. Yet, no one is suggesting that these ambitions should be curtailed in order to preserve platform diversity.
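The point can be made concrete. Below is a hypothetical sketch (the class and method names are my own invention, not from the report): Java's runtime bounds checking does stop the classic buffer overflow, yet an application-level logic flaw, here a deliberately broken path check, ships identically to every platform with a Java runtime.

```java
public class WoraFlaw {
    // A deliberately broken path check: it rejects "../" but misses the
    // Windows-style "..\" separator, so the same traversal flaw exists
    // on every operating system the application runs on.
    static boolean isSafePath(String path) {
        return !path.contains("../");
    }

    public static void main(String[] args) {
        byte[] buf = new byte[4];
        try {
            buf[8] = 1;                 // would be a buffer overflow in C
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("overflow caught by the JVM");
        }
        System.out.println(isSafePath("..\\secret"));  // prints true: the flaw is portable
    }
}
```

The runtime catches the memory error, but no amount of platform diversity helps with the logic flaw: it travels with the bytecode.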

Stan Liebowitz, a professor at the University of Texas at Dallas, estimated that just the break-up of Microsoft would have cost as much as $300bn over three years. I expect that the cost to the industry from government-enforced operating system diversity would be much higher. The "cure" in this case might be far worse than the disease.

In short, though monoculture has its own set of risks, the costs of those risks can be outweighed by the benefits of commodity operating systems.

Regarding software complexity
The report argues that complexity is the enemy of secure software. As a piece of software grows more complex, the code becomes harder to understand, and thus securing the code becomes that much more difficult.

There is certainly truth in this. An application with lots of "extras" creates more places where bugs with security implications might hide. Standard practice for critical-path systems is to remove extraneous components, leaving only what is needed for a particular task. This is what Google has done with the Linux operating systems that run on its custom hardware, creating a lean and fast environment that minimises the surface area upon which viruses might gain traction.

On the other hand, it is likely that most users of desktop systems will have networking, a user interface, a browser, media playing and other features installed. Microsoft provides defaults for these features, and those defaults tend to be quite popular. The issue, however, isn't that these defaults add more complexity-derived risk than would otherwise exist: most users would reject a "lean but secure" desktop system in favour of one with more features. The only risks created are those posed by software monocultures, and as I explained in the last section, the "costs" associated with those risks are often outweighed by the benefits of standardised APIs and economies of scale.

The report argues, however, that the complex system created by Microsoft's integrated platform makes it harder to iron out bugs. If no one can understand more than a fraction of a complex system, then, no one can predict all the ways that system could be compromised by an attacker. Though correct, this analysis is not directly applicable to Microsoft.

Microsoft, like most software companies in the world today, applies object-oriented principles in its software design, a fact made clear by the near-universal adoption of COM within Microsoft. Granted, this isn't a silver bullet that magically slays all software bugs, but it does imply that the CODE is separate for each component within Windows, whether or not the compiled code is distributed scrambled with other system DLLs (something done, in my opinion, to satisfy the "integration" requirement of past settlement decrees). In other words, I doubt that a programmer on the Internet Explorer team has to slog through GDI code to find the parts that relate to Internet Explorer. The IE development team likely deals EXCLUSIVELY with IE code, a division of labour that adds no more complexity to Windows OS maintenance than Microsoft Office does.
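COM itself is a binary, language-neutral standard, but the core idea, that callers depend on a published interface and never on another component's implementation code, can be illustrated with a minimal Java sketch. The names here (Renderer, Browser and so on) are my own hypothetical stand-ins, not actual Windows interfaces:

```java
// Analogous to a COM interface: the only thing a client ever sees.
interface Renderer {
    void drawText(String text);
}

// "Graphics team" code: its internals are opaque to every caller.
class SystemRenderer implements Renderer {
    public void drawText(String text) {
        System.out.println("[render] " + text);
    }
}

// "Browser team" code: programs against the interface alone,
// with no knowledge of SystemRenderer's implementation.
class Browser {
    private final Renderer renderer;
    Browser(Renderer renderer) { this.renderer = renderer; }
    void showPage(String html) {
        renderer.drawText(html);
    }
}

public class ComponentDemo {
    public static void main(String[] args) {
        new Browser(new SystemRenderer()).showPage("<h1>hello</h1>");
    }
}
```

Each team maintains only its own class; as long as the interface is honoured, neither needs to read, or even possess, the other's source.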

Of course, applications might interact in ways that create a security issue, but in this case, the advantage goes to a standardised system. With a standardised system, you can predict what configuration will tend to exist on a given computer, and that system will therefore respond in a more predictable fashion than one whose configuration can't be known in advance.

OEMs tend to prefer standard configurations, as they are well understood and easier to fix. The same applies to operating systems. An operating system with standard interfaces and components is a standard base that can be updated as needed. Patches are a reality for Linux as much as for Windows, and I would argue that the higher level of standardisation on Windows systems makes it easier to patch a wider swath of systems more fully than in a fragmented and diverse environment, where security bugs can hide within applications that an update-detection tool knows nothing about.

On a different tack, integrated features are what enable regular users to perform a number of advanced functions they would be unlikely to have discovered on their own. Windows consumers can, out of the box, log onto the Internet, browse Web pages, play music and streaming movies, and create home movies using just the features that come with Windows. Call these training wheels for the non-technical user, but just as training wheels lead to increased proficiency in riding a bicycle, Windows defaults provide entry to areas of technology that the non-technical might not have explored on their own. I would suggest that Microsoft's decision to turn Media Player into a competitive product (versus just the stripped-down tool of days past) has done more to boost the fortunes of digital media than any action on the part of third parties with a vested interest in the market. This is market-building, and it enables new companies to offer services in areas of technology that previously lacked a market of sufficient size to justify the expense of entry.

Default features also present a standard base that non-technical users can expect will always be present on every Windows system. Why has "vi" managed to persist as a text editor, even though its interface (IMO) is about as much fun as making a transatlantic call using tin cans and a very long piece of wire? Quite simply, Unix administrators and programmers have come to expect that every Unix OS they come across will have it. Such standardisation matters to technical users, who have the wherewithal and interest to investigate new technology. It matters all the more for non-technical users, as such standardisation is what makes it possible for them to navigate the computing universe.

Complexity in the form of a more integrated product can make it easier for bugs with security implications to hide. However, the costs are not as severe as they might appear at first glance, given that most computer users would have little use for a stripped-down, but highly secure, product. Likewise, the benefits of a high-feature, standardised configuration may outweigh the remaining costs associated with complexity.

John Carroll is a software engineer now living in Geneva, Switzerland. He specialises in the design and development of distributed systems using Java and .Net. He is also the founder of Turtleneck Software.