Security: Lintel vs Wintel

There is no generalized risk metric worthy of the name - and very little available data on which to construct one. As a result, the most convincing general results come from examining kernel intrinsics - something that consistently produces a pro-Unix result.
Written by Paul Murphy, Contributor

In the PC community "security" just means defending against attacks aimed at destroying or misusing all or part of a computer system. In that context most of the complexities associated with trying to decide whether Wintel or Lintel will expose you to less security risk arise from the absence of suitable metrics.

I'll suggest something tomorrow, but today I want to look at two efforts to establish something effective. The first is CERT's Common Vulnerability Scoring System; the other is from a 2004 article by Nicholas Petreley.

CERT first. Here's part of how they describe their metric:

The Common Vulnerability Scoring System (CVSS) provides an open framework for communicating the characteristics and impacts of IT vulnerabilities. Its quantitative model ensures repeatable accurate measurement while enabling users to see the underlying vulnerability characteristics that were used to generate the scores. Thus, CVSS is well suited as a standard measurement system for industries, organizations, and governments that need accurate and consistent vulnerability impact scores. Two common uses of CVSS are prioritization of vulnerability remediation activities and in calculating the severity of vulnerabilities discovered on one's systems. The National Vulnerability Database (NVD) provides CVSS scores for almost all known vulnerabilities.

Look further and you can see the underlying equations. Two excerpts:

Impact = 10.41 * (1 - (1 - ConfImpact) * (1 - IntegImpact) * (1 - AvailImpact))

Exploitability = 20 * AccessComplexity * Authentication * AccessVector

f(Impact) = 0 if Impact = 0; 1.176 otherwise

...

ConfImpact = case ConfidentialityImpact of
             none:     0
             partial:  0.275
             complete: 0.660

Now, I don't know about you, but whenever I see something like that definition my instant assumption is that the methodologists have wrested control away from the practitioners - and with the usual results. In this case that first impression seems to be warranted: as nearly as I can make out, the entire model seems to be based on the belief that guesses can be elevated to fact merely through the application of double precision arithmetic.
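To make the arithmetic concrete, here is a minimal Python sketch of the CVSS v2 base-score calculation. Only the impact weights appear in the excerpts above; the remaining lookup tables and the final BaseScore combination are taken from the published CVSS v2 specification, so treat this as an illustration of the model rather than anything drawn from the article itself.

    # Minimal sketch of the CVSS v2 base-score arithmetic.
    # The impact weights come from the excerpt above; the other tables
    # and the BaseScore formula are from the public CVSS v2 specification.

    IMPACT_WEIGHTS    = {"none": 0.0, "partial": 0.275, "complete": 0.660}
    ACCESS_VECTOR     = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
    ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
    AUTHENTICATION    = {"multiple": 0.45, "single": 0.56, "none": 0.704}

    def base_score(conf, integ, avail, vector, complexity, auth):
        c, i, a = (IMPACT_WEIGHTS[x] for x in (conf, integ, avail))
        impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
        exploitability = (20 * ACCESS_COMPLEXITY[complexity]
                          * AUTHENTICATION[auth] * ACCESS_VECTOR[vector])
        f = 0.0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

    # A remotely exploitable, unauthenticated, total-compromise flaw:
    print(base_score("complete", "complete", "complete",
                     "network", "low", "none"))  # prints 10.0

Notice that every input is a three- or four-way guess, yet the output arrives with one decimal place of apparent precision - which is the objection above in a nutshell.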

Petreley's 2004 report comparing Windows and Linux security suggests eight sub-categories for each of three major sources of "security" risk and provides a guide for applying these to your own situation.

Here's his summary:

Elements of an Overall Severity Metric

Damage potential of any given discovered security vulnerability is a measurement of the potential harm done. A vulnerability that exposes all your administrator passwords has a high damage potential. A flaw that makes your screen flicker would have a much lower damage potential, raised only if that particular damage is difficult to repair.

Exploitation potential describes how easy or difficult it is to exploit the vulnerability. Does it require expert programming skills to exploit this flaw, or can almost anyone with rudimentary computer experience use it for mischief?

Exposure potential describes the amount of access necessary to exploit a given vulnerability. If any hotshot hacker (commonly referred to as a "script kiddie") on the Internet can exploit a flaw on a server you have protected with a firewall, that flaw has a very high exposure potential. If it is only possible to exploit the flaw if you are an employee within the company with a valid login ID, using a computer inside the company building, the exposure potential of that flaw is significantly less severe.

He provides eight categories of risk in each group. Here's a sample, this one for the "Damage potential" element:

Damage Potential

This metric is the most difficult to quantify. It requires at least two separate sets of categories. First, it takes into account how much damage potential a flaw presents to an application or the computer system. Second, the damage potential must be measured in terms of "what it means" to the company affected. For example, on a single technical metric a flaw that allows an attacker to read unpublished web pages is relatively minor if no sensitive information is present in the system. However, if an unpublished web page contains sensitive information such as credit card numbers, the overall damage potential is quite high even though the technical damage potential is minimal. Here are the most important factors in estimating technical damage potential for any given flaw, in order of severity from least to worst:

1. The flaw affects only the performance of another computer, but not significantly enough to make the computer stop responding.

2. The flaw only affects the attacker's own programs or files, but not the files or programs of other users.

3. The flaw exposes the information in a co-worker's files, but not information from the administrator account or information in any system files.

4. The flaw allows an attacker to examine, change or delete a user's files. It does not allow the attacker to examine, change or delete administrator or system files.

5. The flaw allows an attacker to view sensitive information, whether by examining network traffic or by getting read-only access to administrator or system files.

6. The flaw allows an attacker to gain some but not all administrator-level privileges, perhaps within a restricted environment.

7. The flaw allows an attacker to either crash the system or otherwise cause the system to stop responding to normal requests. This is typically a Denial Of Service (DoS) attack. However, the attacker cannot actually gain control of the computer aside from stopping it from responding.

8. The flaw allows an attacker to change or delete all privileged files and information. The attacker can gain complete control of the target system and do virtually any amount of damage that a fully authorized system administrator can do.
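The ranks above lend themselves to a simple encoding. Here's a purely illustrative Python sketch - the field names and the multiplicative combination are my assumptions, not Petreley's, since his report deliberately leaves the weighting to each organization's own context:

    from dataclasses import dataclass

    # Illustrative encoding of Petreley's three severity elements.
    # Each 1-8 scale mirrors his "eight categories of risk in each group".

    @dataclass
    class Vulnerability:
        name: str
        damage: int        # 1 (performance nuisance) .. 8 (full admin control)
        exploitation: int  # 1 (needs expert skills) .. 8 (script-kiddie easy)
        exposure: int      # 1 (insider with valid login) .. 8 (anyone online)

    def severity(v: Vulnerability) -> int:
        # Hypothetical combination rule: multiply, so a flaw must rate
        # high on all three axes before it ranks as critical.
        return v.damage * v.exploitation * v.exposure

    flaw = Vulnerability("unpublished-page read",
                         damage=5, exploitation=6, exposure=8)
    print(severity(flaw))  # 240 of a possible 512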

In the body of the report he recognizes, at least as I understand him, that this framework can support fully contextualized decisions but cannot reasonably be generalized into a simple yes/no decision guide for everyone.

Instead, he suggests that the more general route to a conclusion starts with the operating systems themselves - and that this comparison will always favor Lintel over Wintel, because Linux is Unix and shares its traditional separation between privileged and unprivileged operations, while Windows is a brand name applied to a wide range of products, all of them burdened by a separation-defeating need to maintain backwards compatibility.
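That architectural claim is easy to illustrate. Here's a minimal Python sketch of the privileged/unprivileged split he's pointing at, assuming a Unix-like system - the script must start as root to bind a low port, and uid 65534 is conventionally the unprivileged "nobody" account:

    import os
    import socket

    # A daemon holds root only long enough to do the one privileged thing
    # it needs - binding a port below 1024 - then permanently drops to an
    # unprivileged uid. From then on the kernel, not application goodwill,
    # keeps it away from system files.

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 80))   # requires root on a Unix-like system
    sock.listen(5)

    os.setgid(65534)             # drop the group first...
    os.setuid(65534)             # ...then the user; irreversible once non-root

    assert os.getuid() == 65534  # any later exploit runs unprivileged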

And that's his bottom line: on average, people applying Mr. Petreley's metric to their own situations will find that Lintel rates lower on the exposure part of the risk scale - making it generally the better bet.
