
Sloppy risk assessment raises web fear factor

Web security experts must separate actual risk from theoretical risk, says Mary Landesman
Written by Mary Landesman, Contributor

Security researchers must be able to distinguish between real and theoretical risks on the web, says Mary Landesman.

I recently came across an excerpt from a January 1989 article written by columnist Rachel Parker for InfoWorld. At the end of the column, Parker wrote: "The computer virus problem is real, and it represents some pretty mind-boggling problems... Exploiting the fear that surrounds the unknown detracts from credible efforts and casts a cloud on the entire industry."

Indeed. The line between exploiting fear and keeping the public informed can be a fine one. On one hand, the security industry has a responsibility to keep people abreast of threats. On the other, we also have a responsibility to measure risk accurately and to ensure we disseminate information in its proper context.

To avoid crossing the line into scaremongering, security experts must have enough experience to judge the situation and use the right tools to separate theoretical risk from actual risk.

There are parallels with IT risk management. IT managers need to assess which risks require immediate action, which risks can allow a more graduated response, and which risks are simply good to know about just in case.

For example, IT managers need to know which security patches address exploits already in the wild, which vulnerabilities are likely to be exploited, which flaws merely have some potential for exploit, and which will have little or no impact on their particular organisation.

When software vendors release patches, they assist with risk assessment by categorising them according to severity. Patches that have more immediate security implications are generally rated severe or critical, so they can be addressed more rapidly.
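
As a rough illustration of that kind of triage, the sketch below maps a patch to one of the response tiers described above. The class, field names, and thresholds are hypothetical and for illustration only; they are not any vendor's actual severity scheme.

```python
# Hypothetical patch-triage sketch: names and tiers are illustrative,
# not a vendor's or any organisation's actual severity scheme.
from dataclasses import dataclass

@dataclass
class Patch:
    name: str
    vendor_severity: str      # e.g. "critical", "severe", "moderate", "low"
    exploited_in_wild: bool   # exploit already circulating
    affects_our_stack: bool   # relevant to this organisation's systems

def triage(patch: Patch) -> str:
    """Assign a response tier along the lines described above."""
    if not patch.affects_our_stack:
        return "monitor"        # good to know, little or no impact here
    if patch.exploited_in_wild:
        return "immediate"      # exploits already in the wild
    if patch.vendor_severity in ("critical", "severe"):
        return "expedited"      # likely to be exploited
    return "scheduled"          # routine patch cycle

patches = [
    Patch("browser-update", "critical", True, True),
    Patch("media-codec-fix", "moderate", False, True),
    Patch("mainframe-patch", "severe", False, False),
]
for p in patches:
    print(p.name, "->", triage(p))
```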

Without the proper tools, the web can pose some specific challenges for risk assessment. When legitimate websites are compromised, anyone can — theoretically — be exposed. So the security researchers charged with investigating and reporting web threats have to ensure their risk-assessment plan includes the ability to distinguish actual risk from theoretical risk. Ideally, assessment of real traffic figures should take 'probable' out of the equation.
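
A minimal sketch of what measuring actual rather than theoretical exposure might look like is shown below. The log format, host names, and list of compromised sites are assumptions made for illustration; they are not ScanSafe's data or methodology.

```python
# Illustrative sketch only: the log format and compromised-site list are
# hypothetical, not real data or a published methodology.
from collections import Counter

compromised_hosts = {"example-compromised-site.com", "another-victim.org"}

def measure_actual_exposure(log_lines):
    """Count requests that actually reached compromised hosts,
    rather than assuming everyone on the web was exposed."""
    hits = Counter()
    total = 0
    for line in log_lines:
        # assumed format: "<timestamp> <client-ip> <host> <path>"
        parts = line.split()
        if len(parts) < 4:
            continue
        total += 1
        host = parts[2]
        if host in compromised_hosts:
            hits[host] += 1
    return total, hits

sample_logs = [
    "2009-06-01T10:00:01 10.0.0.5 example-compromised-site.com /index.html",
    "2009-06-01T10:00:02 10.0.0.6 news.example.com /story",
    "2009-06-01T10:00:03 10.0.0.7 another-victim.org /promo",
]
total, hits = measure_actual_exposure(sample_logs)
print(f"{sum(hits.values())} of {total} requests touched compromised sites")
```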

Of course, a number of more subjective methods can also be used as a sanity check. For example, real-time traffic assessment can be combined with numbers obtained from verifiable sources, popular forums can be checked to see whether victims are discussing the attacks, traffic-metrics sites can be consulted to determine the popularity of the compromised sites, and so on.
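
The sketch below shows one way such a cross-check could be wired together: it simply asks whether the secondary signals point the same way as the measured traffic. The signal names, thresholds, and agreement rule are assumptions for illustration, not a published scoring method.

```python
# Rough sanity-check sketch; thresholds and the agreement rule are
# illustrative assumptions, not an actual scoring method.
def sanity_check(traffic_hits, partner_hits, forum_mentions, site_rank,
                 high_traffic_threshold=1000):
    """Check whether secondary signals point the same way as measured traffic."""
    traffic_high = traffic_hits > high_traffic_threshold
    secondary = {
        "verifiable_sources_agree": (partner_hits > high_traffic_threshold) == traffic_high,
        "forum_buzz_matches": (forum_mentions > 10) == traffic_high,
        "site_popularity_matches": (site_rank is not None and site_rank < 100_000) == traffic_high,
    }
    verdict = "consistent" if sum(secondary.values()) >= 2 else "re-check figures"
    return traffic_high, secondary, verdict

traffic_high, secondary, verdict = sanity_check(
    traffic_hits=5000, partner_hits=4200, forum_mentions=35, site_rank=20_000)
print(traffic_high, secondary, verdict)
```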

I've found that if the traffic numbers are high, there is generally a corresponding amount of buzz about the attacks in various online forums, the compromised sites tend to be more popular, and our verifiable sources are able to confirm our numbers.

And the opposite holds true. When logs indicate attack traffic is low, there is little buzz in forums, the sites involved have very low popularity rankings, and verifiable sources see correspondingly low numbers.

Precise risk assessment is critical. If researchers failed to assess risks accurately before reporting, they would be issuing non-stop alerts. And that, as InfoWorld's Rachel Parker so succinctly put it, would be "exploiting the fear".

Mary Landesman is the senior security researcher for ScanSafe.
