The IT security industry has come to a frank realisation that the current approach to preventing malware is simply not working. Is whitelisting, which is the reverse of our current approach, the answer?
Whitelisting is the process by which only pre-approved applications are able to execute on a network, while unknown and unwanted ones are blocked. It is the opposite of today's approach, by which applications are free to run unless an administrator has moved to block them.
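In its simplest form, application whitelisting amounts to checking an executable's fingerprint against a list of known-good hashes before allowing it to run. A minimal sketch, using a hypothetical hash allow-list (the names and the sample hash here are illustrative, not from any real product):

```python
import hashlib

# Hypothetical allow-list of SHA-256 digests of pre-approved executables.
# A real product would distribute and update this list centrally.
APPROVED_HASHES = {
    # SHA-256 of an empty file, included purely as an example entry
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_allowed(path: str) -> bool:
    """Return True only if the file's digest appears on the whitelist.

    Anything unknown is blocked by default -- the reverse of blacklisting,
    where anything not explicitly flagged is allowed to run.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_HASHES
```

The key property is the default: an application absent from the list is denied, so novel malware fails the check without ever needing to be identified first.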
Speaking at the AusCERT 2008 security conference, Graham Ingram, general manager of AusCERT (the Australian Computer Emergency Response Team), said today's blacklisting approach is simply not working. Defences against malware, he said, can be completely undermined "by the click of a mouse or the enter key of a user".
Scott Charney, vice president of the Trustworthy Computing group at Microsoft, said "most people who run machines actually don't know what is executing on their machine".
"I think [whitelists] are a natural progression," said Ingram. "I think the realisation [is] that blacklisting only had a limited life and we're getting towards the end of that."
"I am not so sure that we can get to a place of feeling confident in our infrastructure without doing whitelisting," added John Stuart, chief security officer of Cisco Systems.
While most at the conference agreed that whitelisting is the only available option, the model by which the industry goes about implementing it is the subject of debate.
Security vendor Lumension Security (formerly PatchLink) hopes the problem can be addressed at the application layer, with future security software tools incorporating the principles of whitelisting.
These tools, according to Andrew Clarke, senior vice president of Lumension Security, would ensure that "if someone is introducing a rogue application into an organisation and it's not on the whitelist and it's not a known good, it won't run."
But Microsoft advocates taking the whitelist concept further.
"We really do need an environment where things cannot execute without the user making certain choices," says Microsoft's Charney. "There are some fundamental engineering changes that have to happen."
Security, says Charney, needs to be built into the "trusted stack" — incorporated not just in software but in hardware.
"We have to start rooting trust in the hardware, because it is easier to manipulate software than hardware," he told ZDNet.com.au. "You'll see more and more hardware-linked functionality like BitLocker in Vista."
BitLocker is a function within enterprise versions of Windows Vista that encrypts the hard disk and only allows it to work on a specific machine. It can also be set up for user authentication, so a computer will only boot after the user enters a unique key stored on USB.
BitLocker is based on the TPM (Trusted Platform Module) standard developed by industry consortium, the Trusted Computing Group. A TPM is a piece of silicon that is attached to the computer's motherboard and handles security functions such as password verification or digital certificate exchange. Being a piece of hardware rather than software, it is arguably less vulnerable to unauthorised misuse.
Further into the stack, Charney advocates that operating systems need to be bound with applications from a security perspective. Applications developed for a given operating system, he said, need to in some way be approved by the operating-system vendor as being safe for use.
"We need to bind operating systems and applications to that hardware so if it's tampered with, people know," said Charney. "We need to get applications signed, and make the signing process both more robust and harder to circumvent."
"We'll need a reputational platform," he asserted. "Software may be signed by someone you trust, someone you don't trust, or someone you don't know. When it's someone you don't know, how do you make a trust decision? We have to focus on all of those things."
Users, of course, would be rightfully concerned if Microsoft or other operating-system vendors pitched themselves as the sole judge of whether any given application was reputable and 'trustworthy'. For a competitive landscape, as past antitrust decisions have underlined, it is essential that users retain a level of choice with regard to applications.
Charney said that whatever model is put in place, users should be part of the trust process, so long as the industry is giving those users "more information" on which to base their decisions.
Cisco's Stuart said the strategy Microsoft is pursuing is, in effect, whitelisting: perhaps just by a different name.
"If you have a high degree of confidence in the changes you were making, and you have hardware trust up to software, then you've got a high degree of confidence of everything that is installed," he said. "So you have got a certificate of authenticity, if you will.
"If a piece of malware comes along, clearly it is not going to have that authenticity, and so it's not whitelisted. [While this is] not called whitelisting, it is effectively doing the same thing. It's about behavioural analysis of software as it's running, in effect whitelisting applications and whitelisting operating systems, and that's the next generation [of defence]."
"We've got to do something," said AusCERT's Ingram. "It's going to be a much more difficult concept to implement but I think we can work with it."
"We're starting to understand what the problem is but that doesn't mean we have any easy fixes," he concluded. "Some of the speakers here [at AusCERT 2008] have said openly and honestly, 'We haven't got it right, we've got to change our way of thinking if we're going to get on top of this'."
ZDNet.com.au's Liam Tung contributed to this report.