How effective is endpoint security?

Summary: Antivirus software manufacturers all claim to protect us against threats, but how well do they actually perform? We put six popular business internet security packages to the test.

Security software has long been considered a requirement for the Windows environment, whether in the enterprise or at home. Malware can not only put sensitive information at risk, it can also be an expensive nuisance, wasting precious time and bandwidth.

Security vendors claim to protect us against these threats; from a marketing perspective, their products are silver bullets that will stop malware in its tracks. When you look at how they perform in the real world, though, the situation is quite different.

How antivirus software protects you

Antivirus software uses a mixture of signatures and heuristics to detect malware.


Signatures are identifiers written specifically for one piece of malware. They can be as simple as locating a fixed string within a binary, or as complex as fingerprinting many different sections of a file and examining the relationships between them. Well-written signatures have a low false-positive rate, but they must be crafted manually for each piece of malware and can often be bypassed by minor changes to it.
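
To make the idea concrete, here is a minimal sketch of string-based signature matching in Python. The signature names and byte patterns are invented purely for illustration; real engines use far richer patterns and file fingerprints than plain substrings.

```python
# A minimal sketch of string-based signature matching.
# The signature database below is invented for illustration;
# real engines use far richer patterns than plain substrings.
SIGNATURES = {
    "Example.Dropper.A": b"\xde\xad\xbe\xef|drop_payload",
    "Example.Keylogger": b"SetWindowsHookExA|WH_KEYBOARD",
}

def scan_file(path: str) -> list[str]:
    """Return the names of any signatures found in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]
```

Flipping even a single byte inside a matched pattern defeats a signature like this entirely, which is exactly the weakness described above.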


Heuristics are algorithms that attempt to understand what a program will do to figure out if that behaviour will be malicious. They do not need to be manually written for each threat, which means that they can potentially block an unknown piece of malware. Since the heuristics are never perfect, though, they generally have a higher false positive rate than signatures.
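
By way of contrast, here is a toy static heuristic, again in Python. The suspicious API list, the weights and the entropy threshold are all invented for illustration and are not drawn from any real product.

```python
# A toy static heuristic: score a file on suspicious traits.
# The API list, weights and thresholds are invented examples.
import math

SUSPICIOUS_APIS = [b"CreateRemoteThread", b"WriteProcessMemory", b"VirtualAllocEx"]

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; packed or encrypted data scores near 8."""
    if not data:
        return 0.0
    total = len(data)
    counts = (data.count(bytes([b])) for b in range(256))
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def heuristic_score(data: bytes) -> float:
    score = 2.0 * sum(1 for api in SUSPICIOUS_APIS if api in data)
    if entropy(data) > 7.0:  # high entropy often indicates packing or encryption
        score += 3.0
    return score

# The detection threshold is the false-positive dial: lower it and more
# unknown malware is caught, but more clean files are flagged as well.
THRESHOLD = 4.0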


To find new malware, antivirus companies either maintain their own research teams or purchase research from other companies. Many also run "honeypots": systems configured in a purposely insecure fashion and heavily monitored for signs of compromise. The companies examine any attempts to break in and write signatures accordingly.
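
As a rough illustration of the honeypot idea, the sketch below simply listens on a port, accepts anything that connects, and logs the first bytes sent. The port number and log format are arbitrary choices; a production honeypot would emulate vulnerable services and capture far more detail.

```python
# A bare-bones honeypot sketch: listen, accept anything, log it.
# The port and log format are arbitrary illustrative choices.
import datetime
import socket

def run_honeypot(port: int = 2323) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        conn.settimeout(5.0)
        try:
            first_bytes = conn.recv(4096)
        except socket.timeout:
            first_bytes = b""
        finally:
            conn.close()
        stamp = datetime.datetime.now().isoformat()
        print(f"{stamp} connection from {addr[0]}: {first_bytes!r}")
```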

The Race to Zero competition at last year's DEFCON Hacker Conference pitted contestants against a large variety of antivirus products. The winner was the first team to successfully modify the supplied malware samples so that they bypassed every antivirus product tested. It took the leading teams only a matter of hours to slip every sample past every suite.

The current threat landscape

Nowadays much malware is polymorphic: it automatically rewrites its own internal code, carrying out its own race to zero and producing billions of possible permutations to evade detection. Effectively, this means signature-based protection no longer has the impact it once did; it simply can't keep up. Other forms of protection used in concert, from heuristics to whitelisting to Symantec's new Quorum and Insight technologies, have become more important, but none is perfect.
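
To see why signatures can't keep up, consider the toy mutation scheme below: the same payload XOR-encoded under a fresh random key produces different bytes on every run, so a signature written against one encoding misses the next, even though the decoded behaviour is identical. The payload is an invented stand-in, and real polymorphic engines are vastly more sophisticated.

```python
# Toy polymorphism: re-encode the same payload with a fresh random
# XOR key so every copy has different bytes, defeating signatures
# written against any previous copy.
import os

PAYLOAD = b"pretend this is malicious code"  # invented stand-in

def mutate(payload: bytes) -> bytes:
    key = os.urandom(1)[0]
    # Prepend the key so a decoder stub could recover the payload.
    return bytes([key]) + bytes(b ^ key for b in payload)

def decode(sample: bytes) -> bytes:
    key = sample[0]
    return bytes(b ^ key for b in sample[1:])

a = mutate(PAYLOAD)
b = mutate(PAYLOAD)
while b[0] == a[0]:  # re-roll the rare key collision
    b = mutate(PAYLOAD)

assert a != b                               # different bytes every time...
assert decode(a) == decode(b) == PAYLOAD    # ...identical behaviour
```

With one byte of key there are only 256 variants; real polymorphic engines mutate entire decoder stubs, which is where the billions of permutations come from.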

It's not only the malware that evolves — the motivations for creating it have changed too. Whereas once upon a time malware was created for notoriety alone, the major impetus today is money. Organised crime funds the efforts of malware authors to find new and inventive ways of getting into your system.

In short: no solution will protect against everything. While installing desktop security software will increase the baseline security of your system, you'll still need to keep your software patched, design your network with security in mind, implement hardware firewalls, and generally follow good security practice.

Common types of malware

There are a few common classes of malware that we'll define for our purposes here, based on how they compromise a system:

Viruses: viruses replicate themselves, usually by infecting files, such as office documents or PDFs. This term is also sometimes used to refer to all malware.

Trojans: trojans are malicious programs sent to the user that rely on the user to run them. The user may believe the file is a crucial document that needs reading, a birthday message from a friend, or even an illegal software crack. The file may even open as normal, but in the background a program runs to take control of the system or steal information. Trojans are most commonly associated with email, though they can also be loaded on a website, a USB key, a CD-ROM (using the autostart feature) or even an iPod or camera. The CANVAS and Metasploit tests used in this article could be classed as trojans.

Worms: these are the big ones that make systems administrators sweat. A worm runs an exploit (or several) against widely used software in order to gain access to a system; once in, it repeats the process in an attempt to compromise more systems. The Slammer and Blaster worms had a huge impact on the internet, and recently the Conficker worm has been causing enormous issues.

The defence

Enterprise editions of antivirus suites all come with a similar set of features: the core antivirus engine, a firewall/packet filter, and web and email scanning. All of them use both signatures and heuristics to detect malware, and all include a management console, installable on a separate server, that allows easy administration of every installation. Our examination is an attempt to separate the leaders from the followers.

About the Author
Securus Global is one of the region's leading information security consulting and testing organisations. It has been working with many of Australia's and the world's leading organisations since 2003, consulting on and testing business security as well as working with technology vendors to improve the security of their products.


Talkback

17 comments
  • Serious business zd

    Is this an apology for last week's same review, ZD?

    You covered up well. Well done to get someone who seems to know their stuff with Securus!
    anonymous
  • These tests seem flawed...

    There are numerous prima facie problems with these tests and the general methodology used. The default settings have been changed or tinkered with to suit the test environment (e.g. it is not reasonable to disable on-access settings just to introduce samples in a different way than they would have been delivered in the wild). The tester has selected or hand picked the samples (are they in the wild?). There appears to be no false positive testing. On the upside the tester has at least acknowledged some of the problems with anti-malware products (they are not perfect!), as well as mentioning some general limitations to these tests. Yet the bottom line is that the tests do not appear to be based on a repeatable scientific method and so the results should be considered inaccurate (at least in part anyways).
    anonymous
  • Is this a joke?

    So basically you used over 2000 "malicious files" that don't look like they've been analysed or verified, and product virus definitions were updated at different dates... hardly a fair and comparable test for starters! Nice work ZD!
    anonymous
  • Fantastic

    This is great.

    Good enough that I think I hear PR people trying to make it sound disreputable to save face.
    anonymous
  • Are you Securus PR or ZD staff?

    Your comment offers no explanation why this is great! I don't know who the other posters are, but they have some good points about the lack of credibility of these tests. The positive comments don't give any feedback on why the tests are so good. Repacking and making new threats to test a product is not really ethical or responsible, you can test using zero day proper threats if you have the know-how to get hold of them. Zero day test sets would have been the best option here as these are the real and actual threats out there in user land, so the products would have been tested against something meaningful! IMHO these are fairly poor quality tests, wrapped up in technical jargon. I think the readership of ZD will see the tests for what they are. I am certainly not vendor PR, I know these products miss things and I just want to see fair tests. Just an opinion.
    anonymous
  • Just liking it!

    Why do people have to expand on liking it? I don't need to justify. I read it, I decide if it is good and, if it is and worth commenting, I will.

    Ref to 0days, sounds like 0day type testing was done or did I miss something. Not sure if anyone else has done this like here before. What is your definition of an 0day?
    anonymous
  • Just like it 2!

    You should go after Facebook also where they allow a thumbs up for a post. Surely people should need to expand on why they are giving a thumbs up! Get onto it.

    You work hard to try to explain you don't know who the other posters are. Interesting. :P
    anonymous
  • ?

    FFS, ZDNet get *Securus*, a well known name in the local pen test community, to do a detailed and technically rigorous assessment, and all you can do is bitch and nitpick and miss the whole point of the review.

    The test focused on targeted attacks. Does it not stand to reason that a targeted attack would feature customised code to make the malware more potent? And what about malware that self-mutates? Isn't it 'fair' to test the heuristics model of well known AV products?
    anonymous
  • SG Comments

    Knowing the SG guys, I did make an assumption that ZD did dumb this article down. You can just tell. The guys would have dumped a load of tech talk into this which I expect would have been taken out to make this more readable for the audience (they don't do it for me!). I know the team and I think some PC and advertiser protection would have been included where they broke certain products but ZD may have thought they should not report that. If they had not, I would be surprised!

    I thought the Computerworld story on the iPhone stuff was funny. Securus did that hack for Ruxcon in 2008. Go the Aussies!!
    anonymous
  • Just don't like it!

    This review sucks. Period.
    anonymous
  • The same person...

    Obviously @Just like it, @Just like it 2 and @? are the same person. So I reply to all of his / her posts:

    @Just like it, you are entitled to just like it, but surely if you 'just like it' you have at least one sensible reason - or do you just decide based on what day of the week it is (this also applies in reverse to @Just don't like it, below)?

    @Just like it 2, you are comparing apples and oranges (as is the tester in this review by using products that have been updated at different times). If you compare these comments to a general Facebook post, then you are in the wrong place: this is meant to be a focused business review in a respected publication on which people can base a purchasing decision, not a thumbs up for your mates on a social networking site. Plus, these may be 0day threats, yet they are falsely manufactured zero-day threats and not real threats that are actually out there (unless you - SG - have released them???).

    @?, this is not a technically rigorous review; just look at the above comments to get a true insight. If it is nitpicking to expect an accurate report based on real verified threats and a good reproducible test method, then yes, I'm nitpicking! It is more than fair to use customised code in a targeted attack, yet this should be clearly stated as the objective and the parameters of the test should be clearly published for scrutiny (and the inference made clear).

    I sincerely hope you (all) stop posting to defend your tests, and instead answer the points raised by the other posters. I have nothing against SG and am sure you are a very good and respected pen testing company; just learn to take a little constructive criticism in the spirit it was intended.
    anonymous
  • Response to comments

    We haven't responded as yet anon. We've been waiting a while to see what thoughts and questions (if any) were going to be raised here. Shortly. Thank you to everyone that did respond. We'll take it all on board. :)

    Regards
    Drazen
    anonymous
  • Securus Response

    Hi Securus Global here.

    Starting from the top:

    "The default settings have been changed or tinkered with to suit the test environment."

    We didn't want to test default settings since they're usually rather lax; we wanted to see what these products could actually do when configured properly. For instance, most of them didn't have any Host Intrusion Prevention/Detection Systems (HIPS/HIDS) on by default, or were set to "warn only". While we do acknowledge that it's hard to replicate these tests yourself exactly based on the information in the article, everything we did amounted to setting them to more locked-down modes.

    "it is not reasonable to disable on-access settings just to introduce samples in a different way than they would have been delivered in the wild"

    Fair point.

    "There appears to be no false positive testing."

    We couldn't think of a way to do false positive testing that would actually be rigorous and useful for users. Any choice of files would be totally arbitrary and unlikely to turn up any results.

    "So basically you used over 2000 "malicious files" that don't look like they've been analysed or verified and product virus definitions were updated at different dates..hardly a fair and comparable test for starters!"

    Every product was on the same footing, or as close as possible: their updates were within a day of each other, and they all had the exact same 2000+ "malicious files" to scan. The difference would not sway the results much, nor affect our final conclusions. Also, the products that we tested last turned out to perform the worst despite having the most up-to-date definitions.

    "Repacking and making new threats to test a product is not really ethical or responsible, you can test using zero day proper threats if you have the know-how to get hold of them."

    We didn't want to test with unknown threats since an article saying that 99% of them weren't detected would be rather boring. Instead we showed how easy it was for an attacker to take some malware that is detected by every AV we tested, run a simple tool to pack/obfuscate/encrypt it, and have it undetectable again.

    This sort of test challenges the AV: chances are it won't have the signature in its database, so it's forced to fall back on heuristics and on detecting the packer's signature as well. It's also what often happens in the real world: there are so many variations of the same malware floating around, with slight changes or simply repacked.
    anonymous
  • Not sure

    Ok, you're saying that you're making up malware and not testing it, not testing with clean stuff because it's so hard, installing the products differently with different features, updating them on different days and throwing in a load of tech talk that I do not understand. I do not know a lot about this but it does not sound too good.
    anonymous
  • Ah it is clear.

    A load of tech talk that you don't understand. I am getting an understanding why ZDNet didn't get you to test.
    anonymous
  • @Not sure

    @Not Sure and your various Anons you have used. Let me see if I have this right?

    "Waffle...bitch...bitch...moan...waffle...bitch...moan moan..bitch...waffle..."a load of tech talk that I do not understand. I do not know a lot about this....."

    @Ah it is clear - FTW: "I am getting an understanding why ZDNet didn't get you to test"

    Falling down the stairs LOL.
    anonymous
  • Nice going

    You're doing well Son of Isodor - yeah, us dumb readers know who you are, rotflmao. Try and pretend these aren't real comments from real people. Put them down, then others won't look at the things you didn't do right. It's working - FTW: why didn't ZD get you to test? Oh wait a minute, they did! I'm not getting an understanding of that. Give it some more "Waffle ****.moan moan.put down the guys who want some help" and post twice in a row, it keeps us fooled. Wish I was as clever as you LOL
    anonymous