System security managers are often left wandering in the dark.

The comparison tests widely used by systems security managers to judge which anti-virus software to deploy are often flawed, misleading users into spending money on software that isn't necessarily the best buy.

Dr Igor Muttik, a researcher at McAfee's AVERT Labs, said that tests performed by bodies such as Virus Bulletin magazine and the Virus Test Centre could not be relied upon to give fair results because of the way the tests are designed. "The results of these tests are not reflecting the reality," he said, adding that the tests often mistakenly showed one vendor's software as better than another's.

In one example, presented to delegates at the 2001 Virus Bulletin Conference in Prague, he showed that if the sample of randomly chosen viruses is too small, the resulting rankings are inaccurate 80 or 90 per cent of the time.

Randy Abrams, anti-virus specialist at Microsoft and the man charged with ensuring there are no viruses in Microsoft software at the point of release, said: "It's important. For most people these comparisons are still one of the best ways of getting independent anti-virus performance data."

A spokesman for Virus Bulletin magazine who attended the presentation said Muttik's research did not precisely reflect how Virus Bulletin produced its data, which he maintained was largely accurate.
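The sampling problem Muttik described can be illustrated with a small simulation. The sketch below is not his methodology (the article does not describe it); it simply assumes two hypothetical scanners with similar true detection rates (the 0.95 and 0.93 figures are invented for illustration) and estimates how often a small random test set ranks the truly weaker scanner at or above the stronger one.

```python
import random

def ranking_error_rate(true_a=0.95, true_b=0.93, sample_size=50,
                       trials=10_000, seed=1):
    """Estimate how often a random test set of `sample_size` viruses
    fails to show scanner A's real advantage over scanner B.

    true_a / true_b are hypothetical per-virus detection probabilities,
    chosen for illustration only.
    """
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        # Each scanner detects each sampled virus independently
        # with its true detection probability.
        hits_a = sum(rng.random() < true_a for _ in range(sample_size))
        hits_b = sum(rng.random() < true_b for _ in range(sample_size))
        if hits_b >= hits_a:  # test ranks B as good as or better than A
            wrong += 1
    return wrong / trials

if __name__ == "__main__":
    # A small sample misranks the scanners far more often
    # than a large one does.
    print(f"n=50:   {ranking_error_rate(sample_size=50):.2f}")
    print(f"n=2000: {ranking_error_rate(sample_size=2000):.2f}")
```

With a 50-virus sample the two scanners tie or swap places in a substantial fraction of trials, while a 2,000-virus sample almost always recovers the true ordering, which is the general effect Muttik's example relied on.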