How botnets' rise foretells malware's future

We can learn a lot about the fight against malware from the evolution of the botnet, says Rik Ferguson
Written by Rik Ferguson, Contributor

Since the ancestors of today's botnets emerged a decade ago, criminal innovation has been unrelenting. The paths the crooks will take next are becoming clearer, but then so are the counter-measures, says Rik Ferguson.

To my mind, two contenders vie for the title of the malware that started the botnet ball rolling: Sub7 and Pretty Park, a Trojan and a worm respectively. Both introduced the concept of the victim machine connecting to an IRC channel to listen for malicious commands. Both first surfaced in 1999, and botnet innovation has been constant ever since.

Bots in early 2000 were aimed at remote control and information theft, but the move towards modularisation and open-sourcing ushered in a huge increase in variants and an expansion of functionality. Malware authors gradually introduced encryption for ransomware; HTTP and SOCKS proxies, allowing them to use their victims for onward connections; and FTP servers for storing illegal content.

Over the years, botnets steadily migrated away from the original IRC command-and-control channel: the IRC port is seldom opened through firewalls and the protocol is easily identified in network traffic. Instead, bots began to communicate over HTTP, ICMP and SSL, often using custom protocols. They have also continued to adopt and refine peer-to-peer communications, as another famous botnet, Conficker, would later demonstrate.

Organised crime
Gradually the criminal interest in the possibilities afforded by botnets began to become apparent. At the start of the decade, spamming was still largely a work-from-home occupation, with large volumes of spam being sent from dedicated server farms, open relays or compromised servers. Bagle, Bobax and Mytob changed all that for good, with Mytob essentially a blend of an earlier mass-mailing worm, MyDoom, and SDbot.

This worm enabled criminals to build large botnets and to distribute their spamming activities across all their victim PCs, giving them agility and flexibility and, importantly, helping them to avoid the legal enforcement activity that was starting to be aggressively pursued.

From then on we have seen the rise and fall of many famous botnets, successors to those earliest criminal spamming operations. In 2007, we saw the birth of the famous Storm botnet, along with Cutwail and Srizbi. Right now, the Shadowserver Foundation is tracking almost 6,000 unique command-and-control servers, and even that figure does not represent all the botnets out there.

At any one time we are tracking tens of millions of infected PCs that are being used to send spam and that figure does not include all the other bot-infected PCs being used for information theft, denial of service or any of the other myriad crimes.

The concerted action that both public and private organisations are taking against botnets means criminal innovation never stops. As new technologies arise criminals look for ways to adopt or abuse them, whether to facilitate the generation of profit, to increase their scalability and flexibility or to provide more effective camouflage.

Initially, command-and-control IP addresses were hardcoded into each bot, which made their identification and eventual disruption by malware researchers simpler, but the bad guys learn from their failures every time.
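A rough sketch of why those hardcoded addresses were such easy prey: a researcher examining a captured sample can often pull candidate IPv4 strings straight out of the binary's bytes. The sample bytes and helper name below are purely illustrative, not taken from any real bot.

```python
import re

# Hardcoded command-and-control addresses often sit in a binary as plain
# text, so a simple byte-level scan surfaces them for investigation.
IP_PATTERN = re.compile(rb"(?:\d{1,3}\.){3}\d{1,3}")

def find_hardcoded_ips(binary: bytes) -> list[str]:
    """Return every dotted-quad string whose octets are all in range."""
    hits = []
    for match in IP_PATTERN.finditer(binary):
        text = match.group().decode()
        # Discard look-alikes such as version strings with octets > 255.
        if all(int(octet) <= 255 for octet in text.split(".")):
            hits.append(text)
    return hits

# A fabricated sample "binary" with one embedded address and one decoy.
sample = b"\x00\x01connect:198.51.100.23:6667\x00garbage 999.1.2.3"
print(find_hardcoded_ips(sample))  # ['198.51.100.23']
```

Once such an address is confirmed as a controller, it can be blocklisted or taken down, which is exactly why later bots moved to dynamic and distributed addressing.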

Lost in the white noise
Since the second half of 2007 criminals have been abusing the user-generated content aspect of web 2.0. The first alternative command-and-control channels identified were blogs and RSS feeds, where commands were posted to a public blog by the criminal and the bots retrieved those commands through an RSS feed.

Likewise, output from the infected machines was posted to an entirely separate and legitimate public blog for later retrieval by the command-and-control server, again over RSS.
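To see why this channel blends in so well, consider a minimal sketch of the retrieval side, with an entirely made-up feed. Commands embedded in feed items are, to an inspection device, indistinguishable from ordinary blog content, and reading them requires nothing more than a standard RSS parse.

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed: to any filter this is ordinary blog
# content, yet a designated field (here, <description>) could carry
# instructions that an infected machine reads like any news client.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Innocuous-looking blog</title>
    <item>
      <title>Post 1</title>
      <description>update-config</description>
    </item>
    <item>
      <title>Post 2</title>
      <description>fetch http://example.invalid/payload</description>
    </item>
  </channel>
</rss>"""

def extract_entries(feed_xml: str) -> list[str]:
    """Return the description text of every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("description", default="")
            for item in root.iter("item")]

print(extract_entries(SAMPLE_FEED))
```

Nothing in that exchange is malformed or unusual, which is precisely the defender's problem: the traffic is legitimate RSS in every observable respect.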

As web 2.0 services have multiplied and even gained a certain level of acceptance within the enterprise, criminal innovation has continued apace. Compromised, otherwise innocent servers in Amazon's Elastic Compute Cloud (EC2), for example, have been used to host configuration files for the Zeus bot.

Twitter has been used as the landing-page URL in spam campaigns, in an attempt to overcome URL filtering in email messages. Twitter, Facebook, Pastebin, Google Groups and Google App Engine have all been used as surrogate command-and-control infrastructures.

These public forums have been configured to issue obfuscated commands to globally distributed botnets. These commands contain further URLs that the bot then accesses to download commands or components.

The attraction of these sites and services lies in their ability to offer a public, open, scalable, highly available and relatively anonymous means of maintaining a command-and-control infrastructure, while at the same time further reducing the chance of detection by traditional technologies.

Unwise assumptions
While network content-inspection systems could reasonably be expected to pick up on compromised endpoints communicating with known-bad sites, or over suspicious or unwanted channels such as IRC, it has historically been safe to assume that a PC making a standard HTTP GET request over port 80 to a content provider such as Facebook, Google or Twitter, even several times a day, is acting entirely normally.

However, as botnet owners and criminal outfits seek to further dissipate their command-and-control infrastructure and blend into the general white noise of the internet, that is no longer the case.
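One counter-measure follows directly from this observation: machine-driven polling tends to be far more regular than human browsing, so a defender can flag hosts whose request intervals show almost no jitter. The function and thresholds below are an illustrative assumption, not a production detector.

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float],
                         min_requests: int = 6,
                         max_jitter_ratio: float = 0.1) -> bool:
    """Flag a request series whose intervals are suspiciously regular.

    Human browsing produces irregular gaps; a bot polling its controller
    on a timer produces near-constant ones.  The thresholds here are
    illustrative, not tuned detection parameters.
    """
    if len(timestamps) < min_requests:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Low standard deviation relative to the mean gap suggests a timer.
    return pstdev(gaps) <= max_jitter_ratio * mean(gaps)

# A bot checking in roughly every 300 seconds vs. a person browsing.
bot = [0, 300, 601, 900, 1199, 1500, 1801]
human = [0, 12, 340, 355, 900, 2400, 2460]
print(looks_like_beaconing(bot), looks_like_beaconing(human))
```

Real detection systems combine many such weak signals, since sophisticated bots add random jitter to their check-in timers for exactly this reason.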

Of course, we can fully expect criminals to continue this unceasing innovation as we move forward: more botnets will take advantage of more effective peer-to-peer communication, update and management channels.

Communications between bots or between bot and controller will become more effectively encrypted perhaps through the adoption of PKI. Command-and-control functionality will be more effectively dissipated, using cloud services, peer-to-peer and covert channels and through compromised legitimate services. Spamming capabilities will be enhanced.

Botnets such as the pernicious Koobface already use social-networking services for propagation by sending messages and making posts. We can fully expect to see social-networking spam capabilities being added to bot agents in the near future.

Where do we go from here?
So what can we do? Is all hope lost? Not entirely. The battles continue in a war that must be waged on several fronts. Governments and international organisations such as the EU, OECD and UN need to provide a strong focus on the harmonisation of criminal law globally in the area of cybercrime, enabling more effective prosecution.

Law enforcement agencies need to formalise multilateral agreements to tackle a crime that is truly transnational. ISPs and domain registrars also have a key role to play. ISPs should be informing and assisting customers they believe to be compromised, a trend which happily appears to be on the increase.

They should also be terminating services to customers they believe to be acting maliciously. Domain registrars should be demanding more effective forms of traceable identification at time of registration and bad actors should have their service suspended as soon as credible suspicion is raised.

The security industry is already drawing valuable lessons from the levels of co-operation achieved between former rivals during the fight against Conficker and hopefully this effective co-operation will continue and deepen. Initiatives must be financed on a national level to educate and inform citizens more effectively of the dangers posed by cybercrime and to encourage safer computing practices.

Lastly, the security industry must not rest on its laurels. We can take heart in past successes but we cannot rely on past technology alone. Innovation is the key to keeping up with and hopefully surpassing the techniques developed by the bad guys.

Rik Ferguson is senior security adviser for Trend Micro. He has over 15 years' experience in the IT industry with companies such as EDS, McAfee and Xerox.
