IT security is often seen as a cost to business rather than a source of revenue, leading many organisations to cut corners: treating security as optional, or protecting against only a handful of threats.
However, criminals have a vast array of attacks that they can employ against organisations for a wide variety of reasons. In this special feature, ZDNet takes a look at some of the more common threats to businesses, companies that have suffered the consequences of not being able to fend off the attacks, and what could have been done.
Distributed denial of service
Distributed denial-of-service (DDoS) attacks are the hacktivist's weapon of choice, deployed whether the attacker takes issue with a company's values, a political viewpoint, or simply something someone said.
Designed to disrupt normal services to a site by overwhelming it with requests, DDoS attacks do not result in a breach of information, but can cause significant harm to online businesses for as long as an attack continues.
Anti-spam organisation Spamhaus was hit by such attacks recently, thought to be from spammers that were sick of being blocked, yet it managed to ride the storm by enlisting the help of providers further upstream.
Politically motivated attacks are common, as was the case when Indonesian hacktivists took issue with news that the Australian Signals Directorate (ASD) had been spying on the mobile phones of Indonesian officials. The hacktivists responded online by DDoS-ing the Australian Federal Police and the Reserve Bank of Australia.
WikiLeaks has historically been under attack by those who disagree with its choice to publish US military cables, yet when MasterCard and PayPal stopped accepting payments destined for WikiLeaks, hacktivists fought back by DDoS-ing MasterCard's sites and hitting PayPal's blog.
Sometimes, the attacks are focused on extorting money from businesses, especially those that rely on online services. The idea is that unless the business pays a ransom, its website is going to remain down until the attacker relents. Unfortunately, some businesses choose to pay the ransom, as the cost of downtime is more expensive and they feel that they don't have the means or money to fight back.
Ransomware and malware
Another type of ransom comes in the form of software designed to hold critical information hostage through encryption until a fee is paid to criminals. Once the payment is made, the malware is supposed to restore the files.
The latest piece of notable ransomware was CryptoLocker, which demanded $300 to unlock files. However, the concept is not a recent one.
In April 2012, Trend Micro, F-Secure, and Dr Web discovered two ransomware variants, which demanded €50 to unlock files.
Earlier than that, in 2009, CA (now CA Technologies) discovered more ransomware that held files hostage for $100. The LoroBot malware claimed to use 256-bit AES encryption, but the files were actually encrypted using an XOR cipher, allowing CA's researchers to create a tool for victims to restore their files.
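The weakness is easy to see in code. The sketch below is illustrative only, not LoroBot's actual routine, and the key and plaintext are hypothetical; it shows why a repeating-key XOR cipher is trivially reversible: applying the same operation with the same key decrypts.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so this one function both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"quarterly-report.xlsx"  # hypothetical victim data
key = b"secret"                       # hypothetical malware key

ciphertext = xor_cipher(plaintext, key)
assert xor_cipher(ciphertext, key) == plaintext  # the same call undoes it
```

Worse still for the attacker, XOR leaks its key wherever the plaintext is predictable, such as in known file headers, which is how researchers can recover it and build a restoration tool.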
Unfortunately, not all ransomware is created with backdoors or weak encryption, and the fact that the criminals do often keep their word and release the files held hostage presents security researchers with an ethical dilemma.
In some cases, the infrastructure used to spread the ransomware is the same as that used to retrieve paid-for keys. Researchers intervening with the intention of stopping the spread of ransomware can, inadvertently, be responsible for ensuring that victims can never decrypt their information.
Other pieces of malware can open up infected machines to control, sometimes to let criminals steal valuable data or, more likely, become part of a botnet.
Even if a device contains no valuable information, if it has an internet connection, then it represents a resource to an attacker. That is because no matter how small the computing power or bandwidth provided, when combined with hundreds, if not thousands, of other compromised devices, it can form part of a cloud-esque network — a botnet.
After breaching an internet-connected device, criminals force their new victim into joining their army of bots by uploading a client to respond to commands sent to it. These compromised devices, sometimes called zombies or drones, await commands from their masters. The first botnets made use of Internet Relay Chat (IRC) servers to communicate, joining a chat room and waiting for their author to log on and issue commands.
Since then, the method of communication has changed, with botnets using other media such as Skype, Facebook chat, anonymous networks like Tor, or even their own proprietary ones. Just as communication networks have changed, so too have the devices included in botnets.
Smartphones, routers, smart television sets, tablets, internet-enabled fridges, webcams, and industrial control systems are just some of the devices that have been seen contributing to botnets.
Combined, criminals can direct their army of drones to focus on a single target to conduct DDoS attacks, or they can use the combined computing resources to do their own work, such as generating click referral traffic to earn money, or mining for crypto-currencies like Bitcoin.
By examining the traffic between a drone and its master, anyone can trace where commands are being sent from, usually one or more command-and-control servers. This does leave botnets open to hijacking by other criminals intent on growing their own armies.
It also allows security researchers to do the same, and interrupt communications to the command-and-control server. Although researchers would have the ability to force victims to rid themselves of their infection, it is illegal in many countries to alter computer data without permission, regardless of the intent.
It also poses an ethical dilemma, as removal of the infection can sometimes cause the victim's device to crash. Given that devices are as wide and varied as they are, it is possible that these could include critical infrastructure systems or even healthcare systems, putting lives at risk.
Web application vulnerabilities
One of the many ways to compromise a device or system is to poke for vulnerabilities in applications, especially those facing the internet. There are several ways of doing so, but the most common techniques take advantage of poor development practices that either allow for code execution or divulge information that's meant to be restricted.
The Open Web Application Security Project (OWASP) has compiled a list of the top 10 methods of entry for web applications alone.
Such examples include SQL injection, which was the choice for hackers targeting Vodafone Iceland; cross-site scripting (XSS), which left Microsoft Office 365 open to attack; open redirects, which left Facebook applications vulnerable; and insecure direct object references, which saw Yahoo's servers open to root access.
Tools have been developed to automate the process, such as Rapid7's Metasploit toolkit, making it easy for even unskilled attackers to scan for vulnerabilities and deploy exploits against targets without knowing exactly what they do.
These vulnerabilities aren't limited to websites, however. Mobile versions of these applications are in increasing demand, but developers continue to make mistakes on these platforms, as well.
Advanced persistent threats

Advanced persistent threats (APTs) are not a particular type of attack but, more commonly, a combination of attacks used over a longer campaign. While many security attacks are opportunistic, such as taking advantage of an unpatched web application vulnerability to poke around a system and grab whatever is in sight, an APT is targeted at a particular person or organisation, with a predefined objective.
If most common attacks can be thought of as criminals walking down a neighbourhood trying to see whether car doors are unlocked, or perhaps even breaking a window to quickly steal a wallet, then the APT is the professional criminal that spends months planning a heist by casing out the location and learning the behaviours of its target. Then, once inside, they continue to siphon off valuables for themselves.
The level of sophistication of these attacks is usually much higher, with criminals carefully considering their best point of entry to reach their objectives. Such was the case in the failed attack on Lockheed Martin and several other US defence contractors. The attackers went further up the supply chain, recognising that they would be unable to progress without obtaining SecurID two-factor authentication tokens from RSA.
The attacker, thought to be a nation state, went for RSA's softer targets, sending employees a Microsoft Excel spreadsheet containing a then-zero-day exploit that downloaded a remote administration tool. This tool was used to target RSA's IT administrators and further escalate privileges. Ultimately, RSA stated that it detected and blocked the intrusion, but not before its SecurID product was called into question and customers either replaced or withdrew the tokens from use.
RSA's attackers did not have the luxury of sitting within its networks for very long, but more successful APT campaigns have spanned years.
Nortel Networks, for example, was compromised in 2000, with attackers sending information to a China-based IP address for several years. It wasn't until 2004 that the breach was detected, and by that time, the company had lost technical papers, research-and-development reports, business plans, employee emails, and other documents.
Passwords and credentials
Passwords, while themselves a security measure, represent valuable data within a company. They are a prime example of organisations treating security as optional rather than mandatory: like any data that should be protected, passwords often are not, across several industries.
Australian retailer Billabong left passwords in plain text on its servers; when it was hacked, 21,000 customers were put at risk. On the academic side, Macquarie University lost 1 million plain-text passwords in its breach.
While the Australian Taxation Office has not been breached, one of its systems stores passwords in plain text and relies on client-side verification of weak passwords at signup.
Even further up the scale, the recruitment arm of the UK's Government Communications Headquarters (GCHQ), the British equivalent of the US National Security Agency, also stores its passwords in plain text.
In each of these cases, organisations put their customers and users at risk, as the credentials can be stolen and used to attack other services where they may have been reused. A recent case where this has happened can be seen in Yahoo Mail.
The company stated that although it was not the direct target of an attack, criminals had obtained usernames and passwords from a third party and attempted to use them against Yahoo accounts.
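The plain-text failures above are avoidable: the standard practice is to store only a salted, deliberately slow hash of each password, so a stolen database cannot simply be replayed against other services. A minimal sketch using Python's standard-library PBKDF2 (the iteration count and example passwords are illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Derive a salted hash; only the (salt, digest) pair is stored."""
    if salt is None:
        salt = os.urandom(16)  # a random per-user salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(hash_password(password, salt)[1], stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("letmein", salt, stored)
```

Even if the database leaks, an attacker must brute-force each hash individually rather than reading credentials straight off the disk.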
Defending against DDoS attacks

Some organisations will take steps to ensure that they never communicate a view that might make them a target, but when criminals simply have nothing better to do, or a customer pulls them into a dispute, defending against an attack requires some technology.
One of the ways that organisations can defend against such attacks is simple on paper: Have enough bandwidth to continue staying up. This can mean scaling out to the cloud for extra capacity; however, depending on the size of the attack and its persistence, this can quickly become cost prohibitive.
But if load balancing is used effectively, it may be able to fend off small attacks by diverting traffic through servers with more capacity at the cost of higher latency.
Black-flagging traffic can also be performed upstream by requesting it from a company's network peers but, depending on the relationships that organisations have with their service providers, the reaction time can also be costly.
While DDoS protection is not the main business focus for content delivery networks like Akamai, these companies can still play a role in softening an attack. By serving up cached data during an attack, the load to an intended target is reduced, giving victims some respite.
Defending against malware and botnets

Barring exploits, malware is only effective if a user can be tricked into running it. Known malware is easily detected by antivirus offerings, but with so many variants in circulation, this alone cannot defend an entire corporation.
Instead, application whitelisting has become the preferred method of controlling which applications are and aren't allowed to run within an organisation. The ASD ranks this as the top strategy for mitigating cyberintrusions. While antivirus software, deployed at end points or at a network gateway, is still recommended, it ranks only 25th on the ASD's list of strategies.
Malware also has to have a chance to hide somewhere on a system to provide backdoor access, or continue to exfiltrate data. The ASD highly ranks non-persistent operating systems for machines that are used for reading emails and surfing the web. This would ensure that sensitive data and the underlying operating system are separated, and that even in the event that the user's machine is breached, removing malware is as easy as restoring the operating system from a virtualised image.
Botnets present a difficult challenge in that infected hosts cannot be cleaned due to ethical and legal reasons, but their persistence continues to threaten others, since drones can be used to seek out and infect other machines.
In Australia, the Internet Industry Association has established a voluntary iCode, where internet service providers (ISPs) can pledge to protect networks from infected drones. Each ISP subscribes to a list of known infected clients, and, if they are its customers, restricts their access to the internet temporarily.
As the ISPs are able to control the service they provide — the network layer — they are able to cut off communications with command-and-control servers without having to modify data on a customer's device. Instead, customers are served a notice informing them of the infection and how to get help, and then given the ability to continue online if they wish.
Outside of Australia, however, others are tackling the issue in different ways. Microsoft, for example, placed a $250,000 bounty on the heads of the operators of the Russian Rustock botnet.
The tech giant has also used the US District Court to force botnet operators to either come forward and be charged, or see their domain ownership forfeited in the public interest. It's a slow and costly method, but it gives Microsoft a chance to win by default.
Closing web application vulnerabilities
Even though prominent online organisations like Facebook and Yahoo fall victim to web application vulnerabilities, it doesn't mean that attackers cannot be slowed down.
The very tools that criminals use to automate their attacks against their targets are often used by professionals to audit systems for vulnerabilities so they can be patched.
Exposing a company's own weaknesses in order to strengthen them often requires specialist outside help from information security professionals. This can be in the form of managed security providers, or by engaging an outside consultant. Companies handling credit card information will be familiar with this process, as it is mandatory under the Payment Card Industry Data Security Standard (PCI DSS) to conduct penetration testing on a regular basis.
The scope of such tests does not always encompass the whole business, though, and in these cases, they don't follow the thinking of a real-world hacker. Startups like BugCrowd attempt to solve this problem by enlisting the help of actual hackers and paying them to help companies identify weaknesses. The hackers themselves have their own reputations at stake by participating in such bounties, and nefarious attacks are controlled by requiring the hackers to do their work through BugCrowd's systems.
On a less involved level, there's nothing stopping organisations from running their own bounties without the hacker-controlling infrastructure in place that startups such as BugCrowd offer. Companies like Google offer financial rewards to hackers that choose to report vulnerabilities in a responsible manner, and others like Facebook go as far as to set up test accounts for curious hackers to attempt to break in within a controlled environment.
Knowing about a vulnerability is only half the problem, however. In most cases, web application vulnerabilities are a result of poor programming practices by developers, compounded by an increasing need to ship products quickly.
Temporary respite can be found in web application firewalls, designed to highlight the early warning signs of an attack and detect common breach techniques like SQL injection. In some cases, websites can even be configured to automate a defence when an attack is detected.
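At their simplest, such firewalls match incoming requests against signatures of known attack patterns. A deliberately naive sketch follows; real rule sets, such as ModSecurity's, are far more extensive, and the patterns here cover only a few classic payload shapes:

```python
import re

# Toy signatures for a few well-known SQL injection payload shapes.
SQLI_PATTERNS = [
    re.compile(r"('|\")\s*or\s+('|\")?1('|\")?\s*=\s*('|\")?1", re.I),
    re.compile(r";\s*drop\s+table", re.I),
    re.compile(r"union\s+select", re.I),
]

def looks_like_sqli(value: str) -> bool:
    """Flag a request parameter if it matches any known-bad signature."""
    return any(p.search(value) for p in SQLI_PATTERNS)

assert looks_like_sqli("' OR '1'='1")
assert looks_like_sqli("1; DROP TABLE users")
assert not looks_like_sqli("O'Brien")  # legitimate apostrophes pass through
```

Signature matching is easy to evade with encoding tricks, which is why a firewall is respite rather than a substitute for fixing the underlying code.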
Defending against APTs
No security is absolute, and all organisations have an element of risk that is either unknown or considered an acceptable part of doing business. Taking the approach of minimising risks to an acceptable level still applies, even if the attacker is a nation state.
Actions to lower an organisation's risk profile include examining what is acceptable in terms of how often system audits and penetration tests are conducted, the frequency of patch updates, restricting the access privileges of accounts, blacklisting known malicious domains and IP addresses, and hardening end points.
Despite these measures, on a long enough time scale, a determined attacker is eventually going to gain access to its target, whether it be due to a lapse in security or a zero-day exploit.
However, the speed at which an organisation responds to a breach and the information it can gather may be the difference in whether the attack is ultimately successful.
Intrusion detection and prevention systems can assist in quickly narrowing down an attack as it happens, supported by information provided through logs, and security information and event management (SIEM) tools.
Companies such as IBM and RSA are currently drawing upon the huge amounts of information provided by network devices to analyse what is normal behaviour and what is not, to provide more intelligence rather than just visibility.
The idea is to identify attacks as they occur; even when an attacker does manage to gain access, the time taken to establish what has been put at risk and what needs to be remediated can be significantly reduced.
With this time shortened, and thorough security checks conducted regularly, the chances of an attacker being able to persist with an attack should be much lower.
Securing passwords

Passwords were meant to be a convenient way for people to share a secret with a particular system. But convenience is now an afterthought: increased computing power means that most dictionary-based passwords can be guessed through brute force, and the growing number of online services requiring authentication leaves users either memorising several passwords or reusing the same one across several sites.
Returning convenience to the user can be done in several ways. Password managers ensure that users don't have to remember a separate password for every service they log in to. They do represent a single point of failure, however, and some cloud-based password managers require implicit trust that the provider itself will not be compromised.
Passwords themselves can be made more convenient by increasing the number of bits of entropy not through complexity, but through length. A sufficiently long password consisting only of lower-case characters can easily provide the same level of protection against brute force attacks as a short password that uses symbols, numbers, and a mix of upper- and lower-case characters.
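The arithmetic bears this out: a randomly chosen password of length n over an alphabet of size s has n x log2(s) bits of entropy. The sketch below, with illustrative alphabet sizes, shows 16 lower-case characters beating eight characters drawn from the full printable set:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Entropy of a uniformly random password: length * log2(alphabet size).
    return length * math.log2(alphabet_size)

long_simple = entropy_bits(26, 16)   # 16 lower-case letters: ~75 bits
short_complex = entropy_bits(80, 8)  # 8 chars from ~80 printables: ~51 bits

assert long_simple > short_complex
```

Each extra character multiplies the search space, so length pays off faster than widening the character set.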
Moving the problem away from passwords can be achieved through the use of two-factor authentication. This effectively provides a one-time code to be used at login, supplementing but not replacing the password for the user.
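Most such one-time codes are generated with the time-based one-time password (TOTP) scheme of RFC 6238: an HMAC over the current 30-second time step, truncated to six digits. A simplified sketch follows; the shared secret shown is RFC 6238's published test value, not a real credential:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, interval=30, digits=6) -> str:
    """Simplified RFC 6238: HMAC-SHA1 over the current time step, truncated."""
    counter = int(time.time() if for_time is None else for_time) // interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and token hold the same secret, so both derive the same code.
shared_secret = b"12345678901234567890"  # RFC 6238 test secret
print(totp(shared_secret))
```

Because the code changes every interval, a phished or reused password alone is no longer enough to log in.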
Security is one area that does not stand still; a zero-day exploit found today only adds to the risks that organisations already face, compounding the amount of work that must be done. For organisations that do not see security as an option, but instead see it as critical to the ongoing success of their business, investing in keeping abreast of such threats is a key task.
A few years ago, Android did not even exist as an operating system, yet the rise of the consumerisation of IT means its security must now be considered when evaluating an organisation's risk profile. It, too, is vulnerable to many of the same threats listed above, including botnets, web application vulnerabilities, DDoS attacks, poor credentials, and malware, and considering it optional to secure leaves an open end point.
Thankfully, organisations choosing to stay on top of their security have several options and new technologies, such as outsourcing the work to managed security service providers, using big data and analytics to detect threats, and possibly even modifying their virtual network infrastructure in real time in response to attacks.
It remains to be seen whether the support provided by advances in technology can keep up with the new ways that criminals find to break into systems. But if maintaining an acceptable risk profile is an arduous task for those that see security as essential, then those that consider IT security as an optional extra will no doubt fall further behind.