When Akamai noticed an uptick in traffic to a web domain, the company could be forgiven for thinking: another day, another distributed denial-of-service (DDoS) attack.
DDoS attacks are a common attack method, accessible even to low-skilled attackers, that uses floods of malicious traffic to disrupt websites and online services. The traffic is often generated by botnets: networks of enslaved devices, ranging from PCs to Internet of Things (IoT) products such as routers, smart lighting, and smartphones, which are commanded to visit a website at the same time.
Sudden traffic spikes can overload systems and prevent legitimate users from accessing an online resource.
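In practice, a surge like this is often detected by comparing the current request rate against a rolling baseline. The sketch below illustrates the idea; the window size and threshold multiplier are illustrative assumptions, not Akamai's actual detection logic.

```python
# Hypothetical sketch: flag a traffic spike by comparing the latest
# second's request count against a rolling baseline. The window and
# multiplier values are assumptions for illustration only.
def is_spike(per_second_counts, window=60, multiplier=10):
    """Return True if the latest second's request count exceeds
    `multiplier` times the average of the preceding `window` seconds."""
    if len(per_second_counts) <= window:
        return False  # not enough history to form a baseline
    baseline = per_second_counts[-window - 1:-1]
    avg = sum(baseline) / len(baseline)
    return per_second_counts[-1] > avg * multiplier

# Normal traffic of ~100 requests/second, then a jump toward the
# 875,000 requests/second figure reported in this incident.
traffic = [100] * 61 + [875_000]
print(is_spike(traffic))  # True
```

The same check would fire whether the flood came from a botnet or, as it turned out here, from buggy client software.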
One of the largest DDoS attacks on record was experienced by GitHub last year, an attack that peaked at 1.3 Tbps.
In this case, which took place in early 2018, the surge in traffic was detected heading towards a website belonging to an Akamai customer in Asia, according to a case study due to be published by the cloud services provider on Wednesday.
The initial traffic spike -- over four billion requests -- was so large that it came close to crashing logging systems. The website then received an average of 875,000 requests per second, with traffic volumes reaching 5.5 Gbps.
Such a massive amount of traffic, without a contextual reason, is a hallmark of a typical DDoS.
However, the unnamed customer was about to be taught a lesson in how buggy code can be just as disruptive as an external cyberattack.
The incident was reported to the Akamai Security Operations Command Center (SOCC), which, with the help of SIRT researchers, began examining traffic flows from the few days leading up to the attack.
"There were 139 IP addresses approaching the customer's URL a few days before the peak, with the exact same "attack" features," Akamai says. "This URL went from 643 requests to well over four billion, in less than a week."
Close to half of the IPs were flagged as network address translation (NAT) gateways, and the traffic in question was later found to be generated by a Microsoft Windows COM object, WinHttpRequest.
Typical traffic forwarded to the domain before the incident contained both GET and POST requests. The 'malicious' traffic, however, consisted solely of a stream of POST requests.
"Examining all the POST requests hitting the customer's URL showed that the User-Agent fields were not being forged or otherwise altered, boosting the confidence researchers had for their conclusion that a Windows-oriented tool was responsible for this massive flood of requests," the cloud service provider says.
To give the firm time to work out what was going on, SOCC mitigated most of the strange requests over the next 28 hours, leading to the discovery that the traffic hammering the URL was "the result of a warranty tool gone haywire."
Buggy code, and not a botnet, was the problem. The warranty tool's errors meant that it sent constant POST requests to the domain automatically and with enough frequency to potentially take down the website.
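This failure mode is a classic one: a client that retries a failing request immediately and indefinitely generates a self-inflicted flood, while a capped, backed-off retry policy keeps the request rate bounded. The simulation below is a minimal sketch under assumed timings; `requests_sent` and its parameters are hypothetical, not the warranty tool's actual code.

```python
# Hypothetical sketch of the failure mode: during an outage window, a
# client with no retry backoff floods the server with POSTs, while
# capped exponential backoff issues only a handful. All timings here
# are simulated assumptions.
FAIL_WINDOW = 10.0  # seconds during which the endpoint keeps failing

def requests_sent(backoff):
    """Count POSTs issued during the failure window, given a function
    mapping attempt number -> seconds to wait before the next retry."""
    clock, sent, attempt = 0.0, 0, 0
    while clock < FAIL_WINDOW:
        sent += 1                  # the POST itself (assume 0.01 s each)
        clock += 0.01
        clock += backoff(attempt)  # wait before retrying
        attempt += 1
    return sent

no_backoff = requests_sent(lambda n: 0.0)              # buggy: retry instantly
exp_backoff = requests_sent(lambda n: min(2 ** n, 8))  # fixed: capped exponential backoff

print(no_backoff, exp_backoff)  # hundreds of requests vs. a handful
```

Scaled up across the 139 sources Akamai observed, an instant-retry loop like the first policy is enough to produce a DDoS-sized flood with no attacker involved.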
The vendor responsible for the tool quickly created and deployed a fix, which resolved the issue.
It is important to note that not all bots are bad; many serve legitimate purposes such as warranty systems, search engine crawling, archiving, and content aggregation. DDoS attacks are common, but when traffic surges hit a domain, website operators must also explore other possible causes of the slow responses and disruption that spikes can bring.
The company has also released a separate report on DDoS attacks over the past few years. Below is a chart of the strength of the DDoS attacks recorded across 2017 and 2018; the average DDoS attack generally falls in the 1 Gbps range.