
This is how artificial intelligence will become weaponized in future cyberattacks

Real-time, autonomous decisions are only some of the techniques AI can bring to the table.
Written by Charlie Osborne, Contributing Writer

Artificial intelligence has the potential to bring a select set of advanced techniques to the table when it comes to cyber offense, researchers say.

On Thursday, researchers from Darktrace said in a report (.PDF) that the current threat landscape spans everything from script kiddies and opportunistic attacks to advanced, state-sponsored assaults, and that at the sophisticated end of the spectrum, attacks continue to evolve.

However, for each sophisticated attack currently in use, there is the potential for further development through the future use of AI.

Within the report, the cybersecurity firm documented three active threats in the wild which have been detected within the past 12 months. Analysis of these attacks -- and a little imagination -- has led the team to create scenarios using AI which could one day become reality.

"We expect AI-driven malware to start mimicking behavior that is usually attributed to human operators by leveraging contextualization," said Max Heinemeyer, Director of Threat Hunting at Darktrace. "But we also anticipate the opposite; advanced human attacker groups utilizing AI-driven implants to improve their attacks and enable them to scale better."

Trickbot

The first attack relates to an employee at a law firm who fell victim to a phishing campaign leading to a Trickbot infection.

Trickbot is a financial Trojan which uses the EternalBlue exploit, which abuses a vulnerability in Windows' SMB protocol, in order to target banks and other institutions. The malware continues to evolve and is currently equipped with injectors, obfuscation, data-stealing modules, and locking mechanisms.

In this example, Trickbot was able to infect a further 20 devices on the network, leading to a costly clean-up process. Empire PowerShell modules were also uncovered; these are typically used for remote, hands-on-keyboard infiltration after the initial infection.

AI's future role

Darktrace believes that in the future, malware bolstered through artificial intelligence will be able to self-propagate and use every vulnerability on offer to compromise a network.

"Imagine a worm-style attack, like WannaCry, which, instead of relying on one form of lateral movement (e.g., the EternalBlue exploit), could understand the target environment and choose lateral movement techniques accordingly," the company says.

If chosen vulnerabilities are patched, for example, the malware could then switch to brute-force attacks, keylogging, and other techniques which have proven to be successful in the past in similar target environments.

As the AI could sit, learn, and 'decide' on an attack technique, no traditional command-and-control (C2) servers would be necessary.
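The selection logic described above can be sketched in miniature. This is purely an illustrative decision loop, not real malware code: the probe functions, technique names, and host attributes are all hypothetical placeholders standing in for whatever reconnaissance the attacker's implant performs.

```python
# Illustrative sketch of "choose a lateral movement technique based on
# the target environment". All names here are invented placeholders.

def probe_smb_unpatched(host):
    # Viable if an EternalBlue-style exploit path is still open.
    return host.get("smb_patched") is False

def probe_weak_credentials(host):
    # Viable if password reuse makes brute-forcing promising.
    return bool(host.get("reused_passwords"))

def probe_keylogging(host):
    # Fallback: capture credentials from an interactive user.
    return host.get("interactive_user", False)

# Ordered preference list: try the most direct viable technique first.
TECHNIQUES = [
    ("exploit_smb", probe_smb_unpatched),
    ("brute_force", probe_weak_credentials),
    ("keylog", probe_keylogging),
]

def choose_technique(host):
    """Return the first technique whose precondition holds, else None."""
    for name, viable in TECHNIQUES:
        if viable(host):
            return name
    return None

# A host where the exploit path is patched but passwords are reused:
patched_host = {"smb_patched": True, "reused_passwords": True}
print(choose_technique(patched_host))  # brute_force
```

The point of the sketch is the report's claim: because the decision is made locally from observed context, no instruction from a command-and-control server is needed.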


Doppelgängers

At a utility company, a device loaded with malware used a variety of stealth tactics and obfuscation to stay hidden.

A file downloaded onto the device from an Amazon S3 service established a backdoor into the compromised network, using a self-signed SSL certificate that slipped past standard security controls. Traffic was sent over ports 443 and 80 to further blend into the environment.

"Further Open Source Intelligence (OSINT) suggests that this particular threat actor utilizes alternative Doppelgänger techniques to reduce detectability in other infrastructure," Darktrace says.

AI's future role

It is possible that AI could be used to adapt further to its environment. In the same manner as before, contextualization can be used to blend in, but AI could also mimic trusted elements of a system, improving stealth.

"Instead of guessing during which times normal business operations are conducted, it will learn it," the report suggests. "Rather than guessing if an environment is using mostly Windows machines or Linux machines, or if Twitter or Instagram would be a better channel [...] it will be able to gain an understanding of what communication is dominant in the target's network and blend in with it."
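The "learn rather than guess" behavior the report describes amounts to tallying what a network actually does and imitating the majority. A minimal sketch, using a made-up traffic log and the standard library's `Counter`:

```python
from collections import Counter

# Made-up sample of observed outbound protocols on a target network.
observed = ["https", "https", "dns", "https", "smtp", "dns", "https"]

def dominant_channel(traffic):
    """Return the most frequently observed protocol in the traffic log."""
    counts = Counter(traffic)
    channel, _ = counts.most_common(1)[0]
    return channel

print(dominant_channel(observed))  # https
```

Malicious traffic disguised as the dominant protocol is, by construction, the hardest kind to distinguish from the background.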


Patterns of life

In the final example, Darktrace uncovered malware at a medical technology company. What made the findings notable was that data was being stolen at such a slow pace, and in such small packages, that it avoided triggering data volume thresholds in security tools.

Multiple connections were made to an external IP address, with each transfer containing less than 1MB. Despite the small packet sizes, it did not take long before over 15GB of information had been stolen.

By fading into the background of daily network activity, the attackers behind the data breach were able to steal patient names, addresses, and medical histories.

AI's future role

AI could not only provide a conduit for incredibly fast attacks and "low and slow" assaults, but could also be used as a tool to learn what data transfer rates would flag up activity to security solutions.

Instead of relying on a hard-coded threshold, for example, AI-driven malware would be able to dynamically adapt data theft rates and times to exfiltrate information without detection.
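The adaptive behavior can be sketched as follows. This is a hypothetical illustration of the idea, not an implementation of any real tool: the history format, threshold estimate, and safety margin are all invented for the example.

```python
# Sketch of adaptive "low and slow" exfiltration sizing: instead of a
# hard-coded chunk size, size each transfer well under an alerting
# threshold estimated from which past transfer sizes were flagged.

def estimate_threshold(history):
    """Smallest transfer size (bytes) that was ever flagged, if any."""
    flagged = [size for size, was_flagged in history if was_flagged]
    return min(flagged) if flagged else None

def plan_chunks(total_bytes, history, safety_margin=0.5):
    """Split a payload into chunks safely below the estimated threshold."""
    threshold = estimate_threshold(history)
    chunk = int(threshold * safety_margin) if threshold else 1_000_000
    full, remainder = divmod(total_bytes, chunk)
    return [chunk] * full + ([remainder] if remainder else [])

# Observed: a 2MB transfer was flagged, smaller ones were not.
history = [(1_000_000, False), (2_000_000, True), (1_500_000, False)]
chunks = plan_chunks(5_000_000, history)
print(len(chunks), max(chunks))  # 5 chunks, each 1,000,000 bytes
```

The same estimate-then-stay-under pattern applies to timing: transfers scheduled inside learned business hours are less likely to stand out.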


"The extrapolation of AI-driven attacks is entirely realistic. We see sophisticated characteristics in existing malware on the one hand -- and narrow AI understanding context on-the-fly on the other," Darktrace says. "The combination of the two will mark a paradigm shift for the cybersecurity industry. Companies are already failing to combat advanced threats such as new strains of worming ransomware with legacy tools."

"Defensive cyber AI is the only chance to prepare for the next paradigm shift in the threat landscape when AI-driven malware becomes a reality," the company added. "Once the genie is out of the bottle, it cannot be put back in again."
