How Microsoft's AI spots ransomware attacks before they even get started

Microsoft is targeting human-operated ransomware operations.
Written by Charlie Osborne, Contributing Writer

Microsoft has revealed how artificial intelligence (AI) technologies are used in the fight against ransomware. 

Ransomware is one of today's most prolific and vicious digital threats. Ransomware families including Locky, WannaCry, NotPetya, and Cerber plague consumers and businesses alike, locking up infected systems and demanding payment in return for decryption keys, which may or may not return access to encrypted files. 

Ransomware as a service (RaaS) is also now a standalone and popular criminal business. Operators can purchase access to ransomware for use in their campaigns, whether they are targeting the general public en masse or going after 'Big Game' enterprise companies. 

SEE: Ransomware attacks: This is the data that cyber criminals really want to steal

According to Microsoft's 365 Defender Research Team, human-operated ransomware campaigns are complex and multi-faceted, which can make early detection very difficult to achieve – especially as the campaigns continue to evolve.

In a blog post on Tuesday, the tech giant said it was exploring "novel ways" to harness AI in the face of an "increasingly complex threat landscape."

Microsoft is focused on disrupting the earliest stages of a ransomware attack with AI enhancements for Microsoft Defender for Endpoint. In what the company calls "early incrimination," machine learning (ML) algorithms are being developed to determine "malicious intent" in files, processes, user accounts, and devices. 

However, to do so, the ML protections have to analyze patterns and behavior in attacker contexts, as well as related events on target devices or enterprise networks.  

Indicators that a human-operated ransomware campaign is underway can include suspicious user account activity. For example, a cyber criminal purchases stolen credentials and starts poking around a network, listing files and processes or testing out their privileges as they go. Attackers might also move laterally across a network in ways that fall outside the typical work activity associated with an account. In the final stage, of course, encryption software is executed.
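The kind of account-level anomaly described above can be sketched with a simple baseline comparison. This is an illustrative toy, not Microsoft's detection logic: the z-score test, the cutoff value, and the "distinct hosts touched per day" metric are all assumptions chosen for the example.

```python
# Illustrative sketch (not Microsoft's implementation): flag an account
# whose recent activity deviates sharply from its historical baseline,
# e.g. touching far more hosts in a day than it normally does.
from statistics import mean, pstdev

def is_anomalous(history: list[int], today: int, z_cutoff: float = 3.0) -> bool:
    """Return True if today's count of distinct hosts touched by an
    account sits more than z_cutoff standard deviations above its
    historical mean (a crude z-score anomaly test)."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        # Flat baseline: any increase at all is suspicious.
        return today > mu
    return (today - mu) / sigma > z_cutoff
```

An account that normally queries two or three machines a day but suddenly enumerates dozens would trip this check; real products combine many such weak signals rather than acting on one.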

Microsoft has developed three sets of AI-driven inputs for the security solution, each of which independently generates a risk score indicating whether an entity is likely involved in an active ransomware attack:

  • Time-based and statistical analysis of security alerts at the organizational level
  • Graph-based aggregation of suspicious events across devices
  • Device-based monitoring to flag suspicious activities

By correlating these datasets, Defender can detect patterns and connections that might have been missed otherwise. If a high enough confidence level is reached, the files and entities involved in the ransomware operation are automatically blocked. 
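The fusion step described above can be sketched as combining independent risk scores and acting only past a confidence bar. The noisy-OR combination, the input names, and the blocking threshold below are illustrative assumptions, not Microsoft's actual model.

```python
# Hypothetical sketch of fusing independent risk scores into one verdict.
# Score names, the noisy-OR fusion, and the threshold are illustrative
# assumptions, not details of Defender for Endpoint.

def combined_risk(org_alert_score: float,
                  graph_aggregation_score: float,
                  device_monitoring_score: float) -> float:
    """Fuse three independent risk scores in [0, 1] via a noisy-OR:
    the combined risk is high if any one signal is high, and higher
    still when the signals agree."""
    risk = 1.0
    for s in (org_alert_score, graph_aggregation_score, device_monitoring_score):
        risk *= (1.0 - s)
    return 1.0 - risk

BLOCK_THRESHOLD = 0.9  # illustrative confidence bar for automatic blocking

def verdict(*scores: float) -> str:
    """Block the entity only when combined confidence clears the bar."""
    return "block" if combined_risk(*scores) >= BLOCK_THRESHOLD else "monitor"
```

The design point is that no single detector has to be certain: moderately suspicious organizational alerts, graph aggregation, and device telemetry can together clear the blocking threshold even when each alone would not.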

SEE: Cloud computing security: Where it is, where it's going

In tests, Defender detected and stopped a ransomware attack in the early encryption stage, when fewer than 4% of the organization's devices had been encrypted.

"With its enhanced AI-driven detection capabilities, Defender for Endpoint managed to detect and incriminate a ransomware attack early in its encryption stage, when the attackers had encrypted files on fewer than four percent (4%) of the organization's devices, demonstrating improved ability to disrupt an attack and protect the remaining devices in the organization," Microsoft said.

Microsoft says that the AI protections are designed to trigger at the earliest stages of a ransomware outbreak, at the point that malware begins to encrypt devices. These protections will be bolstered over time and expanded to "incriminate and isolate compromised user accounts and devices to further limit the damage of attacks."

While the Redmond giant is applying AI to defense, it is also paring back its use of facial recognition technology in the name of ethical AI standards.


