Facebook is creating a network filled with bad bots to help it understand real scammers

Facebook's engineers are tired of always chasing scammers, so they are re-creating bad behavior to try to anticipate it.
Written by Daphne Leprince-Ringuet, Contributor

It is the bane of every security researcher: no matter how sophisticated the tools built to fight harmful behavior on a given platform, hackers will always adapt, step up their game, and find new ways to work around the defenses.

In an effort to get ahead of the scammers, Facebook is trying a new approach: unleashing an army of bots tasked with harmful actions on a version of the platform, so that the Facebook-controlled bots can discover loopholes before real scammers get to them.

The technology will be operating in an alternative version of Facebook, dubbed WW to reflect that the system is a scaled-down version of the World Wide Web (WWW). 

Unlike traditional simulations, in which simulated bots operate on a simulated platform, WW is built on Facebook's real-world software platform.

The company's engineering team developed a method called Web-Enabled Simulation (WES), which consists of carrying out simulations on real web infrastructures, rather than artificial ones, to better reflect real user interactions and social behavior.

Using WES, Facebook's engineers built WW – a parallel version of the social media platform, complete with Messenger, profiles, pages, and inopportune friend requests, but exclusively reserved for bots. 

Presenting the technology at a webinar, Mark Harman, research scientist at Facebook, said: "The simulations happen on the actual tens of millions of lines of code that make up the Facebook infrastructure. The bots use all of the same software and tools that a user would be using on the platform." 

"It means the simulation results are much closer to the reality of what happens on the platform, and to the many subtleties where harmful behavior can occur," he added. 

The bots, therefore, operate in an environment very close to the one real users experience, but at a safe distance: their actions are carefully constrained, and the engineers set up both a privacy layer and an interaction-mechanism layer to separate the two worlds.
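The broad idea of such an interaction layer can be pictured in a few lines. The Python sketch below is purely illustrative, with invented account sets and function names; Facebook has not published how its actual privacy and interaction-mechanism layers work.

```python
# Purely illustrative: a gate that lets bot actions through only when both
# parties live inside the simulation. All names here are hypothetical.
REAL_USERS = {"alice", "bob"}     # stand-ins for real accounts
WW_BOTS = {"bot_1", "bot_2"}      # stand-ins for simulation-only accounts

def interaction_layer(sender, recipient, action):
    """Allow an action only when both parties belong to the bot world."""
    if sender not in WW_BOTS or recipient not in WW_BOTS:
        raise PermissionError("WW actions must never reach real users")
    return f"{sender} -> {recipient}: {action}"

print(interaction_layer("bot_1", "bot_2", "friend_request"))  # permitted
```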

Facebook's engineers built WW, a parallel version of the social media platform, reserved exclusively for bots.

Image: Facebook

Harman's team is currently focused on using the bots to simulate scamming behavior, both to find out whether Facebook's detection mechanisms are good enough and to uncover new ways that scammers might try to extort money from unsuspecting users.

Real-life scammers typically crawl the social media platform until they find a target. So, in a similar vein to game development, the engineering team recreated a scenario in which innocent bots interact with scammer bots that are rewarded for crawling the network and acquiring another agent they can scam.
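A toy version of that reward structure might look like the following Python sketch. Everything in it, from the graph class to the reward values, is a hypothetical illustration of reward-driven crawling, not Facebook's actual WW code.

```python
import random

# Purely illustrative: a "scammer" bot rewarded for crawling a bot-only
# social graph and acquiring a target. All names are hypothetical.
class SimulatedGraph:
    def __init__(self, num_profiles):
        self.profiles = list(range(num_profiles))

    def neighbours(self, profile):
        # Toy topology: every profile can reach every other profile.
        return [p for p in self.profiles if p != profile]

class ScammerBot:
    def __init__(self, graph, start):
        self.graph, self.position, self.reward = graph, start, 0.0

    def step(self, innocent_bots):
        # Crawl: hop to a neighbouring profile at random.
        self.position = random.choice(self.graph.neighbours(self.position))
        # Reward signal: "acquiring" a potential victim pays off, which is
        # exactly the behavior that detection mechanisms should interrupt.
        if self.position in innocent_bots:
            self.reward += 1.0

graph = SimulatedGraph(num_profiles=10)
bot = ScammerBot(graph, start=0)
for _ in range(20):
    bot.step(innocent_bots={3, 7})
print(bot.reward)
```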

Harman explained that several methods were used to train the bots. They range from the old-fashioned rule-based approach, in which bots choose actions such as sending a friend request according to a predetermined set of rules, to reinforcement learning, in which the bots are given the reward criteria but not the rules for getting there.
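The rule-based end of that spectrum is the simplest to picture: the bot's next action follows mechanically from hard-coded conditions. The sketch below uses invented profile features purely for illustration.

```python
# Illustrative rule-based policy: the bot's next action is chosen from a
# fixed, predetermined set of rules. Feature names are hypothetical.
def rule_based_action(profile):
    if profile["already_friends"]:
        return "send_message"
    if profile["mutual_friends"] >= 2:
        return "send_friend_request"
    return "view_profile"

print(rule_based_action({"already_friends": False, "mutual_friends": 3}))
```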

Supervised learning was also a part of the mix: using anonymized data, the researchers defined patterns of real user behavior and trained the bots to imitate them.
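One way to picture that supervised setup is to fit a classifier on (state, action) pairs derived from anonymized behavior, then let the bot replay the classifier's predictions. The sketch below uses scikit-learn and invented features purely for illustration; it is not Facebook's pipeline.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical anonymized features for a profile pair:
# [mutual_friends, messages_exchanged]
states = [[0, 0], [3, 1], [5, 9], [1, 0]]
actions = ["view_profile", "send_friend_request", "send_message", "view_profile"]

# Train the imitator on real-user patterns, then query it for the action a
# real user would most plausibly take in a new state.
imitator = DecisionTreeClassifier(random_state=0).fit(states, actions)
print(imitator.predict([[4, 2]]))
```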

"There is a strong relationship with AI-assisted gameplay," said Harman. "Simulated game players are a little bit like our bots. We are automating the process of making the game ever-more challenging, because we want to make it harder for potentially sophisticated and well-skilled bad actors." 

From an engineering perspective, the proposal is ambitious, and Harman stressed that the project is still in a research phase. He hopes it is only a matter of months before the WW initiative comes to life, but admitted that further research is needed in fields such as machine learning, graph theory, and AI-assisted gameplay.

If the project does come about at scale, however, the research team anticipates a significant boost to Facebook's defenses in the war against harmful behavior.

"The bots, in theory, can do things we haven't seen before thanks to reinforcement learning," said Hartman. "That's something we want because it will let us get ahead of the bad behavior, rather than catch up with it."

What's more, using the WES method, WW could be replicated for any large-scale web system in which a community's behavior can be observed. It could therefore go a long way toward easing the moderation burden for many organizations.
