
Facebook removes sick baby hoaxes, urges users to report more

Facebook has not changed its stance on removing scams and hoaxes, even when the images are of sick babies. On the flip side, the company has removed many of the offending images.
Written by Emil Protalinski, Contributor

Update: Facebook admits it needs to fight scams more efficiently

Over the weekend, I wrote about how five anti-scam websites (Hoax-Slayer, That's Nonsense, The Bulldog Estate, Facecrooks, and facebookprivacyandsecurity) have banded together to fight back against a very viral type of Facebook hoax that exploits pictures of sick babies. I'm pleased to report that their efforts have paid off: all 26 of the very popular hoaxes on their list have now been removed, as well as countless others. That's cause for celebration, albeit a temporary one.

This type of hoax typically involves photographs of ill or disabled children in hospitals being shared virally across Facebook, often asking users to donate toward the child's medical expenses and/or promising that sharing the photo will result in donations from Facebook itself. Both claims are, of course, false. The real victims, however, are not the users being tricked; they are the families of these children, who learn that photos of their sick relatives are being used to perpetuate the scams and hoaxes.

Facebook currently relies on reports from users to stop the sharing of such images. The five aforementioned websites encourage users to report popular instances of offending photos, but when it comes to viral content, Facebook just doesn't react quickly enough. The quintet says it is playing catch-up: new instances of these images are being uploaded and shared faster than users can report them and Facebook can take them down. That's why the group wrote a letter pleading for media attention: the hope was that more publicity would not only educate users about the problem but might also pressure Facebook into being more proactive about removing the hoaxes.

I wasn't so hopeful. After I explained the situation and how the images went viral, I wrote this:

I don't believe Facebook is going to hire the manpower to actively scan for such scams and hoaxes. I do, however, believe the company can work harder on improving its algorithms for flagging such content and build a system to kill off a viral image if it is reported enough.

In other words, I believed Facebook wasn't going to change its practices, but would instead work harder to improve its current system. Unfortunately, I was right. Here's what I was told at first:

Protecting the people who use Facebook from spam and malicious content is a top priority for us. We have spent several years developing protections to stop spam from spreading and have sought to cooperate with other industry leaders to keep users and their data safe. We've built enforcement mechanisms to quickly shut down malicious Pages, accounts and applications that attempt to spread spam by deceiving users or by exploiting several well-known browser vulnerabilities. We have also enrolled those impacted by spam through checkpoints so they can remediate their accounts and learn how to better protect themselves while on Facebook. Beyond these protections, we've put in place backend measures to reduce the rate of these attacks and will continue to iterate on our defenses to find new ways to protect people.

In addition to the engineering teams that build tools to block spam, we also have a dedicated enforcement team that seeks to identify those responsible for spam and works with our legal team to ensure appropriate consequences follow.

As always, we advise people not to click on links in strange messages, even if those messages have been sent or posted by friends. This tip and many more can be found on our Facebook Security Page (http://www.facebook.com/security), which is followed by over four million people.

Facebook Security Tips:

  • Review your security settings and consider enabling login notifications. They're in the drop-down box under Account in the upper right-hand corner of your Facebook home page.
  • Don't click on strange links, even if they're from friends, and notify the person if you see something suspicious.
  • Don't accept friend requests from unknown parties.
  • If you come across a scam, report it so that it can be taken down.
  • Don't download any applications you aren't certain about.
  • When using Facebook from places like hotels and airports, text “otp” to 32665 to receive a one-time password for your account.
  • Visit Facebook's security page, http://www.facebook.com/security, and read the "Take Action" and "Threats" items.

Clearly, I got back a generic answer. Don't get me wrong: this is important information, but it wasn't specific to the sick baby hoaxes I was writing about. Thankfully, another contact at the company was willing to be more specific.

"Facebook supports the efforts of these websites in raising user awareness of online scams," a Facebook spokesperson said in a statement. "Our abuse reporting tools allow users to flag content that violates Facebook's statement of rights and responsibilities. Every item posted on the site has a drop-down menu containing the option to report spam. Where content is identified as spam by a number of users, it is typically automatically mass deleted from the site. We also use sophisticated monitoring systems to identify postings or links that are being shared at an abnormally high rate. These can be reviewed by our user operations teams which will remove malicious content where appropriate. Our systems are constantly being refined to ensure that Facebook remains a safe, trusted environment for our users."

In short, Facebook is still leaning on its "Report This Photo" feature. Soon after I got this response, though, reports began surfacing at the websites that many of the images of sick children they were tracking had started getting removed. That's Nonsense has a list of 26 sick baby hoaxes that had been around particularly long and had been shared far too many times. I checked, and the majority of them had been removed. So I asked whether the letter the group sent to the media, or my article, had made an impact. I was told that this was unlikely.

"I'm not aware of specific action as a result of the article," the Facebook spokesperson said in a statement. "If the examples cited were taken down, then I would suggest that reflects that the reporting and detection systems are working effectively. Clearly there will always be an element of cat-and-mouse here, however we are constantly getting better and faster at eliminating this sort of abuse."

I wasn't satisfied. I checked the list, and six of the 26 hoaxes were still active. I pressed about those in particular, noting that they were pretty graphic and deserved another look.

"Just wanted to let you know that we have circulated the baby images story and are looking at those examples you sent," the Facebook spokesperson said in a statement. "Will come back to you."

At the time of writing, I haven't yet received an update. But a picture is worth a thousand words; in this case, the removal of six photos is worth six thousand. All six are now dead links: one, two, three, four, five, and six.

This means that all 26 hoaxes and variants that Facebook was not removing for whatever reason (they apparently weren't getting reported enough for the company to notice them) are now dead. Unfortunately, the images are actually still on Facebook's servers, due to a very long-standing bug, but that's a separate issue. What's important here is that they can no longer spread virally with their tens of thousands of shares and Likes.

As they say, the battle has been won, but the war is far from over. Facebook insists that relying on its reporting feature is the solution; removing content manually would only reinforce the fact that its systems aren't working. The company wants users to surface questionable content so that it doesn't have to check every single photo uploaded to the service. I understand that this is the long-term goal, but as I've learned time and again, the social network thankfully does make exceptions.

There are two possibilities here. The extra attention to this issue could have prompted users to report the photos in question more often, causing Facebook's systems to remove them. The other possibility is that the extra attention prompted Facebook itself to step in and remove them. Most likely, it's a combination of the two.

Here's another excerpt from my last article:

These scams and hoaxes spread through the Facebook News Feed, where your friends see them and also share them. If a post is reported often enough, it shouldn't appear in anyone's News Feed until Facebook can look at it and determine whether it should be going viral. This is especially true if the caption claims that Facebook is going to be making donations. It wouldn't be that hard to have algorithms check for an image that is being shared and Liked a lot, that is also being reported a lot, and that mentions "Facebook" and/or "donation" in the caption.
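For what it's worth, here's a minimal sketch of the kind of check I was describing, again in Python and again purely hypothetical: the Post fields, keywords, and thresholds are mine, not Facebook's.

    # A sketch of the suggested heuristic: hold a post from News Feed when it
    # is viral, heavily reported, and makes a donation claim in its caption.
    # The Post structure, keywords, and thresholds are all hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Post:
        caption: str
        shares: int
        likes: int
        reports: int

    SUSPECT_KEYWORDS = ("facebook", "donation")
    VIRAL_THRESHOLD = 10000    # hypothetical: combined shares and Likes
    REPORT_FLOOR = 100         # hypothetical: reports before a review hold

    def should_hold_from_feed(post):
        """True if the post should be held pending human review."""
        viral = post.shares + post.likes >= VIRAL_THRESHOLD
        reported = post.reports >= REPORT_FLOOR
        claims = any(word in post.caption.lower() for word in SUSPECT_KEYWORDS)
        return viral and reported and claims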

My suggestion probably isn't very good. Facebook knows best how to improve its algorithms. I think the company will look at the images in question and try to figure out why they didn't get removed sooner.

This is not over: the big 26 may be gone, but there are more of them out there, and even more coming. In the end, all the average Facebook user can do is report hoaxes and scams, as well as encourage his or her friends to do the same.

I will be keeping in touch with Facebook on this matter and will let you know if anything further develops.
