Facebook to outline Lumos, its machine learning platform for images

Facebook is providing more details about its Lumos platform, which lets engineers without deep learning or computer vision training build image-understanding applications.
Written by Larry Dignan, Contributor

Facebook has provided more details about its Lumos machine learning platform, which aims to understand images and video, and is designed to be used without deep learning and computer vision training.

The Lumos platform, announced at Web Summit last year, is a scalable system for sorting through photos and videos. Lumos is the front end to a new set of 12 image classification changes made to Facebook's automatic alt text (AAT) for photos and its FBLearner Flow system. These systems describe photos much as you would to a friend.

Lumos has a bevy of implications for Facebook. First, the system will improve photo search and the AAT system for the visually impaired. Facebook's presentation will be delivered at the Machine Learning @Scale technical conference for data scientists, engineers, and researchers.


In a blog post, Joaquin Quiñonero Candela, Facebook's director of Applied Machine Learning, said the goal for the social media giant is to "weave AI into the Facebook engineering fabric."

Like Google, Facebook is betting a big chunk of its research budget on AI. On Facebook's fourth quarter earnings conference call, CEO Mark Zuckerberg said:

I think AI is going to be great for the experience people have in our community. There are a few types of systems here that we're working on around understanding content. One is around visual content, the other is about language. For visual content we want to be able to look at a photo and understand what's in it and whether that's something that you're going to be interested in. And similarly, we want to be able to look at a video and watch it and understand whether that's something that you're going to be interested in. And you can imagine that today we consider putting things in your news feed that you're connected to in some way, that are from a friend or a page that you're following or that one of your friends likes. But there's no reason that we shouldn't be able to match you up with any of the millions of pieces of content that you might be interested in that get shared on Facebook every day except for the fact that we don't have the AI technology to know what those are about and that they match your interest today. So a combination of being able to understand the texts that people message, read the articles that people would want to look at, watch the videos, look at the photos are going to be great too.

Zuckerberg also said that AI has a lot of potential for policing the community and identifying objectionable content.

On the technical front, Candela also noted the following:

  • Facebook is running 1.2 million AI experiments a month on FBLearner Flow.
  • Lumos improves via newly labeled data and annotated information from applications.
  • More than 200 visual models have been trained and deployed on Lumos by dozens of teams. These deployments were for objectionable-content detection, spam fighting, and automatic image captioning to name a few purposes.
  • AAT was updated by the AI team, which gathered a sample of 130,000 public photos shared on Facebook that included people. Human annotators labeled the photos, and Facebook then used those annotations to build a machine learning model.
  • Facebook said the goal is to improve precision and recall of its machine learning system to be pixel perfect.
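To make the last bullet concrete: precision measures how many of a classifier's positive predictions are correct, while recall measures how many of the true positives it catches. Here is a minimal, self-contained sketch of how those two metrics are computed for a binary image classifier (e.g., "does this photo contain a person?"); the labels and predictions are illustrative placeholders, not Facebook data or code.

```python
def precision_recall(y_true, y_pred):
    """Return (precision, recall) for binary labels, where 1 = positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical ground-truth annotations vs. model predictions for six photos
labels      = [1, 1, 1, 0, 0, 1]
predictions = [1, 0, 1, 1, 0, 1]

p, r = precision_recall(labels, predictions)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

Improving both numbers at once is the hard part: raising a classifier's score threshold usually trades recall for precision, which is why the post frames the goal as pushing both toward "pixel perfect."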

Here's a look at the AI in action scoring pictures.


