How Sittercity sped up its babysitter screening process with Google's AutoML

By automating its process of screening babysitter photos, Sittercity has been able to cut back on man-hours and reduce a revenue-draining delay in the sitter approval process.

How Sittercity is using Google's AutoML to help screen babysitters: Sittercity's Phil Brown, head of operations, and Sandra Dainora, head of product, explain how they've used AutoML to eliminate a time-consuming manual portion of their screening process.

Cloud service providers are appealing to customers with a growing catalog of AI and machine learning capabilities and services. But to reach all of their potential customers, cloud companies like Google and AWS have to make machine learning more accessible. Google is doing that with AutoML, a relatively new service that automates the creation of machine learning models.

When Sittercity migrated to Google Cloud in the first half of 2018, it had the opportunity to try out AutoML. The site, which connects families to babysitters, boasts that it pioneered "tech-enabled child care," but its small staff didn't include machine learning experts.

ZDNet spoke to Sittercity's Phil Brown, head of operations, and Sandra Dainora, head of product, to hear more about Sittercity's decision to use AutoML and what the experience has been like.

Here are some highlights of the conversation:

A tech-focused but small staff, without ML expertise

Dainora: "Sittercity was founded in 2001, and we were the first company to develop an online childcare solution. Since then, we've connected millions of families and sitters on our platform. As technology has evolved, so have we. So we're really embracing the mobile-first, data-driven, on-demand world that we're living in...

"We currently have 50 employees on our team, and a 13-person tech team. We're headquartered here in Chicago, and this is a place where we haven't natively had a lot of expertise in this area, but a lot of the tools that we've been utilizing have made that transition and learning process a pretty easy step for us."

Automating the babysitter photo review process

Brown: "Our first use case of AutoML has largely been around redesigning the way that we review user photos. We have roughly 1,500 sitters joining the platform every single day. Those sitters upload photos for identification purposes when connecting with families on our platform, and we review those to ensure that they meet our quality standards. For example, making sure photos don't show sitters wearing sunglasses, and that they don't contain one of 10 or so disqualifying attributes.

"Previously we had a team of people reviewing those photos individually and acting on them. We set out to automate this photo review process with machine learning, and we joined Google's alpha testing program for AutoML. What AutoML basically allows us to do is detect these disqualifying photo features, like obscured faces, Snapchat filters, things of that nature, with greater than 90 percent accuracy, ultimately meeting our requirements."

Why the manual review process was a drag on the business

Brown: "I think it's twofold: number one, it's a drag on the business in terms of just people needs, for manually going in and reviewing those photos. But I think the bigger drag on the business is actually the delay that it occasionally takes to review photos. So if you have up to a two-day delay in approving a sitter photo, that's two days where the sitter can't find a job; that's missed employment opportunity for the sitter and, frankly, missed revenue opportunity for the company."

Building a custom model

Brown: "AutoML's big distinguishing feature is the fact that you can create custom models... We use the Cloud Vision API in conjunction with the AutoML platform, and the reason for that is that the Cloud Vision API was perfect for certain photo qualities we wanted to identify: labels like whether there are multiple people in the photo, whether someone is wearing sunglasses, or whether there are pets in the photo, as a few examples of things the Cloud Vision API is already quite good at detecting.
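As a rough illustration of the kind of rule Brown describes, here's a minimal Python sketch that screens Vision-API-style label annotations against a list of disqualifying attributes. The label names, the confidence threshold, and the function itself are illustrative assumptions, not Sittercity's actual logic:

```python
# Hypothetical disqualifying attributes; Sittercity's real list isn't public.
DISQUALIFYING_LABELS = {"sunglasses", "snapchat filter", "pet"}

def passes_label_check(labels, threshold=0.90):
    """Reject a photo if any confident label is on the disqualifying list.

    `labels` is a list of (description, score) pairs, mimicking the shape
    of label annotations returned by an image-labeling API.
    """
    for description, score in labels:
        if score >= threshold and description.lower() in DISQUALIFYING_LABELS:
            return False
    return True

# A photo confidently labeled "Sunglasses" is rejected; a plain portrait passes.
print(passes_label_check([("Sunglasses", 0.97), ("Person", 0.99)]))  # False
print(passes_label_check([("Person", 0.99), ("Smile", 0.85)]))       # True
```

In a real pipeline, the `labels` input would come from the Cloud Vision API's label-detection response rather than hand-built tuples.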

"But we needed a custom photo model to identify very specific things we care about that didn't have robust Cloud Vision API labels around them. Attributes like I said before: Snapchat filters. There's just a lot of nuance there.

"Another example is that what the Vision API identifies as a face might not necessarily meet our quality standards. For example, that face could be cut off in the image, it could be obscured by a phone, it could be blurry. Those are the things that we need custom models to identify: yes, it might be a face that the Cloud Vision API can understand, but does it meet those more specific requirements that we have in our photo review process?"
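Some of the face-quality checks Brown mentions, such as a face being cut off at the image edge or too small to identify, can be approximated with simple geometry on a detected face's bounding box. The sketch below is a hypothetical stand-in for that kind of gate, with made-up thresholds, and is not Sittercity's model:

```python
def face_meets_quality(face_box, image_size, min_frac=0.05):
    """Hypothetical quality gate for a detected face.

    `face_box` is (x, y, width, height) of the face bounding box;
    `image_size` is (width, height) of the photo. A face fails if any
    part of its box falls outside the image (cut off) or if it covers
    less than `min_frac` of the photo's area (too small to identify).
    """
    x, y, w, h = face_box
    iw, ih = image_size
    fully_inside = x >= 0 and y >= 0 and x + w <= iw and y + h <= ih
    large_enough = (w * h) / (iw * ih) >= min_frac
    return fully_inside and large_enough

# A well-framed face passes; one running off the left edge does not.
print(face_meets_quality((10, 10, 200, 200), (640, 480)))   # True
print(face_meets_quality((-20, 10, 200, 200), (640, 480)))  # False
```

Checks like blur or obscured features need a learned model rather than geometry, which is where the custom AutoML model comes in.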

A manual labeling process but with quick results

Brown: "The dataset itself is fairly straightforward, consisting of photo URLs and individual labels. How you get to those individual labels - we're lucky that we have a photo database in the millions, so we didn't have to use outside sources to gain those labels. We did manually label the data, and what that entails is going through and physically looking at a photo, and assessing whether or not some of those individual labels, like various Snapchat filters, apply to that photo.

"It is manual work. A lot of it is either applying brand-new labels, relabeling, or essentially cleaning up old labels, because you really want to be training with a 100 percent accurate dataset. One other interesting thing I'll say: one of the phenomenal things about AutoML is that we saw remarkably accurate early results with as few as 1,500 photos in our training dataset."
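For readers curious what such a dataset looks like in practice: AutoML Vision ingests training data as a simple CSV pairing image URIs with labels. The sketch below renders that layout from in-memory pairs; the bucket path and label names are placeholders, not Sittercity's data:

```python
import csv
import io

def build_training_csv(examples):
    """Render (image_uri, label) pairs in the one-row-per-image CSV
    layout used for single-label image classification training data."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    for uri, label in examples:
        writer.writerow([uri, label])
    return buf.getvalue()

# Placeholder bucket and labels, for illustration only.
print(build_training_csv([
    ("gs://example-bucket/sitter_001.jpg", "snapchat_filter"),
    ("gs://example-bucket/sitter_002.jpg", "acceptable"),
]))
```

The manual labeling effort Brown describes is exactly the work of filling in the second column of each row by eye.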

Reducing that two-day delay and cutting back man-hours

Brown: "We were able to cut down on man-hours. Previously [we had] a team of almost six people, at times dedicating their entire days to this. Again, the bigger impact is on the business side: reducing that two-day delay before a sitter's photo could be published to the app. Now it's really moments until a photo is accepted. Early results are pretty promising: reducing our photo review team and freeing up our team to focus on other, more important things for our customers."

This article was amended to note that Sittercity has 50 employees.