Though none of the systems were particularly stellar at spotting masks, they were twice as likely to identify male mask-wearers as female mask-wearers.
So what did they think the women were wearing? Well, Google's AI identified 28% of the images as being women with their mouths covered by duct tape. In 8% of cases, the AI thought these were women with facial hair. Quite a lot of facial hair, it seems.
IBM's Watson took things a little further. In 23% of cases, it saw a woman wearing a gag. In another 23% of cases, it was sure this was a woman wearing a restraint or chains.
Microsoft's Computer Vision could use some more accurate coding, too. It suggested that 40% of the women were wearing a fashion accessory, while 14% were wearing lipstick.
Such results may make many wonder where these AIs get their ideas from. A simple answer might be "men."
The researchers, however, suggested the machines were looking for inspiration in "a darker corner of the web where women are perceived as victims of violence or silenced."
It's hard not to imagine that's true, and it's something with potentially awful consequences as we disappear ever more readily into AI's odiferous armpit.
The researchers say they're not trying to demonize AI. (AI is quite good at doing that for itself.)
Instead, as Wunderman Thompson's director of data science Ilinca Barsan put it: "If we want our machines to do work that accurately and responsibly reflects society, we need to help them understand the social dynamics that we live in to stop them from reinforcing existing inequalities through automation and put them to work for good instead."
Still, when I asked the researchers what they thought about IBM withdrawing from the facial recognition business, they replied: "Our research focused on visual label recognition rather than facial recognition, but if it's this easy for an (admittedly general) AI model to confuse someone wearing a mask with someone being gagged or restrained, then withdrawing from a business that is so prone to misuse, privacy violation, and training bias seems to be the right (and smart) thing to do for IBM."
Humanity hasn't done too good a job of helping machines understand vital subjects. Humanity itself, for example. Partly because machines just don't have that instinct, and partly because humans struggle to understand themselves.
How often have you been driven toward head-butting walls during even the briefest encounter with customer service AI?
I fear, though, that too many AI systems have already been dragged into a painfully biased view of the world, one from which they may never entirely return.
How much more darkness does that risk propagating?