Artificial intelligence is a work in progress.
Or, as some critics might say, a work in abject regress that will wreck humanity's remaining faith in itself.
Even some tech companies seem a touch unsure about their own AI systems. Why, not too long ago IBM announced it was withdrawing from the facial recognition business altogether.
We'll come back to IBM in a moment. You see, I've just been handed the results of a study that leaves a lot to consider.
Performed by Wunderman Thompson Data, the data arm of the marketing company, the study examined whether well-known visual AI systems interpret images of men wearing PPE masks the same way they interpret images of women.
The researchers took 256 images of each gender -- of varying quality and taken in varying locations -- and ran them through generic models trained by some of the larger names in tech: Google Cloud Vision, Microsoft Azure's Cognitive Services Computer Vision, and IBM's Watson Visual Recognition.
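The study's core comparison is essentially a label tally: for each gender, how often does a vision API return a "mask" label, and what does it return instead? Here's a minimal sketch of that analysis in Python. The label lists below are invented for illustration (the real study used 256 images per gender and the commercial APIs named above), and the function names are mine, not the researchers'.

```python
from collections import Counter

# Hypothetical per-image label responses, standing in for what a
# vision API's label-detection endpoint might return. These few
# entries are invented for illustration only.
labels_by_gender = {
    "male": [
        ["mask", "person"],
        ["facial hair", "person"],
        ["mask", "glasses"],
    ],
    "female": [
        ["duct tape", "person"],
        ["mask", "person"],
        ["fashion accessory", "lipstick"],
    ],
}

def mask_rate(images):
    """Fraction of images whose label list contains 'mask'."""
    hits = sum("mask" in labels for labels in images)
    return hits / len(images)

def top_labels(images, n=3):
    """Most common labels across a set of images."""
    counts = Counter(label for labels in images for label in labels)
    return counts.most_common(n)

for gender, images in labels_by_gender.items():
    print(gender, mask_rate(images), top_labels(images))
```

Comparing `mask_rate` across the two groups is what surfaces the disparity the study reports; `top_labels` is what turns up the "duct tape" and "gag" mislabels.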
The results were a little chilling.
Though none of the systems was particularly stellar at spotting masks, they were twice as likely to correctly identify a mask on a male wearer as on a female wearer.
So what did they think the women were wearing? Well, Google's AI identified 28% of the images as being women with their mouths covered by duct tape. In 8% of cases, the AI thought these were women with facial hair. Quite a lot of facial hair, it seems.
IBM's Watson took things a little further. In 23% of cases, it saw a woman wearing a gag. In another 23% of cases, it was sure this was a woman wearing a restraint or chains.
Microsoft's Computer Vision may need a little recalibration too. It suggested that 40% of the women were wearing a fashion accessory, while 14% were wearing lipstick.
Such results may make many wonder where these AIs get their ideas from. A simple answer might be "men."
The researchers, however, suggested the machines were looking for inspiration in "a darker corner of the web where women are perceived as victims of violence or silenced."
It's hard not to imagine that's true, and it's something with potentially awful consequences as we disappear ever more readily into AI's odorous armpit.
The researchers say they're not trying to demonize AI. (AI is quite good at doing that for itself.)
Instead, as Wunderman Thompson's director of data science Ilinca Barsan put it: "If we want our machines to do work that accurately and responsibly reflects society, we need to help them understand the social dynamics that we live in to stop them from reinforcing existing inequalities through automation and put them to work for good instead."
Still, when I asked the researchers what they thought about IBM withdrawing from the facial recognition business, they replied: "Our research focused on visual label recognition rather than facial recognition, but if it's this easy for an (admittedly general) AI model to confuse someone wearing a mask with someone being gagged or restrained, then withdrawing from a business that is so prone to misuse, privacy violation, and training bias seems to be the right (and smart) thing to do for IBM."
Humanity hasn't done too good a job of helping machines understand vital elements. Humanity itself, for example. Partly because machines just don't have that instinct. And partly because humans struggle to understand themselves.
How often have you been driven toward head-butting walls during even the briefest encounter with customer service AI?
I fear, though, that too many AI systems have already been dragged into a painfully biased view of the world, one from which they may never entirely return.
How much more darkness does that risk propagating?