Myth-busting AI won’t work

Myths, including myths about AI, arise in response to the unknown. Debunking myths about AI won’t create knowledge.


Tiernan Ray for ZDNet.

People have myths because that is one kind of response to the unknown. If you take away their myths, you may leave them with nothing. 

That's why a very well-intentioned, thoughtful effort by scholars at the Mozilla Foundation to debunk nonsense about artificial intelligence is bound to fail. 

The new website, AI Myths, purports to debunk pernicious lies and mischaracterizations about artificial intelligence, such as the notion that AI has agency, or that "superintelligence is coming soon."

What the very astute authors have failed to confront is that people have no idea what AI is. The authors have dutifully surveyed the landscape to come up with examples of ridiculous claims. But it doesn't appear they've spent any time talking to ordinary individuals about what those individuals might actually know about the science of AI, what it means for AI to be a scientific discipline. 

In fact, most people haven't the slightest inkling of how AI in any form functions as a science and as a technology. 

There are many reasons why that is, and many who can be blamed if one wants to place blame. Journalists have indulged the myths the website decries, such as imputing agency. But the greater sin is one of omission: failing to do real science journalism that would explain to a lay audience the actual practice of artificial intelligence. Articles could discuss a statistical approach that starts with counting things, in the simplest sense, and then moves up the chain of thought to computer programs that improve their estimate of a probability distribution via gradient descent.
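To make that progression concrete, here is a toy sketch (an illustration of the general idea, not anything from the article or from Mozilla's site): estimating the probability of a coin coming up heads, first by simple counting, then by nudging a parameter with gradient descent to minimize the negative log-likelihood. The data and learning rate are invented for the example.

```python
# Toy data: 1 = heads, 0 = tails.
data = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

# 1) Counting: the maximum-likelihood estimate is just the frequency.
count_estimate = sum(data) / len(data)  # 7 heads out of 10 -> 0.7

# 2) Gradient descent on the negative log-likelihood of a Bernoulli model.
p = 0.5    # initial guess for the probability of heads
lr = 0.01  # learning rate (step size)
for _ in range(1000):
    # Derivative with respect to p of -sum(x*log(p) + (1-x)*log(1-p))
    grad = sum(-(x / p) + (1 - x) / (1 - p) for x in data)
    p -= lr * grad
    p = min(max(p, 1e-6), 1 - 1e-6)  # keep p inside (0, 1)

print(count_estimate, round(p, 3))  # both approaches arrive at 0.7
```

Both routes land on the same answer; the point of the gradient-descent version is that the same "adjust parameters to reduce error" loop scales up to the millions of parameters in a neural network, where no counting shortcut exists.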

Also: Why is AI reporting so bad?

Those are tough concepts, and really hard to explain in simple terms, but that's what science journalism is supposed to be for. 

Corporations have not helped. They have sought to obfuscate in subtler ways, sometimes because journalists ask terrible questions, often because what corporations would like to promote is far in excess of their actual achievements. As MILA's Yoshua Bengio remarked in February at a meeting with a small group of reporters including ZDNet, if corporations won't tell you how their AI works, it's because there's not much there. "It's hidden because it's making it seem important," said Bengio. "Companies make it look a lot more sophisticated than it is."

Scholars themselves have rarely taken the time to present their science to the public, at least not in an accessible form. Perhaps scientists rarely took time away from their work to explain science even in the era of Einstein and such thinkers. But the lack of discussion has left the average individual with no concept of what people such as Bengio actually do. 

In other words, the myths about AI are part of a general trend of science illiteracy. No myth-busting is going to help when there is no concept of science to fall back on. 

The original sin of all these parties, scientists, corporations, and journalists alike, is that they left aside the most important piece of the puzzle: the computer. Algorithms, good or bad, don't happen in a vacuum; they happen inside a machine of capacitors and resistors and digital ones and zeros. AI doesn't have agency because it is the product of a machine, which is in turn a tool. People use the tool to do things, just as people have always used tools. 

Also: No, this AI can't finish your sentence

A general lack of understanding of AI grows out of a general lack of understanding of computers. Somehow, the entire ecosystem of parties in AI has been comfortable talking mostly in high-level concepts while ignoring the humble fact of the computer.

It's a pity, because the fact of the computer goes to the very profound critique brought by the Mozilla authors and those whose work they cite, such as Fanny Hidvégi of the organization Access Now. Hidvégi and others have been pointing out for a long time now how obfuscation allows humans to escape responsibility for what humans do to one another, in this case via technology. It's a critique that goes back to Norbert Wiener, at least, but it's a critique that is lost if one fails to return to the central fact that algorithms exist in a tool built by people called a computer. 

Given science illiteracy, given the forgotten role of the computer, the authors at the Mozilla foundation have unfortunately framed the discussion backwards. The lead statement on the site reads, 

With every genuine advance in the field of 'artificial intelligence,' we see a parallel increase in hype, myths, misconceptions and inaccuracies. These misunderstandings contribute to the opacity of AI systems, rendering them magical, inscrutable and inaccessible in the eyes of the public.

Misunderstandings actually originate in the opacity of the discipline; they don't create it. The myths emerge from ignorance, and they persist because no one has bothered to explain the science. 

The truth of these authors' critique, which is that the myths are laughably bad, is grasped immediately by those who have some knowledge to fall back on. The rest of the world is likely to shrug its shoulders and go back to the myth. 
