For most of the past sixty years, a rich critique of artificial intelligence was avidly pursued, mostly by insiders: people practicing AI, or interested onlookers in close proximity to the field.
Now the world finds itself in a strange state: Just as AI has gone mainstream, showing up everywhere from your Instagram feed to your smartphone voice assistant, many of those voices of criticism have been lost as a generation of thinkers passed away, people like MIT scientist Marvin Minsky and UC Berkeley professor of philosophy Hubert Dreyfus.
But a small contingent of critics remains, and the world needs them to keep its view of AI balanced as the technology becomes more entwined with everyday life. They include Judea Pearl, whose The Book of Why reminds AI practitioners of the need for causal reasoning; and University of Toronto professor Hector Levesque, whose test for common sense, the Winograd Schema Challenge, sets a high bar for conventional AI.
But none have been more prolific in the modern era in the critique of AI than NYU professor of psychology Gary Marcus. In five books and numerous articles in popular publications such as The New York Times and The New Yorker, Marcus has skewered the latest AI headlines to remind people of the limits of present-day AI.
Marcus has teamed with his colleague, Ernest Davis, a professor of computer science at NYU, to carry that fight forward in a new, highly accessible book titled Rebooting AI: Building Artificial Intelligence We Can Trust, which goes on sale today from Pantheon Books.
Rebooting AI is a refreshing, delicious critique of the hype around modern AI, the lazy assumptions of the media, and the dangers of letting automation go unquestioned. It's an excellent volume for anyone who cares to stop and think about what might be happening to the built world around them.
Marcus talked with ZDNet in advance of its publication to answer the question, Why write this book now?
"There are many reasons to write a book: one is to consolidate a position, and another is to address a mismatch between what AI is now and what it needs to do," says Marcus.
What AI needs is to be "trustworthy," Marcus and Davis emphasize, something it currently is not, in their view. Now that some kind of AI is everywhere, its shortcomings are no longer of purely academic interest, Marcus and Davis contend. A habit they call "overattribution," the tendency to give AI too much credit, "can actually be deadly," they write, citing the 2016 fatal crash of a Tesla on Autopilot while the human driver was reportedly watching a Harry Potter movie.
The book has echoes of past critiques over the decades, such as Drew McDermott's 1976 article, "Artificial Intelligence Meets Natural Stupidity." But that was a letter to fellow practitioners, a cautionary tale about professional hubris in the practice of AI.
Marcus and Davis's book has a different urgency, a need to tell individuals in plain language why the mythology of smart computers in the popular media is misleading. "The net effect of a tendency of many in the media to overreport technology results is that the public has come to believe that AI is much closer to being solved than it really is," they write.
Marcus and Davis propose a six-point checklist of what to ask oneself "whenever you hear about a supposed success in AI." They include "stripping away the rhetoric" to ask what the AI system "actually" did. In other words, ask about the science behind the hype.
That's sage advice when every week brings a computer breakthrough that sounds as if it is some form of sentient life. A recent example is the Allen Institute for AI's announcement of a language-processing program called "Aristo." The New York Times implied in its reporting that Aristo had mastered eighth-grade science topics, when in fact the underlying science is nothing close to that.
Within the delightful skewering of AI hype is a more serious indictment of the current practice of machine learning. The deep learning school of AI, Marcus contends, is not willing to investigate what he calls "cognitive models" of the world.
"I think it requires very careful analysis of the world in a way that philosophers are comfortable with, and a way people in classical AI sometimes did, but not with the right tools," says Marcus in a phone interview. "People in AI right now are just not interested, they're just interested in more data and a faster machine."
The heart of the critique is best expressed in the chapter "If Computers Are So Smart, How Come They Can't Read?" Today's prevalent form of AI, deep learning, has made enormous progress in modeling the probability distributions of words in sentences, as the Allen Institute's Aristo shows. But such systems always shy away from the question of meaning.
As Marcus sees it, that is a product of the fact that most AI practitioners today don't want to reflect on big questions of understanding but rather are focused on creating the next big computer model, generally by appropriating the artifacts of human society with little thought.
"People want to steal by approximating a big corpus of knowledge humans have done," says Marcus, alluding to AI data sets such as Common Crawl, used to train language-processing models. "The whole field of linguistics, thinking about what are the rules, all that is vital, but it's a whole lot more fun to collect some annotated databases," a practice that achieves new benchmarks but is not, he contends, "getting us further in advancing our understanding."
Systems such as OpenAI's massive natural language program, "GPT-2," introduced this past February, are "pretty impressive but totally incoherent," he points out.
By indicting the theft of today's AI models, Marcus is pushing against one of the most beguiling aspects of deep learning, which is precisely its ability to take any test of "reasoning" or "thinking" and reformulate it into an engineering challenge.
Levesque, in proposing his Winograd Schema Challenge several years back, urged scientists to "put aside any idea of tricks and shortcuts, and focus instead on what needs to be known, how to represent it symbolically, and how to use the representations." But the current best performance on his test, achieved this summer by a group from the UK's Alan Turing Institute, simply added millions of examples to the training data to boost the skill of Google's "BERT" language algorithm in picking the answer. The engineers, defying Levesque's admonition, trumpeted their achievement with a paper titled "A Surprisingly Robust Trick for the Winograd Schema Challenge."
Despite the criticism, Marcus says he is much closer to Marvin Minsky's enthusiasm for AI's potential than to Dreyfus's legendary skepticism. "We are very optimistic about what AI could do, and we are depressed about how little has been accomplished," says Marcus of himself and Davis. "We are not like Hubert Dreyfus who says what AI can't do."
If theft and engineering tricks have created an untrustworthy AI, and, potentially, an unsafe one, what, then, is the answer?
The book has suggestions of where to turn, but they are less convincing than the critique itself. The book's launch coincides with Marcus's recent founding of a company to develop AI technology, Robust.ai, of which he is founder and CEO. Marcus, who previously founded a startup and sold it to Uber, hopes the venture will realize some of the ideas he expresses in the book about where AI needs to go. "That's why I built a company," he says.
"We want to build tools to make something like Rosey possible," he says, referring to the robotic maid in the cartoon The Jetsons. "We want to build pieces of equipment that could be autonomous, or a cognitive engine that can be equivalent to the prefrontal cortex." Marcus isn't disclosing the investment, save to say that there was a "very strong seed round." The company is "hiring fast and starting to build prototypes," he says. "Maybe we will address some of the unknowns here, these are very hard problems," he says. "We don't want to say we are going to solve all these challenges, we are not delusional, but maybe we can make more progress."
As a scholar and critic, and now an entrepreneur, Marcus is engaged in a debate that sometimes breaks out on Twitter with the luminaries who defend current AI. They include Facebook's AI research director, Yann LeCun, and University of Toronto professor Geoffrey Hinton, who also works at Google's Brain unit.
One of the most startling claims of Rebooting AI is Marcus's contention that the core systems of deep learning built by LeCun, Hinton, and others, for all their obsession with engineering and benchmarks and data, actually draw upon some of those "rich" forms of knowledge for which he advocates. These are known as "priors," what Marcus characterizes as "some systems built-in with rich properties to begin with."
Take for example the most dominant form of deep learning neural network, the convolutional neural network, which has made vast progress in image recognition. The functional circuit at the heart of the CNN, the convolution, was an inspiration that LeCun had thirty years ago, observes Marcus. "The thing that Yann won his award for is an innate piece of wiring, he didn't learn it from the data," says Marcus, referring to LeCun's award for lifetime achievement, the ACM Turing Award.
"Fifty years from now, people will remember who LeCun is but won't understand why he was so anti-nativist," contends Marcus. The same goes for AlphaZero, the program built by Google's DeepMind that mastered chess and Go. AlphaZero, he points out, relies extensively on a decades-old search strategy known as Monte Carlo Tree Search. That technique "is a very structured prior," observes Marcus, without which the system couldn't function.
"Sure you have a neural net to classify patterns, but you put that in a context of a prior that says, 'If I go there, the other guy is going to maximize his benefit'," he explains. Deep learning's practitioners "sneak it in the back door" but fail to acknowledge the borrowing, claims Marcus, so that "they've kind of warped the conversation" about AI.
Until Robust.ai bears fruit, of course, scientists of deep learning may want to dismiss Marcus. To his observations they can reply, Well, it works, it works in a lot of circumstances, like predicting products you'll like online, or answering your natural-language queries on your smartphone, or translating your phrases in real time.
What right, then, has Marcus to criticize what works?
"You may want to insulate yourself by saying that people who have made criticisms haven't built something," Marcus acknowledges, "but that's obviously fallacious; if criticism is correct, then it has to be dealt with."
When Gregor Mendel, the monk who created the framework for modern genetics, made his discoveries about heredity in the 19th century, it was without a mechanism to explain his findings, and his discoveries went unnoticed for decades, Marcus observes.
But Mendel was on to something. "Before Mendel, people thought genetics was a blending process, rather than particulate," he reflects. "Mendel ruled that out; he figured out some properties of the system.
"We are, similarly, trying to identify some properties of the cognitive system that AI ought to have, and maybe I'll have that."
In the meantime, there is the book: for those who feel inundated by media hype, a delightful breath of fresh air, and for the uninitiated, a nice introduction to the subject.