Artificial general intelligence is a Rorschach Test: Perhaps we need orangutans?

A panel discussion between Facebook’s Yann LeCun and fellow AI thinkers debates whether the term artificial general intelligence even means anything. Perhaps the answer is machines more like orangutans.


Artificial general intelligence, or "AGI," the idea of a machine that can approach human levels of cognition, is a great topic to get people all worked up. Because no one can really define it, it serves as a Rorschach test, onto which one can project whatever thoughts and feelings they care to. 


The result was a spirited discussion this past Friday night at John Jay College in Manhattan, site of the World Science Festival, now in its twelfth year. The evening's discussion, which played to a packed auditorium, was called "Making Room for Machines: Getting Ready for AGI," and moderator Daniel Sieberg, who is CEO and founder of a startup called iO, suggested that what was at stake was "What happens if we succeed or if we don't" in creating AGI and "Who are the winners and who are the losers?"

But the panelists, for the most part, were not inclined to believe AGI was anywhere near coming into existence, and disputed whether it's even a valid way to frame the goals of AI research. 

The group was composed of practitioners of artificial intelligence and those who are not practitioners but very much interested: Facebook's head of AI, Yann LeCun; former world chess champion Garry Kasparov; AI ethicist Shannon Vallor, who holds a visiting research position at Google; and Hod Lipson, director of the Creative Machines Lab at Columbia University.

"I actually hate the phrase AGI," LeCun said, when his turn first came to speak.

"I don't think it represents reality. The technology we have today is nowhere near anything like human intelligence; it's a misnomer," he said. Humans, in fact, don't have a "general intelligence" themselves, he noted; humans are more specialized than we like to think of ourselves.  

Still, LeCun said, "I am optimistic we will reach it in a few decades."

Kasparov called AGI "fake news," pointing out that machines such as AlphaZero, Google's system that can beat humans at chess, go, and shogi, are operating in a "closed system." 


Yann LeCun, Facebook's A.I. leader, center, during the panel discussion on artificial intelligence at the World Science Festival in New York. 

Tiernan Ray for ZDNet

"It's specialized," he noted, rather than an intelligence that can generally navigate the world. It evokes the famous debates of Spinoza and Descartes, Kasparov said, whether a mind can exist without a body. Presumably, a body would let the machine explore a much larger terrain than such closed systems. 

Lipson offered that robotics is a pathway of that kind, the ability for machines to learn by navigating the world. LeCun concurred, referencing the work of Sergey Levine, a U.C. Berkeley professor whose lab does a lot of work on what might be called self-supervised, or semi-supervised, learning in robotics systems. The same kind of work is going on at Facebook, LeCun noted. "One of the most interesting areas today is learning models of the world," he offered. "It would be a step toward the kind of learning humans do, which is different from supervised learning or reinforcement learning," he added. 


Sieberg, the moderator, put up a video on the screen above the panelists showing a baby orangutan that falls over laughing when shown a magic trick. "This baby orangutan has a model of the world," observed LeCun. "Baby humans have this as well, there are a lot of things we learn about the world; how do we come up with new paradigms that let machines learn a model of the world?"

Sieberg asked about Hollywood, which prompted Kasparov to launch into a rumination about how machines "can't be evil," because "humans still have the monopoly for evil." LeCun observed that people are stuck in scenarios from "Terminator" that he dismissed as unreal. "Ex Machina," he said, is "a beautiful movie, but they get absolutely everything wrong!" Among the things the movie flubs is "this myth of one single genius who will invent AI -- it's just not going to happen that way."

LeCun offered that AGI will probably happen as a "progressive" development. It's "not going to be a singularity," he said, referring to Ray Kurzweil's idea of a kind of threshold beyond which humans suddenly integrate with machine intelligence. "I don't believe the singularity effect of a hard take-off," he said, "because any physical process in the real world has friction, which limits its progress."

Vallor seemed to agree with LeCun that the world is nowhere near AGI, saying "We don't need to worry in the immediate future; there is no sign we're on the cusp of this breakthrough." The term itself "can be misleading," said Vallor, and she pointed out that intelligence itself is a cultural construction, something that "different cultures define in different ways."


There were nods to creativity, such as when Kasparov said that what's required is a sense of failure, the idea of not knowing what the outcome may be. "Everything now, it's mimicry," in current AI, said Kasparov, but "with creativity, you don't have a recipe." Vallor seemed to agree with this line of thought, arguing that "we won't have a machine anytime soon that creates because of a burning need to create," like the human drive. LeCun added that there's something special in the reaction of an audience that is evoked by a human musician creating on stage. 

"If a machine performs at the level of John Coltrane, it's interesting, but what emotion will it produce?" he asked rhetorically. 

Lipson, the Columbia professor of robotics, however, was not having it. "I disagree. Human creativity is on the chopping block," he pronounced. The rapid improvements in so-called generative adversarial networks, or GANs, which can create impressive images, were evidence that the creative pursuits are fair game. 

"Creativity is information," he declared. 


When challenged by Kasparov, Lipson responded that "we are now debating the nuances of creativity," a sign, to Lipson, that "we have come a long way in the ability of machines to be creative."

The biggest flash point, predictably, came when Sieberg asked the panelists to reflect on AI ethics. 

Kasparov charged both Google and Facebook with violating ethics. Google, he noted, had reportedly been helping China to develop a sanitized search engine called "Dragonfly." Was Google still working on that, he demanded to know of Vallor. "Not to my knowledge," she said, and offered that technology itself "doesn't have an authoritarian character, but functions as a mirror, a magnifier of that potential that's already in society." 

Transparency is the answer, Lipson chimed in, now that not only can the government watch citizens, but citizens can watch the government. "When everyone is watching everyone," that's the solution, he said. But "transparency itself doesn't grant rights to act on that," Vallor replied, noting the uneven power dynamics in society. 


At this point LeCun, mildly exasperated, admonished his fellow panelists that "we are talking about things that don't have anything to do with AI." The same concerns of government control could have been leveled against television and radio and other technologies, he explained. Vallor objected that those technologies didn't amplify an existing social "drift" toward authoritarian rule the way AI might, but LeCun shot back, "Yes, radio helped create fascism!"

Having exhausted almost every blot of the Rorschach test of AGI after an hour and a half, Sieberg asked the panelists for closing thoughts. LeCun, picking up on Vallor's suggestion that humans are afraid of their own dark will to dominate, observed, "We are social creatures, and we want to dominate, so perhaps we should build systems that are like orangutans, because they don't have that quality."