All that glitters is not quantum AI

You should be skeptical of current attempts to make a quantum computer that enhances artificial intelligence.
Written by Tiernan Ray, Senior Contributing Writer

Why hasn't the field of artificial intelligence created the equivalent of human intelligence? Is it because the problem, "artificial general intelligence," isn't well understood, or is it because we just need much faster computers, specifically quantum computers?

The latter view is the source of a vibrant field of research, "Quantum Machine Learning," or QML. 

But a bit of skepticism is warranted.

"We need to look through a skeptical eye at the idea that quantum makes things faster and therefore can make machine learning advances," says Jennifer Fernick, the head of research at NCC Group-North America, a cyber-security firm based in Manchester, U.K.

Fernick was a keynote speaker a week ago at the O'Reilly A.I. conference in New York. She sat down this week to tell ZDNet why she's skeptical of the hype emerging around the pairing of quantum computing and A.I.

"Right now, if we look at work in QML, people are experimenting with things such as, could we build a Support Vector Machine (SVM) or a Boltzmann Machine — can we build these existing canonical machine learning models — in the quantum machine," observes Fernick. She is referring to two older models of machine learning that emerged in the 1980s and the 1990s, prior to today's deep learning systems. 

Recent quantum work in speeding up machine learning is "cool," says Jennifer Fernick, head of research for cybersecurity firm NCC Group, "but not necessarily inherently a revolution in A.I." (Image: NCC Group)

Indeed, recent research by IBM has attempted to show that even today's small quantum systems, such as a 2-qubit device, can in theory compute things that are out of reach for "classical" computers built on the flow of electrons. 

Also: Is IBM's AI demonstration enough for a quantum killer app?

The IBM work is part of a recent push to find uses for quantum computing before large systems are commercially viable. The trend centers on "shallow quantum circuits" running on what are called "Noisy Intermediate-Scale Quantum," or NISQ, devices.
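To make "shallow" concrete, the circuits in question are only a few gates deep. The sketch below simulates a depth-two, 2-qubit circuit (a Hadamard gate followed by a CNOT, producing an entangled Bell state) directly in NumPy; real NISQ experiments run somewhat larger circuits, and with noise.

```python
# Sketch: simulate a depth-2, 2-qubit circuit (H on qubit 0, then CNOT).
# This is the scale of "shallow" circuit the NISQ literature refers to.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # |00>
state = np.kron(H, I) @ state                  # H on the first qubit
state = CNOT @ state                           # entangle the pair

# Result: the Bell state (|00> + |11>)/sqrt(2)
print(np.round(state, 3))                      # [0.707 0 0 0.707]
```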

However, attempts in NISQ to speed up a shallow machine learning task, such as an SVM or a Boltzmann Machine, may not really be achieving much, she says. 

"Quantum computing can make certain things faster if the underlying math has a structure that is exploitable via quantum and we have the right quantum algorithms," she says. "Before we jump on the bandwagon, we need to ask, What are the true algorithmic innovations?" 

In the case of cryptography, one of Fernick's areas of focus as a security specialist, quantum computing is "clearly worth it," she says. 

A quantum computer can render tractable the operation of "factoring" a given number into its component prime numbers, a task a classical computer cannot finish in any practical amount of time once the numbers are large enough.
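The source of that cryptographic advantage is concrete: Shor's algorithm reduces factoring a number N to finding the period of f(x) = a^x mod N, and a quantum computer can find that period efficiently where classical search cannot. The sketch below walks through the classical reduction on a toy number; the brute-force period search is the one step quantum hardware replaces.

```python
# Sketch of Shor's reduction: factoring N reduces to finding the period r
# of f(x) = a^x mod N. The brute-force loop below is the only step a
# quantum computer replaces (with an efficient period-finding circuit).
from math import gcd

def find_period(a, N):
    # Classical brute-force period finding: exponential in the bit
    # length of N. This is the step quantum hardware makes fast.
    x, val = 1, a % N
    while val != 1:
        x += 1
        val = (val * a) % N
    return x

N, a = 15, 7                 # toy example; gcd(a, N) must be 1
r = find_period(a, N)        # r = 4 for a=7, N=15
if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    print(f"period {r}; factors of {N}: {p} x {q}")   # 3 x 5
```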

Also: Intel offers AI breakthrough in quantum computing

"For machine learning, I feel like we've translated that same enthusiasm with cryptanalysis, but there is not even the theoretical demonstration that we are going to have that same impact," says Fernick. 

Fernick's skepticism finds inspiration in the relatively young field of computational complexity theory. In particular, she is enamored of the work of computer scientist Scott Aaronson, a professor at the University of Texas at Austin, formerly of the Massachusetts Institute of Technology.

Aaronson, whom Fernick deems the most interesting mathematician alive, has pointed out that simply speeding up the computation of a given learning model may not be the key to artificial general intelligence. Does simulating a human mind require exponential computing time, or doesn't it? he asks. If it did, the speed-up from quantum computing might genuinely be an advantage. 

According to Aaronson, it is neither "trivially true" nor "trivially false" that simulating a human brain is an exponentially difficult computing operation. That means it's not clear that AGI is the kind of operation that is "inefficient" on classical computers, where quantum computing stands to gain.

On the contrary, Aaronson implies it may in fact be the case that something is going on in the mind that is achievable in "polynomial time," a far less demanding amount of computation than exponential time. As Aaronson wrote in a 2011 paper, "Why Philosophers Should Care About Computational Complexity," it's possible the software that describes the mind is "a compact, efficient computer program" that "includes representations of abstract concepts, capacities for learning and reasoning, and all sorts of other internal furniture that we would expect to find in a mind."
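The gap between those two regimes is easy to quantify. Here is a back-of-the-envelope comparison, assuming a hypothetical machine that performs 10^12 operations per second:

```python
# Back-of-the-envelope: polynomial vs. exponential cost for input size n,
# assuming a (hypothetical) machine doing 1e12 operations per second.
SECONDS_PER_YEAR = 3.15e7

for n in (50, 100, 200):
    poly = n ** 3                    # polynomial time, e.g. O(n^3)
    expo = 2 ** n                    # exponential time, O(2^n)
    print(f"n={n}: n^3 -> {poly / 1e12:.1e} s, "
          f"2^n -> {expo / 1e12 / SECONDS_PER_YEAR:.1e} years")

# At n=200, the polynomial algorithm finishes in microseconds, while the
# exponential one needs ~5e40 years, vastly longer than the age of the
# universe.
```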

Also: AI pioneer Sejnowski says it's all about the gradient

That observation, that the right AGI might be computationally less demanding, not more, accords with Fernick's instincts.

"Oftentimes, the revolution we might be looking for in A.I. is not a minor speed-up of an existing problem we might already do efficiently."

Things such as building a quantum SVM are "cool, but not necessarily inherently a revolution in A.I. It doesn't mean we will suddenly get a lot better ML."

Fernick's own career was inspired by such questions of what is efficiently computable. "I hated computers into my late teens," she says, preferring the field of neuroscience. That changed when she took a course in computers as an undergrad at the University of Toronto. 

"I had a wonderful professor, Diane Horton," she recalls. "On the last day of class, as she was putting papers away, she said to me, 'if you keep studying computer science, there are some topics you might get exposed to, and one is that there are things that are not computable before the heat death of the universe." 

Questions about the complexity of computation go back to the early days of both computer science and A.I.

"The very early A.I. practitioners were starting to relate what they were doing to computational hardness," Fernick says of work in the 1940s and 1950s on symbolic logic as it relates to cognition.

"That hasn't been as dominant a theme in the last couple decades in computer science," she observes. Now, "it's time to start asking those really deep questions yet again," says Fernick. 

As for quantum computing itself, the place to look to deduce whether it will have any advantage, in A.I. or anything else, says Fernick, is a 2013 paper in the journal Science by researchers M. H. Devoret and R. J. Schoelkopf, "Superconducting Circuits for Quantum Information: An Outlook." It proposes seven "milestones" that have to be met in engineering quantum systems. 

"The core insight from that paper — the one I found most interesting — is that it's really not the number of qubits that we have, say, 100 versus 110 qubits, but rather, what among those seven milestones, which engineering problems, have we solved?" Fernick says.

Near the top of the stack of seven milestones, far beyond today's NISQ devices, are the quantum algorithms that will ultimately drive operations on logical qubits. 

Science is "still very much in the infancy of quantum algorithms," observes Fernick. "It's very naive to think that the quantum algorithms we have now are what we will be excited about 20 years from now."

Those still-undiscovered algorithms are probably a better place to look for a quantum A.I. gain. 

Muses Fernick, "Wouldn't it be more interesting to exploit those quantum physical properties in an entirely new way to make algorithms that are very different?"
