Google’s TensorFlow is ready for quantum, but is AI ready for quantum?

Google this week unveiled a new version of its TensorFlow toolkit for AI that builds circuits to run on quantum computing hardware. But there are some caveats for deep learning scientists to consider before they jump into the quantum realm.
Written by Tiernan Ray, Senior Contributing Writer

Spoiler alert: Quantum computers may not make your cat-and-dog classifiers go any faster.

Google this week announced a new version of its TensorFlow framework for building machine learning models, a kind of mash-up between TensorFlow and Cirq, another framework developed at Google that's designed for building quantum computing algorithms. Together, they could let you build a deep learning model to run on a future quantum computer in just a few lines of Python, as the sketch below suggests.
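
For a flavor of what those lines look like, here is a minimal sketch, assuming the TFQ Python API as documented at release; the one-qubit circuit and the parameter name "theta" are illustrative, not taken from the paper:

```python
# A minimal sketch of the Cirq-plus-TensorFlow-Quantum workflow (TFQ 0.x API).
import cirq
import sympy
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')  # a trainable rotation angle

# A one-qubit parameterized circuit -- the "quantum neural network."
model_circuit = cirq.Circuit(cirq.ry(theta).on(qubit))

# Wrap the circuit as a Keras layer; measuring Pauli-Z yields a value in [-1, 1].
quantum_layer = tfq.layers.PQC(model_circuit, cirq.Z(qubit))

# A batch containing one (empty) input circuit, serialized as a string tensor.
inputs = tfq.convert_to_tensor([cirq.Circuit()])
print(quantum_layer(inputs))  # the expectation value, computed in simulation
```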

But there's a caveat buried in the research paper explaining TensorFlow Quantum.

"To the best of the authors' knowledge," writes lead author Michael Broughton of Google Research, "there is no strong indication that we should expect a quantum advantage for the classification of classical data using QNNs in the near term." Classical data, in this case, means your typical distribution of photos to classify, say, or any other data set you just have lying around.


A sketch of how classical neural nets -- the blue "DNNs" -- are combined with "quantum neural network" circuits running on quantum hardware to form a hybrid machine learning approach.

Broughton et al.

Rather, TensorFlow Quantum, or TFQ as it's generally referred to, envisions using today's "noisy, intermediate-scale quantum" machines, or NISQ, to run problems that leverage data directly generated by a quantum mechanical process. That could be, for example, "states of quantum chemistry simulations used to extract information about chemical structures and chemical reactions." Or the famous "many-body problem" in physics: understanding the complex interactions between electrons, atoms, molecules, or various other entities.

Within that restriction, there is a lot that can be done, albeit most of it in simulation, unless you have access to Google's Sycamore quantum computer, unveiled last year, or another system. In addition to Cirq's built-in simulator, TFQ includes "qsim," a new simulator that Google claims offers a dramatic speedup for many kinds of machine learning tasks.

TFQ starts from the premise that we are in the second age of quantum approaches to machine learning. In the first age, several years back, the quantum computer's only job was to speed up the matrix multiplications and other linear algebra operations at the heart of deep learning. (It turns out linear algebra is also at the heart of quantum physics, so the two mesh well, at least in theory.)

In recent years, however, the field has moved past the use of qubits as simple "black boxes" for running linear algebra. We're now firmly in the age of "quantum neural networks," or QNNs, according to Broughton, who holds an additional appointment at the School of Computer Science at the University of Waterloo in Ontario.

The QNN phase treats the "unitary," the term for the reversible operation a quantum circuit performs, as a function that can have learned parameters. That means quantum circuits can, in theory, be automatically differentiated using backpropagation, just as classical neural nets are, as in the sketch below.
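
Here's a hedged sketch of what that differentiation looks like in practice; it assumes TFQ's documented Expectation layer and ParameterShift differentiator, with an illustrative one-qubit circuit whose gradient is known analytically:

```python
# Sketch: backpropagating through a quantum expectation value with TFQ,
# using the hardware-compatible parameter-shift gradient method.
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')
circuit = cirq.Circuit(cirq.ry(theta).on(qubit))

expectation = tfq.layers.Expectation(
    differentiator=tfq.differentiators.ParameterShift())

values = tf.Variable([[0.5]])  # the current value of theta
with tf.GradientTape() as tape:
    out = expectation(circuit,
                      symbol_names=[theta],
                      symbol_values=values,
                      operators=cirq.Z(qubit))
grad = tape.gradient(out, values)  # analytically, d<Z>/dtheta = -sin(theta)
```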

Broughton has teamed up with 19 colleagues from Google and a variety of other institutions, including NASA Ames and the University of Erlangen-Nuremberg, to assemble fairly extensive documentation of the new software.

Recipes described with code in the article include a "hybrid quantum-classical convolutional neural network classifier." Using the same principle of "translation invariance" that underlies convolutional neural networks, the QNN can be used to classify quantum states based on their "translational symmetry."
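
The paper's recipe is more elaborate, but the hybrid pattern itself can be sketched briefly. This assumes TFQ's documented PQC layer, and the four-qubit circuit layout here is illustrative rather than the paper's:

```python
# Hedged sketch of a hybrid quantum-classical classifier (TFQ 0.x API).
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubits = [cirq.GridQubit(0, i) for i in range(4)]
params = sympy.symbols('theta0:8')  # eight trainable circuit parameters

# Parameterized circuit: rotations, entangling CZ gates, more rotations.
qnn = cirq.Circuit()
qnn += [cirq.ry(params[i]).on(q) for i, q in enumerate(qubits)]
qnn += [cirq.CZ(a, b) for a, b in zip(qubits, qubits[1:])]
qnn += [cirq.ry(params[4 + i]).on(q) for i, q in enumerate(qubits)]

model = tf.keras.Sequential([
    # Inputs are quantum data: circuits serialized as strings.
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    # The quantum part: expectation of Z on each qubit, run in simulation.
    tfq.layers.PQC(qnn, [cirq.Z(q) for q in qubits]),
    # The classical part: an ordinary dense head makes the final call.
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```

The quantum layer hands ordinary real-valued tensors to the classical head, which is what lets the hybrid be trained end to end with standard Keras tooling.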

There are also applications to "learn" the control of quantum circuits themselves in a more optimal fashion. There's a version of a "discrete optimization problem" known as "MaxCut," which was developed previously and is given new firepower with TFQ. The authors even have an extended section on "meta-learning," or "learning to learn," picking up on earlier work by some of the authors. The basic premise is that a plain-old classical recurrent neural network learns parameters that are then handed to the quantum unitary operations, so that the classical side helps to optimize the quantum side, as sketched below.
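
A hedged sketch of that loop, with illustrative wiring rather than the paper's actual setup: a small classical LSTM proposes the circuit parameter at each step and is fed back the measured result.

```python
# Hedged sketch of "learning to learn": a classical LSTM proposes the quantum
# circuit's parameter and is fed back the measured expectation value.
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')
circuit = cirq.Circuit(cirq.ry(theta).on(qubit))
expectation = tfq.layers.Expectation()

cell = tf.keras.layers.LSTMCell(8)  # the classical optimizer network
propose = tf.keras.layers.Dense(1)  # maps the LSTM output to a parameter

state = cell.get_initial_state(batch_size=1, dtype=tf.float32)
feedback = tf.zeros([1, 1])
for step in range(5):
    out, state = cell(feedback, state)
    theta_val = propose(out)  # the proposed circuit parameter
    feedback = expectation(circuit,
                           symbol_names=[theta],
                           symbol_values=theta_val,
                           operators=cirq.Z(qubit))
# Training the LSTM itself would wrap this loop in a GradientTape and
# backpropagate the cumulative measured "loss" into its weights.
```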

That last point should hover in the air for a while. For there is one other big caveat for deep learning scholars looking to use TFQ. The QNN as currently pursued is a hybrid system, referred to as a "hybrid quantum-classical" neural network, or HQCNN. What that means is that some of its parts would be quantum circuits, first run in simulation and ultimately on quantum hardware, while other parts of the overall program would be classically computed deep learning code meant to remain in the classical realm.


An outline of the training procedure for a hybrid classical-quantum neural network, with some work operating on the quantum hardware and some being evaluated in the classical domain.

Broughton et al.

In particular, to find the right parameters for the quantum gates, the classical neural network acts as a kind of front end, initializing the weights and passing them to the quantum unit. Then, after each unitary operation has presumably been run on the quantum hardware, something has to be brought back out of the quantum realm through measurement: the classical part of the neural net has to retrieve output from whatever the quantum circuit has discovered in its tensor operations on the input data. That involves sampling the quantum state and estimating how expectation values have shifted with the weight changes, in order to further update the weights as part of the learning procedure of stochastic gradient descent.
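
That estimation step has a concrete form. One common recipe, and one of the gradient methods TFQ ships, is the parameter-shift rule; here's a numeric illustration that needs no quantum libraries, resting on the fact that a single RY rotation yields an expectation value of cos(theta):

```python
# Numeric illustration of the parameter-shift rule (no quantum libraries).
import numpy as np

def expectation(theta):
    # <Z> measured after RY(theta)|0> is exactly cos(theta).
    return np.cos(theta)

theta = 0.7
# Evaluate at theta +/- pi/2 and difference the two measured values:
grad = (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2)) / 2
print(grad, -np.sin(theta))  # both print approx. -0.644; the rule is exact here
```

On real hardware, each of those two evaluations is itself an average over repeated measurements, which is one of the extra layers of stochasticity the authors point to.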

All of this back and forth, while quite beautiful, injects levels of separation between the quantum and classical realms that machine learning scientists will have to take into account. It's possible to end up with what the authors term "triply stochastic gradient descent." When one is sampling from a sample of a sample, questions of data distribution and data drift can conceivably become large ones.

All of which is to say that, with the kit in hand, one could do some amazing experiments, though deep learning scientists will find some even deeper theoretical issues to ponder in all this.
