If quantum computing is ever to transcend the realm of experiment and become a legitimate industry, it will need to sustain a legitimate business model. Businesses invest in machines and technologies — or, in the case of the cloud, in the services that provide them — once they have become reliable: They function as their users and operators expect them to. If any one machine is declared supreme above all others in its class, businesses expect this supremacy to be sustained, and as a result, they expect their investment decisions not to be subject to immediate change. In this world, one machine's momentary performance advantage over another, especially when it's briefer than a product cycle, doesn't count as supremacy.
In the real world, supremacy is a trait you expect will endure.
"Our experiment achieves quantum supremacy, a milestone on the path to full-scale quantum computing," declared a team of researchers in a paper published in the journal Nature in October. The team brought together scientists and engineers from Google, NASA's Ames Research Center, the University of Massachusetts Amherst, and Oak Ridge National Laboratory. "The extended Church–Turing thesis," their paper concludes, "asserts that any 'reasonable' model of computation can be efficiently simulated by a Turing machine. Our experiment suggests that a model of computation may now be available that violates this assertion."
The story so far
ZDNet's Liam Tung covered the publication of the Google team's paper in Nature. When IBM immediately followed up with a rebuttal to Google's claim, shining light on a heretofore unseen supercomputing project, ZDNet's Daphne Leprince-Ringuet covered that development, and Forrester Research provided ZDNet with background. Next, our Tiernan Ray took a deeper dive into both Google's and IBM's assertions, along with an introduction to the IBM technique that it says could help classical computing play catch-up with quantum.
Together, they tell a story of one type of quantum computing (QC) system. There are actually several architectural approaches to the topic of conducting a computing operation using sub-atomic particles, the Google team's "Sycamore" model being just one. Any methodology for leveraging the unusual and even inexplicable behavior of particles at the quantum level, for producing calculations that, in the end, have at least a high probability of accuracy and a reasonable chance of being repeated, is a QC system.
If these quantum behaviors are exploited successfully, then the emerging computer should be several orders of magnitude faster than any "classical" computer (you can imagine a little bust of Mozart resting comfortably on top of the server) that involves transistors and semiconductors. Supremacy, as Google has claimed it and as one of QC's leading scientists has defined it, would happen when there is no reliable way, even with a QC, of measuring just how much faster the QC actually is than any classical system. In the Google team's case, it's 200 seconds versus, maybe, thereabouts, who knows, possibly, 10,000-plus years.
The Sycamore processor is the product of an effort to exploit a certain behavior of a class of sub-atomic particles called fermions: entanglement.
The most well-known fermion is the electron, which is the sub-atomic component of all information represented in digital systems. In a typical electronic system, manipulating long sequences of information consumes time. In a quantum system, it can happen instantaneously, in a process that one can say, at the very least, is successful more often than it's not. When electrons become entangled, they share a certain characteristic so closely with one another that, when the state changes for one electron, that same state simultaneously changes for all the entangled members — that is, essentially, what entanglement means. What's more, the number of possible positions that shared state may take increases exponentially for each entangled electron added to the set. So rather than exhibiting just a binary state (0 or 1), an entangled set of qubits may collectively exhibit any of a number of states equal to 2 raised to the power of the total number of entangled qubits.
A qubit is the QC counterpart of the binary digit. All it is, is an electron swimming in a magnetic field, along with others with which it has been entangled. A handful of entangled qubits, such as Google's pattern pictured above, can potentially represent the same amount of data as megabytes, then gigabytes, in a classical binary system. This enables a quantum algorithm to apply a single function to a huge array of data in an astonishingly small interval of time.
"When you think of a quantum system, I want you to think of it in terms of coins," explained James Clarke, director of quantum hardware for Intel Labs, during an earlier interview with ZDNet. "A qubit has a probability of being heads or tails at the same time — sort of like flipping a coin in the air and having it spin. It's neither heads nor tails; it's simultaneously heads and tails.
"If I have two qubits," Clarke continued, "I can represent four states. If I have three coins, I can simultaneously represent eight states. So it's exponential. Imagine if, on a table, I had 50 coins spinning at one time. I would be able to simultaneously represent more states than even the largest supercomputer on Earth could represent. If I had 300 coins spinning on a table, I would have more states available than there are atoms in the universe. That speaks to the exponential scaling of a quantum computer, and potentially its power over a classical computer."
The practical limit Clarke has witnessed thus far for simulating qubit capacity in a classical supercomputer environment, using Intel software, is about 45 qubits. A high-speed laptop computer may be able to reach 35. "Every time you try to simulate one more qubit," he said, "you double the classical computing requirements. In the absence of a really large quantum computer, our ability to simulate these algorithms is very limited."
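The doubling Clarke describes follows from how a state-vector simulator stores the register: one complex amplitude per basis state, typically 16 bytes each (two double-precision floats). A back-of-envelope sketch under that assumption (the storage scheme is a common one, not a detail Clarke specified):

```python
def simulation_bytes(qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory to hold the full state vector of an n-qubit register.

    Assumes one double-precision complex amplitude (16 bytes) per
    basis state -- the scheme most state-vector simulators use.
    """
    return (2 ** qubits) * bytes_per_amplitude

# Each added qubit doubles the requirement, exactly as Clarke says:
assert simulation_bytes(36) == 2 * simulation_bytes(35)

# 45 qubits -- roughly where Clarke says supercomputer simulation tops out:
print(simulation_bytes(45) / 2 ** 40)  # 512.0 TiB, i.e. half a pebibyte
```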
The way engineers and scientists sensibly visualize the massive scale of the problem space in which a 53- or 54-qubit system would operate is with a mathematical tool they call Hilbert space. It's a figurative geometric grid, any point in which may have value. In a classical electronic system, value is represented by the presence and absence of electrons in binary digits (bits). Eight bits compose one byte, which can collectively represent any integer in the sequence 0 through 255 (or -128 through 127). So you can imagine an 8-bit byte having a very narrow Hilbert space. Think of this space as the slot into which the solution of an algorithmic function would be placed.
In the 53-qubit Hilbert space of the QC, instead of 0 or 1, the entangled register may represent any of 2 to the 53rd possible quantum states (over 9 million billion), all but the greatest and least of which are considered states of superposition. In Erwin Schrödinger's famous analogy, all these intermediate superposition states would be varying degrees to which the poor cat is both alive and dead simultaneously. A set of 53 entangled qubits (for which no one has coined the term "qu-byte") has a Hilbert space equivalent to what can only be simulated in an 8-bit binary computer, at one bit per state, by exactly one petabyte of memory (1,024 terabytes). In other words, a full petabyte is this processor's solution space.
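The petabyte equivalence works out exactly, provided each of the 2^53 basis states is tallied with a single bit. A quick check of the arithmetic:

```python
states = 2 ** 53
print(states)                        # 9007199254740992 -- "over 9 million billion"

bits = states                        # one bit per basis state
bytes_needed = bits // 8             # 2**50 bytes
terabytes = bytes_needed // 2 ** 40  # binary terabytes (tebibytes)
print(terabytes)                     # 1024 -- exactly one petabyte
```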
Now, consider the fact that a petabyte of real-world memory can represent a tremendous amount of data, when it's put to use representing a database. Even in classical systems today, the fastest database managers utilize "in-memory" engines. If qubits could be treated as memory, then a quantum operation may be executed as quickly on a single set of qubits acting as a one-petabyte database as a simple binary logic operation can be executed in one clock cycle of a semiconductor.
In 2011, for a publication of the 25th Solvay Conference on Physics, Prof. John Preskill of the California Institute of Technology put forth a reasonable, explainable milestone for quantum computing, which he called quantum supremacy. One way to achieve this milestone, and perhaps the quickest way, Preskill wrote, "would be to run an algorithm on a quantum computer which solves a problem with a super-polynomial speedup relative to classical computers." By "super-polynomial speedup," he meant one that grows faster than any polynomial function of the problem's size, a gap that no fixed-power formula can capture.
Put another way: Any mathematical analysis function applied to an area should become exponentially more complex as the size of the analyzed area increases linearly. Consequently, the function should consume much more time, until eventually it becomes bottlenecked. For entangled qubits — the counterparts of both memory and data nodes in a quantum system — this phenomenon does not hold true. The same function may theoretically take only negligibly, or even immeasurably, greater time when applied to larger areas than to smaller ones. There should be an asymptotic barrier that classical computing cannot cross, where the estimate of time consumed becomes essentially infinite... while the quantum system keeps chugging along as if nothing changed.
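That asymptotic barrier can be made concrete with two invented cost formulas (purely illustrative, not drawn from any real benchmark): a classical cost that doubles with each added qubit, against a quantum runtime that stays essentially flat:

```python
def classical_steps(n: int) -> int:
    """Hypothetical classical cost: doubles with every added qubit."""
    return 2 ** n

def quantum_steps(n: int) -> int:
    """Hypothetical quantum cost: roughly flat regardless of register width."""
    return 1_000

for n in (10, 30, 53):
    ratio = classical_steps(n) / quantum_steps(n)
    print(f"n={n}: classical/quantum ratio ~ {ratio:,.0f}")
# The ratio itself grows exponentially: the classical side's time estimate
# becomes "essentially infinite" while the quantum side chugs along.
```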
At that point, Prof. Preskill asserted, quantum computing will have reached the point of supremacy. This is indeed what the Google team claims to have accomplished with Sycamore. But is there a reliable method of independently verifying this or any claim to supremacy — one which does not require 10,000 years of uninterrupted calculation?
"I think there is a general desire within the quantum computing community to have some sort of measure of whether we're making progress towards quantum computers," remarked Dr. Travis Humble, a distinguished scientist at Oak Ridge National Laboratory — the current owner and operator of the #1 and #2 supercomputers on the world's Top 500 list, "Summit" and "Sierra."
"Preskill's definition of quantum supremacy provided a convenient academic ideal for what this would mean," Dr. Humble continued, speaking with ZDNet. "Unfortunately, it leaves open, how do you actually implement that idea?"
Sycamore is designed to incorporate 54 entangled qubits, and Google has observed it successfully entangling 53 of them. (Entanglement of long strands of qubits is extremely difficult, since assembled components are very susceptible to decoherence, or falling apart.) In its Nature paper, Google's engineers said they gave Sycamore a test where it estimates the probability distributions for a set of values among a very large set of random bitstring patterns, not unlike looking for the ASCII character codes for words hidden within a photograph of black paint spattered on a white wall.
There was a point, the paper asserted, where Sycamore could somewhat accurately estimate a set of probability distributions in about 200 seconds, a task that would take a classical supercomputer array about 10,000 years. Google based the latter part of that assertion on a binary computer-based simulation of a series of smaller problems, giving engineers the exponential formula they needed to scale the problem out to the size that Sycamore effectively tackled.
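The 10,000-year figure came from exactly this kind of extrapolation: time the smaller instances, fit an exponential, and project outward. A pure-Python sketch of the method, using invented timings (the real measurements and constants are in Google's paper, not reproduced here):

```python
import math

# Hypothetical (invented) classical simulation times, in seconds, by qubit count:
samples = [(30, 1.2), (32, 4.9), (34, 19.5), (36, 78.0)]

# Least-squares fit of log2(t) = a + b*n -- a straight line in log scale
# is an exponential in linear scale:
n_vals = [n for n, _ in samples]
y_vals = [math.log2(t) for _, t in samples]
n_mean = sum(n_vals) / len(n_vals)
y_mean = sum(y_vals) / len(y_vals)
b = sum((n - n_mean) * (y - y_mean) for n, y in zip(n_vals, y_vals)) \
    / sum((n - n_mean) ** 2 for n in n_vals)
a = y_mean - b * n_mean

def projected_seconds(n: int) -> float:
    """Extrapolate the fitted exponential to a circuit too big to simulate."""
    return 2 ** (a + b * n)

print(projected_seconds(53))  # the projection for the full-size circuit
```

The design choice worth noting: you never run the full-size problem classically; you trust the fitted curve, which is precisely the part of Google's argument IBM attacked.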
"The race is on," declared Dr. Sergey Frolov, an experimental physicist at the University of Pittsburgh, and director of the Frolov Lab, which investigates the behavior and dynamics of matter at nanoscale. Speaking with ZDNet, Dr. Frolov explained that Preskill's supremacy definition was not intended to be a permanent statistic, like the first man on the moon, but an ongoing operational measurement. Frolov continued:
It's not a mathematical proof, that once you have 50 qubits, there is no way a classical computer can catch up with you. It's an operational concept, which means that, at a given moment in time, a quantum computer outperformed any classical computer. And of course, the hope of quantum engineers is that this gap will just grow exponentially — that there will be no way to catch up. This could be just around the corner. So maybe if Google had not 53 qubits but 60 or 70, then any hope of a classical computer catching up with Google on that one task, would be lost, and it would be beyond dispute.
IBM's rebuttal, while praising the numerator in Google's supremacy equation, attacked the legitimacy of the denominator. In his definition of quantum supremacy, Preskill said nothing about the classical supercomputer having to emulate, simulate, or otherwise pretend to be, a quantum circuit — something Prof. Richard Feynman, who first proposed the idea of quantum computing, argued in his landmark 1981 lecture would be impossible anyway. In any event, IBM's engineers are working on an as-yet-untried approach to segmenting solid-state memory (flash circuits), and allocating each segment among various simulated qubits using patterns like the actual examples shown above. This way, the classical simulation would not face bottlenecks when trying to write or read values. It's these bottlenecks, IBM implied, that were responsible for Google's 10,000-year prediction in the first place.
"The way the Google team has approached it," said Oak Ridge's Humble, "is to show that a quantum device could actually behave in a way that no classical computer could ever anticipate." As a result, he believes, Google may be prompting scientists and engineers in this field to re-evaluate both the meaning and the efficacy of supremacy.
Humble's position gives him a unique perspective on both projects, working in separate collaboration with both Google and IBM. As he told ZDNet:
We initially had evidence that indicated the best classical method to compare against, was given by this simulation software developed by the NASA team. And that was a state-of-the-art implementation of how to do this verification step. Then IBM came out and proposed an alternative method that appears to be promising, for not only performing the verification step, but maybe performing it more quickly. But I think that is a familiar tune in the scientific world, that people are constantly building on each other's work. In that sense, I don't believe that the IBM [rebuttal] necessarily negates what Google has accomplished, but I do think that it provides us with more insights into how we need to benchmark these ideas in the future.
Allow me to speak for a moment as the veteran of many a sleepless night covering the first battles of the Intel/AMD dual-core performance wars in statistical detail: When one manufacturer stakes a claim to a certain class of performance advantage over another manufacturer — for example, besting the other guy in a video codec benchmark test, or surpassing some hypothetical milestone — then when that claim is presented on a PowerPoint slide, it's generally accompanied by an asterisk, dagger, double-dagger, and possibly other qualifying superscripted marks that attest to both the theoretical and practical limitations of that claim. (If the vendor knows Timothy Prickett Morgan or Kurt Marko is in the audience, its PR people will wisely slap enough dingbats to the end of that claim to populate footnote slides the breadth and depth of the Oxford Dictionary.)
So what would the small print on the bottom of the supremacy slide read, if Google were a veteran vendor in the quantum processor business? Here are some candidates:
* Denominators in supremacy claims are subject to change without notice
The performance ratio of a supreme quantum system to any classical competitor is, extrapolating from Preskill, a fanciful figure: a finite numerator over an effectively infinite denominator. As long as we're speaking in terms of infinities and imagination anyway, then any counter-assertion from the classical side, even prior to its experimental proof, may knock that denominator out of the realm of imagination, out of the Twilight Zone and into reality — making the ratio a real value again. And once this ratio is anything less than fanciful, quantum supremacy cannot be guaranteed.
For quantum computing to be a contributor to the global economy, quantum supremacy will need to be a value proposition that lasts at least as long as one cycle of its marketing campaign. If a classical supercomputing competitor is capable of converting that supremacy claim to a mere advantage in a matter of weeks, then any justification a quantum system provider may have made for its premium service price will decohere faster than Sycamore's 54th qubit.
"There is no theorem that says that any classical algorithm cannot outperform a given quantum algorithm," remarked Frolov. "We can only say that, at this moment in time, this particular quantum computer outperformed these classical computers — maybe all classical computers that we have at the moment. But there is no proof that this means any classical computer we could ever build cannot actually outperform, or be on par with, a quantum computer."
If we take Frolov's counter-argument as a given, and give Alan Turing the benefit of the doubt, then the legitimacy of any claim of quantum supremacy can only be as solid as the opportunity given a classical system to meet the same challenge and fail on its own merits. Google's claim may be the first ever in history to be treated with some degree of legitimacy. But the size of the hole IBM poked in Google's assertion attests to how readily any competitor in the broader field of computing or supercomputing may be willing and likely to do the same in the future. (For its part, Google has not responded to our request to participate in this article, and IBM declined our request.)
* Side effects may include situational volatility, decoherence, and improbability
"In an ideal quantum computer," said U. Pitt's Frolov, "you can entangle anything to anything, and there will be no problem. But that may still be years away. In a practical quantum computer, this entanglement propagates and is still somewhat local. What it means is that, each qubit can maintain entanglement for a relatively short time. It's not years or days — it's less than a second. It's milliseconds. And after that, entanglement is lost. That's obviously a limitation."
The way Frolov paints the picture, Sycamore's successful experiment was one round, among many, of a massive plate-spinning exercise that happened to go right. Spinning up entangled qubits becomes exponentially more difficult with each new qubit in the chain — which is evidently why Sycamore could not light up its 54th qubit.
* Your having scrolled down to this paragraph implies your acceptance of laws of physics nobody understands
The test the Google team gave Sycamore was intentionally designed to be hard. It's an example of a random circuit sampling (RCS) problem, a close relative of a technique called boson sampling. A photon — the quantum carrier of light, whose study helped give rise to quantum mechanics — is the most common boson, though there are other, more exotic examples.
The photograph above shows an actual boson sampling device. The objective of one of these devices is to set up an experiment where several bosons (mostly photons) are fired from a singular source called a quantum dot (a synthetic crystal that emits light when excited by ultraviolet rays). Like marbles in a machine, these bosons travel through a conducting maze called a Haar-random interferometer. When they emerge on the other side, there's an array of detectors ready to catch them. Given enough input data, an RCS algorithm should be capable of correctly estimating the exact sequence of catches made by detectors in the array, for a randomly selected sample of events.
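What makes predicting those catches classically hard is that the probability of a given detector outcome is governed by the permanent of a submatrix of the interferometer's transfer matrix, a quantity with no known efficient classical algorithm. A naive sketch of that kernel (real devices involve complex-valued matrices far larger than this):

```python
from itertools import permutations
from math import prod

def permanent(m: list[list[float]]) -> float:
    """Naive O(n! * n) matrix permanent: like the determinant, but every
    term is added -- no alternating signs. The cost of computing this for
    large matrices is what puts boson sampling out of classical reach."""
    n = len(m)
    return sum(
        prod(m[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10 (the determinant would be -2)
```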
If there were a real-world allegory for this setup, it might sound like this: Situate 15 catchers on a baseball field and tell them to stay silent. Darken the field and blindfold the pitcher. Have the only source of light come from self-illuminating baseballs. Now, have the pitcher turn around in circles several times, then toss out a few hundred balls into the distance. Have the balls be intercepted in mid-flight by a grid full of rotating, unshielded jet engine fans. Tell the catchers to collect as many balls as possible, keeping in mind they'll probably drop one or two.
Then have the computer solve this problem: Given the heart rate and respiration of the pitcher, the rotation speeds and electricity consumption rates of the fan blades, and the weights and shoe sizes of each of the catchers, estimate the time and place of as many successful catches as possible, beginning with a pitch that may not be the first one — say, the 17th.
On November 6, Scientific American first reported the publication of a paper that had not yet been peer-reviewed, but that boasted a new record: a boson sampling technique, demonstrated using a real interferometer array, that enabled the accurate prediction of a sequence of 14 randomly sampled boson collections — the most for any attempt thus far, should these results be verified. The unchecked results were published by researchers from the University of Science and Technology of China, Hefei.
As impressive as the Hefei team's results were for maintaining the state of its boson sampling experiment for a record 14 rounds, they illustrate the level of effort required not only to develop, but simply to run, an algorithm to which a quantum processor will display an affinity. QCs are not, and may never become, general-purpose processors. To the extent that they're not, their supremacy may always be viewed in a very limited, exclusive light. And that's an intractable problem in itself.
* Quantum supremacy may not be available in all areas
Conventional thinking would have people believe that, if a quantum computer can successfully run an algorithm as complex as boson sampling, then certainly it can be put to use for more practical purposes. As Google CEO Sundar Pichai wrote for his corporate blog, "Quantum computing can accelerate solutions for some of the world's most pressing problems, from climate change to disease... With this breakthrough we're now one step closer to applying quantum computing to, for example, design more efficient batteries, create fertilizer using less energy, and figure out what molecules might make effective medicines."
But that's the problem: Boson sampling was chosen for the experiment not only because it's hard, but also because it's the category of function to which a QC is mechanically best suited. For an earlier edition of Scale, I invoked an analogy about the fickleness of a quantum system: Imagine you had direct contact with the greatest living mathematician, but only so long as this person remained locked in solitary confinement at Alcatraz. This person would produce miracles for you, but only if the problem you gave him was phrased to make him believe he was an astronaut presently in space on an extra-vehicular maneuver. Even then, occasionally, the mathematician would grow tired of working with you, and would abruptly and randomly fall asleep.
Proving a QC's supremacy, scientists attest, involves the implementation of an algorithm that is intractable to classical computing techniques — meaning there is no known way to translate it into traditional logic that runs in any reasonable time. Maybe a method exists, but we don't know what it is. This same characteristic — making a quantum function unreachable through common logic — also serves to render it difficult to interpret using symbolism or language. And that means, almost by definition, that programming a QC-friendly algorithm will be, in the worst cases, to borrow a phrase, intractable.
"[With] the current demonstration, the practical value is that it has validated the idea that we can build devices with 50-plus qubits," said Oak Ridge's Travis Humble, "and predict their behavior. For me, that really is what this latest result emphasizes: Our understanding of the quantum mechanical processes, how to build these superconducting, transmon qubits that they use — we've got enough grip on that, from an engineering perspective, that there's some hope that, in the future, we'll actually be able to build full-scale, practical quantum computers."
"The mapping between classical problems and quantum algorithms," said U. Pitt's Sergey Frolov, "can be done for all practical cases. The only area where this is not practical, I guess, is what Google does right now: They want to model a quantum system on a quantum computer. This is entirely quantum, and we don't know the answer — we cannot check it against anything. And we cannot look in the middle and see what it did. We just have to trust that the machine works. Quantum problems giving quantum answers — those problems are of interest to scientists, physicists, but not of much practical use directly. We may use the solutions to find something amazing, like a room-temperature superconductor. Then we can send electricity without dissipation around the globe, which would be a huge boost to society. Through this, we could check the answer — we ask the quantum computer, and it said, 'Build this material, and it will superconduct at room temperature.' We can go do that in reality, not in a computer — assemble it out of atoms, then check if it superconducts, and then we can start replacing our cables. But other problems that quantum computers might solve, like optimization problems or cybersecurity problems, those can be checked instantaneously.
* Quantum algorithms are exclusive to the quantum processors for which they were built, and may not be used for other purposes without express written permission from the creator of the universe
It's almost like an Einstein thought experiment: The trouble with a quantum algorithm for a system with a fixed number n of qubits — one that doesn't slow down as the problem space scales up — is that you can't speed it up, either. The only way to improve is to add more qubits, which becomes more of an architectural challenge as the stack gets taller. Eventually, you're building an entirely new computer every time you seek to improve your processing ability.
Meanwhile, you could always speed up a supercomputer with better software. It's not really a foot-race between two competitive computing classes, but rather one where quantum sets the new goal in leaps and bounds, and supercomputing keeps catching up. Conceivably, quantum computing could become the greatest single incentive ever developed for the advancement of supercomputing.
"I do agree that there is a certain aspect of convenience," said Oak Ridge's Humble, "that we have come to expect from our computing technologies. Quantum does not meet that expectation at the moment. It's far too experimental, in order to be used in that way. I think we will see social pressure for this technology to really pay off and become useful, to push designs in that direction. People will eventually want access to these systems, and that'll create a demand for making the interfaces more accessible. And that will eventually evolve over time into some types of interfaces that extract away all of the unique quantum features."
Quantum computing research is, and will continue to be, funded by investments into its future, not just as a headline grabber but as a common-sense technology. It's the same temporal paradox looping around again: It's the project's practical value at the end that funds its experimental stage in the beginning. If the technology doesn't find a way to map Hilbert space onto the real world, then QC may yet become the supreme leader of an island unto itself.
Learn more — From the CBS Interactive Network
- Inside Azure Quantum: Microsoft's plan to deliver quantum resources to developers by James Sanders, TechRepublic
- What is quantum computing? Understanding the how, why and when of quantum computers by Scott M. Fulton, III, ZDNet
- Quantum as a service: How to product-ize a hole in space and time by Scott M. Fulton, III, ZDNet Scale
- Did Google achieve quantum supremacy or not? by Jerry Bowles, Diginomica
- Quantum Computing's Challenging Liftoff to Commercialization by Joab Jackson, The New Stack
- Why I Called It 'Quantum Supremacy' by Dr. John Preskill, Quanta Magazine