A complete quantum computer (QC), fulfilling the mission set forth for it by scientists and engineers, and realizing the ingenious vision of Dr. Richard Feynman, has yet to be constructed. There are QC devices, and in the sense that they're receiving power and executing, or trying to execute, programs, they're operational.
They are not computers as we have come to understand them: digital semiconductor-filled processor boxes with interface busses and external networks. They resemble classical computers in essentially one respect: They receive input, and they produce output. They would perform certain classes of algebraic tasks faster than a classical machine could -- which is to say, you'd see the output within your lifetime, and perhaps within the next minute, rather than setting an appointment with Alexa for the next millennium. For now, their margins of error are rather high, but if you don't mind waiting three or four minutes longer, those errors can be compensated for.
Also: Quantum computers are coming. Get ready for them to change everything
The most important thing to know about a quantum computer is that it leverages concepts of quantum mechanics, not completely understood even now, to process very large and repetitive algorithms -- usually linear algebra -- producing results in seconds, if not milliseconds, instead of centuries. Problems now unsolvable within our own lifetimes, even with the fastest supercomputers, would be solvable in the time it would take you to explain them to a reasonable person.
The goal of quantum computing research is to discover a means of expediting the execution of long runs of instructions. Such a means would exploit an observed phenomenon of quantum mechanics that, when you write it down on paper, doesn't appear to make sense.
Also: Eight leading quantum computing companies in 2020
I'm trying to get you people who think about computer-simulation possibilities to digest the real answers of quantum mechanics and see if you can't invent a different point of view than the physicists. . . We never really understood how lousy our understanding of languages was, the theory of grammar and all that stuff, until we tried to make a computer which would be able to understand language. I was hoping that the computer-type thinking would give us some new ideas. Trying to find a computer simulation of physics seems to me to be an excellent program to follow out. The real use of it would be with quantum mechanics. Nature isn't classical, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy.
-Dr. Richard Feynman
"There's Plenty of Room at the Bottom," 1959
The implications for a computing network where quantum processing plays a role are enormous: hurricane models that account for every water molecule; global warming models that pinpoint next year's forest fires; pandemic models that ascertain which species of bat, when consumed and under what conditions, produce future coronaviruses. Much of supercomputing as we know it could be rendered instantaneously obsolete.
Perhaps the most immediate implication, though, would be the vanquishing of the barrier of difficulty that protects modern-day digital communications from being decrypted.
A staggering number of implausible events would need to happen for this scenario to come to fruition, not the least of which is the introduction of a mechanism or methodology to manage entropy. In a quantum mechanical system (one example being the universe), events at the subatomic level tend to happen spontaneously. The measurable phenomenon associated with that spontaneity is entropy. In a quantum computing system, entropy translates directly into error rates. The error rates in quantum computation well exceed the tolerance levels we permit today for electronic semiconductors, and because QCs are quantum-mechanical, they may always do so -- this may never be a factor we can control.
Also: Quantum computers could soon reveal all of our secrets. The race is on to stop that happening
The question for us becomes, are we willing to accept probabilities instead of certainties as answers? As long as we see actual results in minutes instead of centuries, it would seem we always have the option of running quantum algorithms multiple times and comparing results (from a stylistic standpoint, not unlike predicting a presidential election).
If you've ever programmed an Excel macro, you've personally experienced the following: You add input rows to the bottom of a worksheet whose columns serve as inputs for a long formula. Each time the formula recalculates, it takes longer than the time before. If you're on a slow enough computer, you can witness this phenomenon for yourself: As the number of input rows grows linearly, the time consumed by the macro grows exponentially.
If you've ever written a program for a supercomputer, you've witnessed exactly the same phenomenon. The scale may be different, but the effect is the same. And if you read through the supercomputer's logs, you can verify this observation personally. There comes a point at which every algorithm, no matter how simple, becomes unworkable under the overwhelming weight of its input data.
This is the phenomenon that QC would eliminate. A fully functional QC would gain capability exponentially while its computing capacity scaled only linearly, because each qubit added to the machine doubles the space of states it can work with. As a result, workloads whose sizes differ exponentially would differ in execution time by amounts so small as to be barely measurable.
"The important thing about quantum computing is that it's a continuous computation," explained Joseph Emerson, the founder and CEO of Ontario-based consultancy Quantum Benchmark, during the recent Inside Quantum Technology Europe 2020 conference. Emerson continued:
It's more analogous to analog classical computing than digital. What that means is, if you take a train from A to B, there's no question you're going to end up at B. It's just a question of the speed of the train, and how fast you're going to get there. On the other hand, with quantum computing, it's more like navigating a rocket ship. If you have navigational errors, the rocket ship can take you to places where you can't otherwise go, but you will end up in the wrong location — you won't end up at B.
To be very clear: It would be inaccurate to say that a QC runs programs faster than a PC or an x86 server. A "program" for a QC is a very different order of beast than anything ever produced for a binary processor. The translation of a mathematical problem intelligible to college professors into a binary program, and the translation of the same problem into a QC program, are as different from one another as "20 Questions" is from billiards.
There are several fundamental compromises when you move into the realm of quantum computing. Here's one that's daunting just by itself: Solutions will rarely be exact or definitive. A QC is not a deterministic machine; in other words, there is no singular solution for which any other result would be an error. Instead, a QC will tend to render sets of answers with their respective probabilities.
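Here is a minimal sketch, in plain Python rather than any vendor's quantum toolkit, of what "sets of answers with their respective probabilities" means in practice: the same program is run many times (each run is a "shot"), and the answer is read off a histogram of the outcomes. The probability table below is invented purely for illustration.

```python
# Hypothetical output distribution of a small quantum program: each bit
# string is a candidate answer, tagged with its probability.
from collections import Counter
import random

distribution = {"00": 0.05, "01": 0.05, "10": 0.10, "11": 0.80}

def run_once(dist):
    """Simulate one execution: the machine collapses to a single bit string."""
    r = random.random()
    cumulative = 0.0
    for outcome, p in dist.items():
        cumulative += p
        if r < cumulative:
            return outcome
    return outcome  # guard against floating-point round-off

shots = 1000
counts = Counter(run_once(distribution) for _ in range(shots))
print(counts.most_common())  # e.g. [('11', 807), ('10', 96), ('01', 52), ('00', 45)]
```

The most frequent outcome is taken as the answer, with a margin of error that shrinks as you add shots -- which is exactly the repeated-runs-and-compare approach described above.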
If that doesn't discourage you, get prepared for this: The atom-level device that actually performs the quantum calculations will, as a result of its work and as is its nature, self-destruct when it's done. A quantum computing mechanism would actually be a machine that automatically builds the computing device out of atoms (calcium atoms are good candidates), sustains the operating conditions of that device for the duration of its program, applies the program, allows it to execute, looks the other way (because quantum logic gates are shy and will explode if anyone sees them), interprets the final state of its registers as the final probability table of results, then resets itself to rebuild another mechanism all over again.
Imagine if Alan Turing's incredible machine that cracked the Nazi "Enigma" code were guaranteed to explode after every run. (QC engineers prefer the term "collapse," but let's call it what it is: explode.) And imagine that Turing, an ingenious engineer, devised an automated manufacturing operation that rebuilt that machine out of new parts, each and every day. Every quantum computer engineer has done more than imagine such a scheme; each has built a plan for such a device on the quantum scale. Such hypothetical, on-paper models of computation are named for Turing -- Turing machines -- and quantum engineers believe their computers can and will work because their experiments with these paper machines give them cause for faith.
Also: Riding a Quantum Computing Wave
Are there real-world applications of quantum computing technology, or some derivative of it, that people are putting to good use right now? Put another way, what does quantum actually do, and whom does it directly serve?
Also: Quantum computing may make current encryption obsolete, a quantum internet could be the solution
Now to the more controversial question: Assume someone builds a mechanism that successfully leaps over the hurdles imposed by quantum physics, producing a full quantum computer capable of performing all the tasks currently relegated to the realm of theory and simulation. What do experts in this field think a quantum computer should be able to do, assuming every phenomenon that physicists have theorized, and that scientists have observed and verified, is ultimately exploitable?
Prof. Richard Feynman, Caltech, approx. 1983
Also: What CIOs need to know about quantum computing (free PDF)
Also: Quantum computing: A cheat sheet TechRepublic
The word "computer" here has a very basic context -- not a handheld device or a cooled server with a processor and memory. Think of a computer the way Charles Babbage or John von Neumann considered it: as a mechanism guaranteed to deliver a certain output given a specific set of inputs and a defined configuration. At the deepest microscopic levels of a modern microprocessor, one logic unit is what these fellows would have called a computer.
Every classical electronic computer exploits the natural behavior of electrons to produce results in accordance with Boolean logic (for any two specific input states, one certain output state). Here, the basic unit of transaction is the binary digit ('bit'), whose state is either 0 or 1. In a conventional semiconductor, these two states are represented by low and high voltage levels within transistors.
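As a point of contrast for what follows, here is a minimal sketch, in Python, of that deterministic Boolean behavior -- for any two specific input bits, exactly one certain output bit:

```python
# Classical Boolean logic: every input pair maps to one guaranteed output.
def AND(a, b):
    return a & b

def XOR(a, b):
    return a ^ b

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} = {AND(a, b)}    {a} XOR {b} = {XOR(a, b)}")
# Run this once or a trillion times: the table never changes. That certainty
# is precisely what a quantum computer trades away.
```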
In a quantum computer, the structure is radically different. Its basic unit of registering state is the qubit, which at one level also stores a 0 or 1 state (actually 0 and/or 1, which I'll confuse you with in a moment). Instead of transistors, one common design -- the trapped-ion quantum computer -- obtains its qubits by applying electrical fields to ionized atoms at perpendicular angles to one another, the result being to line up the ions, but also keep them conveniently and equivalently separated. When these ions are separated by just enough space, their orbiting electrons become the home addresses, if you will, for qubits.
While a conventional computer focuses on voltage, a quantum system is (passively) concerned with one aspect of electrons at the quantum level, called spin. Yes, this has to do with the electron's angular momentum. The reason we use the term quantum at the subatomic level of physics is the indivisibility of what we may observe, such as the amount of energy in a photon (a particle of light). Spin is one of these delightfully indivisible components, representing the angular momentum of an electron as it orbits the nucleus of an atom. The spin of an electron is always, as physicists calculate it, 1/2 (one-half); the only difference is its polarity, which very simply may be either up or down.
Also: Quantum computers will blow your mind CNET
It's the up or down state of electron spin that corresponds to the '1' and '0' of the typical binary digit. Yet it's here where quantum computing makes a sharp turn into a logical black hole, through a tunnel of white noise, and jettisons us helplessly into a whimsically devious universe whose laws and principles seem concocted by the University of Toontown.
A qubit maintains the quantum state for one electron. When no one is looking at it, it can attain the '1' and '0' states simultaneously. If you look at it, you won't see this happen, and if it was happening before, it immediately stops. (This is literally true.) Yet the fact that the qubit's electron was spinning in both directions at once is verifiable after the fact. Quantum mechanics calls this simultaneous state of both here and there superposition. It is impossible to witness an electron in a state of superposition, because witnessing requires the very exchange of photons that causes such a superposition to collapse.
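A rough sketch of that idea, using ordinary Python and NumPy rather than real quantum hardware, assuming the standard textbook convention that a qubit is a pair of complex amplitudes over the basis states 0 and 1:

```python
import numpy as np

# A qubit's state: complex amplitudes over the basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
psi = (ket0 + ket1) / np.sqrt(2)       # equal superposition: "0 and 1 at once"

# "Looking" (measuring) forces a choice of 0 or 1 at random, weighted by the
# squared magnitudes of the amplitudes; the superposition is then gone.
probabilities = np.abs(psi) ** 2       # -> [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probabilities)
psi = ket0 if outcome == 0 else ket1   # state after measurement
print(outcome, probabilities)
```

Before the measurement on the last few lines, both amplitudes are simultaneously in play; afterward, only a plain classical 0 or 1 remains.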
As one Fordham University lecturer put it, "We don't understand this, but get used to it."
There are multiple possible states of superposition. Here is why each additional qubit in a quantum system is more influential than the last: A system with n qubits can hold a superposition spanning 2 to the power n basis states. If you remember the history of binary computers, when 16-bit processors were first replaced with 32-bit processors, suddenly a register's maximum unsigned value was no longer 65,535 but 4,294,967,295. In a quantum system, a 32-qubit register would span 4,294,967,296 basis states simultaneously.
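A small sketch of that compounding, again in plain Python (the memory figures are simply what a brute-force classical simulation of a quantum register would require):

```python
import numpy as np

# The state of an n-qubit register is described by 2**n complex amplitudes,
# one per basis state -- the number doubles with every qubit added.
for n in (1, 2, 8, 16, 32):
    print(f"{n:>2} qubits -> {2**n:,} basis states in superposition")

# A classical simulator must store every amplitude explicitly, which is why
# simulating even a few dozen qubits strains the largest supercomputers.
n = 16
state = np.zeros(2**n, dtype=complex)  # 65,536 amplitudes for just 16 qubits
state[0] = 1.0                         # register initialized to |000...0>
print(state.nbytes, "bytes to simulate", n, "qubits")  # ~1 MB; 50 qubits would need petabytes
```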
Why does this matter, if the final state only collapses to 0 or 1 anyway when someone or something takes the bold step of just looking at the thing? Because before that collapse takes place, each of these states is a valid, possible value. (This is why you don't hear a lot about quantum computers needing much DRAM.) During that strange, black-box period where it can work unobserved and undisturbed, a quantum processor is capable of performing real algorithmic functions on units that are much less like binary digits than they are like wheels in one of Charles Babbage's difference engines -- except with billions of settings rather than just 10.
Also: IBM quantum computers will unleash weird science on business CNET
Instead of giant wheels, quantum engineers have chosen a better way of representing qubits' spin states. More specifically, they borrowed this model from a Swiss emigrant physicist to the U.S., Felix Bloch, who shared the 1952 Nobel Prize in physics for discovering the principle of nuclear magnetic resonance. If you can imagine a billiard ball with one dot, and an imaginary line from the core of the ball through the center of the dot and outward as a vector, then you can picture a Bloch sphere. Each superposition state a qubit may take may be represented by a vector within the Bloch sphere, described by two angles: a polar angle measured from the z axis, and an azimuthal angle swept around it. Using ordinary geometry, the state may be expressed as the cosine of half that polar angle (the amplitude of '0') plus the sine of half that polar angle, rotated by the azimuthal angle (the amplitude of '1').
(In quantum mechanics, the functions that describe waves are represented as vectors in multi-dimensional Hilbert space. Typically this makes the relationships between functions easy to solve, as problems in linear algebra. The types of problems best resolved by a QC can be expressed as linear algebra. A Bloch sphere is one way of simply representing a quantum state in two-dimensional Hilbert space, as something a human being can more easily visualize.)
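As a hedged illustration of that geometry, using NumPy and the standard textbook convention (nothing specific to any actual quantum machine), two Bloch-sphere angles determine a qubit's two amplitudes:

```python
import numpy as np

def bloch_to_state(theta, phi):
    """Convert Bloch-sphere angles into a qubit state vector.

    theta: polar angle from the z axis (0 points at |0>, pi points at |1>)
    phi:   azimuthal angle around the z axis (a relative phase)
    """
    amp0 = np.cos(theta / 2)
    amp1 = np.exp(1j * phi) * np.sin(theta / 2)
    return np.array([amp0, amp1])

print(bloch_to_state(0, 0))           # [1, 0]         -> pure |0>
print(bloch_to_state(np.pi, 0))       # [~0, 1]        -> pure |1>
print(bloch_to_state(np.pi / 2, 0))   # [0.707, 0.707] -> equal superposition
```

The north and south poles of the sphere are the classical 0 and 1; everything in between is a superposition.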
The initial mystery of superposition, for anyone who's studied computing since before they were granted a license to drive, concerns how one writes a program to account for any number of possible states. When Microsoft Windows first started out, there was a question of how to program for whatever the user might choose to do, when there were so many choices. The system it came up with was a kind of "user event trap" that read the "intelligent keyboard interface" (which actually included the mouse) for signals. Then it processed each signal through a list, like Santa Claus double-checking the Naughty List for each chimney he didn't recognize. This task became monumental when touchscreens were first introduced.
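For readers who never wrote for early Windows, here is a minimal sketch in Python of that "event trap" pattern -- read one signal at a time and check it against the list of things the program knows how to handle. The event names here are entirely invented:

```python
from queue import Queue

# Invented event names standing in for keyboard, mouse, and window signals.
handlers = {
    "KEY_A":        lambda: print("typed the letter A"),
    "MOUSE_CLICK":  lambda: print("clicked something"),
    "WINDOW_CLOSE": lambda: print("closing"),
}

events = Queue()
for signal in ("KEY_A", "MOUSE_CLICK", "FINGER_TAP", "WINDOW_CLOSE"):
    events.put(signal)

# The trap: pull each signal off the queue and look it up in the list,
# falling back to a shrug for anything unrecognized (hello, touchscreens).
while not events.empty():
    signal = events.get()
    handlers.get(signal, lambda: print("unrecognized:", signal))()
```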
So obviously, the same method would be impossible for any rational form of quantum computing. The answer here is that logic doesn't work the same way. Instead, you feed the system the problem and have faith that there will be a solution.
The metaphor I came up with for envisioning this begs the indulgence of the reader. As a child, I loved how in my local barber shop, the mirrors were positioned along two walls facing each other, so I could see myself stretching into infinity. I also loved how my mother made me flash cards with digits on them, quizzed me about math problems, and had me hold up digits for my answers -- making sure I flipped the order of my digits the right way around facing her, so that "29" wasn't "92."
For binary math, the only flash cards such a child would ever need would read "0" or "1." So if you brought your child with you to the barber shop or salon, and played the math game with her, quizzing her, "What's 1 XOR 1?" she could hold up a "0" and see a long stretch of zeroes through the infinite reflections. There would only be a handful of Boolean arithmetic problems she'd ever need to memorize.
Now, imagine you and your child discovered something absolutely fascinating: For as many reflections of her as you could see, each digit held up would be an independent power of 2. So if you quizzed, "What's 4 + 5?" the first reflection you'd see would hold up a card with a "9" on it, and the rest of the reflections would hold up zeroes. You could pose for your child an impossibly long problem, but solving it would be as simple as basic logic for each child being reflected. If you could see 20 reflections, you could see 20 independent digits for each flash card.
There's another paranormal phenomenon at work here: Instead of just 0 or 1, each card could register as many unique digits as you could see child reflections, as though it had that many sides. If you could see 16 children, each card would render a digit of the result in hexadecimal. And still, from the child's perspective, the logic would be just as simple as Boolean. Neither you nor the child would know how she keeps getting the right answer. And every once in a while, though not too often, she would get the wrong answer, but never so often as to score worse than an "A-" on a math quiz.
No one -- not you, the child, the barber, or the people clamoring at the window to get a glimpse of the math whizzes in the mirrors -- would actually know how this works. But you could all plainly see that it does work.
Also: IBM promises steady quantum computing progress through 2023 CNET
The trick in writing a quantum algorithm is to imagine that you could actually see, or measure, qubits in their superposition states, so that you can instruct them as to what happens next and cause adjustments to those states. In reality, the very act of attempting to witness superposition results in decoherence -- the reversion of qubits to their classical 0 or 1 states. Decoherence always happens eventually to a quantum system, typically within fractions of a second, or if you're very lucky, after a few minutes.
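Here is a toy-sized sketch of that order of operations, again with NumPy matrices rather than actual hardware: manipulate the amplitudes while the state is still coherent, and only measure at the very end.

```python
import numpy as np

# Two standard single-qubit gates, written as 2x2 matrices.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
Z = np.array([[1,  0],
              [0, -1]])                # phase flip

state = np.array([1, 0], dtype=complex)  # start in |0>
for gate in (H, Z, H):                   # H then Z then H amounts to a bit flip
    state = gate @ state                 # adjust amplitudes while unobserved

# Measurement comes last; it collapses the superposition into a plain bit.
probs = np.abs(state) ** 2
print(np.random.choice([0, 1], p=probs))  # prints 1
```

Everything interesting happens in the loop, before anything is "looked at" -- which is why decoherence arriving too early ruins the computation.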
The whole point of a quantum program becomes to take full advantage of the ability to manipulate which way all these billiard balls are pointing while no one is looking, prior to their decoherence. There are two types of quantum programs -- those written for universal, gate-model machines and those written for quantum annealers such as D-Wave's -- and they function very differently from one another.
D-Wave's 1000Q quantum annealing device.
Quantum computing is an experiment and, for now, it's an incomplete one. It could either rewrite the entire language of the computing industry, or it could sputter out in an invisible plume of quantum decoherence. It is a gamble that the practical world can interface with the quantum universe. It still might not pay off. But in the meantime, it's as fascinating as anything that human eyes have ever cleverly avoided witnessing.