
Quantum computing: New tool makes sure the qubits don't lie

Quantum computers can solve intractable problems – but they don't always get the answer right. Now, a research team in the UK has developed a way to check that qubits aren't getting confused.
Written by Daphne Leprince-Ringuet, Contributor

The promise of quantum computers is that they could solve problems that would take classical computers thousands of years to crack, but this prospect raises a practical question: who – or rather, what – will be able to verify the answers provided by qubits?

With this problem in mind, researchers from the University of Warwick started thinking about ways to check the results of quantum calculations, and they have just published their findings in the form of a "verification protocol".

The new tool tackles the root cause of error in quantum computing: noise. Caused by random factors such as fabrication flaws or temperature fluctuations, noise is any quantum physicist's nemesis – it is the reason quantum computers are so error-prone.


By checking that the noise affecting a computer is below a certain level, scientists could therefore make sure that its calculations are being carried out accurately.

The new protocol developed at Warwick proposes to do exactly that. To understand how it works, explained Samuele Ferracin, lead author of the paper, you have to imagine a quantum calculation as a complicated circuit made of gates, wires, measurements, and so on.

The tool he developed with his team draws up several alternative versions of a given circuit, which are similar to the original calculation but can all be simulated on a classical computer.

In other words, the protocol creates easier calculations called "trap circuits", which are nevertheless reflective of the noise happening inside the original quantum circuit. 

Classical computers can therefore establish the accuracy of the results generated by trap circuits, providing the basis for researchers to determine how accurate the quantum computer will be in solving the "harder" calculation it has been given.

"By hiding the bigger calculation behind several smaller circuits," said Ferracin, "we can verify things that we cannot simulate on a classical computer."

The outcomes of these alternative calculations are classified by classical computers as either "correct" or "potentially incorrect", which gives researchers an indication of where the computer sits on the noise scale.

The test even produces two percentages to refine the verification: how close it estimates the quantum computer is to the correct result, and how confident a user can be of that closeness.
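A standard way to turn accept/reject counts into those two figures – an illustrative assumption here, not the paper's exact formulas – is to use the acceptance fraction as the closeness estimate and a Hoeffding-style bound to express how much the sample size lets you trust it:

```python
import math

def verification_summary(n_traps, n_accepted, margin=0.05):
    """From N trap runs with k accepted, return (estimated closeness,
    confidence that the true rate lies within `margin` of the estimate).
    Uses the Hoeffding inequality: P(|estimate - truth| >= margin)
    <= 2 * exp(-2 * N * margin**2)."""
    closeness = n_accepted / n_traps
    confidence = 1.0 - 2.0 * math.exp(-2.0 * n_traps * margin ** 2)
    return closeness, max(0.0, confidence)

closeness, confidence = verification_summary(n_traps=1000, n_accepted=970)
print(f"estimated closeness: {closeness:.1%}")
print(f"confidence in that estimate: {confidence:.1%}")
```

The design choice captured by the bound is intuitive: running more traps doesn't change the closeness estimate much, but it sharply increases how confident the user can be in it.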

Using trap circuits to check the accuracy of a quantum calculation is not an innovation in itself, clarified Ferracin. Simulating smaller operations on classical computers is already a common approach, but the method is limited.

Because of the complexity of quantum calculations, current simulations can only be performed by creating traps that are bigger than the original circuit.

Verification, therefore, is feasible for small quantum circuits but can't be scaled up much. By contrast, the traps in the new protocol require no more qubits or gates than the original circuit being tested.

"The circuits implemented in our protocol are not bigger than the circuit which output we want to verify," said Ferracin. "This hasn't been done before – and it means that the test is practical and scalable."


He estimated that the test would take just a few minutes to run on Google's 53-qubit quantum computer, for example.

The tech giant claims its technology has achieved quantum supremacy by taking 200 seconds to carry out a calculation that would have taken classical computers 10,000 years to complete.

The new test, however, is still only a protocol. Ferracin said that the research team is working with experimentalists to see how the tool performs, and to keep improving on it. 
