Researchers at London’s Imperial College have demonstrated a new theory of quantum data analysis that could allow a future quantum computer to tolerate data error rates of up to 25 per cent.
The researchers, working with colleagues at the University of Queensland, have shown that it is possible to correct for a particular kind of error, in which qubits are lost from the computer altogether.
From the announcement: They used an 'error-correcting' code, which works by looking at the context provided by the remaining qubits to decipher the missing information correctly.
"Just as you can often tell what a word says when there are a few missing letters, or you can get the gist of a conversation on a badly-connected phone line, we used this idea in our design for a quantum computer," said Dr Sean Barrett, the lead author of the study, who is a Royal Society University Research Fellow in the Department of Physics at Imperial College London.
They found that the computers have a much higher threshold for error than previously thought: up to a quarter of the qubits can be lost and the computer can still be made to work. "It's surprising, because you wouldn't expect that if you lost a quarter of the beads from an abacus that it would still be useful," he added.
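The missing-letters analogy maps onto a standard idea from classical coding theory: when you know *which* symbols were lost (an "erasure", as opposed to a silent flip), redundancy in the surviving symbols lets you reconstruct the original. The sketch below is a purely classical toy, not the quantum scheme in the paper; the `encode`/`decode` helpers and the simple repetition code are illustrative assumptions:

```python
import random

def encode(bit, n=8):
    # Toy repetition code: store n redundant copies of the bit.
    return [bit] * n

def erase(codeword, loss_rate, rng):
    # Model loss as erasures: we know which positions vanished (None),
    # analogous to knowing which qubits were lost from the computer.
    return [b if rng.random() > loss_rate else None for b in codeword]

def decode(received):
    # Recover the bit from the "context" of the surviving copies.
    survivors = [b for b in received if b is not None]
    if not survivors:
        raise ValueError("all copies lost; decoding fails")
    return max(set(survivors), key=survivors.count)

rng = random.Random(0)
received = erase(encode(1), loss_rate=0.25, rng=rng)
print(decode(received))  # recovers 1 as long as any copy survives
```

Real quantum codes must protect superpositions without copying them (cloning is forbidden), which is why tolerating a 25 per cent loss rate is a much stronger result than this classical picture suggests.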
But here's the rub: the work is still entirely theoretical. Quantum computers are tricky things to build, and in the real world they have only been built on a very small scale – no pun intended. So far, rather than the thousands of qubits that would be needed to crack public key encryption, for example, the largest systems ever built have had only two or three qubits.
Consequently, testing the ideas of the Imperial/Queensland collaboration will have to wait until someone builds a quantum computer big enough for a 25 per cent qubit loss to even be possible, let alone not catastrophic.
The research is published here.