IBM shows quantum computers can solve some problems that classical computers find hard

Big Blue's quantum team has mathematically demonstrated that a quantum algorithm could work better than a classical one for machine-learning classification problems.


The most standard example of a classification problem is when a computer is given pictures of dogs and cats, and is required to label all future images it sees as either a dog or a cat.

Image: ekapol / Getty Images

Quantum machine learning is expected to be among the most promising applications of quantum computing, but exactly how it will make waves remains something of a mystery.

In what could shed light on how realistic those expectations are, IBM's researchers now claim to have mathematically proven that, using a quantum approach, certain machine-learning problems can be solved exponentially faster than with classical computers. 

Machine learning is a well-established branch of artificial intelligence that is already used in many industries to solve a variety of business problems. The approach consists of training an algorithm with large datasets, to enable the model to identify different patterns and eventually calculate the best answer when presented with new information. 
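To make the train-then-predict loop described above concrete, here is a deliberately minimal sketch of a classifier: it "trains" by computing the average of each class's examples, then labels new points by the nearest class average. The data and method are illustrative only and have nothing to do with IBM's experiment.

```python
import numpy as np

# Toy training set: two features per sample, labels 0 ("cat") and 1 ("dog").
# Purely made-up data for illustration.
X_train = np.array([[1.0, 1.2], [0.9, 1.0], [1.1, 0.8],   # class 0
                    [3.0, 3.1], [3.2, 2.9], [2.8, 3.0]])  # class 1
y_train = np.array([0, 0, 0, 1, 1, 1])

# "Training": compute the mean (centroid) of each class's feature vectors.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in (0, 1)}

def predict(x):
    """Label a new point with the class of its nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(predict(np.array([1.0, 1.0])))  # near the class-0 centroid → 0
print(predict(np.array([3.0, 3.0])))  # near the class-1 centroid → 1
```

Real systems use far richer models, but the shape is the same: fit parameters to labeled examples, then answer queries on new data.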


With larger datasets, a machine-learning algorithm can be optimized to provide more accurate answers, but this comes at a computational cost that is fast reaching the limits of traditional devices. This is why researchers are hoping that, one day, they will be able to leverage the huge compute power of quantum technologies to bring machine-learning models to the next level. 

One method in particular, called quantum kernels, is the focus of many research papers. In the quantum kernel approach, the quantum computer steps in for only one part of the overall algorithm, by expanding what is known as the feature space – the collection of features that are used to characterize the data that is fed to the model, such as "gender" or "age", if the system is trained to recognize patterns about people. 

To put it simply, by using the quantum kernel approach, a quantum computer can distinguish between more features and, therefore, see patterns even in a huge database, where a classical computer would only see random noise. 
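The idea of a feature map can be shown with a purely classical toy example (no quantum hardware involved): two classes that no straight line can separate in their original two dimensions become trivially separable once a single engineered feature is added. Quantum kernels aim for the same effect, but with feature spaces that are believed to be hard to reproduce classically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes no straight line can separate in the raw 2-D space:
# class 0 sits near the origin, class 1 on a surrounding ring.
radii = np.concatenate([rng.uniform(0.0, 1.0, 50),   # class 0
                        rng.uniform(2.0, 3.0, 50)])  # class 1
angles = rng.uniform(0, 2 * np.pi, 100)
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
y = np.array([0] * 50 + [1] * 50)

def feature_map(X):
    """Expand the feature space with the squared distance from the origin."""
    return np.column_stack([X, (X ** 2).sum(axis=1)])

# In the expanded space a simple threshold on the new feature separates
# the classes perfectly: class 0 has squared radius <= 1, class 1 >= 4.
Z = feature_map(X)
predictions = (Z[:, 2] > 2.0).astype(int)
print((predictions == y).mean())  # 1.0: perfectly separable after the map
```

The quantum kernel approach replaces this hand-picked third feature with a mapping computed by a quantum circuit, which is where the claimed advantage comes from.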

IBM's researchers set out to use quantum kernels to solve a specific type of machine-learning problem called classification. As IBM's team explains, the most standard example of a classification problem is a computer that is given pictures of dogs and cats, trains on this dataset, and must then label every future image it sees as either a dog or a cat, with the goal of producing accurate labels in as little time as possible. 

Big Blue's scientists developed a new classification task and found that a quantum algorithm using the quantum kernel method is capable of finding relevant features in the data for accurate labeling, while for classical computers the dataset looked like random noise. 

"The quantum kernel estimation routine we use is a general method that can be in principle applied to a wide range of problems," Kristan Temme, researcher at IBM Quantum, tells ZDNet. "In our paper, we formally prove that this quantum kernel estimation routine can give rise to learning algorithms that for specific problems outperform any classical learner." 

To prove the advantage that the quantum method has over the classical approach, the researchers created a classification problem for which the data can be generated on a classical computer, and showed that no classical algorithm can do better than random guessing when attempting to solve the problem.  

When viewing the data in a quantum feature map, however, the quantum algorithm was able to predict the labels with high accuracy and at speed.  


"This paper can be viewed as a milestone in the field of quantum machine learning, since it proves an end-to-end quantum speed-up for a quantum kernel method implemented fault-tolerantly with realistic assumptions," concluded the research team. 

Of course, the classification task developed by IBM's scientists was designed specifically to test whether the quantum kernel method is advantageous, and the method is still far from being applicable to larger-scale business problems.  

This is mostly due, according to Temme, to the limited size of IBM's current quantum computers, which to date can only support under 100 qubits – far from the thousands and even millions of qubits that scientists reckon will be necessary to start creating value when it comes to quantum technologies. 

"At this stage, we can't point to a specific use case and say 'this will make a direct impact,'" says Temme. "An application of a 'large' quantum machine learning algorithm has not been conducted yet. The scale to which one will be able to go for such an algorithm is of course directly tied to the development of the quantum hardware." 

IBM's latest experiment also only applies to a specific type of classification problem in machine learning, and does not mean that all learning problems will benefit from the use of quantum kernels.  

But the results open the door to further research in the field, to find out whether other machine-learning problems could benefit from the use of this method. 

Much of the work, therefore, remains theoretical for now, and IBM's team has acknowledged that there are many caveats to any new discovery in the field. But while waiting for quantum hardware to improve, the researchers are committed to continuing to demonstrate the value of quantum algorithms, if only from a mathematical standpoint.