
The next frontier for artificial intelligence? Learning humans' common sense

Spain's artificial intelligence research institute is looking at teaching robots to know their limits, but thinks human-level AI is still a long way off.
Written by Anna Solana, Contributor

Nearly half a century passed between the release of 2001: A Space Odyssey (1968) and Transcendence (2014), the film in which a quirky scientist's consciousness is uploaded into a computer. Yet their plots are broadly similar: science fiction continues to imagine the arrival of human-like machines that rebel against their creators and gain the upper hand in battle.

In the field of artificial intelligence (AI) research, progress over the last 30 years has likewise been slower than expected.

While AI is increasingly part of our everyday lives - in our phones or cars - and computers process large amounts of data, they still lack the human-level capacity to make deductions from the information they're given. People can read different sections of a newspaper, understand them, and grasp the consequences and implications of a story. Just by interacting with their environment, humans acquire experience that gives them tacit knowledge. Today's machines simply don't have that kind of ability. Yet.

As a result, common sense reasoning is still a challenge in AI research. "We have machines that are very good at playing chess, for example, but they cannot play dominoes too," said Ramon López de Mántaras, director of the Spanish National Research Council's Artificial Intelligence Research Institute. "In the last 30 years, research has focused on weak artificial intelligence - that is to say, on making machines very good at a specific topic - but we have not progressed that much with common sense reasoning," he said during a recent debate organized by the Catalan government's ministry of telecommunications and information society.

Will this situation change with the development of smart city and cognitive computing systems, designed to carry out human-like analysis of complex and diverse data sets? López de Mántaras, who has been exploring some of the most ambitious questions in AI since 1976, doesn't think so. "Neither big data nor high-performance computing is bringing us closer to robustness in AI," he said.

Futurists who talk about 'the singularity', meaning the hypothetical advent of artificial general intelligence (also known as strong AI), predict it will occur between 2030 and 2045. López de Mántaras is skeptical, however: "If there is no big change in computer science, it won't happen."

The main difficulty in artificially reproducing the functioning of the human brain stems from the fact that the organ is analogue. Its ability to process information depends not only on the electrical activity of neurons, but also on many kinds of chemical activity, which can't be modelled with current technologies. López de Mántaras speculates that non-silicon-based technologies will be needed to move forward, such as DNA computing or memristors - passive circuit elements that maintain a relationship between the time integrals of current and voltage across a two-terminal device. However, he notes that we need more than a technological change to solve the problem: we also need new mathematical models and algorithms to artificially reproduce the human brain - algorithms that are as yet unknown.
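For readers who want the textbook relation behind that description - standard circuit theory rather than anything spelled out in the article - the charge q is the time integral of the current i, the flux φ is the time integral of the voltage v, and the memristance M(q) ties the two together:

```latex
q(t) = \int_{-\infty}^{t} i(\tau)\,\mathrm{d}\tau, \qquad
\varphi(t) = \int_{-\infty}^{t} v(\tau)\,\mathrm{d}\tau, \qquad
\mathrm{d}\varphi = M(q)\,\mathrm{d}q \;\Rightarrow\; v(t) = M\big(q(t)\big)\,i(t)
```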

By 2030, though, humanoid robots that interact with their environment and show somewhat more general intelligence may well have been developed, he said, and businesses will take advantage of the trend. Social robots that serve as domestic assistants or help elderly people and those with mobility problems are already being worked on, as are self-driving vehicles, though their AI isn't yet anywhere near human-level.

Meanwhile, López de Mántaras' team is working on a project to illustrate the problems a machine faces in understanding its own limitations.

Grasping what we can and can't do may be obvious to humans, but not so to machines. The Artificial Intelligence Research Institute (IIIA) has teamed up with Imperial College London on the project, which uses an electronic musical instrument called the Reactable, developed at Pompeu Fabra University (UPF) in Barcelona. Inspired by modular analogue synthesizers such as those developed by Bob Moog in the 1960s, the Reactable is a large, round, multi-touch table which performers 'play' by manipulating physical objects on its surface, turning and connecting them to each other to make different sounds and create a composition.

The IIIA's machine is learning how to play the Reactable, adjusting its movements using common sense reasoning. If it moves too quickly, it can't perform the action correctly. The machine can learn when its actions will succeed, and it develops the ability to foresee what will happen when an action fails. Such learning is trickier than it seems.
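To make the idea concrete, here is a minimal, purely illustrative sketch of a robot counting which of its actions succeed and using those counts to predict the outcome of its next attempt. The 'slow' and 'fast' actions, the probabilities and all the names are invented for illustration; the article does not describe the actual algorithms used by the IIIA and Imperial College teams.

```python
from collections import defaultdict
import random

class ActionOutcomeModel:
    """Toy model: count how often each action succeeds, then predict success."""

    def __init__(self):
        self.successes = defaultdict(int)
        self.attempts = defaultdict(int)

    def record(self, action, succeeded):
        # Update experience after each attempt.
        self.attempts[action] += 1
        if succeeded:
            self.successes[action] += 1

    def predicted_success(self, action):
        # With no experience yet, assume a 50/50 chance.
        if self.attempts[action] == 0:
            return 0.5
        return self.successes[action] / self.attempts[action]

# Hypothetical world: moving an object too quickly usually fails.
def try_move(speed):
    return random.random() < (0.95 if speed == "slow" else 0.30)

model = ActionOutcomeModel()
for _ in range(200):
    speed = random.choice(["slow", "fast"])
    model.record(speed, try_move(speed))

print(round(model.predicted_success("slow"), 2))  # ~0.95: slow moves usually work
print(round(model.predicted_success("fast"), 2))  # ~0.3: fast moves usually fail
```

After enough trials, the robot can 'foresee' that a fast move is likely to fail before it even tries - which is the kind of self-knowledge about its own limits the project is probing.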

"We are now doing experiments to see what happens when you move the instrument around to see whether the robot is able to rediscover a sound position," López de Mántaras said - finding where the object originated from and the sound it made there.

The learning process is meant to mirror the way humans learn, an approach known as developmental or epigenetic robotics. "It is basic research without an immediate application, but it is important for the future," López de Mántaras said.

This research is necessary if future robots are to develop common sense knowledge: knowing, for example, that to move an object attached to a rope you have to pull the rope, not push it. This and other physical properties of objects can only be learned through experience. Ultimately, says López de Mántaras, for any real artificial intelligence to be created in the future, it will need to have such common sense knowledge at its disposal.

Ethics of AI

Robots that play music might not seem to raise much in the way of ethical questions. But what if AI were used by a musician to play better - does it matter if that musician wins a prestigious competition? And who should regulate AI improvements? Such questions were among those raised at the #èTIC debate, where López de Mántaras and Albert Cortina, a lawyer and the author of a book on the singularity and posthumanism, shared their concerns about reducing the risks intelligent machines pose to society.

Cortina said the debate is not only about whether humans should improve their capabilities, but about whether those improvements will generate inequality. It's easy to imagine a situation where those with the means are able to augment their own physical or mental capacities with AI, leaving the rest of society with their all-too-human abilities. Should humans' use of AI to improve themselves be capped to promote equality, or would society be better off if people were free to add machine intelligence to their own?

López de Mántaras said there are two key aspects of AI that need to be regulated: the use of lethal autonomous weapons systems, and privacy. To that end, he joined other AI experts around the globe in signing an open letter that pledges to carefully coordinate progress in artificial intelligence so that it does not grow beyond humanity's control.

"There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls," the letter says.

Nevertheless, the question of who should establish limits on the use of AI remains unanswered.
