The world has come a long way since 1955 - but has AI? What are AI researchers and their machine learning systems up to these days? And will there ever be a truly intelligent machine?
It's more than half a century since US computer scientist John McCarthy came up with the term "artificial intelligence" while working as an assistant professor of mathematics at Dartmouth College in New Hampshire.
It was 1955 and Dwight Eisenhower was the 34th President of the United States, while Anthony Eden had replaced an ailing Winston Churchill as Prime Minister just months earlier. 1955 was also the year when physicist and Nobel Prize laureate Albert Einstein died and the year Rosa Parks' refusal to give up her seat to a white man galvanised the US civil rights movement. Times were a-changing and technology was too.
McCarthy coined the term artificial intelligence in August that year in a proposal for a conference that would firmly establish AI as a research field. He was not alone in laying the groundwork for the Dartmouth Summer Research Conference on Artificial Intelligence - his fellow proposers for the two-month brainstorm were Marvin Minsky, at the time Harvard junior fellow in mathematics and neurology; Nathaniel Rochester, manager of information research at IBM; and Claude Shannon, mathematician at Bell Telephone Laboratories.
"We propose that a two month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire," the proposal begins. "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
How far has McCarthy's bold conjecture become reality? And how far has the field come since the 1950s?
In today's popular imagination AI still conjures up ideas of intelligent robots - Data in Star Trek, or a disembodied and often malevolent super-intelligence of the kind seen in The Matrix. These incarnations of AI project an image of machine intelligence that is superior to man's, at least when it comes to things like reasoning and problem solving. Emotionally, of course, fictional AI has always been a bit simplistic - if not downright psychopathic, and keen to do away with the human competition.
Needless to say, the reality of today's AI technology is not even close to achieving the far-reaching visions of sci-fi. But could scientists one day - albeit at some far-off point - create an artificial general intelligence (AGI), a machine that possesses human-level smarts?
"Ultimately I think so," Daphne Koller, a professor in the Stanford AI Lab at the Computer Science Department of Stanford University in California, tells silicon.com. "Yes, I think ultimately it is possible. Ultimately, we will get machine learning technology to the point where the machine can adapt itself sufficiently that it's actually learning from lifelong experience, and in all realms, and I think that would eventually drive us towards that goal but it's going to take a very, very, very, very long time."
Koller's caution is a recurring sentiment among scientists when talk turns to human-level AI. And little wonder - the field feels like it's still suffering from the hangover of being proved wildly over-optimistic in some of its past predictions.
"People who are much wiser than myself have made predictions about the future that have turned out to be ridiculously false," says Koller. "I think it's not because they were stupid, it's because such predictions are impossible to make."
Certain individuals, such as futurist Ray Kurzweil, remain vocally optimistic about an AGI being created sooner rather than later. Kurzweil, for instance, believes fully human-level AI, capable of passing the Turing Test, will have arrived by 2029 - a mere two decades' time.
Kevin Warwick, professor of cybernetics at Reading University, is another believer in super-intelligent AI arriving in the next few decades and ushering in an accelerated technology change - something that's been called the Singularity.
"I feel that by 2050 that we will have gone through the Singularity and it will either be intelligent machines actually dominant - The Terminator scenario - or it will be cyborgs, upgraded humans. I really by 2050 can't see humans still being the dominant species. I just cannot believe that the development of machine intelligence would have been so slow as to not bring that about."
However, there are many scientists who are far more circumspect about the short- and long-term prospects of creating an AGI.
Eric Horvitz, president of the Association for the Advancement of Artificial Intelligence (AAAI) and a principal researcher at Microsoft Research, has a relatively measured view of the pace of progress towards this goal.
"I do believe that we might one day understand enough about intelligence to create intelligences that are as rich and nuanced as human intelligence. However, I don't believe that we will be able to come to this competency for a very long time. Such a competency may take hundreds of years," he said.
Professor Alan Winfield, the Hewlett Packard Professor of electronic engineering at the University of the West of England, who conducts research at the Bristol Robotics Lab, also believes we may be waiting some time.
"I certainly think human-level artificial intelligence is a long way into the future," he told silicon.com. "There's a lot of nonsense written about it and people say 'yes, but you know computing power is increasing - Moore's Law and all of that'. Well, that's true, but just having a lot of raw material doesn't mean you can build a thing - having lots and lots of steel doesn't mean you can build a suspension bridge. You need the design."
Arguably, the best design for intelligence created to date remains the biological brain - and scientists are already looking into whether it will one day play a part in AI.
Warwick's current work at Reading combines AI, robotics, electronics and neuroscience, using cultivated brain cells taken from rats as the controlling mechanism for a robot body - a hybrid AI.
"My brain research project at the moment is putting brain cells into a physical robot body - so this is actually taking brain cells initially from a rat brain, separating them, growing them within an incubator and then linking them up to the robot body so the only brain of the robot is this biological brain and the physical body is a robot body - which is tremendously exciting," he tells silicon.com.
By doing this, Warwick is able to directly compare the performance of the rat-brained robots with that of robots that have purely software brains - flagging up differences between biological and silicon components. "What we find over a period of time is that the habit of doing a particular action strengthens the neural pathways and [the rat brain robot] gets better at doing it and is more reliable at doing it," he said. "Because the biological brain is changing its physical makeup - the connections, the strength of the connections are changing. And that takes a while for them to change."
One imagined far-future application for such work might be a mechanical robot with a biological brain - and Warwick says using human brain cells to power robots is the project's next step.
But if AI is to create intelligence that can outstrip humanity's own, why use the humble human brain as a template? For all its failings, it's still thrashing the competition.
"Depending on what assumptions you make you might think that the most powerful supercomputers today are just beginning to reach the lower end of the range of estimates of the human brain's processing power," Nick Bostrom, director of the Future of Humanity Institute (FHI) at Oxford University, notes. "But it might be that they still have two, three orders of magnitude to go before we match the kind of computation power of the human brain."
It seems even trying to decide how the current generation of computer hardware compares to the processing power of the human brain is a matter of conjecture - and uncertainty is very much a recurring theme in the world of AI.
At the heart of AI lies a conundrum that has not only gone unsolved, but that scientists don't yet even know how to solve: how do you manufacture intelligence? Is it even possible? You can read more about the subject in this article, Artificial intelligence: The conundrum of consciousness.
While the problems associated with determining the very nature of consciousness and intelligence have doubtless dogged the progress of developing human-level AGI, there's another reason why its creation remains a far-distant prospect: research simply isn't being directed down that route.
"I think in the '60s, '70s people were maybe making predictions - we were going to have AI systems like the human brain in 10 years and things like that - maybe they were completely unrealistic scenarios [but] we've tended to use AI systems for specific areas. We've not been looking at recreating humans," Reading's Warwick tells silicon.com.
The bread and butter of AI research is instead to be found in making iterative improvements to existing systems, according to the FHI's Bostrom.
"There is a large amount of research on very specific applications and on fine-tuning different algorithms and doing work that amounts to incremental improvements on what we have today," he says, adding: "There is another much smaller set of people who are interested in trying to develop general artificial intelligence and it's much harder there to judge whether there is any real progress or not, because there hasn't been any useful or impressive applications of the intermediary results so far."
Incremental improvements may sound less exciting than having machines as smart as we are, but such work is exactly where AI starts to get interesting. After all, specific systems are far more immediately useful to the modern world than some amorphous AGI.
Some of these specific systems - sometimes referred to as narrow AI - deployed in the world today include things like autopilot software for aeroplanes and just-in-time inventory systems that keep shelves stocked with goods.
Search engines too are riddled with AI, according to the AAAI's Horvitz.
"The large search engines are really large-scale efforts in AI," he says. "They are already more intelligent than humans in their ability to find information, interpret people's intentions from queries, and do such tasks as translation between many language pairs. So we don't have 'human-level' intelligence but we've had a number of important breakthroughs in machine perception, learning, and reasoning, and are seeing rich applications in areas like online services, robotics and conversational systems."
There are hundreds of narrow AIs beavering away under the skin of society according to futurist Ray Kurzweil: "Every time you send an email or connect a cell phone call, intelligent algorithms route the information. Pick up any complex product and it was designed at least in part by intelligent computer-assisted design and assembled in robotic factories with inventory levels controlled by intelligent just-in-time inventory systems. Intelligent algorithms automatically detect credit card fraud, diagnose electrocardiograms and blood cell images, fly and land airplanes, guide intelligent weapon systems and a lot more."
"These were all research projects just a decade ago," he adds.
So while AI research hasn't so far spawned an electronic entity that can do complex maths, locate a bottle of ketchup in a Tesco Metro and pick out a tie to wear with your blue suit, it has created increasingly sophisticated software that acts as the electronic brains behind many practical, useful and even essential parts of our modern infrastructure - performing complex tasks that are often impossible for a human brain to do, certainly anywhere near as quickly and accurately.
Unsurprisingly then, AI's influence looks set to spread and deepen.
"Every company now realises that AI can improve how they do things," Stanford's Koller says. "Every company has an IT component, realises that AI can just dramatically improve how their system functions and so I've had students who have gone to everything from industry giants like Google to small start-ups doing computer security and anything in between."
Asked whether the increasing complexity of society is making AI more imperative, she is unequivocal: "Absolutely. We're faced with this flood of data in all realms of our existence - whether it's in the scientific disciplines where people are collecting data much faster than they're able to understand it, or whether we consider the web where the amount of data that people are putting out there is just enormous and there is so much information out there it's just impossible for people to keep up with that and sort through it, and figure out what it is they should be looking at when they're looking for something."
"When you think about how the entire world around us is being equipped with sensors of all different kinds - your refrigerator is probably a computer, your car has at least six computers in it, all of these are little computers that are sensing the world, telling us useful information that currently no one's doing anything with trying to make sense of that and so I think that all of these data-rich, knowledge-poor domains are just a tremendous opportunity for AI systems and AI's likely to dramatically revolutionise all of them," she adds.
And Koller is not the only one with such a view - the AAAI's Horvitz also believes AI will be a disruptive force in many walks of life: "AI will have a great deal of influence in transportation, education, healthcare, and many areas of commerce, as well as in basic scientific research."
With the energy industry needing to upgrade its creaky and limited infrastructure, AI could be just the ticket there too.
According to Koller, research currently ongoing at Stanford is looking at putting AI into a smarter electric grid so power can flow both ways and be distributed dynamically according to changing demand - an area she believes offers "a dramatic opportunity for AI".
"This is one of those cases where people can try and engineer this in advance but I think that just like in other applications it will be demonstrated that a learning adapted system just does a much better job than something that people could engineer with their own perceptions of what's going to happen," she added.
But what is meant by "a learning adapted system"? What's so special about systems with a sprinkling of AI compared to more common-or-garden software?
Machine learning is a core concept that crops up time and again within AI - indeed, the two terms are sometimes used interchangeably. The premise is that, instead of the programmer trying to define an explicit and comprehensive set of instructions for the system to slavishly follow in order to achieve its goal, the system is fed lots of labelled examples of relevant data from which it can develop its own rules of recognition or operation.
An example would be a character recognition system that is shown thousands of images of letters so it can build up enough knowledge of their shapes to recognise letters in typefaces or handwriting it hasn't seen before - or at least do a better job than a system that hasn't been fed a diet of relevant data.
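The learning-by-example idea can be sketched in a few lines of code. This is a deliberately toy illustration, not a real OCR system: the "letters" are invented 3x3 bitmaps, and the learner simply averages its labelled examples into a prototype per letter, then labels an unseen bitmap by whichever prototype it most resembles. No rule for what an "L" looks like is ever written down by the programmer.

```python
def train(examples):
    """examples: list of (label, pixels) pairs -> average prototype per label."""
    sums, counts = {}, {}
    for label, pixels in examples:
        acc = sums.setdefault(label, [0] * len(pixels))
        for i, p in enumerate(pixels):
            acc[i] += p
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(prototypes, pixels):
    """Return the label whose prototype is closest by squared distance."""
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(proto, pixels))
    return min(prototypes, key=lambda lbl: dist(prototypes[lbl]))

# Labelled training data: slightly noisy 3x3 renderings of "L" and "T".
examples = [
    ("L", [1,0,0, 1,0,0, 1,1,1]),
    ("L", [1,0,0, 1,0,0, 1,1,0]),
    ("T", [1,1,1, 0,1,0, 0,1,0]),
    ("T", [1,1,1, 0,1,0, 0,0,0]),
]
prototypes = train(examples)
# An "L" variant the system has never seen before is still recognised:
print(classify(prototypes, [1,0,0, 1,1,0, 1,1,1]))  # → L
```

Feed it more labelled examples and the prototypes sharpen - which is the whole point: the system's competence comes from data, not from hand-written rules.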
Spam filters are also learning adapted systems: they ask email users to label what is and isn't spam in order to (hopefully) improve their performance. Machine learning is powerful because it offers a way for systems to better manage the complexity of the environments they are being asked to function in, and also to adapt, evolve and improve as they operate. The bottom line is that machines can teach themselves strategies for operating more successfully than humans are able to explain the world to them.
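The spam-filter case can be sketched the same way. The snippet below is a minimal, hypothetical illustration (a naive Bayes classifier over invented toy messages, not any real mail client's implementation): each time a user labels a message, the filter updates its word counts, and classification simply asks which label makes the message's words more probable.

```python
import math
from collections import Counter

class SpamFilter:
    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        """Learn from a message the user has labelled 'spam' or 'ham'."""
        self.words[label].update(text.lower().split())
        self.counts[label] += 1

    def is_spam(self, text):
        # Compare log-probabilities under each label, with add-one
        # smoothing so words never seen before don't zero everything out.
        vocab = len(set(self.words["spam"]) | set(self.words["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.words[label].values())
            score = math.log(self.counts[label] / sum(self.counts.values()))
            for w in text.lower().split():
                score += math.log((self.words[label][w] + 1) / (total + vocab))
            scores[label] = score
        return scores["spam"] > scores["ham"]

f = SpamFilter()
f.train("win free prize money now", "spam")
f.train("claim your free prize", "spam")
f.train("meeting notes attached", "ham")
f.train("lunch at noon tomorrow", "ham")
print(f.is_spam("free money prize"))        # → True
print(f.is_spam("notes for the meeting"))   # → False
```

Every extra labelled message refines the word statistics, which is why real filters keep improving as users keep clicking "mark as spam".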
"If you try and hand-engineer sensory - anything that interacts with the physical world, whether it's on the sensing side or on the manipulation and activation side - hand engineering such systems successfully in a way that really deals with the myriad possibilities that one can encounter turns out to be essentially impossible," according to Stanford's Koller. "Of course you can never prove that's impossible but no one's succeeded in doing this successfully in unconstrained environments and machine learning is a way for the machine to adapt itself to its environment."
A recent example of an AI breakthrough made possible in large part by developments in machine learning is the Darpa Grand Challenge driverless car competition. None of the robot cars completed the first desert race course in 2004, but the second year of the competition saw five cars make it over the finish line. A third challenge, held in 2007, moved from the desert to a mock city, where the cars were required to cope with traffic. Six teams were able to successfully complete this more challenging Urban Challenge.
"People are just remarkably evolved to interpret the kind of sensory inputs around us whereas for a computer system even to look at an image and say there is a person in this image - and I better be careful not to run over them - is just a tremendously difficult challenge," Koller says. "And the fact that we have come as far as we have is, I think, a dramatic step forward relative to where AI was 10 years ago."
While it may be a dramatic step forward, the days of AGI - even the early days of AGI - look to be a long way off. What seems more certain in the short term is that AI techniques will continue to ramp up the sophistication of our business processes and transform our infrastructures and industries from the inside out.