The world has come a long way since 1955, but has AI? What are AI researchers and their machine learning systems up to these days? And will there ever be a truly intelligent machine?
It's more than half a century since US computer scientist John McCarthy came up with the term "artificial intelligence" while working as an assistant professor of mathematics at Dartmouth College in New Hampshire.
It was 1955 and Dwight Eisenhower was the 34th President of the United States, while Anthony Eden had replaced an ailing Winston Churchill as Prime Minister just months earlier. 1955 was also the year when physicist and Nobel Prize laureate Albert Einstein died and the year Rosa Parks' refusal to give up her seat to a white man galvanised the US civil rights movement. Times were a-changing and technology was too.
McCarthy coined the term artificial intelligence in August that year in a proposal for a conference that would firmly establish AI as a research field. He was not alone in laying the groundwork for the Dartmouth Summer Research Conference on Artificial Intelligence - his fellow proposers for the two-month brainstorm were Marvin Minsky, at the time Harvard junior fellow in mathematics and neurology; Nathaniel Rochester, manager of information research at IBM; and Claude Shannon, mathematician at Bell Telephone Laboratories.
"We propose that a two month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire," the proposal begins. "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
How far has McCarthy's bold conjecture become reality? And how far has the field come since the 1950s?
In today's popular imagination AI still conjures up ideas of intelligent robots - Data in Star Trek, or a disembodied and often malevolent super-intelligence of the kind seen in The Matrix. These incarnations of AI project an image of machine intelligence that is superior to man's, at least when it comes to things like reasoning and problem solving. Emotionally, of course, fictional AI has always been a bit simplistic - if not downright psychopathic and keen to do away with the human competition.
Needless to say, the reality of today's AI technology is not even close to achieving the far-reaching visions of sci-fi. But could scientists one day - albeit at some far-off point - create an artificial general intelligence (AGI), a machine that possesses human-level smarts?
"Ultimately I think so," Daphne Koller, a professor in the Stanford AI Lab at the Computer Science Department of Stanford University in California, tells silicon.com. "Yes, I think ultimately it is possible. Ultimately, we will get machine learning technology to the point where the machine can adapt itself sufficiently that it's actually learning from lifelong experience, and in all realms, and I think that would eventually drive us towards that goal but it's going to take a very, very, very, very long time."
Koller's caution is a recurring sentiment among scientists when talk turns to human-level AI. And little wonder - the field feels like it's still suffering from the hangover of being proved wildly over-optimistic in some of its past predictions.
"People who are much wiser than myself have made predictions about the future that have turned out to be ridiculously false," says Koller. "I think it's not because they were stupid, it's because such predictions are impossible to make."