Certain individuals, such as futurist Ray Kurzweil, remain vocally optimistic that AGI will be created sooner rather than later. Kurzweil believes fully human-level AI, capable of passing the Turing Test, will arrive by 2029 - barely two decades away.
Kevin Warwick, professor of cybernetics at Reading University, is another believer that super-intelligent AI will arrive in the next few decades and usher in a period of accelerated technological change - something that's been called the Singularity.
"I feel that by 2050 that we will have gone through the Singularity and it will either be intelligent machines actually dominant - The Terminator scenario - or it will be cyborgs, upgraded humans. I really by 2050 can't see humans still being the dominant species. I just cannot believe that the development of machine intelligence would have been so slow as to not bring that about."
However, there are many scientists who are far more circumspect about the short- and long-term prospects of creating an AGI.
Eric Horvitz, president of the Association for the Advancement of Artificial Intelligence (AAAI) and a principal researcher at Microsoft Research, has a relatively measured view of the pace of progress towards this goal.
"I do believe that we might one day understand enough about intelligence to create intelligences that are as rich and nuanced as human intelligence. However, I don't believe that we will be able to come to this competency for a very long time. Such a competency may take hundreds of years," he said.
Professor Alan Winfield, the Hewlett Packard Professor of Electronic Engineering at the University of the West of England, who conducts research at the Bristol Robotics Lab, also believes we may be waiting some time.
"I certainly think human-level artificial intelligence is a long way into the future," he told silicon.com. "There's a lot of nonsense written about it and people say 'yes, but you know computing power is increasing - Moore's Law and all of that'. Well, that's true, but just having a lot of raw material doesn't mean you can build a thing - having lots and lots of steel doesn't mean you can build a suspension bridge. You need the design."
Arguably, the best design for intelligence created to date remains the biological brain - and scientists are already looking into whether it will one day play a part in AI.
Warwick's current work at Reading combines AI, robotics, electronics and neuroscience, using cultivated brain cells taken from rats as the controlling mechanism for a robot body - a hybrid AI.
"My brain research project at the moment is putting brain cells into a physical robot body - so this is actually taking brain cells initially from a rat brain, separating them, growing them within an incubator and then linking them up to the robot body so the only brain of the robot is this biological brain and the physical body is a robot body - which is tremendously exciting," he tells silicon.com.
By doing this, Warwick can directly compare the performance of the rat-brained robots with that of robots whose brains are purely software - highlighting the differences between biological and silicon components. "What we find over a period of time is that the habit of doing a particular action strengthens the neural pathways and [the rat brain robot] gets better at doing it and is more reliable at doing it," he said. "Because the biological brain is changing its physical makeup - the connections, the strength of the connections are changing. And that takes a while for them to change."
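The strengthening Warwick describes resembles the classic Hebbian learning rule from computational neuroscience - connections between neurons that fire together get stronger with repetition. The sketch below is purely illustrative and not a model of Warwick's actual experiment; the learning rate and activity values are arbitrary assumptions.

```python
# Minimal Hebbian-learning sketch (illustrative only, not Warwick's setup):
# repeating an action strengthens the synaptic weight along the pathway
# that produced it, so the response becomes stronger and more reliable.

def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Strengthen a connection when pre- and post-synaptic activity co-occur."""
    return weight + learning_rate * pre * post

weight = 0.1                 # initial connection strength (arbitrary)
for _ in range(20):          # the "robot" repeats the same action 20 times
    pre_activity = 1.0       # input neuron fires
    post_activity = weight * pre_activity   # downstream response
    weight = hebbian_update(weight, pre_activity, post_activity)

# After repetition, the pathway is stronger than it started:
print(weight > 0.1)  # True
```

Each repetition multiplies the weight by (1 + learning_rate), mirroring the gradual, physical change in connection strength that Warwick observes taking "a while" in the biological brain.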
One imagined far-future application for such work might be a mechanical robot with a biological brain - and Warwick says using human brain cells to power robots is the project's next step.
But if AI is to create intelligence that can outstrip humanity's own, why use the humble human brain as a template? For all its failings, it's still thrashing the competition.
"Depending on what assumptions you make you might think that the most powerful supercomputers today are just beginning to reach the lower end of the range of estimates of the human brain's processing power," Nick Bostrom, director of the Future of Humanity Institute (FHI) at Oxford University, notes. "But it might be that they still have two, three orders of magnitude to go before we match the kind of computation power of the human brain."
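Bostrom's "two, three orders of magnitude" framing is just a ratio expressed on a log scale. The arithmetic can be sketched as follows - the specific figures here are placeholder assumptions for illustration, not estimates from Bostrom or the FHI:

```python
import math

# Back-of-the-envelope comparison (placeholder figures, for illustration):
# published estimates of the brain's processing power commonly span a wide
# range, say 1e16 to 1e18 operations per second; suppose a top supercomputer
# manages about 1e15.
brain_low, brain_high = 1e16, 1e18
supercomputer = 1e15

gap_low = math.log10(brain_low / supercomputer)
gap_high = math.log10(brain_high / supercomputer)
print(f"{gap_low:.0f} to {gap_high:.0f} orders of magnitude to go")
```

The point of the exercise is how sensitive the answer is to the brain estimate chosen: moving the assumed brain figure by a factor of ten shifts the conclusion by a whole order of magnitude, which is exactly why the comparison remains conjecture.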
It seems even deciding how the current generation of computer hardware compares to the processing power of the human brain is a matter of conjecture - and uncertainty is very much a recurring theme in the world of AI.