What happened to Turing's thinking machines?

Summary: Artificial intelligence is still a long way from delivering the human intelligence in robot form that has long been common in science fiction.

Alan Turing. Image credit: Wikipedia

Less than a decade after the first electronic computers were built, the British mathematician Alan Turing posed the question, 'Can machines think?'. To answer it, Turing devised a test in which machines conducted written conversations with human judges. If the machine's responses fooled the judges into believing it was a person, it could be said to be a thinking machine.

More than 60 years after his seminal 1950 paper, and following decades of exponential growth in the power of computers, Turing's thinking machine has yet to be realised outside the realms of science fiction, where the intelligent robot – from HAL 9000 to C-3PO – is common. Instead, modern AI possesses a very different sort of intelligence to our own: one that is narrow and focused on specialised tasks, such as helping to fly planes or screening loan applications. In carrying out these tasks, machines can make sound judgements faster and more consistently than people, but they lack the versatility of human thought.

So where are the thinking machines, and will computers ever match the general intelligence of an individual?
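The test Turing proposed, often called the imitation game, is simple enough to sketch in a few lines of code. The sketch below is illustrative only: the respondent functions and the judge are hypothetical stand-ins, not a real implementation of the test.

```python
import random

# Hypothetical stand-ins: a canned-response "machine" and a faked "human".
def machine_respondent(question):
    canned = {
        "Are you human?": "Of course I am.",
        "What is 2 + 2?": "Four, I think.",
    }
    return canned.get(question, "That's an interesting question.")

def human_respondent(question):
    return "Let me think about that for a moment..."

def run_trial(questions):
    """One round of the imitation game: the judge reads both transcripts and
    guesses which respondent is the human. Returns True if the machine is
    mistaken for the person."""
    respondents = {"A": machine_respondent, "B": human_respondent}
    transcripts = {label: [ask(q) for q in questions]
                   for label, ask in respondents.items()}
    # This toy judge guesses at random, so the machine "passes" about half
    # the time; a real judge would weigh the content of the transcripts.
    guess = random.choice(sorted(transcripts))
    return respondents[guess] is machine_respondent

questions = ["Are you human?", "What is 2 + 2?"]
passes = sum(run_trial(questions) for _ in range(1000))
print(f"Machine judged to be the human in {passes} of 1,000 trials")
```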

Why robots find hard what we find easy – and vice versa

Since the field of AI got underway in the 1950s, researchers have realised there is a gulf between what humans and what computers find difficult, says Dr Peter Norvig, director of research at Google and co-author of one of the standard works on AI.

"What we thought was hard, things like playing chess, turned out to be easy, and what we thought was easy, things like recognising faces or objects, which a child can do, turned out to be much harder," he says.

Computers excel at working out the best move in a game like chess because it has well-defined rules and established patterns of play that can be rapidly checked by a machine. The problem with getting computers to emulate human intelligence is that they need to be capable of interacting with the world, of tasks such as recognising people or spoken language, which requires them to handle variables that are constantly changing and hard to predict.

In the 1980s, AI researchers realised they needed to take a different approach if machines were going to understand that real-world complexity, he says. "Part of that shift was from [focusing on] the abstract and formal rules of a game like chess to the messy real world, and going along with that is a shift from formal logic, where everything is boolean, true or false, to probability, where everything is uncertain," Norvig says.
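To make Norvig's contrast concrete, here is a minimal sketch, with invented numbers, of the difference between a boolean rule of the kind that governs a board game and a probabilistic judgement of the kind perception requires.

```python
# Boolean world: a chess-style rule is a crisp true/false test.
def rook_move_is_legal(start, end):
    # A rook may move along a rank or a file, full stop.
    return (start[0] == end[0]) != (start[1] == end[1])

# Probabilistic world: recognising a face is a degree of belief, obtained by
# combining a noisy detector score with a prior via Bayes' rule. The
# likelihoods and the prior below are made up for illustration.
def probability_face(detector_score, prior=0.1):
    p_score_given_face = detector_score
    p_score_given_not_face = 1.0 - detector_score
    evidence = p_score_given_face * prior + p_score_given_not_face * (1 - prior)
    return p_score_given_face * prior / evidence

print(rook_move_is_legal((0, 0), (0, 5)))   # True or False, nothing in between
print(round(probability_face(0.9), 3))      # 0.5: a belief, not a verdict
```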
Judea Pearl

Judea Pearl, professor in the computer science department at the University of California, Los Angeles, says that computers are now able to handle aspects of real-world uncertainty through mathematical models that apply Bayesian probability theory to as many as 100,000 variables. "Now we understand how to cope with uncertainty, because we are armed with all the guarantees and the warning signs that mathematics gives you," he says.

Perhaps the most high-profile examples of how well modern computers can handle the messy environment of the real world are Google's driverless cars, which have safely navigated more than 20,000 miles of rural and city roads, and the wide-ranging speech and language recognition capabilities of Apple's virtual assistant Siri.

Future breakthroughs in handling uncertainty, says Pearl, will afford AI routines a far greater understanding of context, for instance providing the next generation of virtual assistants with the ability to recognise speech in noisy environments and to understand how the position of a phrase in a sentence can change its meaning.

But progress in the field of AI has been more reliant on advances in theory than on increases in computer processing power and storage. Norvig says: "There's a minimum floor: if you don't have the computing power you're not going to succeed, but just having more doesn't mean you're going to get somewhere.

"There are a couple of billion computers in the world and we do have enough computing power today, if we pooled all of our resources, to far outstrip a brain, but we don't know how to organise it to do anything with it. It's not just having the power, it's knowing what to do with it."

And without insightful mathematical modelling, Pearl says, certain tasks would be impossible for AI to carry out, as the amount of data generated would rapidly scale to a point where it became unmanageable for any foreseeable computing technology.

The importance of theoretical breakthroughs, to some extent, undermines the theory put forward by Ray Kurzweil and others that mankind is decades away from creating a technological singularity, an AI whose general intelligence surpasses our own. Exponents of the theory use the exponential growth of computer processing power as an indicator of the rate of progress towards human-level AI. Norvig is sceptical about predictions that a technological singularity will be created before 2050: "I really object to the precision of nailing it down to a decade or two. I'd be hard pressed to nail it down to a century or two. I think it's farther off."
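Pearl's point can be illustrated with a toy Bayesian-network-style calculation. The variables and probabilities below are invented for illustration; the systems he describes handle tens of thousands of variables rather than three.

```python
# Three variables: whether the room is noisy, which word was spoken, and
# which word the recogniser heard. Inference is done by enumeration.

P_noisy = 0.3                                  # P(room is noisy)
P_word = {"yes": 0.5, "no": 0.5}               # prior over the word spoken

def p_heard_given(word, heard, noisy):
    # Likelihood of the acoustic evidence: noise makes confusion more likely.
    correct = 0.7 if noisy else 0.95
    return correct if heard == word else 1.0 - correct

def posterior_word(heard):
    """P(word | heard), summing out whether the room was noisy."""
    joint = {}
    for word, prior in P_word.items():
        joint[word] = sum(
            prior * p_n * p_heard_given(word, heard, noisy)
            for noisy, p_n in ((True, P_noisy), (False, 1.0 - P_noisy))
        )
    total = sum(joint.values())
    return {word: p / total for word, p in joint.items()}

print(posterior_word("yes"))   # belief over what was actually said
```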

AI and robots

While we may not be on the cusp of developing intelligences greater than our own, we are likely not far off an explosion in robotics driven by advances in AI, similar to the way home PCs suddenly took off in the 1980s. "Look at progress in speech recognition, machine translation, computer vision, computer planning and operations," says Norvig, adding that the error rate in these areas is roughly halving every decade.
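The arithmetic behind that halving is straightforward; the starting error rate below is a made-up figure used purely to show the compounding effect.

```python
# If an error rate halves every decade, it falls by a factor of 16 in 40 years.
start_error = 0.40          # hypothetical 40% error rate as a starting point
for decade in range(5):
    print(f"after {decade} decade(s): {start_error * 0.5 ** decade:.1%}")
```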
Peter Norvig

Thanks to progress in these sub-fields of AI, autonomous systems are being built that can interact with and learn about their environment, as well as make decisions that aid both themselves and humans.

Professor Nick Jennings is a chief scientific advisor to the UK government and heads the Agents, Interaction and Complexity Group in the computer science department of Southampton University. "It [the field of AI] has really come back again to the idea of constructing whole intelligent things. Not in the general intelligence area, as in constructing something of human-like intelligence, but as in autonomous systems," says Jennings, pointing to the Google driverless car, which brings together research in many areas such as sensors, information processing, reasoning and planning.

Norvig predicts a boom in the popularity of robots and portable virtual assistants that utilise new UIs, such as the speech recognition of Siri and the augmented reality display of Google's Project Glass. "We will see a lot happen in the next couple of decades. Now everybody is carrying a computer with them that has a phone, has a camera and interacts with the real world. People are going to want to have more of a partnership where they tell the phone something and the phone tells them something back, and they treat it more as a personality.

"In terms of robotics we're probably where the world of PCs was in the early 1970s, where you could buy a PC kit and, if you were an enthusiast, you could have a lot of fun with that. But it wasn't a worthwhile investment for the average person. There wasn't enough you could do that was useful. Within a decade that changed: your grandmother needed word processing or email, and we rapidly went from a very small number of hobbyists to pervasive technology throughout society in one or two decades.

"I expect a similar sort of timescale for robotic technology to take off, starting roughly now."

AI and humans – working together?

The collaboration between humans and intelligent computer systems, which can interact with the real world and make decisions for themselves, is also what Jennings sees as the most likely future for AI.

At Southampton he is working on a research project called Orchid, in which networked AI systems called agents work with humans to help make decisions. Researchers working on the project, which runs until 2015, are examining how this approach could be used in a number of scenarios, including disaster response. Orchid will test how AI agents capable of planning, reasoning and acting can help emergency services and others on the ground react to a rapidly changing situation. The AI agents will scrutinise data from a number of sources and negotiate with other agents and humans to decide the best response, for instance which fire should be extinguished first.

Jennings feels this collaborative approach is the best way forward for AI because, while machines already outperform humans in certain areas, such as their ability to parse a trillion web pages in the blink of an eye, he says they will never surpass humans in every field of endeavour.

"As to constructing a system that is generally better than humans in all dimensions, I don't think that it's going to happen. I just think that there are things that humans are innately good at, like creativity. As a human being I feel reassured by that," he says.
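The article does not describe Orchid's actual algorithms, but the general idea of agents pooling their assessments for a human decision-maker can be sketched as follows; the agent names, scores and the simple averaging rule are all hypothetical.

```python
# Hypothetical sketch: several software agents score the same candidate
# actions from different data sources, and the scores are combined so a
# human responder sees a single ranked recommendation.

fires = ["school", "warehouse", "fuel depot"]

# Each agent scores every fire from 0 to 1 based on the data it watches.
agent_scores = {
    "satellite_agent": {"school": 0.6, "warehouse": 0.3, "fuel depot": 0.9},
    "sensor_agent":    {"school": 0.8, "warehouse": 0.4, "fuel depot": 0.7},
    "casualty_agent":  {"school": 0.9, "warehouse": 0.1, "fuel depot": 0.5},
}

def recommend(scores):
    """Average the agents' scores and rank the candidate actions."""
    combined = {
        fire: sum(agent[fire] for agent in scores.values()) / len(scores)
        for fire in fires
    }
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

for fire, score in recommend(agent_scores):
    print(f"{fire}: priority {score:.2f}")
```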

Barriers to the creation of thinking machines?

There could also be limitations to the abilities of AI that people are unaware of, due to our limited understanding of how the human brain works. For instance, as we discover more about the brain we could find that some of its critical faculties are tied to its structure and size, limiting our ability to create a superior artificial alternative.

"We don't know what level of understanding is going to come out and what are the limitations," says Norvig. "If the human brain could have 1,000 times more brain cells, would it work at all? Would it work completely differently? Or work the same but have more?"

Norvig doesn't rule out the possibility of developing a human-level general intelligence, and says, while he wouldn't care to guess when that might happen, it is right to first focus on mastering individual tasks, such as machine vision. He draws a parallel between this step-by-step approach and the Space Race in the 20th century.

"It wouldn't have made sense in 1930 to say 'We haven't got to the moon yet, we've really got to get started on the whole moon project'. Instead we said 'Aviation as a whole has to develop a bunch of components and then, when they're ready, we can start putting them together'," he says.

"At some point there will be a consensus that 'Now is the time, we have all of what we need, let's work really hard to put it together', but we're not there yet. At the moment we can't put the pieces together and get to the moon; we're still flying around in biplanes."

Yet while man may still be a long way from creating a thinking machine, Pearl believes that just pursuing that ideal has inspired some of AI's most impactful discoveries. "The mere aspiration and striving to that goal has had a tremendous positive effect. It has yielded progress that could not have been achieved without that drive," he says.

About

Nick Heath is chief reporter for TechRepublic UK. He writes about the technology that IT-decision makers need to know about, and the latest happenings in the European tech scene.

Talkback

58 comments
  • We don't even know what

    intelligence is, so how can we hope to emulate it? Many times humanity's ignorance is only exceeded by its arrogance.
    baggins_z
    • Why try to emulate intelligence?

      That is, odd as it may sound, precisely the reason to do it. Attempting to emulate intelligence is probably the best way we have available to learn what it is and, possibly more importantly, what it is not. Remember, in science a negative result is still useful, and in fact may be a better clue than a positive result.
      rocket ride
      • Science

        In science, a negative result is really the only thing that is useful. Positive results are all but worthless. Science is, by definition, a negative proof system, or, to put it another way, a disproof system. It works by disproving hypotheses. Science does not, nor does it claim to, prove anything. It just disproves all the other alternatives until a threshold is reached where you decide it is safer to assume something is true rather than that it is not.
        .DeusExMachina.
      • Disagree. It's ridiculous to try and emulate something without even

        knowing what it is you are trying to emulate. It would be like trying to make a heart-lung machine without knowing what a heart or lung even does.
        baggins_z
    • We already have them

      Turing's test relies more on the reactions of the human participants than on any thinking on the part of the machine. In the 60s, ELIZA and PARRY were regarded as thinking machines by some people who found their replies relevant. Today we have speech recognition in lots of devices and platforms, and even the bumbling Siri would be treated as intelligent by the vast majority of users if they weren't told otherwise. In fact, Siri and its sisters provide more intelligent conversation than most of our acquaintances (and a significant percentage of ZDNet bloggers and posters).

      AI itself has fractured between those who think intelligence is an infinite number of IF..THEN statements and those who believe intelligence is an emergent property of networks. Luckily, we have those more concerned with function and use than pet theories, so we do get lots of advancement in what we would see as some of the characteristics of intelligence, such as speech recognition, pattern recognition, gaming, learning with neural networks etc.

      As to creating thinking, self-aware computers, I'm a little wary of a human mind with all its foibles and limitations running at light speed with its own emerging goals and priorities. We may get there, but I see lots of problems, danger and moral conundrums in the future.

      In fact, some of our favourite ZDNet bloggers and trolls would have a hard time passing a Turing test, as they prefer programmed responses to analysis ;-)
      tonymcs@...
    • not what, how

      @baggins_z
      You are right, it doesn't make sense to emulate something without knowing what it does (that would be pointless). But we do know WHAT the brain does, it stores memories, enables our speech, controls our body, ... The question is HOW it does it. But knowing HOW it is done is not strictly necessary when emulating something, because then you're just copying it...
      belli_bettens@...
  • Missing photo credit

    You're missing a photo credit for the picture of Robbie the Robot. The picture included appears to be the same as the one on Wikipedia from the 2006 Comic Con in San Diego.
    mheartwood
  • Yet another version

    I am amused at how often ZDNet and CNet writers make mistakes with the word "there". So far it has only been a mix up between the two words "there" and "their". Now we have an innovation "they're" in this article.
    Some day we will have enough AI to sort it out "but we're not they're yet".
    alfred@...
    • Fixed

      Good spot, thanks!
      zwhittaker
    • And more spelling problems...

      The article states 'Instead, modern AI posses a very different sort of intelligence to our own,...'
      Surely this should read 'poses'... another hope for an AI 'spellchequer'!
      johnkelly1949@...
      • Maybe you're new here?

        If you waste time pointing out grammar and spelling errors in the writing of bloggers at ZDNet you'll be here all day.
        @ZWhittaker
        No, it is NOT a good spot. It is glaringly obvious to anyone with a reasonable facility with the language. It should have jumped out of the page to anyone who read it. That it, and all the other errors that abound here, don't, is not a testament to the eagle eyes or grammar pedantry of the readers, it is an indictment of what passes for journalism and editing here at ZDNet.
        .DeusExMachina.
  • No Mention of SpiNNaker???

    This project seems to be a step in the right direction for AI, but no mention of it at all. I find this ODD!
    bmandery
  • Brains are NOT digital computers, and there is no software

    I hate to say it, but brains are analog computers without software per se. AI research and theories are structured around human perception of how brains develop mind rather than how mind is actually developed as a consequence of evolved analog biological machinery. Take for example an analog circuit that compares levels and an ADC that provides input to a CPU that must then deal with the input value. The analog comparator circuit simply compares the input levels, whereas the CPU must be properly programmed to understand what to do with an output that is supposed to be the result of the input to the ADC.

    Evolution has produced an analog biological circuit that changes its connections in response to its environment and innate nature. This doesn't mean that AI will not produce something that is an analog to mind, but it will be fundamentally different than mind.
    jeff@...
    • Intelligence can be independent of the brain

      There are documented cases of humans with normal intelligence who have essentially no functioning brain tissue or extremely compressed and distressed brain tissue. See here for one example: http://www.foxnews.com/story/0,2933,290610,00.html

      Other examples exist of extremely small humans with brain sizes equal to that of a newborn who have normal adult intelligence. See here for an example: http://today.msnbc.msn.com/id/45695607/ns/today-today_news/t/meet-worlds-shortest-woman-shes-inches-tall/#.T-S1ZY4zJUM
      baggins_z
      • ...

        Yes, it is possible that you have no brain.
        MadDonkey
      • No, it can't

        You are being misled and misinformed. The first link you posted is a testament to the poor reliability of the news media at large on science issues, and Fox "News" in specific. In general, they can't be trusted to relay even basic science information with any reasonable degree of accuracy. To wit:
        Your first citation was to a story on a man with Dandy-Walker Syndrome.
        DWS is a congenital malformation that affects the cerebellum. This area of the brain is NOT used for cognition. Contrary to the sensationalist and incorrect claims made here, people with DWS do NOT have essentially no functioning brain tissue. In fact, many individuals with DWS have reasonably normal cerebral brain size and function.
        The image published with the article is purposely misleading. It is a view of a two dimensional slice of the left ventricle, which is indeed enlarged. But it only appears to take up the whole skull because it is a sagittal (front to back) slice directly through the enlarged ventricle. Move the slice a few centimeters (or even millimeters) to the right and you would most likely see reasonably normal brain tissue.
        Sensationalist news stories aside, DWS is in no way proof that intelligence exists independent of the brain.
        The second link is even more absurd. It relates to a woman who is abnormally small. Of course her brain is also commensurately small. This in no way, shape, or form, supports the claimed hypothesis.
        First, newborn babies' brains are HUGE. They are significantly oversized relative to body size. It takes several years for infants to "grow into" their heads. Second, most major brain development (tissue development as opposed to structural development) takes place well before birth. Essentially, human infants are born with fully intact brains. In fact, humans have had to undergo a long period of evolution to accommodate this trait. And these adaptations are many and pervasive. Infants are born with loose, unfused skull bones so they can compress as they go through the vaginal canal. Human females have abnormally (relative to the animal world) wide hips, to accommodate the extra large head. This affects walking and vertebral alignment, leading to an increased likelihood of upper back pain relative to males. Human infants cannot walk soon after birth, like virtually every other species, partly because the muscles in the neck necessary to support the head have not fully developed, and won't until the body grows a bit more relative to skull size. Etc., etc..
        Perhaps more to the point, there is NO evidence that brain size is directly proportional to intelligence, but rather body surface area. This can be seen in the relative brain sizes of various species. This has an obvious explanation. Brains are essentially input output devices. At their most basic, they receive input signals (sensations) and output responses (motion). The largest contributor to this sensory input are the touch and proprioceptive sensors. The lesser the surface area, the fewer the number of receptors, and thus the fewer brain cells needed to process the input.
        This is why larger animals can have larger brains than humans, but are not necessarily smarter than us. It is also why women on average have smaller brains than men, but can be just as smart (if not smarter).
        Within this continuum, however, brain size is relevant to function. So brain size above the body surface area to necessary brain size ratio is directly related to intelligence of the species (as opposed to individual members of that species).
        Also, different neural layouts can make better or worse use of available volume. This can also be seen in DWS, where areas of the brain may be compressed, and thus smaller, but still relatively functional.
        But even in the case of the first link you posted, the subject's IQ is NOT normal, because he has had SOME cerebral degradation.
        As such, the links posted, far from bearing out the stated hypothesis that brains are not necessary for cognition, instead only serve to support the opposite.
        .DeusExMachina.
  • Be careful what you wish for...

    Before thinking that we should want to emulate the way the brain works, we should consider the brain's "side effects" -- all the insanity, perversion, corruption and general evil in the world.

    The first men trying to fly emulated the way that birds fly (with flapping wings). How did that work out? The wrong goal is to say "I want to fly like a bird does." The correct goal is "I want to fly." Likewise, don't say "I want an AI to think like a human does." Instead: "I want an AI which is at least as intelligent as a human." Saying that we *must* emulate the way the brain works to do that is just hubris.

    An AI/computer has (essentially) perfect memory with unlimited capacity, the ability to work on a problem 24/7 with no loss of efficiency and without ever even getting distracted, the ability to back up its memory off site and pass it whole to the next generation of computers, the ability to share its memory with other AIs and work on problems in perfect partnership.

    This is not wanting to fly to the moon in the 1930s. The main roadblock to a full AI is the lack of a well-developed knowledge base. (See aeyec.com) You cannot have a full AI *without* a good KB, so to say we are not going to work on it now is to say that we are postponing achieving a full AI until we *do* decide to work on it.
    nfordzdn
  • What happened to Turing's thinking machines?

    Just like hardware, where software was used to advance logic design that enabled us to manufacture chips with billions of active elements, software can be leveraged to design a more complex AI too. The only problem seems to be the theoretical basis for the advancement, i.e. we have limited knowledge of how our brains work.
    kc63092@...
  • It's like reaching for the stars, and not understanding how to get there.

    It's the same with AI, which shouldn't even be called that. It should be called something like "AE", for Artificial Emulators, because that's all that's been attained thus far, even after 50 years of Turing and the millions of pretenders who've been hyping the efforts of "AI".

    No researchers or scientists, in any industry, have even come close to developing a machine which can emulate the intelligence of even the smaller life forms which exhibit "intelligence", like, for example, a mouse. Heck, they can't even emulate the intelligence of a worm effectively. And, they're going to try for the most advanced intelligence known in the universe? That's a lot of chutzpah for humanity at this point. Perhaps if they had gone with baby steps first, and then moved on to the more advanced animal forms, they might have made a bit more progress by now.

    However, a lot of the intelligence that humans and every animal life form exhibits, is "built-in" before birth, and it's predetermined at "conception", with the blueprint known as DNA. Perhaps the scientists and researchers need to take many steps back and start looking at the most minute, before trying for the "stars".

    My prediction? No hint of real intelligence will ever be achieved, until research moves on to creating "organic" computers, because, that's the only mechanism that achieves "recognition" or "awareness" and "reactive and proactive" actions.
    adornoe
    • Why is my ID appearing as "User name not displayed"?

      n/t
      adornoe