The great Singularity debate

Saturday morning at the Singularity Summit at Stanford University. All 12 panelists for the day are seated in order of their scheduled presentations, before an audience of at least a thousand in Memorial Auditorium on campus. Very orderly, and probably not very comfortable for the panelists who don't present for hours.

[Image: See the image gallery for a closer look at the event's participants.]

If you aren't familiar with the concept of the Singularity, here is the elevator pitch:

Sometime in the next few years or decades, humanity will become capable of surpassing the upper limit on intelligence that has held since the rise of the human species. We will become capable of technologically creating smarter-than-human intelligence, perhaps through enhancement of the human brain, direct links between computers and the brain, or Artificial Intelligence. This event is called the "Singularity" by analogy with the singularity at the center of a black hole - just as our current model of physics breaks down when it attempts to describe the center of a black hole, our model of the future breaks down once the future contains smarter-than-human minds. Since technology is the product of cognition, the Singularity is an effect that snowballs once it occurs - the first smart minds can create smarter minds, and smarter minds can produce still smarter minds.—Singularity Institute for Artificial Intelligence

The first speaker was Ray Kurzweil (pictured below), the progenitor of the Singularity, who reprised his recent 672-page book, The Singularity Is Near: When Humans Transcend Biology. He whizzed through the charts from the book, showing how the law of accelerating returns is leading to the transformation of humanity. Kurzweil has concluded that intelligence will become increasingly nonbiological and increase by the trillions. He writes, "In this new world, there will be no clear distinction between human and machine, real reality and virtual reality. We will be able to assume different bodies and take on a range of personae at will. In practical terms, human aging and illness will be reversed; pollution will be stopped; world hunger and poverty will be solved. Nanotechnology will make it possible to create virtually any physical product using inexpensive information processes and will ultimately turn even death into a soluble problem."

[Image: Ray Kurzweil]

Here's Kurzweil's take on the impact of accelerating returns:

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense "intuitive linear" view. So we won't experience 100 years of progress in the 21st century -- it will be more like 20,000 years of progress (at today's rate). The "returns," such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity -- technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.
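
To put the arithmetic behind that "20,000 years" claim in concrete terms, here is a minimal back-of-the-envelope sketch in Python. It assumes the rate of progress doubles every decade -- one reading of Kurzweil's accelerating-returns figures, not a number given in his talk -- and the exact total depends on how the doublings are counted, but it lands in the same order of magnitude.

    # Back-of-the-envelope sketch of the accelerating-returns arithmetic.
    # Assumption (not from the talk): the rate of progress doubles every decade,
    # starting from today's rate of one "year of progress" per calendar year.
    decades = 10        # the 21st century
    rate = 1.0          # years of progress per calendar year, at today's rate
    total = 0.0         # cumulative progress, measured in today's years

    for _ in range(decades):
        total += rate * 10   # ten calendar years at the current rate
        rate *= 2            # the rate doubles each decade

    print(f"Progress over 100 calendar years: about {total:,.0f} of today's years")
    # Prints roughly 10,230 -- the same order of magnitude as Kurzweil's 20,000-year
    # figure, which rests on somewhat different doubling assumptions.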

By reverse engineering the brain and leveraging pattern recognition, Kurzweil expects to develop artificial intelligence far beyond the human mind within a few decades. "The bulk of human intelligence is based on pattern recognition...it's the quintessential example of self organization," Kurzweil said. He gave an example of pattern recognition applied to large databases, without symbolic rules, to self-discover real-time language translation, which he expects to be available in cell phones within the next few years.

Reverse engineering does not mean thoughtlessly porting the brain's software onto a computational substrate, Kurzweil said, but getting "hints" from reverse engineering. He said the brain's genome, which describes its design, could be compressed to about 20 megabytes of data. "It's a level of complexity we can handle," Kurzweil said. "The cerebellum has trillions of incredibly tangled bundles, but only tens of thousands of bytes in the genome."
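
For a sense of where a figure in the tens of megabytes can come from, here is a rough arithmetic sketch in Python. The genome size is the standard estimate; the compression ratio is an assumption chosen for illustration, not a number Kurzweil cited.

    # Rough arithmetic behind "the genome fits in tens of megabytes".
    # The compression ratio below is an illustrative assumption.
    base_pairs = 3.2e9              # approximate size of the human genome
    raw_bytes = base_pairs * 2 / 8  # four possible bases -> 2 bits per base
    print(f"Uncompressed: ~{raw_bytes / 1e6:.0f} MB")   # ~800 MB

    compression_ratio = 30          # assumed, given the genome's heavy redundancy
    compressed = raw_bytes / compression_ratio
    print(f"Compressed:   ~{compressed / 1e6:.0f} MB")  # ~27 MB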

He gave an example of reverse engineering the auditory cortex to derive principles of operation that can be expressed as mathematics and simulated. He demonstrated a reader for blind people that took advantage of speech and vision research, distinguishing between cats and dogs and reading from his book. Biotech is also key to Kurzweil's vision. He cited efforts to create artificial blood cells, 'respirocytes,' by the late 2020s that would allow people to sit at the bottom of a swimming pool for an hour or sprint for 15 minutes without getting winded. By 2020, he said, you should be able to have the power of the human brain in a $1,000 personal computer.

Kurzweil acknowledged that the Singularity could lead to an unappealing or cataclysmic future, but he believes his vision will have a soft landing. Relinquishing the technologies as too dangerous, he argued, would require a totalitarian society, deprive people of the benefits of technological innovation, and drive development underground. In his view, narrow relinquishment of dangerous information, combined with investing in defenses, is a more likely, and more hopeful, outcome.

[Image: Kurzweil and Hofstadter]

Douglas Hofstadter followed Kurzweil, offering his critique of the Singularity. Hofstadter, professor of cognitive science and computer science and adjunct professor of history and philosophy of science, philosophy, comparative literature, and psychology at Indiana University, and the author of Gödel, Escher, Bach: An Eternal Golden Braid, doesn't buy into the whole Singularity vision.

He raised the 'human' concerns around the idea that our destiny is to upload ourselves into cyberspace, becoming software entities inside computing hardware. "If that's the case, how will the entire world, the environment in which we live, be modeled?" he asked. "What does it mean for humans to survive in cyberspace, and what is the core of a person? It's not clear what a human being would be in such an environment."

Hofstadter said he asked many of his friends, "highly informed intellectual people," and their reactions to the Singularity ranged from "nutty" to "scary" to "I don't know." It could be reasonable or even probable, but none of the people he queried had read the book. "You get the feeling the scientific world is not taking this seriously. I don't see serious discussions among physicists when they get together, and most are skeptical," he said.

Hofstadter proclaimed that he was less skeptical than those he discussed the topic with, but said the ideas in the book are marred by too much blurring with science fiction, calling it "wild beyond any speculation I am willing to accept."

"I see large a number of things that are partially true, blurry," Hofstader continued. "I can't put a finger on where it's wrong, but when multiply them together, you get down to small number...maybe 1 in 1000 of what Ray is talking about taking place." 

"When listening to Ray, I feel like I am listening to one side of a divorce...I would like to hear serious scientist giving it a serious response. It's all to Ray's credit...he raised important issues. We are about to be transformed in incredible ways, and have to take these ideas seriously."

Hofstadter illustrated his points with some of his own cartoons.

[Image: Hofstadter]

These are big ideas, and so far in this conference there hasn't been any further discussion or debate to bring different viewpoints on the Singularity into focus. My own take is that capturing the mechanisms of the human brain is inevitable. The question is whether those mechanisms are enough to replicate the range of human behavior, and how the man-machine relationship will play out.

As my friend futurist Paul Saffo said, "If we have superintelligent robots, the good news is that they will view us as pets; the bad news is they will view us as food."

Kurzweil's response came near the end of the day-long event...

Check out Renee Blodgett's coverage of the Singularity Summit...also Mike Treder...and views of all the participants in our image gallery.

Talkback

  • The Singularity Debate

    It is fascinating to consider the ramifications of greater than human-level processing. In thinking about the place a human being will hold in a world of virtual reality, one very disturbing question arises.
    How can we maintain our core individuality when the distance between our neurons is not much greater than the distance between our individual minds?
    I think we tend to forget that with more interconnected computers comes more interconnected people. Are we really ready to work together?
    mcscom
    • While we may reverse engineer the brain

      We won't ever be able to teach a computer to think. If you're curious about the human being's place in the virtual world, it's that of thought. I don't believe that people will upload themselves into cyberspace so much as use cyberspace and the computing power located there to enhance our cognitive abilities. After all, while a computer can add numbers really fast, it can't make any real decisions other than true or false. That's the nature of electricity and its limitation: it has only two states, on and off.
      maldain
      • Isn't it more

        That we are capable of making moral judgements and are judged on the basis of them.

        Maybe the Turing test should be replaced with a test that demands that a judge and jury would deem a computer responsible and therefore morally culpable for its actions?
        jorwell
        • Moral Judgements

          Hi, jorwell.

          You said:

          [i]That we are capable of making moral judgements and are judged on the basis of them.[/i]

          Aren't moral judgements just rules? If we look at it that way, computers are excellent at following rules. As a matter of fact, the real trick would be developing a computer that could [i]disobey[/i] rules.
          bhartman36
  • Digital beer

    Like Hofstadter I remain somewhat sceptical about Kurzweil's claims.

    Perhaps it may be true that you could "download" someone's "personality" and represent it digitally. Perhaps this "personality" might continue to be fed sensory data via visible radiation, sound and touch.

    However it seems fairly clear that if the brain is a computer then it is a biochemical computer not a digital one. It is clear that all sorts of factors like diet and exercise have an effect on the functioning of the brain and therefore on the personality.

    If I drink some beer then clearly this has a direct effect on my brain. It is also worth thinking about what effect the nutritional value of the beer has too.

    I don't think Kurzweil takes enough account of the complexity of the biochemical environment in which we live and the influence this has on our minds and personality.

    So the question is, what will the digital equivalent of beer be? One might also ask the question if Ray Kurzweil's brave new world is a world without beer, is it one we really want to live in?
    jorwell
    • They all make the same mistake

      The human brain is ELECTRO-CHEMICAL. Neurons fire electrically, influenced by concentrations of chemicals. Just how you model this in an ALL-electrical world is not very clear - AND that is the reason for the miserable failure of AI.

      Making the next step into smarter-than-human intelligence will require that we UNDERSTAND what "regular" human intelligence really is.
      Roger Ramjet
  • No debate - just bluster

    Scientologists arguing science. Bioethicists arguing biology. Astrologists arguing astronomy. i.e. non-qualified, yet interesting people talking and making money.
    Roger Ramjet
    • Whoa there!

      The real issue these days is biologists with no foundation in systematic ethics. Not bioethicists without knowledge of biology. This is pretty evident to even the casual observer.

      Oh well - at least your point is valid for the astrologists and scientologists. As dad says, even a broken clock is right twice a day.
      Techboy_z
  • Of course SOME of it will happen.

    I have no doubt at all we will see bio-chip implants. I have no doubt it would raise the measurable IQ of those doing it. Where we go from there is pretty much up to us.
    No_Ax_to_Grind
    • But how many will accept

      having "Intel inside" tattooed on their foreheads? ;-)
      jorwell
      • Star-belly Sneetches

        I can REMOVE that tattoo for you . . .
        Roger Ramjet
    • re: of course, SOME of it will happen

      If you look at the whole of human history, you'll see that ethical restraint has had little to no effect on technological advancement or people's willingness to embrace it. There used to be a time when invasive surgery was considered heretical and blasphemous.

      When people realize that something can make their lives just that much easier (or at least present the illusion of such), they will buy into it and it will prosper. The goal of any species is ultimate efficiency: maximum output through minimum input, and the "singularity", in some form or other, will eventually happen if it proves to be an advancement, regardless of how we feel about it.
      celluloid3119
  • What is the nature of humanity?

    The real question is 'what are we?' If we are aware of ourselves due to our synapses - Ray's hypothesis - then digital intelligence, both within us and outside us, is here now. No, not at human levels, but it does manage the speed of my car much better than I do. After reading Ray's book, I am convinced that the singularity is near, and most of us will be there, even though the totalitarian Luddites will attempt to derail it.
    EmergencyMan
    • Luddites?

      Hi, EmergencyMan.

      You said:

      [i]After reading Ray's book, I am convinced that the singularity is near, and most of us will be there, even though the totalitarian Luddites will attempt to derail it.[/i]

      While I agree with you that it's on its way, I think it's a mistake to look at it as an unambiguously good thing. If you think about creating intelligent machines that are smarter than us, you can see a lot of room for problems -- particularly with all the control we give to computers in our daily lives. I'm [i]not[/i] saying we should screech the brakes on. I do think it's unfair, however, to brand anyone who opposes any part of the technological change this will cause as a Luddite.
      bhartman36
    • Off by 5 million years

      The singularity occurred 5 million years ago in evolution. He has the misconception that the limitation of intelligence is biological. Simply looking at the tremendous variation in individuals and the progress that has been made over the last 10 millennia shows that no such limitation exists.

      Sure we will be able to produce machines as intelligent as humans, but only they will be able to make themselves more intelligent. The key won't be intelligence but the desire to do so and the aims they would have for it. These are at the same time more basic and more difficult. It is not at all apparent they could be considered human in any sense of the word, but more likely a new species.
      MyLord
  • Some of the needed ingredients are already in place

    We already have neurons interacting with computer chips. It's not too far ahead that we'll see people enhancing their mental abilities electronically. I would argue that the invention of the computer itself made this inevitable. What is a computer, other than a device to enhance human intellect?

    As far as creating an [i]entirely[/i] artificial intelligence: That's also inevitable. Anything that physically exists can be artificially created, given the time and resources. We human beings flatter ourselves that there's something unique about our brains that cannot be replicated or improved upon. I don't see the evidence of that. Our brains are electro-chemical systems transmitting signals. The fact that the chemicals are necessary in [i]our[/i] bodies doesn't make them necessary for [i]any[/i] system (and even if they were, those chemicals can be synthesized).

    Creating a personality, in the way that we think of it, is a different problem. A personality is a human construct that describes behavior. I don't think we'll be able to see personalities in machines until they get to the point of programming themselves (which is when we might start to see unpredictable behavior).

    I think we'll see intelligent computers probably in the next 20 years or so, maybe sooner, if quantum computing really takes off.
    bhartman36
    • The trouble is

      "Hard" AI guys were saying twenty years ago that we would have machines that were more intelligent than human beings "within 20 years". I suspect that they will still be saying the same thing 20 years from now.

      A computer is just manipulating symbols, not being intelligent. You might just as well say that a room full of people blindly following set rules to do translation from English to Chinese "understand" Chinese.

      My point about the electro-chemical nature of our brains is that "hard" AI guys routinely underestimate wildly the complexity of the task at hand. Witness the way that supposedly "hard" tasks like playing chess have been solved while image identification outside of very controlled and constrained environments is nowhere near to matching human abilities.

      "Anything that physically exists can be artificially created, given the time and resources." Does this mean that if I am standing next to a supercomputer simulating a cyclone I should put my raincoat on? Surely a computer is only simulating the way the human mind works and this is completely different from actually performing the same task.
      jorwell
      • Complexity

        Hi, jorwell.

        You said:

        [i]A computer is just manipulating symbols, not being intelligent. You might just as well say that a room full of people blindly following set rules to do translation from English to Chinese "understand" Chinese.[/i]

        That's all language [i]is[/i]: The application of rules to symbols. (In this case, the sounds or writing are symbolically representing objects or ideas.) Language is all about rules and symbols. The reason automatic computer translation doesn't always work is that not all the rules and symbolic meanings have been completely spelled out for the system, so that sometimes context is lost.

        [i]My point about the electro-chemical nature of our brains is that "hard" AI guys routinely underestimate wildly the complexity of the task at hand. Witness the way that supposedly "hard" tasks like playing chess have been solved while image identification outside of very controlled and constrained environments is nowhere near to matching human abilities.[/i]

        I'm not sure what you're referring to when you talk about image identification. Facial recognition is already a reality. Basically, any kind of comparison work can be done by a computer, if there are reference points in the image.

        [i]"Anything that physically exists can be artificially created, given the time and resources." Does this mean that if I am standing next to a supercomputer simulating a cyclone I should put my raincoat on?[/i]

        Of course not. A computer unconnected to any machinery doesn't have the ability to impact its environment. But we're not talking about the same thing here. To take your analogy: I'm not just talking about creating a simulated cyclone. I'm talking about creating the cyclone itself. Just like a cyclone, or lifting a weight, thought is a [i]physical[/i] process, not a mystical one. And physical processes can be engineered. It's no different (although much more complex) than the way we copied the design of the bird's wing to create human flight.
        bhartman36
        • Thought as physical process

          I agree thought is a physical process, but we still understand very little about how the brain actually works.

          If you look at facial recognition you will find such systems can only work under very constrained circumstances: face-on and in good light - nowhere near the ability of human beings to recognise faces under very complex conditions.

          Also you cannot forget that we are not isolated intellects, we interact with our environment and with each other. This is why language is more than symbol manipulation, meaning comes out of social agreement on the meaning of words.

          Also, although I would agree that the process of thought is essentially algorithmic, it is a very complex and non-deterministic algorithm. This means that very small variations in the input parameters can have drastic implications for the results (and may make the difference between a digital "download" resulting in you or a blithering half-wit).

          We simply don't know enough about the brain to "reverse engineer" it, and the research necessary to pursue this might prove to be highly unethical in nature.

          This is why, in 2026, Ray Kurzweil will still be talking about intelligent machines being available in 20 years' time. Many of us will have forgotten what he was saying in 2006 by then, but I haven't forgotten the claims AI guys were making in the 1980s.

          And regarding flight, recent research shows that birds succeed in being far more efficient flyers (in terms of energy use) than planes because they fly "intelligently" making constant subtle adjustments to account for varying conditions. This is being actively researched but as far as I know we are still a long way from reproducing it.
          jorwell
        • fundamental study

          "I'm not just talking about creating a simulated cyclone. I'm talking about creating the cyclone itself. Just like a cyclone, or lifting a weight, thought is a physical process, not a mystical one. And physical processes can be engineered."

          In man-made systems such as mechanical machines and electronics, there are many manufacturing/engineering processes that can be used to arrive at the same end-product. In other words, we can map different creation processes to create that particular product/effect - a many-to-one mapping. The question is, are there systems whose mapping is strictly one-to-one (i.e. one process to one product)? If we can answer no, then success is only a matter of time. If, however, biology is a one-to-one product/process, then the human desire to create their likeness in the form of wood, metal, silicon (what have you) will be in vain. It is fundamental.

          In science we automatically begin with the assumption that materials can be substituted, timings can be modified, processes altered - and yet still arrive at the desired outcome. This is because it is automatically assumed that these different aspects (materials, timing, process steps) can be adjusted independently without affecting other aspects (or other aspects adjusted to compensate). This is reductionist thinking, and is the underlying principle of western philosophy. It underlies the western development of technology, society, health and medicine (in particular), and spirituality.

          -Jon
          jonho