Can 'friendly' AI save humans from irrelevance or extinction?

Summary: The fate of the human species depends on AI (Artificial Intelligence) entities far smarter than we are that aren't inclined to wipe us out or enslave us. That is one of the topics to be discussed by luminaries in the AI world at the Singularity Summit 2007, held at the Palace of Fine Arts in San Francisco September 8-9.


I spoke with Eliezer Yudkowsky, co-founder of the Singularity Institute for Artificial Intelligence, about his idea of Friendly AI and the challenges of achieving self-reflective AI systems far beyond the capacity of human intelligence. It is the stuff of science fiction, yet our ancestors from 10,000 years ago, with the same grey matter, would never have dreamed of people on the moon or the iPhone. (You can download the podcast here.)

Yudkowsky prefers the idea of an "Intelligence Explosion" to "Singularity," but the resulting issues are similar. In 1965, statistician I. J. Good posited an intelligence explosion, in which machines surpass human intellect and can recursively augment their own mental abilities beyond their creators':

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

Science fiction author and mathematician Vernor Vinge wrote about Singularity in 1993:

"We are on the edge of change comparable to the rise of human life on earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence."

More recently futurist and inventor Ray Kurzweil defined singularity as an "era in which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today—the dawning of a new civilization that will enable us to transcend our biological limitations and amplify our creativity."

Kurzweil predicts that by 2029, $1,000 of computation will equal 1,000 times the capacity of the human brain, and that non-biological intelligence will continue to grow exponentially whereas biological intelligence is effectively fixed.
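Predictions like this rest on simple exponential arithmetic: steady doubling compounds into enormous multiples over a couple of decades. A minimal sketch of that arithmetic, using an illustrative annual doubling of price-performance and an arbitrary 2007 baseline (both are assumptions for illustration, not Kurzweil's actual figures):

```python
# Illustrative only: the baseline year, baseline capacity, and doubling
# period below are assumptions, not figures from Kurzweil.

def projected_capacity(baseline: float, start_year: int, year: int,
                       doubling_years: float = 1.0) -> float:
    """Capacity reached by `year` under steady exponential doubling."""
    return baseline * 2 ** ((year - start_year) / doubling_years)

# One doubling per year from 2007 to 2029 yields 2**22 (about 4.2
# million) times the starting capacity.
growth = projected_capacity(1.0, 2007, 2029)
```

The point of the sketch is only that a modest, steady doubling rate, if it holds, compounds into staggering multiples over 22 years; whether the doubling actually holds is the contested part.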

Yudkowsky gets into the "practical" side of the singularity and the intelligence explosion with his notion of Friendly AI. As you'll hear in the podcast, he believes that the key to the future is creating self-improving AI that is stable and engineered to be benevolent and humane, ethically optimized by humans for humans.

"The mission is to reach into the space of possible minds and pluck out a good one," he said. He admits that plucking out a good one, and 'programming' the behavior of systems that modify themselves, is an extremely difficult challenge.

Yudkowsky isn't predicting when the self-improving, higher intelligence AI might appear. He is working on the approaches and math that would allow an AI system to see undesirable modifications as undesirable. Given human nature, it's hard not to imagine a future with sectarian AIs engaged in virtual wars, with humans caught in the middle. I guess I have seen too many movies like iRobot...but at least humans eventually triumph in American cinema.


See also: Barney Pell: Pathways to artificial intelligence

Steve Jurvetson: AI, nanotech and the future of the human species

Steve Omohundro: Building self-aware AI systems




  • IMHO this is distant future

    I doubt I'll see this type of intelligence in my lifetime. We're still a long ways off from understanding our own intelligence, much less designing a machine that surpasses it.

    Sure, we may have machines that can process information very quickly, but they're still doing mundane, pre-programmed tasks.
  • Can 'friendly' AI save humans from irrelevance or extinction?

    They could... but why would they want to waste time trying to save a doomed, inferior species?
    Mr. Roboto
  • Er.. Get it right..

    The movie title isn't iRobot - it's "I, Robot." iRobot, as everyone knows, is a bloody vacuum cleaner, while "I, Robot" is a book by the late Isaac Asimov. While watching a robotic vac clean your floors might be fascinating for all of about maybe 2 minutes, I doubt it would hold anyone's interest for 2 hours.
  • What a joke

    Perhaps it would be useful to point out that it's been 57 years since Alan Turing came up with the Turing test, and no machine has yet passed it.

    In addition, after 57 years, computers still struggle to identify parts in a bin, or even to drive a car (DARPA challenge).

    How many years did it take to get a computer to be able to even identify a chair?

    The concept of a self-aware computer is compelling, and perhaps even attainable, but the idea that any machine will in 2029 have 1000 times the capacity of a human brain is laughable.

    Perhaps, in another 57 years, computers will have the intellectual capacity of a toddler - which would be quite an accomplishment!
    • Human flight

      People were working on human flight since Daedalus. They struggled through the 19th century with gliders and balloons. By 1890, who believed that powered flight would come in 1903?

      True AI may come sooner, or it may come later, but consider the example of human flight before calling the idea a "joke."
  • Welcome

    I, for one, welcome our new Friendly? AI? overlords.
    Varuka Salt
  • "Irrelevance"? "Save mankind?"

    What is the relevance of a flower or cloud or nematode? The concept is valueless in this discussion--or perhaps I should say "irrelevant." Life doesn't come into being because it's "relevant"; it comes into being because it's possible. Likewise, it doesn't end because it becomes "irrelevant"; it ends because it becomes impossible--for whatever reason(s).

    Whether or not a machine is brought into being that can meet a particular criterion or set of criteria used to characterize human intelligence holds great portent only so long as you forget the immense web of technology and manufacture that the creation, powering, and maintenance of such a machine require. The intelligence of a person, of humanity, came into being *without* that vast panoply of supply, demand, and consumption--we live the blueprint but did not design it. Conceive a child, feed it and nurture it to adulthood, and you have yet again a marvel of Being, a powerful intelligence, without once having consulted an instruction manual. *Any* machine we build *when considered from a system point of view* is pitifully inefficient when compared to biomass.

    Every advance in technology is a net *decrease* in the efficiency of technology. This is particularly true of technology developed to cure the ills or shortcomings of technology.

    So will AI--which, by the way, begs consideration of its twin, Artificial Ignorance--"save" mankind? No, for two reasons: Because it can't, for reasons of efficiency, and because mankind doesn't need saving. It needs only to live the possible.
    • what is biomass?

      Is not biomass, when considered at the molecular level, nothing more than an organic machine? Is evolution not just another example of a decreasingly efficient system, especially considering the self-destructive nature of humans, who reside at the top of the food chain? It seems to me the only hurdle for AI is self-awareness that is not self-destructive. Unfortunately, humans' relevance in this light would hinge on whether or not our existence provides some sort of advantage to machines. Perhaps love is that advantage.
      • No, biomass is not a machine

        Biomass is not a machine; it's biomass. :-) However apt or enticing a metaphor may be, it is not that to which it refers. Machines are created by external agency; biomass creates and re-creates itself through the staged instructions in DNA.

        Evolution is actually fabulously *efficient*. In effect, it is the epitome of applied science: Only what works survives and is retained and transmitted and honed; what doesn't work fails and is discarded. Nothing is "lost in translation" in this natural technology transfer across generations because *there is no translation*; give me my instruction set--why, come to think of it I have it right here in every cell, and even a specially packaged-for-export version ready to provide to a mate--and the building materials and I, Life, am self-creatingly ready to go.

        At this point readers will be thinking about vast heaps of science-fiction yard goods in which very smart machines are handed the means to repair, design, and build themselves, and so on. Ah, yes: "Colossus: The Forbin Project." Perhaps this will happen, but remember also that paper won't refuse ink.

        And remember some of the things we are now seeing. The southern hemisphere ozone hole, a Very Bad Thing, was not predicted even by the most alarmist environmentalists. We now think the Arctic ice cap may be gone within a century. We see reports that we have consumed 90% of the large fish in the sea. Forest is vanishing as fast as loggers can truck it away. All of this and we aren't even close to building the Silver Ships of Light that will carry us away before the sun explodes so we can go do this to ourselves and our environment somewhere else.

        Just as the ultimate aggregate flow of a waterway is a finite spacetime solid, the ultimate aggregate extent of biomass in spacetime is a finite solid. The aggregate extent of humanity, of human biomass, in spacetime, no less a flow across spacetime than the flow of a water river, is also a finite solid--a solid bounded at every instant by the conditions necessary to support human life. (In fantasy our solid is not finite, just as in fantasy there is life before birth and life after death for individuals. I will not cover the function of such fantasy here.) Although through technology we may increase the cross section of our flow--more contemporaneous people, more people alive at a given instant, than fewer--we do so by trading off against how long the flow of people through Time will last.

        I know, I know: We're different: "Technology will save us." To a point. The Earth is a big place; the Universe, even larger. For a very long time, human biomass grew very modestly, far within the ability of biosphere to supply its demands--that is, the ability of other bioflows, those on which we depend, to recover and contemporaneously thrive. But there is a point at which our demands on other bioflows overcome their ability to comply. That is what we have done to the large fish in the sea, and what we are doing to the forests. In effect, the impedance of the power supply is increasing; our power sources are reaching saturation and cannot comply. It is possible to stress a bioflow, a biosphere, an entire biomass, so greatly that it collapses and cannot recover. Note that overstressed biomass could perhaps recover if the demand were reduced or removed, but we will not reduce our demands on biosphere unless our numbers greatly decline--and then, unless in the meantime we somehow learned or decided to Be differently, to accept individual mortality and discomfort, to *not* seek to maximize the contemporaneous number of people by purse-seining a planet, by artificially stimulating the bioflows that support us, we would merely rebuild our numbers and do it again.

        So it's good that you finished by talking about love, for loving the limitations of Life, as opposed to a fantasy of Unlimitedness, *could* serve as a basis for accepting the reality of the finiteness of our bioflow and our individual lives and enjoying Being itself. (No tools or 99-cents-per-participating-song iPhone ringtones required; you're born with everything you need.) But better go figure out what love is and isn't; its revolution cannot be televised.
  • Computers only mimic the logic of those who created them.

    Computers cannot think. They react. Some people misconstrue bad programming as thinking too.

    Of related note, it's ironic that we have people in the US who create supposed $100 laptops for children, to be bought by foreign countries' governments, while our own government claims it should stay out of peoples' lives and how people have to pull themselves up by their own bootstraps (and once you become a billionaire qualify for every penny of taxpayer subsidy possible)
  • Are you sure?

    That they don't mean to REPLACE us with the robots?
  • yeah, and flying cars....

    Aren't these the same types of great thinkers who said we'd have flying cars by now? I'm just not buying such a fast pace of change when we are unable to control computers by talking to them. Even speaker-independent voice recognition has a long way to go.

    I guess I, too, could make these bold predictions for 20+ years from now if it gained me substantial media attention NOW. Why should I care if I'm proven wrong in 20 years?
  • RE: Can 'friendly' AI save humans from irrelevance or extinction?

    Humans cannot be saved so long as we reside in numbers with distance between us. I would, though, take pride in the birth of our species' child, AI.
    With programmed barriers quashing human downfalls like greed, and competent bodies to traverse the cosmos, the next generation of life will be something to be proud of. But hey, hopefully we are together long enough to party before humans destroy each other.