It's day two at the Singularity Summit 2007 in San Francisco at the Palace of Fine Arts. Today's topics include the risks of artificial general intelligence (AGI) and what preparations are necessary to stack the odds in favor of humans. Our extensive day one coverage from me and guest poster Chris Matyszczyk is here.
Guest post: Chris Matyszczyk riffs on Google research director Peter Norvig's morning presentation on predictions, AGI and how computers are killing us
It’s not the sort of thing you want to hear first thing in the morning.
What? One of my closest friends is having an ankle operation this week. Should I tell him? Won’t this freak him out? How will I tell his wife, if he doesn’t come through, that I knew there was a chance some computer would get him while he was under sedation? Will she ever forgive me for not getting some expert from CNET to inspect those computers?
I came here for the second day of this Summit looking to nibble on my lemon scone and hear some hope for my future, your future, Katie Couric’s future.
And the man from Google clobbers me with something like this in the first fifteen minutes. Mr. Norvig is trying to wake us up this way because yesterday, Wendell Wallach, aka Dr. Doom, predicted that many people would be killed in the near future because of an aberrant computer.
Hah, Mr. Norvig says. It’s already happening.
Of course, his real purpose might be to show that you can trust the folks at Google to be so far ahead of the curve that, as they look back, it seems like one flat horizon.
‘Oh, we already know more about that,’ seems to be the man from Google’s position.
And here’s what we need to make progress, according to Mr. Norvig: “We need more data. And we need more models.”
By ‘we’, he suggests that he means ‘we in the world intellectual community.’ Yet, having put the frighteners on us almost before we put last night’s revelry into a sleepy part of our cortex, it’s hard not to imagine that what he really means by ‘we’ is ‘we very clever people at Google who are co-evolving the Web.’
The subtext here is “Trust Us.”
Trust Us. As opposed to trusting anyone else. Because we know that our business is based not on information, but on your ability to trust that information.
Whether you are someone looking for news about astronauts who wear diapers on the way to meet their lover’s lover, or for a mountain bike or a bong to buy your delinquent teenager for his latest birthday.
And this is exactly the issue around which this whole Summit buzzes, like a paparazzo around a pair of becollagened lips.
Who can we trust? Can we trust a public company that is, by its very ethos, trying to make as much money as possible? Can we trust a professor of artificial intelligence more, because he is a pure intellectual? Can we trust that professor less if he has a public company that is trying to make as much money as possible out of his knowledge of artificial intelligence?
I trust you’re not frightened by any of this.