Why robots are scary--and cool

Theologian Anne Foerst says we're a lonely species, and robots have a lot to teach us about who we are.
Written by Jonathan Skillings, Contributor
For early researchers in artificial intelligence who were out to play God, it turned out the devil was in the details.

Their efforts to re-create human intelligence in hardware and software have led to some very smart machines--just think of IBM's Deep Blue beating chess grandmaster Garry Kasparov, whose genius for the game couldn't match the computer's high-speed calculations. But aside from that rarefied skill, the machine would be no match for the average 3-year-old in figuring out how to get the best of a grown-up human.

The newer generation of AI researchers is taking a more humble approach to the cognitive conundrum, according to Anne Foerst, who's a rare combination of computer scientist and theologian--two types that don't always see eye to eye. They recognize, she says, that it is impossible to rebuild human intelligence in machine form even as they labor to build robots and other devices that mimic real-world skills.

In her new book, "God in the Machine: What Robots Teach Us About God and Humanity," Foerst draws on her experience at MIT's Artificial Intelligence Laboratory to paint a picture of how people and robots can and should interact--and whether, at some point down the road from today's Aibo and Asimo contraptions, the human community might confer "personhood" on robots.

Foerst spent six years at MIT, where she broke ground with her class, "God and Computers," and now teaches at St. Bonaventure University in Olean, N.Y. She spoke recently with CNET News.com about changes in the field of AI, social learning for robots and the need for embodied intelligence--that is, the ability for thinking creatures, and machines, to interact with and survive in the real world.

Q: How does a theologian end up at the MIT AI Lab?
Foerst: Even as a small child I was always fascinated with machines and building stuff, but then I got hooked on theology because I just think this is the most interesting field when you want to learn about human ambiguity and human frailty--the fun stuff about being human.

So I studied theology, but I had space to do something else and so I thought, well, why not do a little bit of computer science?...I went to MIT basically just to do research because that is where AI was founded. I met Rod Brooks (head of MIT's AI Lab and co-founder of iRobot) and a lot of other people--they really liked my research, and they were surprised that I was not critical--I didn't attack them. But I could offer a unique perspective because I was really studying why people are interested in AI, what they get out of that for themselves.

What did you find out about the people who study AI--what makes somebody want to study AI?
Foerst: What I found out was that there is this big wish to have a unified, coherent world view in which everything fits together, which is a desire you find a lot in science. In AI it's particularly strong because they include human nature, the whole idea that humans are actually logical--if we can just understand them. That there is a way to deal with our ambiguities and paradoxes and miscommunications, that ultimately those paradoxes and ambiguities can be overcome, which for a Lutheran theologian, for me, is kind of interesting because I define sin not traditionally, (as) guilt, but sin is really living in ambiguity, the very fact that humans are not logical. I see the whole AI, and the classical AI, endeavor very much as an attempt to overcome sin.

Rod and other people...kind of criticized that classical camp, (which) concentrated on high intellectual powers, on math and logic as the pinnacle of intelligence. They kind of embraced the whole embodiment stuff. I shared their critique of the classical approach.

I found out that they were much more tolerant toward religion, even though they weren't religious themselves--they were very supportive of me being religious and of me describing them in religious terms, because they realize they don't know everything. What I really liked about that--there was an inherent modesty in them. They didn't think they would solve the world's problems, but they really realize it's so hard to build a humanoid robot, and that actually made them appreciate human nature more.

In the book you described AI as a spiritual quest.

Foerst: Yes, yes. You don't reduce humans to their logic and math capabilities, but really describe them as social mammals. We are so incredibly complex and so good at (handling complexity). And to rebuild that is just basically impossible. AI really makes us modest.

(There was a notion in) the more traditional approaches, "Oh! It's fun to play God"--that was completely gone in this embodiment camp...Not only in AI but in cognitive science generally, in the last five to six, perhaps 10, years, we're slowly undergoing a paradigm shift in which the understanding of humans moves toward more modesty, because it is so complex, because we have to include the body and social interaction.

So Marvin Minsky's notion of a human as a "meat machine"--is that a minority view now in AI?
Foerst: Basically Marvin Minsky says, "That is what we are, and we are nothing but that," while modern AI research says it makes sense in the context of AI to talk about us as meat machines--it just makes sense, but that doesn't mean we are. If you try to build artificial humans, you have to assume we are nothing but machines, otherwise you can give up your (effort), you can give up your hopes. But it's a pragmatic assumption and I think in the beginning of AI, it was an ontological assumption.

What does it take for robots to be like us, to make a robot that functions like a human being?
Foerst: I think the robot would have to have the capability to interact, to form meaningful relationships and to understand the value of those relationships, to understand the difference between me and other, to have empathy. Those would be the things I would describe as most crucial, and I do believe that we can build something like that. But I also do believe that if we cannot build it ready-made, we have to build them in the way that they, like human babies, go through a process of social learning, and probably for the first critter to be built, that social process will take years and years and years, much longer than for a human baby.

At MIT, there were the robots Cog and Kismet that you wrote about in your book. Were they a first step in robots learning what it is to be like a person?
Foerst: I think they were. The whole ideology behind those robots was not that we can rebuild grown-up intelligence; it was an acknowledgment of the fact that babies--even though obviously they are not born as blank slates--don't have self-awareness, they don't have intentionality, they don't have all those things that we consider part of being intelligent, but they get all those capabilities through interaction with their caregivers. And so Cog and Kismet were really the first robotic models that were built in analogy to a human and learned through interaction, and I thought that was a very, very powerful approach.

The problem was, it is so hard to do it...I think what the robots could do is fascinating, but the underlying technology, as novel and as wonderful as it is, is still kind of primitive (compared to popular notions of what robots can do).

I think that for a lot of people the only thing they know about robots is what they see in the movies, whether it's "I, Robot" or R2-D2 and C-3PO, the Terminator, things like that.
Foerst: And so compared to that, obviously, Cog and Kismet are hideously primitive. People don't know how difficult it is to build those critters, and this is why I concentrated more on our reaction toward those robots, because I thought this is a more interesting thing--for instance, the fact that Kismet really didn't learn. I think there is much more ground to build on. I think that particular research at MIT suffered from a general problem in science, and that is that highly expensive basic research is not very well funded, especially in engineering, where it's so expensive to build that stuff--there always needs to be an application.

Are there classes of robots? Are there, if you will, social strata--you have things like Roomba the vacuum cleaner, and you have assembly line machines, but then you also have at least the goal of creating something that's more like a person, a humanoid.
Foerst: Roughly you can distinguish between autonomous robots and nonautonomous robots. Autonomous robots are those who kind of decide for themselves what action they're going to take, and that is definitely the robotic stuff that I'm interested in, and the nonautonomous robots are the ones on assembly lines that just do their thing. Roomba is a highly efficient and highly autonomous robot--it just follows its plan, it just vacuums the floor and that's pretty powerful. You could just switch it on and there is no danger that it will do something to anything, and so we can build probably grass-cutting machines, lawnmowers, the same way and--

I'm a little scared by the idea of a lawnmower going off by itself.
Foerst: Well, I think you need probably a very good fence. (Laughs.)

But the idea is, all that research comes out of the study of human intelligence or insect intelligence or animal intelligence, because what makes us all intelligent in a way is that we can cope autonomously with our environment and with its requirements. That is why we survive--if we weren't autonomous, we wouldn't survive--and so for me there is only a gradual difference between a robot like Roomba and the humanlike robot. I mean the difference is huge, obviously, but it's a difference in complexity, not a real qualitative difference.

Another common image people have of AI, and robots, I suppose, is HAL in the movie "2001"--a very smart machine, smarter than people perhaps, but it's a computer. What's the distinction between computers and robots?
Foerst: Because I focus so strongly on the body, that distinction is actually extremely important, because the robot shares with us our world. With computers, we have to enter their world via keyboard, we have to speak their language, we have to follow their commands. (With) robots, on the other hand, and that's a big part of this whole autonomous robot research, you have machines that share our world and enter our space, understand our signs, understand things like pointing and gesturing, understand natural language...I think the reason why HAL is so powerful is because of the physical attributes--the eye and the voice. Whenever HAL speaks we hear this gorgeous voice and see this glaring red eye, and so we have physical attributes to anthropomorphize.

I think people are more comfortable with the idea of something like a C-3PO or an R2-D2--cuter sounds, the face is a little easier on the eyes, perhaps.
Foerst: The funny thing is that people like R2-D2 more than C-3PO. And I think that is because R2-D2 has emotions and C-3PO just comes across as this annoying, British-accented sort of constantly complaining being. It's not cute. C-3PO is not cute and therefore we love R2-D2 much more.

I want to ask you about the ethics of people working with robots, using robots. Should we build robots to do our dirty work? If we're going to think about according them personhood, are we ready to send them into combat to do mine sweeping and things like that?
Foerst: I think the attempt to build autonomous robots that are like us, that are self-aware and interactive and all that kind of stuff, should be kept very distinct from the question of what we should use them for. What I mean is, if you have a non-aware mine-searching robot who is just autonomous and does its job, I think that's fabulous, and that's a very, very good thing. But I think if we ever reach a kind of robotic skill where the robots are actually like us, aware of their surroundings and in social relationships and all that, we would have to treat them as an intelligent co-species. And that means we couldn't expect them to do anything that we couldn't expect from other human beings.

Are we really ready for humanoid robots, to have something that's not us, that's not what we've known for centuries, living with us?

Foerst: I do not think so, and that is why I wrote my book now. I really didn't expect that, but when I started on my first chapter and suddenly started talking about war, I realized that we are not even capable of assigning each other personhood--we are lousy at that. I think as long as we are not capable of assigning (all) humans personhood, we obviously will not be capable of assigning robots personhood. But I think the whole question of whether or not we should then helps us to consider the question of human personhood.

So why do we build robots then? I mean, if it's so hard, so expensive and we're not sure we're really ready for them, why do we build them?

Foerst: There's really a lot of motivation. First of all, simply, it's fun. It's fun to build cool machines. I just love to build something that moves and that just does something.

The second thing is, the whole idea of building artificial counterparts to humans is a very old one. You know, you find that in Greek mythology, you find that in the golem tradition, you find that even in Egypt. The idea of constructing a counterpart, a mechanical counterpart, I think is a very fascinating one.

The third one is the whole idea of trying to understand ourselves by rebuilding ourselves. Especially through that building of embodied machines, we have learned so much about the body and so much about our capability of empathy and social interactions and stuff. So, that's pretty powerful. Then I think--that is now the theologian speaking, obviously--we have lost our connection to God and we have become a very lonely species; we don't have any partners with whom we can interact, because we stopped interacting with God, and stopped being close to God. So I think building robots in our image is kind of on the same page as searching for extraterrestrial intelligence and trying to understand dolphins and chimps. The whole idea that we want to--at least some of us want to--understand other beings and otherness.

Humans have two tendencies. The one tendency is only to be with people who are like-minded and reject everything new and different, but on the other hand, we are a very curious species and we want to know how people from different worlds and beings who are different from us feel.

(Robots) are scary because they are potentially threatening to our complacency and to our superiority, but at the same time they are never-ending fascinating because they make us think about ourselves in a different way, and that's cool.  
