When researchers funded by DARPA, the Pentagon's grant-funding arm for cutting-edge tech, start talking about machine social intelligence, I, for one, get nervous. Needless to say, it's been a restless week.
Led by Carnegie Mellon University, a robotics and AI powerhouse, a team of researchers is working to build artificially intelligent agents with a masterful social skill: the ability to interpret a person's thoughts from their actions.
Okay, so this may not be as sinister as it sounds, at least at this incipient stage. Humans, after all, are phenomenally delicate instruments when it comes to interpreting moods and thoughts from subtle cues, such as body language, speech patterns and word choice, and eye movement. The emotionally intelligent among us are pretty good at knowing when we should shut up because the person we're speaking with has lost interest, for example.
But machines need some help. Social intelligence is a huge area of research in a variety of engineering fields. Socially intelligent robots like SoftBank's Pepper have entertained reporters and customers with witty banter and an eerie ability to respond appropriately to emotional cues like sadness and boredom. More annoyingly, there's a footrace underway to bring emotional intelligence to advertising by monitoring sample audience reactions via webcams. Perhaps one day soon the surveillance state and the billboard business will team up, leading to ads tailored to your mood.
The defense industrial complex has different priorities when it comes to x-raying what's happening inside the heads of humans. While your mind might race to use cases like interrogation, the stated purpose of the CMU-led DARPA research is to use machine social intelligence to help human-machine teams work together safely, efficiently and productively. The idea is to develop software agents with emotional intelligence: autonomous programs that can perceive their environment and make decisions. In addition to the CMU team, the $6.6 million project includes human factors experts and neuroscientists at the University of Pittsburgh and Northrop Grumman.
"The idea is for the machine to try to infer what people are thinking based on their behavior," said Katia Sycara of CMU's Robotics Institute, a research professor who has spent decades developing software agents. "Is the person confused? Are they paying attention to what is needed? Are they overloaded?" In some cases, the software agent might even be able to determine that a teammate is making mistakes because of misinformation or lack of training, she went on to say.
In addition to Sycara, the team includes three co-principal investigators: Changliu Liu, an assistant professor in the Robotics Institute; Michael Lewis, a professor at Pitt's School of Computing and Information; and Ryan McKendrick, a cognitive scientist at Northrop Grumman.
To test their socially intelligent agents, the team will send them into a search-and-rescue scenario within Minecraft. The testbed was developed by researchers at Arizona State University. As the project progresses, the agents will try to infer the mental states first of individual players and then of teams of players.
DARPA is sponsoring the project through its Artificial Social Intelligence for Successful Teams (ASIST) program.