If the brains behind a scientific initiative known as Russia 2045 are to be believed, life is about to get very, very interesting.
The promotional video for the group, which aims to create technology that can "download" the knowledge in a human brain, is like a trailer for a Hollywood sci-fi blockbuster -- the booming intonations of a British announcer, dramatic, synthesized music and shots of the cosmos that make you feel like you're entering hyperspace in the Millennium Falcon.
It is, in other words, not the type of thing you'd expect from a group that hopes to get the world comfortable with a future of synthetic brains and of "thought-controlled avatars" that would make your next business trip to Milwaukee or Tokyo wholly unnecessary. Instead of a "chicken in every pot," they promise an "android robot servant for every home."
In an e-mail, the project's founder, Dmitry Itskov, described this vision in detail: "The creation of avatars will change everything in our societies: politics, economics, medicine, health care, food industry, construction methods, transportation, trade, banking, etc. The whole architecture of society will be transformed, there will be an increase in its self-organization, people will unite to fight the biggest and most universal problem of humankind -- that of death."
Whatever the viability of such claims, there's little doubt that the pace of innovation is going to lead us into interesting places, and perhaps sooner than we think. The cost of high-powered computing drops ever lower, video games grow increasingly realistic, and, thanks largely to Apple's voice-activated personal assistant Siri, people find more reasons to consult their mobile devices rather than the person sitting next to them.
Many have lamented that these communication breakthroughs have made us isolated. Texting is the new talking, or so the theory goes. The prospect of a robot that can take over the brain of your wife or best friend upon death? That takes fears of human social isolation to a whole new level.
So what happens when we don't even have to get off the couch to go to a parent-teacher conference or have lunch with a client living 6,000 miles away? What if we can "transfer" our brains to an avatar before we die? What about robots that possess human-level intelligence?
Intelligence: the new frontier
So far, the widely held social-isolation theory has proved false. We may have reason to worry, but we're worrying about the wrong thing: it's not isolation, but intelligence, that is likely to change our world in fundamental ways.
"Almost every study I've ever seen has shown a neutral to positive effect [of connected devices on social interaction]," said Keith Hampton, a professor of communications at Rutgers University. "It doesn't minimize the exceptions, but all the data suggests that people who use these things are more engaged in public life than others."
Consider the following. If, a decade ago, someone had asked you what would happen if we could all share information, photos and personal revelations with all of our friends, in real time, the answers might have tended toward the negative -- if not the apocalyptic.
The end of privacy. The end of intimacy. The end of the world as we know it.
The reality of Facebook, of course, has demonstrated otherwise. There are downsides to any technology, Facebook included. But its convenience and utility have overtaken other concerns. We've adapted, and adapted quickly.
"It's like in medicine," said Nick Bostrom, the director of The Future of Humanity Institute at Oxford University. "Anesthesia was once seen as moral corruption. A heart transplant seemed obscene. We tend to think about things in a different mode, a different frame of mind, before we are actually using it. The future is often a projection screen where we cast our hopes and fears."
If history is any guide, it's reasonable to think that the shock of major technological breakthroughs will be mitigated by the assimilation of all the incremental advances that came before them. The more pressing question before us, then, is how to prepare for a day when machine intelligence becomes so sophisticated that its knowledge is used against us.
And "against us" doesn't mean some Orwellian, Terminator-type reality. It's far more subtle, and far less sexy, than that. If a device can learn and has far greater memory capacity and recall than we do, it could process huge stores of data to better predict our behavior. It could then tailor its own behavior to achieve a desired result. And that's even before we get to so-called super-intelligence, a theoretical reality where computers use their processing power to learn more quickly, and think bigger thoughts, than the humans that created them.
The very beginnings of such technology are starting to appear in daily life. The Port Authority of New York and New Jersey recently announced plans to install hologram-like avatars at New York airports. The "female" avatars are expected to be motion-activated and give travelers basic information like the location of a bathroom. In their current form, the avatars aren't interactive, but the Port Authority hopes that someday they will be able to answer a range of questions.
Are we ready?
It's impossible to know how we'll all react, but history does provide some clues.
To get a sense of the potential hazards and dilemmas of more advanced technology, Charles Isbell, a professor of interactive computing at Georgia Tech, pointed to the "Media Equation," a communication theory developed by two Stanford researchers in the 1990s. The research found that people interact with technology in ways similar to how they interact with other people.
In one test, subjects were "tutored" by a computer and were then asked to evaluate the computer's performance as they would a human tutor. Those who filled out the evaluation on the computer that "tutored" them were more positive than those who completed it on paper or at a different computer. As crazy as it sounds, people were less likely to hurt that computer's feelings. Take a computer that's as witty and brilliant as your best friend and the potential outcomes become more consequential.
"In the future, when your 'best friend' Siri suggests that you buy something, and it turns out not to be the right thing, do you get to sue Apple?" Isbell asked.
In the not-so-distant future, such scenarios are possible. "The ability of those things to read facial expressions and speak in a certain tone -- it will be orders and orders of magnitude greater," Isbell said. "[As with Facebook], the impact will be both profound and mundane."
The implications go beyond commerce. Today, "social search" -- providing search results based on data from others in your social networks -- is in its infancy. Rutgers' Hampton fears that social search could roll back some of the biggest social benefits born of the Internet.
"People who do more online have more diverse social networks and broader access to information," Hampton said. "It facilities trust, tolerance and access. If your search for unique information is constrained by your social interaction, the access to unique information declines. People we are close to are very much like us. We have a greater risk of creating silos of information."
Technology of increasing intelligence only makes that possibility more real. "We are all snowflakes, but we're pretty predictable snowflakes once you figure out what type of snowflake you are," Isbell said. As computer-aided predictive analysis gets more and more refined, a robot or device could use it to push us toward a pre-determined outcome, one that may not be in our best interest. Think about the computerized bartender that, once you hand over your credit card, mines Internet data and learns that you just lost your job. "Would you like another?" could become more calculated than convivial.
Resistance is futile
Ray Kurzweil, a futurist and creator of optical character recognition technology -- the type that converts scanned documents to editable text -- predicts that we'll have "strong" artificial intelligence by 2029. He believes that the "singularity," or the point where technology transcends human intelligence, is not some science fiction dream. His "law of accelerating returns" posits that because computing power expands exponentially, advances in fields that rely on computing power -- like biotechnology and materials science -- will also rapidly increase.
It's the theory behind the "2045" date in Itskov's ambitious project. Based on his own understanding of technological advancement, Itskov said that "at about 2045, humanity must enter a certain mode of evolutionary singularity, beyond which it becomes difficult to make predictions. In short, many exciting developments await us in the middle of this century, and all of them, inevitably, will be linked to the developments of new technology."
Kurzweil said we have nothing to fear from it. "This is not an alien invasion from Mars. This is just expanding our intelligence. We have outsourced our personal and historical memories to the 'cloud.' It's expanding already."
It will have its downsides -- "Fire cooks our food and also can burn down your house," he said -- but those can be addressed by devising "rapid response" systems that can counteract those who use technology for nefarious purposes.
Trying to prevent, or "opting out" of, such advancements is a misguided, and futile, strategy.
"Yes, people opt-out today," Kurzweil said. "They're called the Amish."
This post was originally published on Smartplanet.com