Everyone would agree that Tupac Shakur's performances were highlights of last month's Coachella music festival, partly because of his star power and partly because he has been dead for almost 16 years. The audience was stunned when a ghostly life-size hologram of the late rapper appeared to take the stage to do "Hail Mary," then to join Snoop Dogg in a duet on "2 of Amerikaz Most Wanted" (YouTube link). They were not recycled concert clips. They were brand new performances by a computer simulation of the late artist.
We should all prepare ourselves for similarly shocking encounters with the digitized departed. Not only is this new Tupac simulation likely to become a fixture on the concert circuit, but the technology that makes it possible is becoming more widespread and could start to play a part in everyday life soon. A few optimists are even hoping that the technology points the way to real immortality, not just deathless celebrity.
So let's look at what brought back Pac. (Note: In this column, yes, I will be taking the position that he is actually dead and has not been in hiding.)
2Pac, not 3D
First, contrary to what was initially reported, the echo of Tupac at Coachella was not really a hologram. A traditional hologram is a three-dimensional view of a scene or object, recreated by shining laser light on a recorded interference pattern; the recording itself is actually flat. But thanks to Princess Leia's "Help us, Obi-Wan" message in Star Wars, the term has also come to mean a kind of volume-filling projection serving the same purpose.
The Coachella Tupac was neither of those. It was in fact a high-definition two-dimensional projection cunningly devised to look like a life-size, moving figure, as executed by AV Concepts of San Diego with a patented technology belonging to Musion Systems Limited of London. The basic idea behind the projection is an old stage magician's trick known as Pepper's ghost. In Musion's version of it, a bright projector above the stage casts the moving image onto a fully reflective surface on the stage floor. The reflection then bounces onto a long sheet of semi-reflective metalized film angled over the floor. To an audience, the projected image appearing on the invisible film seems to be standing on stage.
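The geometry behind the trick is easy to sketch. In a minimal model of my own (an illustration, not Musion's patented specifics), the apparition the audience sees sits at the mirror reflection of the projected floor image across the plane of the angled film:

```python
import numpy as np

def reflect(point, plane_point, plane_normal):
    """Mirror a 3-D point across a plane (here, the semi-reflective film)."""
    p = np.asarray(point, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.dot(p - plane_point, n)  # signed distance from the plane
    return p - 2.0 * d * n          # reflected (virtual image) position

film_point = np.array([0.0, 0.0, 0.0])    # a point on the angled film
film_normal = np.array([0.0, 1.0, 1.0])   # 45-degree tilt between floor and vertical

floor_spot = np.array([0.0, -1.0, 0.0])   # a bright spot of the image on the stage floor

print(reflect(floor_spot, film_point, film_normal))  # [0. 0. 1.]
```

Reflecting a spot lying flat on the stage floor across a 45-degree film lifts it straight up into an upright virtual figure at the film's position, which is the whole of Pepper's ghost.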
It's an effective illusion, but the more intriguing technology was the one involved in creating the new footage of Tupac that was projected. That was the brainchild of the special effects production house Digital Domain of Venice, Calif., reportedly at the behest of rapper and producer Dr. Dre. Digital Domain previously won attention for the effects it used to artificially age and rejuvenate actor Brad Pitt in The Curious Case of Benjamin Button.
Unsurprisingly, Digital Domain is keeping mum about the details of the process by which it recreated Tupac except to emphasize that what showed at Coachella were newly generated performances, not old footage. (The claim's credibility is certainly helped by 2Pac 2.0's onscreen yell of "What the f___ is up, Coachella?" since the festival didn't start until three years after his death.) As Ed Ulbrich, the company's chief creative officer, told The Wall Street Journal, "To create a completely synthetic human being is the most complicated thing that can be done."
Nevertheless, it's not hard to guess what some of Digital Domain's process must have been, at least conceptually. It would have started by collecting every detailed image of Tupac it could find to create a perfect CGI likeness of him. Then it would have gone over every bit of available video of the man, particularly during performances, to do the equivalent of motion capture and build a kinetic model of precisely how his body moved when walking, dancing, and so on. That model might have been further refined by doing actual motion capture studies of people with his build and physicality.
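A kinetic model of that sort can be caricatured as joint-angle keyframes, as if recovered from archival footage, interpolated to generate new motion at any frame rate. The joints, times, and angles below are invented for illustration:

```python
# Caricature of a "kinetic model": interpolate between joint-angle keyframes
# so new, never-filmed motion can be generated between observed poses.
keyframes = {
    0.0: {"elbow": 10.0, "knee": 5.0},   # pose at t = 0 s (angles in degrees)
    1.0: {"elbow": 90.0, "knee": 45.0},  # pose at t = 1 s
}

def pose_at(t, t0=0.0, t1=1.0):
    """Linearly interpolate the pose between the two keyframes."""
    w = (t - t0) / (t1 - t0)
    return {
        joint: (1.0 - w) * keyframes[t0][joint] + w * keyframes[t1][joint]
        for joint in keyframes[t0]
    }

print(pose_at(0.5))  # {'elbow': 50.0, 'knee': 25.0}
```

A real pipeline would use splines, full skeletons, and physical constraints, but the principle of filling in motion between captured poses is the same.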
Recreating his voice would have been another big challenge, assuming the producers couldn't simply find somebody whose voice was close enough to his. By sampling from a variety of Tupac's performances, and perhaps augmenting them with input from a living singer to keep the sound from being too canned, they could plausibly have engineered some convincing tracks.
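In spirit, that sampling guess resembles classic unit-selection speech synthesis: keep a library of recorded snippets and, at synthesis time, pick the candidates that join together most smoothly. A toy sketch, with an invented unit inventory and a crude pitch-based seam cost:

```python
# Toy unit-selection synthesis: a library of recorded snippets ("units"),
# stitched together by preferring candidates whose seams join smoothly.
# The inventory, phonemes, and pitch numbers are invented for illustration.
library = {
    # phoneme -> [(clip_id, pitch at clip end, pitch at clip start), ...]
    "HH": [("clip_a", 120.0, 118.0), ("clip_b", 140.0, 139.0)],
    "AY": [("clip_c", 125.0, 122.0), ("clip_d", 180.0, 170.0)],
}

def join_cost(prev_end_pitch, next_start_pitch):
    # Penalize pitch discontinuities at the seam between consecutive units.
    return abs(prev_end_pitch - next_start_pitch)

def select_units(phonemes):
    """Greedy left-to-right selection minimizing each seam's cost."""
    chosen, prev_end = [], None
    for ph in phonemes:
        candidates = library[ph]
        if prev_end is None:
            best = candidates[0]
        else:
            best = min(candidates, key=lambda c: join_cost(prev_end, c[2]))
        chosen.append(best[0])
        prev_end = best[1]
    return chosen

print(select_units(["HH", "AY"]))  # ['clip_a', 'clip_c'] -- the smoother join wins
```

A production system would weigh many more features (duration, spectral shape, linguistic context) and search globally rather than greedily, but the selection logic has this shape.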
Easy to say; hard to do. The "uncanny valley" problem makes human beings acutely sensitive to even tiny departures from lifelike reality, particularly for replicas of people they know. Die-hard Tupac fans can judge for themselves how well the producers succeeded.
Roger Ebert's virtual voice
No doubt the idea of using computers to virtually resurrect a dead celebrity strikes many people as frivolous or crass, if not ghoulish. If nothing else, it raises ticklish questions about celebrities' posthumous right of publicity and their estates' ability to control the use of their images. Could Justin Bieber theoretically buy a virtual Tupac to sing with him? (Ilene Farkas has a good discussion of the legal issues over at The Wrap.)
But forms of this technology could also serve life-affirming purposes, and in some cases already have. One beneficiary two years ago was the noted film critic and author Roger Ebert: cancer stole his voice; computer technology gave it back.
After the surgeries that removed his jaw and larynx, Ebert began using text-to-speech programs for situations in which he wanted to speak aloud, but they were unsatisfactory. Not only did the available voices sound unnatural, but none of them was his own.
The catch is that the preferred way for CereProc, the Edinburgh-based company that builds personalized synthetic voices, to assemble its sample database involves having someone read prepared texts in a studio for 15 hours. Ebert's voice was already gone, which made that impossible. He could be heard in hundreds of hours of film commentary programs, but those recordings were frequently marred by interruptions, background noise, varying microphone setups, and the changes in his voice with age. Nevertheless, the company edited together about four hours' worth of Ebert's voice that it could use.
Ebert publicly demonstrated the voice that CereProc built for him on Oprah in March 2010. His speech synthesizer isn't perfect by any means, but it's a welcome reminder of the voice that sparred with Gene Siskel weekly. And he has issued an interesting challenge to technologists to raise the state of the art: his proposed "Ebert test" asks whether they can build a synthesized voice that can tell a joke and make people laugh. (Ebert, of course, tells the story of his voice better than anyone, and I recommend his blog essay about it.)
Cylons and robot immortality
The techniques that resurrected Ebert's voice and Tupac's onstage presence all depend on having enough adequate recordings for modelers to sample for their simulations. In some cases, though, that hurdle may be getting lower. For example, CereProc has started building voices with a statistical technique called hidden Markov model (HMM) speech synthesis, which works better with smaller, imperfect libraries of voice recordings.
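The statistical idea is easy to caricature: instead of storing raw audio, a parametric synthesizer estimates distributions of acoustic features for each speech sound from whatever noisy examples exist, then generates from those models. The phones and numbers below are invented for illustration, and real HMM synthesis models full spectral trajectories, not a single pitch value:

```python
import statistics

# Caricature of statistical parametric synthesis: model each phone as a
# distribution over one acoustic feature (pitch, in Hz), estimated from
# however few imperfect observations are available.
training_pitches = {
    "AA": [118.0, 122.0, 125.0, 115.0],  # scavenged, imperfect observations
    "IY": [210.0, 205.0, 199.0],
}

models = {
    phone: (statistics.mean(obs), statistics.stdev(obs))
    for phone, obs in training_pitches.items()
}

def synthesize(phones):
    """Emit the most likely (mean) pitch for each phone in the string."""
    return [models[ph][0] for ph in phones]

print(synthesize(["AA", "IY", "AA"]))  # the mean-pitch trajectory for AA-IY-AA
```

Because the model stores statistics rather than audio, a few minutes of flawed recordings can still yield usable parameters, which is exactly the advantage claimed for the approach.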
Moreover, ubiquitous smartphones, voicemail, Microsoft Kinect interfaces, video chats, security cameras, and many other modern devices constantly mediate streams of our personal data. We commit more and more of our lives and thoughts to social media. All of these are potential sources of data for personal simulations.
If any of this rings a bell for TV science fiction fans, it might be because such human simulations were a major plot point of Caprica, the short-lived prequel series to Battlestar Galactica. In it, a technological genius who had lost his daughter in an accident programmed a perfect virtual simulation of her, based on the totality of digital information about her on the net. That simulation was so perfect and complete that it turned sentient (an ever-present danger with advanced computers in fiction) and became the progenitor of the robotic Cylon race.
Some techno-utopian transhumanists aspire to something like that Cylon fate. (Among them is the futurist and inventor Ray Kurzweil, who himself helped pioneer text-to-speech technology with his reading machines for the blind.) With exponentially advancing computing power, they argue, it should be possible to simulate a complete human brain in silico by 2029 or so. Shortly thereafter, it ought to be a relatively simple matter to transfer all the synaptic values from their brains into computers so that they can enjoy eternity as immortal digital intelligences.
Again, easy to say; hard to do. Published estimates of the information capacity of the human brain range from around 100 terabytes up to 2.5 petabytes, but none of those numbers actually carries much authority. Fundamentally, neuroscientists still don't know how memories and thoughts are encoded in the brain, so those estimates just fall out of whatever simplifying assumptions their authors wish to impress on the problem. And I won't even get into the philosophical debate over whether a perfect cybernetic simulation of your mind would actually be you.
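To see how assumption-driven those figures are, note that each one can be reproduced by some arbitrary pairing of a synapse count with a bytes-per-synapse guess. The pairings below are my own illustration, not the actual derivations behind the published estimates:

```python
# Each headline brain-capacity figure falls out of a synapse count times a
# bytes-per-synapse guess; both inputs are rough order-of-magnitude picks,
# not measurements of how memory is actually encoded.
def capacity_terabytes(synapses, bytes_per_synapse):
    return synapses * bytes_per_synapse / 1e12

print(capacity_terabytes(1e14, 1))  # 100.0  -> the ~100 TB figure
print(capacity_terabytes(5e14, 5))  # 2500.0 -> the ~2.5 PB figure
```

Change either input by a factor of a few and the "capacity of the human brain" swings by more than an order of magnitude, which is why none of these numbers deserves much deference.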
Even if that transhumanist scenario seems too far out to imagine, much more modest versions of human simulation could prove highly useful in many situations -- and relatively easy to implement.
As digital assistants become more commonplace, for example, it may be worthwhile to endow them with thin versions of our characteristics, or even our personalities, so that they can represent us and our wishes more accurately. Future video conferences might be based on telepresence technology: rather than sharing actual video and audio of themselves, the participants could share virtual avatars modeled as accurate (if slightly idealized) versions of themselves. The opportunities for gaming and other forms of entertainment are obvious. So too, if we're not careful, are the opportunities for identity theft.