I tend not to pay a lot of attention to the gaming community. It's not that I'm uninterested in computer and video games; it's just that it's a huge field outside of my normal technology interests, and while I play some games, particularly on my tablets, I don't put in the kind of personal time and technology investment that would let me legitimately call myself a "gamer".
While I own a lot more technology than the average person, I don't have a powerful PC gaming rig, nor do I even own a gaming console.
Much of my entertainment comes from reading, and watching movies and TV, not to mention my passion for cooking and wine, among other things. I have a really busy lifestyle, between working in the technology sector and writing about the tech industry. So I tend to choose diversions that involve as little technology as possible.
This is not to say I don't have a fine appreciation for what game developers and computer modelers do. In fact, you could say that computer-generated imagery is something of a family business.
My brother, Brandon, is a CGI modeler who has worked extensively in the entertainment and film industry, so I know exactly what kind of technology is used to create sophisticated creatures, effects and scenery.
It's a labor-intensive process, requiring some very expensive software and hardware, not to mention rendering infrastructure with a level of parallelization rivaling that of supercomputers.
Heck, if you're WETA, which produced the Lord of the Rings trilogy and recently The Hobbit, you really do use supercomputers to produce these films.
Still, as impressive as these movies and animations are, computer-generated imagery, at least as it is applied in game design and filmmaking, for the most part still looks computer-generated.
It's also easy for us to accept something like a dragon, an orc or some alien creature as an "actor" in a film, because it's not something we have a natural frame of reference for.
Realistic depictions of human beings, however, have always been the holy grail of CGI. To date, nobody has really been able to create CGI that fools human beings into believing they are looking at another real human being rather than something rendered and plainly artificial.
Part of this has to do with tradeoffs in sheer computer horsepower and the time it takes to render and model the "actor".
You could model a very realistic human being in CGI, using hundreds of thousands of sampled textures and millions of colors, modeling the precise lighting at a mind-bogglingly high polygon count to make the figure look "real", but pulling this off would require a truly massive level of effort and computer resources.
Historically, it just hasn't been worth trying in a feature-length film, because the resources, whether rendering time or human labor, would be astronomically expensive.
And at best, what you would have is a frame-by-frame rendered movie. You could never generate these kinds of images in real time, in something like a computer game or a simulation. And it would take many years to produce. It just wasn't feasible.
At last week's 2013 Game Developers Conference (GDC) in San Francisco, game developer Activision showed a preview of the kind of real-time generated human character technology we might find available on commodity computer hardware within the next five years.
And when I say commodity, I'm talking about cheap, entry-level PCs and game consoles. Perhaps even the next generation of tablets and smartphones. Not the elite "gamer" PCs and CGI production workstations that cost thousands of dollars.
At GDC, Activision showed demos like this using two-year-old laptops with very entry-level GPU technology.
Activision's demo is eerie. Creepy, even. Yes, you can still tell that this is computer-generated, and there are more than enough flaws in the rendering to break the suspension of disbelief. There's no facial hair, it's only a single disembodied head, and detail is lost in certain areas, particularly around the mouth.
But this is much more sophisticated, and far more realistic than anything that we have seen in a computer game before.
The implications are far-reaching, and concern me greatly.
Imagine that within ten years' time, the ability to visualize highly realistic, computer-generated actors in real time can be accomplished not only using commodity consumer tablet and gaming hardware, but by anyone who can hire a modest-sized team of computer animators and afford a million dollars (or less) worth of server rendering and workstation budget.
TV and movie studios could produce films that bring beloved and long-dead actors back from the grave.
But this also means that entities willing to harm someone's personal reputation, or even to engage in acts of CGI terrorism, could, with enough biometric data and texture sampling (in the form of high-resolution photographs and enough audio to produce convincing "voice fonts"), make well-known public figures appear to do and say things they did not actually do, in videos posted on the Internet and on live television.
Once the model of the actor was created, the time to produce videos that could respond to current events would be fairly swift.
Think about what North Korea or Iran could do with this. Or China. Or what certain interest groups in our own country could do. Or corporations. Or single individuals with private agendas.
Are we on the verge of being able to produce highly-realistic depictions of human beings using real-time computer-generated imagery? Talk Back and Let Me Know.