Creating realistic crowd scenes for movies and video games has always been difficult. Until recently, an animated character moved almost exactly like its neighbors. But according to Technology Review, a computer scientist working at UCLA has designed software that gives individual behaviors to animated characters. For example, he simulated how 1,400 commuters might interact inside a virtual representation of Pennsylvania Station in New York City. The characters include law-enforcement officers, tourists and regular commuters. The article states that 'computer-generated crowds in movies and video games could soon appear much more realistic.' But I have some doubts. Have you ever seen a railway commuter without luggage?
As an example, you can see above a large-scale simulation of a virtual representation of Pennsylvania Station in New York City populated by self-animated virtual humans. On the left is a rendered image of the main waiting room and on the right is the main concourse. (Credit: Demetri Terzopoulos) You'll find more details about Demetri Terzopoulos by visiting the Computer Graphics & Animation section of his website.
So how did Terzopoulos and graduate student Wei Shao design their 'autonomous pedestrians'? They "are governed by three different layers of behavior. A motion layer handles basic movement, such as walking, running, standing, and sitting. On top of this sits a reactive layer, which allows the characters to respond to obstacles or other characters they encounter; it also enables them to perform simple behaviors that people normally take for granted, such as walking around a bench in order to sit on it."
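To make the layering idea concrete, here is a minimal sketch in Python of how a motion layer and a reactive layer might stack. All class and method names are hypothetical illustrations for this post, not Terzopoulos and Shao's actual implementation:

```python
# Hypothetical sketch of the two lower behavior layers described above.
# Positions are (x, y) grid cells; characters walk in the +x direction.

class MotionLayer:
    """Executes basic movements: walk, run, stand, sit."""
    def perform(self, action, position):
        if action == "walk":
            return (position[0] + 1, position[1])  # one step forward
        return position  # stand or sit: stay in place

class ReactiveLayer:
    """Overrides the requested action when an obstacle is in the way."""
    def __init__(self, obstacles):
        self.obstacles = set(obstacles)

    def adjust(self, action, position):
        ahead = (position[0] + 1, position[1])
        if action == "walk" and ahead in self.obstacles:
            return "sidestep"  # detour instead of walking into it
        return action

class Pedestrian:
    """A high-level intention passes through the reactive layer
    before the motion layer executes it."""
    def __init__(self, position, obstacles):
        self.position = position
        self.motion = MotionLayer()
        self.reactive = ReactiveLayer(obstacles)

    def step(self, intention):
        action = self.reactive.adjust(intention, self.position)
        if action == "sidestep":
            # walk around the obstacle: move diagonally past it
            self.position = (self.position[0] + 1, self.position[1] + 1)
        else:
            self.position = self.motion.perform(action, self.position)
        return action
```

A pedestrian at (0, 0) asked to walk toward an obstacle at (1, 0) would sidestep to (1, 1), then resume walking normally, which is the kind of "taken for granted" detour the reactive layer handles.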
But how can the 1,400 characters that interact in real time in the current version of the software walk through a crowd and decide what to do next? "For example, a character may be charged with the simple task of catching a train. But it knows that, in order to perform this task, it must carry out a number of subgoals, such as purchasing a ticket and finding the train platform. In fact, even these subgoals can have further subgoals, such as finding the ticket office and choosing the shortest ticket line to stand in."
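The recursive decomposition described in that quote can be sketched as a small Python function. The decomposition table below is an illustrative assumption based on the example in the quote, not the paper's actual goal representation:

```python
# Hypothetical goal-to-subgoal table, built from the article's example.
# A goal absent from the table is treated as a primitive action.
SUBGOALS = {
    "catch train":     ["purchase ticket", "find platform"],
    "purchase ticket": ["find ticket office", "choose shortest line"],
}

def expand(goal):
    """Depth-first expansion of a goal into its primitive steps."""
    if goal not in SUBGOALS:
        return [goal]  # primitive: execute directly
    steps = []
    for sub in SUBGOALS[goal]:
        steps.extend(expand(sub))
    return steps
```

Expanding "catch train" yields the ordered primitives ["find ticket office", "choose shortest line", "find platform"], matching the chain of subgoals the quote walks through.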
You can see above what Terzopoulos means by goals assigned to an individual commuter. "He enters the station (a), proceeds to the ticket booths in the main waiting room (b), and waits in a queue to purchase a ticket at the first open booth (c). Having obtained a ticket, he then (d) proceeds to the concourses through a congested portal, avoiding collisions." (Credit: Demetri Terzopoulos)
If you want to learn more about this research work, you can read Autonomous Pedestrians (PDF format, 10 pages, 2.38 MB), a paper presented at the 2005 Eurographics/ACM SIGGRAPH Symposium on Computer Animation. The top images in this post come from this paper. You can also read a 2007 updated version of this article, published in Graphical Models, an Elsevier journal (Volume 69, Issues 5-6, Pages 246-274, September-November 2007). Here are two links to the abstract and to the full updated paper (PDF format, 30 pages, 3.07 MB).
But for more fun, here are links to several videos available on Terzopoulos' website.
- Following an individual pedestrian (2 minutes and 3 seconds), from which the images shown just above have been extracted
- Autonomous pedestrian activity in Penn Station (2 minutes and 18 seconds), which is pretty entertaining
- Autonomous pedestrian simulation in the Petra Great Temple (2 minutes and 55 seconds)

Sources: Duncan Graham-Rowe, Technology Review, December 19, 2007; and various websites
You'll find related stories by following the links below.