It takes years of work to become a cinematographer. Unless you're a drone in Pittsburgh.
Researchers at Carnegie Mellon University are developing a system for aerial cinematography that learns from human visual preferences to enable drones to make artsy filmmaking choices while autonomously filming scenes. The system does not require GPS tags to localize targets or prior maps of an environment.
Drones have been a boon to filmmakers, significantly lowering the cost of aerial shots that previously required chartering manned helicopters or airplanes. But the ease of access comes with a downside: with so many filmmakers now using drones, the resulting shots often look cookie-cutter, particularly when they were captured by an autonomous drone.
"We're putting the power of a director inside the drone," says Rogerio Bonatti, a Ph.D. student in CMU's Robotics Institute. "The drone positions itself to record the most important aspects in a scene. It autonomously understands the context of the scene — where obstacles are, where actors are — and it actively reasons about which viewpoints are going to make a more visually interesting scene. It also reasons about remaining safe and not crashing."
As a CMU spokesperson points out, "artistically interesting" is a moving target. Rather than hand-coding a definition of an artistically satisfying shot, the team trained the system with a technique called deep reinforcement learning. In a user study, subjects watched scenes in a photo-realistic simulator as the camera switched among frontal, back, left, and right perspectives, then scored each scene on how visually appealing and artistically interesting they found it. The system learned those preferences and distilled them into a signature style optimized for mass appeal.
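The setup described above is a form of preference learning: collect human scores for candidate shot sequences, fit a reward function to those scores, and let a reinforcement-learning policy maximize the learned reward. Here is a minimal sketch of that idea; this is not the team's code, and the features, clips, and scores are all invented for illustration.

```python
# Sketch (assumed, not CMU's implementation): fit a scalar "artistic reward"
# to human ratings so an RL agent could later optimize it.
import numpy as np

# Hypothetical features of a clip: fraction of time spent in each viewpoint
# (back, frontal, left, right) plus how often the camera angle switched.
clips = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],   # constant back shot, no switches
    [0.4, 0.3, 0.2, 0.1, 0.3],   # varied viewpoints, moderate switching
    [0.3, 0.3, 0.2, 0.2, 0.9],   # varied viewpoints, frantic switching
])
user_scores = np.array([2.0, 4.5, 3.0])  # made-up averaged 1-5 ratings

# Least-squares fit of a linear reward model: reward(x) = x @ w
w, *_ = np.linalg.lstsq(clips, user_scores, rcond=None)

def reward(features):
    """Predicted appeal of a candidate shot; a policy would maximize this."""
    return features @ w
```

A real system would use a richer model than a linear fit, but the loop is the same: human preferences in, a differentiable reward out, and a policy trained against it.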
For example, the system learned that a constant back shot, by far the most common shot captured with drones, becomes boring for viewers after a while. Human filmmakers switch angles often to keep a shot interesting; switch too often, however, and viewers quickly tire. The trick is to get it just right.
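That tradeoff can be pictured as a scoring rule that rewards well-spaced angle changes and penalizes rapid-fire cutting. The toy function below, with a made-up `min_gap` spacing threshold, is an illustration of the idea rather than anything from the paper.

```python
# Toy sketch (assumed): score a sequence of camera angles so that
# well-spaced switches earn points and back-to-back switches lose them.

def switch_score(angles, min_gap=2):
    """Reward angle changes at least min_gap frames apart; penalize faster cuts."""
    score = 0.0
    last_switch = -min_gap
    for i in range(1, len(angles)):
        if angles[i] != angles[i - 1]:
            if i - last_switch >= min_gap:
                score += 1.0   # a well-spaced change keeps the shot fresh
            else:
                score -= 1.0   # cutting too fast fatigues the viewer
            last_switch = i
    return score

boring   = ["back"] * 9                                        # never switches
balanced = ["back", "back", "left", "left", "front",
            "front", "right", "right", "back"]                 # paced switches
frantic  = ["back", "left", "front", "right"] * 2 + ["back"]   # cut every frame
```

Under this rule the paced sequence scores highest, the static shot scores zero, and the frame-by-frame cutting goes negative, which mirrors the "get it just right" behavior the system learned.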
The CMU team generalized specific behaviors, allowing the drone to apply lessons learned in one scenario to different shooting scenarios. For example, lessons learned for shots of an actor walking down a narrow corridor might be applied to similar ambulatory scenes, such as a person walking a forest path.
"Future work could explore many different parameters or create customized artistic preferences based on a director's style or genre," said Sebastian Scherer, an associate research professor in the Robotics Institute.
The aerial system is also skilled at maintaining a clear view of the actor, avoiding what are known as occlusions. "We were the first group to come up with new ways of dealing with occlusion that aren't just binary but can quantify how bad the occlusion is," Bonatti said.
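One way to quantify occlusion rather than treat it as a yes/no test is to measure what fraction of the camera-to-actor sight line passes through obstacles. The sketch below illustrates that general idea on a simple occupancy grid; it is an assumed toy formulation, not the team's actual method.

```python
# Sketch (assumed): grade occlusion on a 2-D occupancy grid instead of
# returning a binary visible/blocked answer.

def occlusion_cost(camera, actor, occupied, samples=100):
    """Fraction of sample points on the sight line that fall inside obstacles.

    camera, actor: (x, y) positions; occupied: set of blocked integer grid cells.
    Returns 0.0 for a clear view, 1.0 for a fully blocked one, and values in
    between that quantify how bad a partial occlusion is.
    """
    blocked = 0
    for i in range(samples):
        t = i / (samples - 1)                       # walk the sight line
        x = camera[0] + t * (actor[0] - camera[0])
        y = camera[1] + t * (actor[1] - camera[1])
        if (int(x), int(y)) in occupied:
            blocked += 1
    return blocked / samples

wall = {(5, y) for y in range(0, 4)}   # a short wall partially in the way
```

A graded cost like this lets a planner trade a slightly occluded but dramatic viewpoint against a fully clear but dull one, which a binary check cannot express.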
According to the CMU researchers, the system could be used outside of filmmaking, including for entertainment and live sports. Governments and police departments that currently use manually flown drones to monitor crowds and understand traffic patterns might also benefit from the team's machine vision advances.
"The goal of the research is not to replace humans. We will still have a market for highly trained professional experts," said Bonatti. "The goal is to democratize drone cinematography and allow people to really focus on what matters to them."