Did you see the video of the sorority girls taking selfies at the D-backs game this year? It went viral, in part because it's funny, in part because it serves as a commentary on our, ahem, solipsistic, self-absorbed, life-by-proxy culture.
But it's also instructive. In the video, the selfie girls take multiple shots. After each snap they check the result to determine if it's a keeper. Most aren't, after all.
The process looks vacuous and, from a decontextualized vantage, borderline insane, but what the girls are doing is actually quite complex. Through long study, they have become experts at judging which photos will be most memorable when they blast them to friends and followers. In some contexts, such as advertising, people with that refined sense make very good money.
But those days may be numbered. Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created an algorithm that can predict how memorable or forgettable an image is almost as accurately as humans. They plan to turn it into an app that subtly tweaks photos to make them more memorable.
For each photo, "MemNet" (which you can try out online by uploading your own photos) creates a "heat map" that identifies which parts of your image are most memorable. It's like an instant focus group.
CSAIL members picture a variety of potential applications, from improving the content of ads and social-media posts, to developing more effective teaching resources. And yeah, it can be applied to selfies.
The algorithm uses techniques from "deep learning," a field of artificial intelligence that uses systems called "neural networks" to teach computers to sift through massive amounts of data to find patterns. Deep learning drives Apple's Siri, Google's auto-complete, and Facebook's photo-tagging, to name a few applications.
Neural networks find patterns in data without any human guidance. They are organized in layers of processing units that transform data in succession. The computations start out essentially random, but as the network receives more data, it adjusts its internal weights to produce more accurate predictions.
The CSAIL team fed its algorithm tens of thousands of images from several different datasets. Each image had received a "memorability score" based on how well human subjects remembered it in online experiments. The team then pitted the algorithm against human subjects by having it predict how memorable a group of people would find a never-before-seen image. It performed 30 percent better than existing algorithms and came within a few percentage points of average human performance.
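The training loop described above can be sketched in miniature. This is a hedged illustration, not MemNet itself: the real system is a deep convolutional network trained on tens of thousands of annotated photos, while this toy uses synthetic data, eight made-up numeric "features" per image, and a single hidden layer. What it does show is the core mechanic: the network's weights start out random, and each pass over the data nudges them to reduce the error between predicted and actual memorability scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 200 "images," each reduced to 8 numeric features,
# with memorability scores in (0, 1) derived from a hidden rule.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = 1 / (1 + np.exp(-X @ true_w))

# One hidden layer of 16 units; weights start out random.
W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))

def forward(X):
    h = np.tanh(X @ W1)                      # hidden layer
    return h, 1 / (1 + np.exp(-(h @ W2)))   # predicted score in (0, 1)

_, pred0 = forward(X)
mse_before = float(np.mean((pred0[:, 0] - y) ** 2))

lr = 0.1
for _ in range(500):
    h, pred = forward(X)
    err = pred - y[:, None]                  # prediction error
    # Backpropagate: adjust each weight in proportion to its
    # contribution to the error (plain gradient descent).
    grad_out = err * pred * (1 - pred)
    grad_h = (grad_out @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ grad_out / len(X)
    W1 -= lr * X.T @ grad_h / len(X)

_, pred = forward(X)
mse = float(np.mean((pred[:, 0] - y) ** 2))
print(f"error before training: {mse_before:.4f}, after: {mse:.4f}")
```

The prediction error falls as training proceeds, which is the "readjusts to produce more accurate predictions" behavior the article describes, just at toy scale.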
For each image, the algorithm produces a heat map showing which parts of the image are most memorable. By emphasizing those regions, the researchers can potentially increase an image's memorability.
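One common way such a heat map can be built is by occlusion: cover one patch of the image at a time, re-score each occluded version, and mark the regions where the score drops most. This is a hedged sketch of that generic technique, not CSAIL's published method; `score_image` is a hypothetical stand-in for a trained predictor (here it just rewards bright pixels so the example runs on its own).

```python
import numpy as np

def score_image(img: np.ndarray) -> float:
    # Hypothetical scorer standing in for a trained model like MemNet;
    # this toy version treats brighter images as "more memorable."
    return float(img.mean())

def occlusion_heatmap(img: np.ndarray, patch: int = 4) -> np.ndarray:
    """Slide a gray patch over the image; record the score drop per cell."""
    base = score_image(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i+patch, j:j+patch] = 0.5   # gray out one patch
            # A big score drop means this region mattered to the prediction.
            heat[i // patch, j // patch] = base - score_image(occluded)
    return heat

# Toy 16x16 "image" with one bright corner.
img = np.full((16, 16), 0.2)
img[:4, :4] = 1.0
heat = occlusion_heatmap(img)
print("hottest cell:", np.unravel_index(heat.argmax(), heat.shape))
```

The hottest cell lands on the bright corner, because covering it hurts the score the most. A real app would then brighten, sharpen, or crop toward the hot regions to nudge memorability upward.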
Good news for sorority girls. Also for the sport of baseball.