MIT has created an artificial intelligence algorithm that can tell you the likely recipe behind a dish after being shown nothing more than a picture.
Social media has accelerated not only the spread of information but also the popularity of image sharing. Everything from cat pictures to cupcakes floods the internet every day, but there may now be a practical use for the latest delicious meal a friend has shared on their social network accounts -- you may be able to cook it yourself just by having access to the picture.
On Thursday, MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) said that a new artificial intelligence-based algorithm has been developed which can analyze still images of food in order to detect the likely ingredients and suggest a recipe to create the dish.
The average recipe has nine ingredients and the most common ingredients found in today's dishes are salt, butter, sugar, olive oil, water, eggs, garlic cloves, milk, flour, and onion.
However, it takes more than the basics to create a masterpiece and this is where the algorithm comes in.
The deep-learning AI system, dubbed Pic2Recipe, has been trained by researchers to predict the ingredients and suggest similar recipes, and 65 percent of the time, the AI was correct.
This kind of invention isn't just for foodies, however. MIT hopes the AI could also be used to better understand our eating habits, which in turn could provide information for researchers and healthy eating initiatives in the future.
In a paper to be presented later this month at the Computer Vision and Pattern Recognition (CVPR) conference in Honolulu, lead author and CSAIL graduate student Nick Hynes -- alongside Amaia Salvador of the Polytechnic University of Catalonia in Spain, and Javier Marin, Ferda Ofli, and research director Ingmar Weber of the Qatar Computing Research Institute (QCRI) -- says another aim is to modernize and expand the scope of "Food-101," a 2014 project to create an algorithm capable of recognizing images of food.
Food-101 was only ever able to identify food in photos with an accuracy of 50 percent. The MIT researchers believe that limit stems from the lack of a sufficiently large dataset for the algorithm to draw from. Pic2Recipe relies on the same kind of database infrastructure, so the bigger the dataset, the better.
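At a high level, retrieval systems of this kind map images and recipes into a shared embedding space and return the recipe whose embedding lies closest to that of the query image -- which is why a larger recipe database directly improves the results. Below is a minimal, hypothetical sketch of that nearest-neighbor lookup step, using hand-made toy vectors in place of real learned embeddings (the function and data names are illustrative, not from the MIT system):

```python
import numpy as np

def retrieve_recipe(image_embedding, recipe_embeddings, recipe_names):
    """Return the recipe whose embedding is most similar (by cosine) to the image's."""
    # Normalize the query and every recipe vector to unit length
    img = image_embedding / np.linalg.norm(image_embedding)
    recipes = recipe_embeddings / np.linalg.norm(recipe_embeddings, axis=1, keepdims=True)
    # Dot products of unit vectors = cosine similarity against every recipe
    scores = recipes @ img
    return recipe_names[int(np.argmax(scores))]

# Toy example: three hand-made "embeddings" standing in for learned vectors
names = ["chocolate cake", "caesar salad", "mushroom risotto"]
bank = np.array([[0.9, 0.1, 0.0],
                 [0.0, 0.8, 0.2],
                 [0.1, 0.2, 0.9]])
query = np.array([0.85, 0.15, 0.05])  # image embedding landing near "chocolate cake"
print(retrieve_recipe(query, bank, names))  # → chocolate cake
```

In a real system the embeddings would come from trained neural networks and the bank would hold hundreds of thousands of recipes, but the lookup itself is this simple: the more recipes in the bank, the better the chance that a close match exists.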
"In computer vision, food is mostly neglected because we don't have the large-scale datasets needed to make predictions," says Yusuf Aytar, an MIT postdoctoral associate. "But seemingly useless photos on social media can actually provide valuable insight into healthy habits and dietary preferences."
Pic2Recipe does particularly well with desserts, but when it comes to more ambiguous or complicated foods, such as smoothies or sushi, the system still needs refinement.
The researchers hope to improve the system so it can understand how food is prepared -- such as stewing or dicing -- as well as to turn the AI into a dinner aide that can figure out what to cook based on the ingredients in the fridge and any dietary requirements.
"This could potentially help people figure out what's in their food when they don't have explicit nutritional information," says Hynes. "For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal."
The project was funded in part by QCRI, as well as the European Regional Development Fund (ERDF) and the Spanish Ministry of Economy, Industry, and Competitiveness.