During the iPhone 11 launch event -- an event that was heavily focused on the camera tech -- Apple teased a new feature that it called Deep Fusion. But what is Deep Fusion, how does it work, and how useful will it be for the average user?
Phil Schiller, Apple's Senior Vice President of Worldwide Marketing, took to the stage to offer a preview of Deep Fusion, which he said would arrive on the iPhone 11 via an iOS software update later this fall.
So what is Deep Fusion?
Even before the shutter is pressed, the iPhone is already capturing images: four short exposures and four secondary images. When the shutter is pressed, it captures one long exposure, bringing the total to nine images. The neural engine and machine learning then combine all of these frames, stitching together, "pixel-by-pixel," a new image with plenty of detail and very little noise.
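Apple hasn't said how Deep Fusion actually weighs and merges those frames -- the real pipeline is proprietary and runs on the Neural Engine -- but the general idea of pixel-by-pixel multi-frame fusion can be sketched. Below is a toy Swift example that blends frames with a simple exposure-weighted average; the `Frame` type, the `fuse` function, and the weighting scheme are all illustrative assumptions, not Apple's implementation.

```swift
// Toy illustration of multi-frame fusion. This is NOT Apple's Deep Fusion,
// which uses learned, per-pixel weights on the Neural Engine; it just shows
// the basic idea of merging several exposures into one image, pixel by pixel.

struct Frame {
    let pixels: [Double]   // grayscale values in [0, 1], row-major
    let exposure: Double   // relative exposure time (short vs. long)
}

/// Fuses frames pixel-by-pixel using a weighted average.
/// Here, longer exposures get more weight because they tend to carry less
/// noise; a real pipeline would pick weights per pixel, per frame.
func fuse(_ frames: [Frame]) -> [Double] {
    guard let first = frames.first else { return [] }
    let pixelCount = first.pixels.count
    let totalWeight = frames.reduce(0) { $0 + $1.exposure }
    var fused = [Double](repeating: 0, count: pixelCount)

    for frame in frames {
        let weight = frame.exposure / totalWeight
        for i in 0..<pixelCount {
            fused[i] += weight * frame.pixels[i]
        }
    }
    return fused
}

// Example mirroring the capture sequence Apple described: eight pre-shutter
// frames (four short, four secondary) plus one long exposure at shutter press.
let shortFrames = (0..<4).map { _ in Frame(pixels: [0.40, 0.50, 0.60], exposure: 1) }
let secondaryFrames = (0..<4).map { _ in Frame(pixels: [0.45, 0.50, 0.55], exposure: 1) }
let longFrame = Frame(pixels: [0.42, 0.50, 0.58], exposure: 4)

let result = fuse(shortFrames + secondaryFrames + [longFrame])
print(result)  // fused pixel values
```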
Schiller called it "computational photography mad science."
While Apple didn't say so explicitly, this is clearly the company's answer to Google's Night Sight feature.