Samsung has applied approximately 60 new AI models run by the neural processing unit (NPU) to optimise the functions of the Galaxy S22 Ultra smartphone camera, a company executive said.
This has allowed the South Korean tech giant to offer camera experiences that can satisfy casual users with the best photographs possible and professional users with RAW files equivalent to those taken on DSLR cameras, said Joshua Sungdae Cho, vice president and head of visual software R&D at Samsung's MX Business, in an interview with ZDNet.
"We first applied NPUs to our smartphones three years ago," said Cho. "At the time, the NPU ran approximately 10 AI models. On the Galaxy S22 Ultra, there are now 60 AI models. Basically, the NPU is involved in nearly all functions of the cameras."
AI models are decision-making algorithms trained on a dataset to perform certain tasks, while NPUs are chips designed specifically to process such algorithms.
One particular AI model Samsung built for the Galaxy S22 Ultra powers portrait mode, where judging the depth between the subject and the background in order to separate the two -- a process known as segmentation -- is crucial.
"It is difficult to judge the depth with just the cameras alone. Some cameras use depth sensors, but instead of that, we had the AI models study a tremendous amount of depth-related images with various backgrounds and objects," Cho said.
"As we have a diverse range of cameras on the S22, we also have the AI model train using two cameras, and it studies images containing wiring, cups and glasses in a variety of conditions," the VP said.
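Samsung has not published its portrait pipeline, but the idea Cho describes -- use a learned segmentation mask to keep the subject sharp and soften everything else -- can be illustrated with a minimal sketch in pure Python. All function names and the box blur are invented for this illustration; the real system uses an NPU-run depth model, not a handcrafted mask.

```python
def box_blur(img, radius=1):
    """Naive box blur over a 2D grayscale image (list of lists of numbers)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def portrait_composite(img, mask, radius=1):
    """Keep subject pixels (mask == 1) sharp; replace background with a blurred copy.

    In the real pipeline, `mask` would come from the NPU's segmentation model.
    """
    blurred = box_blur(img, radius)
    return [[img[y][x] if mask[y][x] else blurred[y][x]
             for x in range(len(img[0]))] for y in range(len(img))]
```

However accurate the blur model, the result is only as good as the mask -- which is why, per Cho, the training data emphasises hard cases such as wiring, cups and glasses.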
The Galaxy S22 Ultra's cameras support 10x optical zoom and 100x digital zoom. It isn't the first Galaxy to do so: Samsung introduced 100x digital zoom two years ago with the Galaxy S20 Ultra, albeit to mixed reception at the time.
But with the development of the new AI models, coupled with the computational power offered by the 4nm mobile processor on the latest Galaxy S22 Ultra, Samsung was confident that the latest iteration of 100x digital zoom would impress users, the VP said.
"Processing time is key for a feature like 100x zoom and we have the computational resources to handle it. Some of the work is done by the NPU and the rest by the GPU. And this is all done in a manageable time frame because the zoom is essentially a specific area of the sensor -- for instance, a 12MP zone out of a 108MP sensor. So a zoom snap receives numerous inputs of this smaller 12MP zone.
"For images taken above 30x digital zoom, the camera takes between ten and twenty background images in one snap. Then it synthesises the multiple frames. Afterwards, it conducts another round of post-processing to determine which details to improve," Cho said.
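The two steps Cho outlines -- read out only the zoomed zone of the sensor, then merge many aligned frames of it -- can be sketched as follows. This is a toy illustration assuming perfectly aligned frames and simple averaging; the actual synthesis and post-processing steps are proprietary and far more sophisticated.

```python
def crop_zone(frame, top, left, h, w):
    """Read out only the zoomed-in zone of the full sensor frame."""
    return [row[left:left + w] for row in frame[top:top + h]]

def merge_frames(frames):
    """Average-merge several aligned frames of the same zone.

    Because random sensor noise differs from frame to frame, averaging
    ten to twenty frames suppresses it while the true detail survives.
    """
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / len(frames) for x in range(w)]
            for y in range(h)]
```

Working on the smaller cropped zone rather than the full 108MP readout is what keeps the processing time manageable, as Cho notes.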
The 108MP camera sensor on the Galaxy S22 Ultra can also group up to nine pixels into one to better absorb light at night. This technology is coupled with AI models trained to reduce noise and make sure the images keep "the feel of the night" even as brightness is raised, the VP said.
"Signal to noise ratio is badly affected when it is dark. So we came up with a new AI model to reduce noise. We had this AI model train on various photographs taken at night, from those taken by DSLR cameras to synthesised images we created that intentionally had the brightness or noise heightened."
Samsung also changed the way the cameras approach night photography in general: "In prior Galaxy S phones with a 108MP camera, we reduced their pixel count to 12MP, then upscaled the image to 108MP. For the Galaxy S22 Ultra, when photos are taken at night, the shot is taken by the 108MP camera and the 12MP camera. The rough details of the image are provided by the 12MP camera, while the fine details are provided by the 108MP."
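The described approach -- rough structure from the 12MP (binned) shot, fine detail from the 108MP shot -- is a form of detail fusion. Samsung hasn't detailed the algorithm; as a deliberately simple stand-in, the sketch below upscales the coarse frame and blends it with the full-resolution frame. The weighted blend is an invention for illustration, not Samsung's method.

```python
def upsample_nearest(img, k):
    """Nearest-neighbour upscale of a low-res (binned) frame by factor k."""
    return [[v for v in row for _ in range(k)] for row in img for _ in range(k)]

def fuse(coarse_up, fine, weight=0.5):
    """Blend rough structure from the upscaled coarse shot with fine detail.

    `weight` controls how much of the full-resolution frame survives;
    a real pipeline would decide this per-region rather than globally.
    """
    return [[(1 - weight) * coarse_up[y][x] + weight * fine[y][x]
             for x in range(len(fine[0]))] for y in range(len(fine))]
```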
All three models in the Galaxy S22 series offer auto and pro modes in their regular camera app, as well as the downloadable Expert RAW app, which produces multi-frame RAW files that can be edited in Adobe Lightroom.
"We really wanted the cameras on the Galaxy S22 series to cover the casual user and the professional user," Cho said. "Customers today are very knowledgeable about smartphone cameras. So with auto mode, which is aimed at the majority of users, we have the scene optimiser turned on by default to give the best colours for commonly taken scenes. We also wanted the screen to be clear of icons, while the camera values are now displayed numerically. We wanted the user interface to be as simple as possible."
While Pro mode can take RAW files, the Expert RAW app offers 16-bit multi-frame RAW files.
"We fully utilised the NPU and AI models for the 16-bit RAW files. A snap will collect twenty shots of the background and users can use up to four cameras for the Ultra model simultaneously," the VP said.
DSLR cameras have large sensors that allow them to take RAW images with little noise; until now, this has been difficult for smartphone cameras to emulate in low-light settings, as they pack smaller sensors. With the computational power offered by the NPU on the Galaxy S22 series, however, their 16-bit RAW files match those taken by DSLR cameras, according to Cho.
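Why would stacking twenty shots close the gap with a larger sensor? Averaging N frames of independent random noise reduces that noise by roughly a factor of the square root of N -- about 4.5x for twenty frames. The sketch below demonstrates the statistics with synthetic data; the signal level, noise level and frame count are made up for the demonstration.

```python
import random
import statistics

def stack_raw(frames):
    """Average N aligned RAW frames pixel-by-pixel.

    Uncorrelated noise shrinks by roughly sqrt(N); the signal does not.
    """
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

random.seed(0)
true_signal = 100.0                      # hypothetical pixel value
frames = [[true_signal + random.gauss(0, 4) for _ in range(1000)]
          for _ in range(20)]            # 20 noisy captures of the same scene
stacked = stack_raw(frames)

noise_single = statistics.pstdev(frames[0])   # ~4
noise_stacked = statistics.pstdev(stacked)    # ~4 / sqrt(20) ~= 0.9
```

This is also why the extra bit depth matters: the averaged values fall between the original integer levels, and a 16-bit container preserves that recovered precision.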
"Professionals today want RAW files from their smartphones. A smartphone is a very convenient tool for them, as one smartphone gives them four lenses in the palm of their hand. With DSLR cameras you have to carry the four lenses around. We really wanted to deliver to this group the same experience on our smartphones as they would get on a DSLR. Every function on a DSLR is offered on the Galaxy S22 series, including post-editing."
Every year, the processing power of NPUs doubles, and this trend is expected to continue for at least the next five years, the Samsung VP said.
Today's NPUs can handle 16-bit arithmetic and floating-point arithmetic, with the speed and precision of that arithmetic expected to keep increasing in the coming years, Cho said.
Coupled with this increase in computational resources, the continued evolution of computational photography -- digital image capture and processing techniques -- is expected to keep improving smartphone cameras.
"The most talked about topic in computational photography is multi-camera technology. Researchers and companies are still investigating what camera setup is best. We could add more cameras or fewer cameras. We are still looking for the optimum multi-camera setup," the VP said.
"Cameras are the centerpiece of smartphones today and we expect this trend to continue for the next decade. While we currently apply 60 AI models, we expect all functions of the camera, which are run by hundreds of algorithms, to eventually be run by the NPU.
"16-bit RAW files currently have limitations in terms of resolution. This will need continuous improvement. When it comes to 4K and 8K videos, to improve the real-time resolution, we need even more computational power than today. This will take two to three years from now."