The iPhone 11, 11 Pro, and 11 Pro Max support Deep Fusion, introduced by Apple in iOS 13.2. The technology uses the processing power of the A13 Bionic chip to compose images with less noise and higher quality, building each photo from a series of extra shots the device captures without the user even noticing. Deep Fusion is not the only image processing technology in the iOS arsenal, which also offers Night Mode and Smart HDR.
Deep Fusion combines nine images to form the final picture. Even before the user taps the shutter button in the camera app, the iPhone has already captured at least four of these photos. After the tap, four more are captured, complemented by a final, longer-exposure shot.
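The capture timeline above can be sketched as a rolling buffer: frames accumulate continuously during preview, so some already exist when the shutter is tapped. This is a hypothetical illustration of the idea only; the class, method names, and frame labels are invented, not Apple's API.

```python
from collections import deque

class FrameBuffer:
    """Rolling buffer that always holds the most recent pre-shutter frames."""

    def __init__(self, size=4):
        # deque with maxlen silently discards the oldest frame when full.
        self.frames = deque(maxlen=size)

    def on_new_frame(self, frame):
        # Called continuously while the camera preview is running.
        self.frames.append(frame)

    def on_shutter(self, post_frames, long_exposure):
        # Pre-shutter frames + post-shutter frames + one long-exposure shot.
        return list(self.frames) + post_frames + [long_exposure]

buf = FrameBuffer(size=4)
for i in range(10):  # preview running; only the last 4 frames are kept
    buf.on_new_frame(f"pre_{i}")
burst = buf.on_shutter([f"post_{i}" for i in range(4)], "long_exposure")
print(len(burst))  # 9 frames feed the fusion step
```

The deque-with-maxlen pattern keeps memory bounded while guaranteeing the freshest frames are available the instant the shutter fires.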
All images are analyzed immediately so that the A13 Bionic's AI processing can identify the best samples from each one. The iPhone then combines these best excerpts from each photo to form the final image.
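A toy sketch of that selection step: for each pixel position, keep the value from whichever frame scores best there. The scoring here is a stand-in, since Apple's actual per-pixel weighting is not public.

```python
def fuse(frames, scores):
    """Pick, per pixel position, the value from the highest-scoring frame.

    frames, scores: lists of equal-length pixel lists (one list per frame).
    """
    width = len(frames[0])
    out = []
    for x in range(width):
        # Index of the frame with the highest score at this pixel.
        best = max(range(len(frames)), key=lambda i: scores[i][x])
        out.append(frames[best][x])
    return out

# Two tiny 3-pixel "frames" with made-up quality scores:
frames = [[10, 200, 30], [12, 180, 90]]
scores = [[0.9, 0.2, 0.5], [0.1, 0.8, 0.7]]
print(fuse(frames, scores))  # [10, 180, 90]
```

Each output pixel comes from a different frame depending on where that frame happened to be sharpest, which is the core idea behind fusing nine captures into one.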
The result is that photos produced with Deep Fusion tend to have better contrast and sharpness, and benefit from processing that reduces noise. In photography, noise is the distortion formed by multicolored dots scattered across the image, especially common in shots taken in low light.
Because noise differs from frame to frame, the iPhone can isolate the best regions from the images captured in the process, condensing everything into a final photo with a lower noise level.
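Why multiple frames help can be shown with a small simulation: because the random noise in each capture is independent, averaging several frames cancels much of it out. The numbers below are invented for illustration.

```python
import random
import statistics

random.seed(0)
TRUE = 128.0      # the "real" brightness of one pixel
NOISE_SD = 10.0   # standard deviation of the per-capture sensor noise

def capture():
    # Each capture sees the true value plus independent random noise.
    return TRUE + random.gauss(0, NOISE_SD)

# Average error of a single capture vs. a 9-frame average, over many trials.
single_err = statistics.mean(abs(capture() - TRUE) for _ in range(2000))
multi_err = statistics.mean(
    abs(sum(capture() for _ in range(9)) / 9 - TRUE) for _ in range(2000)
)
# Averaging 9 independent frames shrinks the noise by roughly a factor of 3
# (the square root of 9).
print(round(single_err, 1), round(multi_err, 1))
```

This is the statistical backbone of multi-frame photography: the real scene is identical in every frame, while the noise is not, so combining frames keeps the former and suppresses the latter.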
Beyond Noise Elimination
Using several samples of the same scene to form the final photo does more than make the image look cleaner, with less noise. Another advantage of the technology is that the AI processing ends up highlighting traits and elements that tend to be less prominent in photos generated by smartphones.
Because the analysis that assembles the final photo examines the results pixel by pixel, the system tends not only to isolate noise and other artifacts but also to emphasize fine details. The most noticeable examples are textures, such as fabrics, skin, hair, or even the surface of a wall or floor. With Deep Fusion, the final picture shows more granularity and reveals details of these elements.
Long Exposure and HDR
In photography, long exposure means a photo captured with the shutter open for a longer time. In the Deep Fusion process, the iPhone takes a long-exposure shot to compensate for poor lighting in the environment. This extra shot also plays a key role in creating HDR images.
At the beginning of the process, within that first group of photos, the iPhone captures one with a negative exposure value, producing a much darker photo.
The combination of these two extremes, the dark negative-exposure frame and the bright long-exposure frame, is essential for the system to create the photo's HDR. The technique, which improves image quality by widening the range between the darkest and brightest points, benefits from having two photos that define those extremes.
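A toy sketch of why the two exposure extremes help: at each pixel, prefer the frame whose value is best exposed, i.e. far from being crushed to black or blown out to white. This mirrors the well-exposedness idea from exposure fusion; it is an illustration with made-up values, not Apple's actual algorithm.

```python
def well_exposed(v, mid=128):
    # Higher is better; the score peaks at mid-gray and falls off
    # toward pure black (0) and pure white (255).
    return -abs(v - mid)

def hdr_merge(dark, long_exp):
    """Per pixel, keep the value from the better-exposed frame."""
    out = []
    for d, l in zip(dark, long_exp):
        out.append(d if well_exposed(d) >= well_exposed(l) else l)
    return out

# A bright sky pixel (blown out in the long exposure) and a shadow pixel
# (crushed in the dark frame):
dark     = [180, 5]     # negative-EV frame keeps the sky, loses the shadow
long_exp = [255, 110]   # long exposure keeps the shadow, blows out the sky
print(hdr_merge(dark, long_exp))  # [180, 110]
```

Each extreme contributes the part of the scene the other one lost: the dark frame supplies usable highlights, the long exposure supplies usable shadows.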
Deep Fusion is a powerful tool designed to work without the user even noticing (although it cannot be used in burst mode, since the system cannot capture the extra photos), and it signals a different strategy from Apple when it comes to photography.
Instead of joining the race for ever more megapixels, Apple seems more interested in working around the limitations of smartphone digital photography through processing and software, a path already followed successfully by Google with the Pixel line.