DeNoise AI findings

First of all, I think DeNoise AI is a game changer. It really does reduce or eliminate grain without reducing sharpness. But it has some quirks.

It doesn’t work with all files. I’ve posted elsewhere that Panasonic RW2 files process with wildly skewed color and very low contrast. Saving in DNG format helps a lot but doesn’t completely solve the problem. RayC has posted on this forum that it can’t read his Canon CR2 files. On the other hand, my Nikon NEF files process beautifully. Here are before and after images at ISO 12,800 from NEF files:

The algorithm doesn’t reduce noise equally. It removes noise from highlights before shadows, and it leaves patches of noise near high-contrast edges. This can be handled by cranking up the strength, at the cost of less effective detail restoration.

Finally, and amazingly, DeNoise subtly distorts the image and reveals image area outside the frame of the original capture. I have no idea how it does this. See the face mask on the left and the skate at the bottom.


The pixel dimensions increase by about 4% when it does this. The DeNoised image is distorted, somewhat like a Spherize filter at a very low setting. To demonstrate this, I used Auto-Align Layers in Photoshop, then changed the blend mode of the top layer to Exclusion. A perfect match would turn the image black. Where you see color is where the DeNoised image departs from the original.
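For anyone who would rather script the same check than do it in Photoshop, here is a rough sketch in Python using Pillow and NumPy. The function name is mine, and the resize step is only a crude stand-in for Photoshop's Auto-Align Layers, so treat this as an approximation, not an exact reproduction of the workflow above.

```python
import numpy as np
from PIL import Image

def compare(original: Image.Image, denoised: Image.Image) -> np.ndarray:
    """Return the per-pixel absolute difference between two frames."""
    # The ~4% growth in pixel dimensions would show up here
    print("original:", original.size, "denoised:", denoised.size)
    if denoised.size != original.size:
        # Crude alignment: just scale back to the original dimensions
        denoised = denoised.resize(original.size)
    a = np.asarray(original, dtype=np.int16)
    b = np.asarray(denoised, dtype=np.int16)
    # Nonzero pixels are where the DeNoised frame departs from the
    # original, the same information the Exclusion blend mode shows
    return np.abs(a - b)
```

Anywhere the returned array is nonzero corresponds to the colored areas in the Exclusion comparison; a perfect match would return all zeros.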

This prevents you from layering the DeNoised image over a copy of the original, to brush a bit of noise back into the shadows, for instance.

In conclusion, DeNoise AI is a very useful app (or filter) for the right file type but you need to understand what it’s doing.


Interesting analysis of the DeNoise program. With respect to the ‘reveals image outside the frame…’ comment - could that be in part due to lens correction on your original photo versus the denoised photo? It would seem odd to me that DeNoise could/would create pixels that did not exist in the original photo.

Most, if not all, camera manufacturers have what they call a “safe area,” which basically means that not all pixels recorded on the sensor are reproduced in your image processing software. Affinity Photo, and perhaps others I don’t know about, actually shows all of those pixels: any image I process in Affinity is a few hundred pixels larger in both dimensions than it is in any other software I use, with extra edge detail shown. Perhaps we’re seeing something similar here? Or perhaps a combination of that and lens correction being added/removed?


Thanks, Paul, that sounds like the most likely explanation. Perhaps I can use DeNoise with corrections at zero to recover a few extra pixels if needed.