Temporal Noise Reduction using Photo Bursts

As we’ve seen with VEAI, temporal coherence makes it possible to get much better results when there are multiple frames/images to draw from versus a single image. This got me thinking about using VEAI to denoise “bursts” of photos as though they were short video clips. It turns out this works great! But as I played with it, I realized that things like the RAW denoise model are only available in DeNoise AI.

The request: It would be great for DeNoise AI to have the ability to use VEAI-like temporal denoising whenever a photo was taken as part of a short burst. This is currently possible in VEAI, but requires a bit of legwork as it’s not the primary use case for the tool, unlike DeNoise AI. I imagine the ability to import a burst of images, choose a “primary image”, then receive a processed “primary” image as output.

I can imagine this working not only for DeNoise AI, but for Gigapixel AI and Sharpen AI as well.

"The request: It would be great for DeNoise AI to have the ability to use VEAI-like temporal denoising whenever a photo was taken as part of a short burst."

I don’t know if there is any particular benefit for you personally if this were part of DeNoise AI, but this has been possible in Photoshop for a long time. You can simply stack your photos in Photoshop and apply a median (or similar) filter to the stack to do this temporal denoising. It also works for removing crowds in public places in the same way.
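To illustrate the median-stack idea described above, here is a minimal NumPy sketch (a hypothetical stand-in for Photoshop’s median stack mode, using synthetic data): several noisy captures of a static scene are stacked, and the per-pixel median across the stack suppresses the random noise.

```python
import numpy as np

# Hypothetical burst: 8 noisy captures of the same static scene.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # stand-in "scene"
burst = clean + rng.normal(scale=0.1, size=(8, 64, 64))

# Per-pixel median across the temporal axis -- the same idea as
# Photoshop's median stack mode applied to a photo burst.
denoised = np.median(burst, axis=0)

# The stacked result is closer to the clean scene than any single frame.
single_err = np.abs(burst[0] - clean).mean()
stack_err = np.abs(denoised - clean).mean()
print(single_err > stack_err)  # True
```

Because the median ignores outliers, the same operation removes anything that appears in only a minority of frames, which is why the crowd-removal trick works too.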

I suppose DeNoise could improve on this by using AI to detect only the noise and remove it, while keeping other moving objects, like a crowd of people, intact. Although I don’t know how much that would help, considering that whatever the AI in DeNoise is doing is very close to that already.

It sounds like that would work well for static scenes or photos shot on a tripod, but temporal-coherence ML techniques aren’t limited to that, thankfully.

Perhaps. But DeNoise already uses AI prediction to deal with moving scenes and anything with noise. Temporal noise reduction as used in video applications is a little better than static noise reduction, but with no AI to predict what is what, there is little advantage. Temporal noise reduction in video is only slightly better than static noise reduction while requiring a lot more processing, and both lag behind AI noise reduction. To get a real benefit from the multiple burst shots you suggest, you would have to shoot a static or near-static scene and align the frames later in post-production. Otherwise you are likely to get better results with less shooting time by using existing AI methods.
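The align-then-merge step mentioned above can be sketched without any AI at all. This is a hypothetical NumPy example on synthetic data: each burst frame is the scene shifted by a few pixels (simulating camera shake), the integer shift is estimated from the FFT cross-correlation peak, and the frames are averaged only after realignment.

```python
import numpy as np

rng = np.random.default_rng(42)
clean = rng.random((32, 32))  # textured stand-in "scene"

# Simulate a handheld burst: each frame is the scene shifted by a few
# pixels (camera shake) plus sensor noise.
shifts = [(0, 0), (2, -1), (-1, 3), (1, 1)]
burst = [np.roll(clean, s, axis=(0, 1)) +
         rng.normal(scale=0.2, size=clean.shape) for s in shifts]

def estimate_shift(ref, frame):
    # Cross-correlation via FFT; the peak location gives the integer
    # translation to apply to `frame` to map it back onto `ref`.
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap offsets into the signed range [-n/2, n/2).
    return tuple(p if p < n // 2 else p - n for p, n in zip(peak, corr.shape))

ref = burst[0]
aligned = [np.roll(f, estimate_shift(ref, f), axis=(0, 1)) for f in burst]
merged = np.mean(aligned, axis=0)  # temporal average after alignment
naive = np.mean(burst, axis=0)     # averaging without alignment smears detail

print(np.abs(merged - clean).mean() < np.abs(naive - clean).mean())  # True
```

This only handles global integer translations; real burst pipelines also need sub-pixel and local (per-region) alignment, which is exactly where it gets hard for non-static scenes.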

"Temporal noise reduction as used in video applications is a little better than static noise reduction, but with no AI to predict what is what, there is little advantage."

Sorry, perhaps I should be clearer. I’m referring to AI temporal noise reduction, akin to the GAN-based temporally coherent upscaling techniques that TecoGAN and VEAI currently provide. These techniques are based on AI.

The VEAI interface is geared toward video input, so the request is to bring this technique to their photography apps.

Oh, I see. That might be different, then.