Trying to enhance very noisy video

Nyx HQ Denoise
Manual mode, all sliders at 100, except Add Noise 0, Dehalo 0, Anti-alias 0.
Not bad, acceptable.
Other models gave worse results.

I know I’m probably going to get some stick for this, but Topaz really isn’t good at this sort of repair and restoration. It’s a very good upscaler and frame interpolator that tries to do a lunch buffet of other things - usually rather poorly. There are other tools that are much better suited for this type of denoising. Personally, I would use an AviSynth denoising function like TemporalDegrain or SMDegrain before trying to do anything with Topaz here.
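For anyone who hasn't used AviSynth before, a minimal .avs sketch might look like this (assuming FFMS2 for source loading plus the TemporalDegrain script and its MVTools/FFT3DFilter dependencies are installed; the input path is a placeholder):

```avisynth
# Load the source clip. FFVideoSource comes from the FFMS2 plugin;
# "input.avi" is just an illustrative path.
FFVideoSource("input.avi")

# TemporalDegrain with its default settings. Under the hood it uses
# MVTools motion compensation plus an FFT3D-based pre-filter.
TemporalDegrain()
```

Save that as a .avs file and feed it to your encoder of choice.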


TemporalDegrain looks very good. Now trying to get AviSynth working and to open .avs files with Topaz via a virtual filesystem.

Picture after TemporalDegrain2

I think I will try Iris on it.

PS: oh no, saving with XviD4PSP to FFV1 (PCIe 4.0 x4 Samsung 980 Pro SSD) runs at only 0.7 fps.

Looks a lot better. Might even serve as an input for conservative upscaling in TVAI.
The problem I’ve found with all Topaz products is that they have a complete blind spot for over-sharpening (often producing halos and enhancing artifacts instead of image detail). Gigapixel, Photo AI, DeNoise, TVAI etc. all suffer from exactly the same problem, requiring one to basically export images in every possible combination of settings and then look through them afterward to see whether any of the results is better than the original, and if so pick the best one. Where the best one typically tends to be the sharpest one that isn’t over-sharpened.

So hit-and-miss is how I’d summarize the entire product portfolio.

I think Topaz has a huge opportunity here. If they were to spend some R&D time on oversharpening detection, then it’d save their customers a tonne of time, and it’d elevate their entire product line to a professional level.

The problem doesn’t seem intractable, since detecting halos and over-sharpening is a solved problem (hint: frequency analysis).

That looks like the settings might be a little heavy because it still looks a little oversmoothed to my eye. Try something just a bit lighter like (degrain=1,ov=2,hq=1) and see how that works. And yes, it’s terribly slow. You can try a couple of different things to get a little more out of it. You can either use (prefetch=x) at the end of your script to optimize multicore usage, or try setting gpu=true in your TDG parameters - but not both. My experience is that gpu=true is the better option, but ymmv.
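As a sketch, the lighter settings and the two mutually exclusive speed-up options from the post above might look like this (placeholder input path; parameter values are the ones suggested above):

```avisynth
FFVideoSource("input.avi")

# Lighter denoise settings, as suggested above.
TemporalDegrain(degrain=1, ov=2, hq=1)

# Option A: improve multicore usage with Prefetch at the very end
# of the script (AviSynth+). The thread count here is illustrative.
Prefetch(4)

# Option B (instead of Prefetch - never both): let TemporalDegrain
# use the GPU by passing GPU=true in its parameters, e.g.:
# TemporalDegrain(degrain=1, ov=2, hq=1, GPU=true)
```

Per the advice above, pick one of the two options; combining them hurts performance.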

Also take a look at SMDegrain and see if you can get a comparable result at a better speed. The guts of both functions are built on the same mvtools base.
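A comparable SMDegrain call, for reference (a sketch only; SMDegrain has many parameters, and these are just common starting values, not tuned for this clip):

```avisynth
FFVideoSource("input.avi")

# SMDegrain builds on the same MVTools base as TemporalDegrain.
# tr = temporal radius, thSAD = denoising strength; raise thSAD
# for stronger grain removal.
SMDegrain(tr=2, thSAD=300)
```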

Thanks, will try SMDegrain.
Anyway, 0.7 fps is too slow; about 1.4 seconds to process one 1080p frame is too much. And it processes even the black fade-outs at this same slow speed.

I have a 10-core CPU, and Prefetch(20) gave me the same speed. It uses the GPU, not the CPU.

I’m not sure I understand exactly what you mean there, but just to be clear, using Prefetch and GPU together will kill your performance. It has to be one or the other. Also, I wouldn’t use Prefetch(20) if you have 10 cores. YMMV, but I get better performance sticking to the number of physical cores instead of logical cores. I have 12 cores, and Prefetch(12) is my sweet spot. But no matter what you do, TemporalDegrain will be painful on 1080p material.

10 cores, 20 threads. However, Prefetch(2) gave me the same speed.

That gave me 2.2 fps, and I don’t like the results.

hqdn3d gave me 75 fps, with results very close to TemporalDegrain2’s.
And hqdn3d is probably not the bottleneck here, since I’m saving to FFV1.
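For comparison, the hqdn3d call might look like this (assuming the hqdn3d AviSynth plugin; the 20 is the strength value used in this thread, far above the plugin's defaults):

```avisynth
FFVideoSource("input.avi")

# hqdn3d: a fast spatial + temporal denoiser. The first argument is
# the luma spatial strength; 20 is a heavy setting, as used above.
hqdn3d(20)
```

hqdn3d is also available as an ffmpeg video filter, which avoids AviSynth entirely if you only need this one step.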

TemporalDegrain2(degrainTR=2…: the best, but very, very slow.
hqdn3d(20…: very good and extremely fast.
TemporalDegrain2(degrainTR=1…: not good and still very slow.
I did not try SMDegrain; I read on Doom9 that it is also very slow (faster than TemporalDegrain2, but still very slow).