Tip: A three-step tuning method to recover bad-quality videos

Proteus is a general-purpose model. What you should know is that in auto mode it applies smoothing (Dehalo) and increases “Recover Compression” automatically, depending on the detected video quality, more so than other models.

The two- or three-pass method described above can result in strongly defined, exaggerated contours, which is sometimes great and sometimes not optimal.

Another method I used a long time ago, and which still gives overall great results, is to render from the source to the end resolution in one step with two different models, then overlap both videos in your video editor, i.e. blend the two videos into one by making one video track semi-transparent.
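The same 50/50 overlay can be sketched with plain ffmpeg instead of an editor's opacity slider. This is just a minimal illustration, not Mayday's exact workflow; the filenames are placeholders, and the first two commands only generate stand-in clips where your two model renders would go.

```shell
# iris.mp4 and proteus.mp4 stand in for the same source rendered to the
# same end resolution with two different models (placeholder clips here):
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=10 \
  -pix_fmt yuv420p -c:v libx264 iris.mp4
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=320x240:rate=10 \
  -pix_fmt yuv420p -c:v libx264 proteus.mp4

# 50/50 blend of the two renders -- the ffmpeg equivalent of making one
# track semi-transparent over the other in a video editor:
ffmpeg -y -i iris.mp4 -i proteus.mp4 \
  -filter_complex "[0:v][1:v]blend=all_expr='A*0.5+B*0.5'" \
  -pix_fmt yuv420p -c:v libx264 -crf 16 blended.mp4
```

Both inputs must share the same resolution and frame rate for `blend` to work, which is exactly the case when both renders target the same end resolution.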

Upscaling with Iris yields more detail than Proteus. But Proteus often gives you a softer result, because edges are a little more frayed, and I like its noise removal a little better than Iris’s. So both have their strengths and weaknesses.


It makes sense to me that grainy source material would fare much better with other models than Gaia. I only use Gaia on videos that look close to perfect before enlarging, and I love the way it doesn’t change the look.


Thanks again Mayday. Your explanations on what’s going on behind the scenes are really helpful when navigating and choosing the different models.

Why do you adjust them in tandem with the same value? What’s the correlation you’ve found between those two?

Hi Mayday,
thanks for sharing your experience. I will test it the weekend.
May I ask what your experience is with an old, color-washed-out 480p video, originally shot for VHS and digitized later?
I should denoise the video prior to step 1 (Iris) using e.g. Neat Video, correct?
Should I run the color correction with e.g. DaVinci Resolve after the last Topaz step, or better before upscaling with Iris?
Thanks for your time

I would bet/guess that doing the upscale portion last would probably give you the best results.


Sorry, above I meant “Improve Detail”, not “Recover Detail”.

Using the same value for “Improve Detail” as for “Anti-Alias/Deblur” is just a rough guideline that I can recommend as a starting point with Proteus. For Iris, set “Deblur” to half the value of “Improve Detail”; for example, if you have Improve Detail at 30, set Deblur to 15.

You get the best result by exhausting the potential of all parameters. For example, you can achieve a similar effect with “Improve Detail” alone, but then you have to go too high and you get artifacts; when you use both “Improve Detail” and “Deblur” together, you get a similar effect at lower values and with fewer artifacts.

Of course, all parameters have their own limits depending on the source. These are just starting values; lower and raise each one a bit and watch whether the result gets worse or better.

P.S. The counterpart of “Deblur” is “Dehalo”. Dehalo doesn’t cancel out the artifacts 1:1, but a similar effect is still there. Use low values of Dehalo (1 to 5 often fits) to smooth out edges so the image becomes softer; you lose a little sharpness, but well dosed you gain more plasticity!


You need to remove noise with Neat Video, not TVAI.

I have no experience with analog VHS material, which means I can’t say whether Iris is the right model, but give it a try. I’ve reconsidered and think it’s better not to denoise in a separate pass when you can do it in one go. Use a separate denoise pass only when you have heavily noisy video that needs more than 100% denoise.

With the Iris model, set the upscale to the desired end resolution; then set the denoise level in Iris first until it fits, and then increase the other enhancer parameters.

I would skip the Proteus step and try a second pass instead: import the result and use “Artemis HQ” as the finisher.

From what I’ve seen, it doesn’t matter whether you do the color correction first or at the end. I’ve done both many times and can’t see any difference; it gives me the same result.

I get good results with:

Step 1:
Enhancement to 800p
IRIS (mostly MQ)
15% fix compression, 25% improve detail, 5% reduce noise, 100% recover detail, 0% everything else
Second Enhancement
scale 1x

Step 2:
Enhancement to 1080p
25% improve detail, 25% sharpen, 50% deblur, 100% recover detail, 0% everything else
Amount 5, Size 2


I’ve used your process (only with Gaia added to Step1 as a second enhancement), and I’m quite impressed so far. :+1: :slightly_smiling_face:



Did you add artificial noise?
I’m noticing a significant color-shift, as if the wrong color matrix was being used (Rec601 vs 709 mixup). Other than that, yeah this software can do wonders sometimes :slight_smile:

Hi Mayday, I must check this 3-step method on my files:

If you or anybody else could help to create much nicer upscale 2x and 4x, I would be grateful.

I usually add grain to the picture (I’m not sure I’ve used noise in this example, but I have toyed with that setting); the colour shift happens with nearly every video I upscale.

I wish this wouldn’t happen.

I didn’t spend too much time on this, but what do you think of those?

P.S. what are the 3-Steps you took?

I’ve noticed that separate runs with different models often produce different results compared to a single run with a second enhancement added. I would have thought that the reduced FPS from adding a second enhancement was the result of TVAI doing multiple passes on each frame, but apparently that’s not what’s happening.


Yeah, I’ve noticed that as well. I can’t really form a hypothesis for why TVAI fails so consistently at interpreting colors correctly when the source clip’s dimensions hover around the common SD resolutions. Years ago, old FFMPEG versions (and most players) would completely ignore any color metadata in the clip and instead decide on their own what input color space to use, with this very naive and poor rule: “if rez < 720p set input color to REC.601, else 709”.
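That old fallback rule can be sketched in a couple of lines (this is just a paraphrase of the heuristic described above, not ffmpeg's actual source):

```shell
# Pick a color matrix purely from resolution, ignoring any metadata --
# the naive legacy heuristic: below 720p assume BT.601, otherwise BT.709.
height=480   # hypothetical clip height (e.g. a DVD source)
if [ "$height" -lt 720 ]; then
  matrix="bt601"
else
  matrix="bt709"
fi
echo "$matrix"
```

For a 480p clip this prints `bt601`, which is why SD-sized material is exactly where such guessing goes wrong when the clip was actually encoded with 709 coefficients.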

This was fixed in ffmpeg years ago, though, and now it uses whatever color metadata exists in the source clip, falling back to the old behavior only if the clip has no metadata. I always convert any non-HDR clip to REC709 before starting any video-processing work; used this way, vanilla ffmpeg never causes any color-related problems. However, TVAI seems to not always respect the color metadata, for some strange reason. The only hypotheses I have are that they’re either using a very old version of ffmpeg (ruled out, since the version information indicates they’re using pretty much bleeding-edge builds from the ffmpeg source trunk) or that they’re not using the ffmpeg/av*-libs to load the clip but do the loading on their own. Only the latter seems plausible as an explanation for the observation.

Anyway, one workaround I’ve found is to take the TVAI output and do another encode pass with ffmpeg where I force the input color space to e.g. REC601 and output to 709 (again!), or the inverse (since sometimes it seems they’re adding metadata claiming the TVAI output clip is 709 when in fact they’ve used the 601 coefficients, completely messing everything up for further processing).
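That workaround can be sketched roughly as follows; the filenames are placeholders, the first command only generates a stand-in clip where the real TVAI output would go, and you'd swap the `iall`/`all` values if the mismatch turns out to be the other way around.

```shell
# tvai_output.mp4 stands in for a clip exported by TVAI (placeholder here):
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=10 \
  -pix_fmt yuv420p -c:v libx264 tvai_output.mp4

# Re-encode, force-interpreting the input as BT.601 and converting to
# BT.709, then tag the output stream as BT.709 explicitly:
ffmpeg -y -i tvai_output.mp4 \
  -vf "colorspace=iall=bt601-6-625:all=bt709" \
  -color_primaries bt709 -color_trc bt709 -colorspace bt709 \
  -pix_fmt yuv420p -c:v libx264 -crf 16 fixed.mp4
```

The `colorspace` filter's `iall` option overrides whatever input color properties the clip claims, which is the whole point here: the conversion uses the matrix you believe was actually applied, not the one in the metadata.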

PS. Side-note, I’m working on a model to detect original color spaces via heuristics (image features) instead of meta-data, so I can automate the correction of the TVAI output. It’s a real PITA and time sink to have to double check every single clip produced by TVAI manually :frowning:


The odd thing is they had this colour-shift-issue fixed a few version-points ago. (I complained about it often enough…)

Thank you for the effort. I have compared the files, especially the footage of the marmot, and would like to share my comments.
The NXF1 ddv3 file and the cleaned ddv3 show some strange patterns on the grass, and the cleaned ghg5 is even worse. The cleaned dtv4 is rather soft, and to be honest none of them is good… :frowning:
I cleaned the footage, then upscaled it as you did to 1440x1080 using NNEDI3, and the result is quite good and free of artifacts, but I’m still looking for a better final output.
Below I attach the file.


I find that ddv1 does a better job of deinterlacing without creating patterns out of things like grass and gravel that should remain more random-appearing. A bit too soft for a final enhancement, but a good base for additional steps.

I’m working on one right now in which parts of the video look best with ddv1+Proteus and others are best with ddv1+Artemis. So I’m doing both and will be editing the better-looking parts of each together later.