Proteus is a generally usable model. What you should know is that in Auto mode it smooths (Dehalo) and increases "Recover Compression" automatically depending on the detected video quality, more so than other models.
The two- or three-pass method described above can result in strongly defined, exaggerated contours; sometimes that looks great, sometimes it is not optimal.
Another method I used a long time ago, and which still gives overall great results, is to render with two different models from the source to the final resolution in one step each, then overlap both videos in your video editing software. That is, blend the two videos into one by making the top video track semi-transparent.
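If you would rather do the blend outside an editor, an equal-weight overlay of two renders can be approximated with ffmpeg's `blend` filter. A minimal sketch (the file names are placeholders, and it assumes both renders have identical resolution and duration; an equal-weight average corresponds to a 50%-opacity top track in an NLE):

```python
# Sketch: average two model renders into one clip via ffmpeg's blend filter.
# "iris_render.mp4" and "proteus_render.mp4" are hypothetical file names.
def blend_command(clip_a: str, clip_b: str, out: str) -> list[str]:
    return [
        "ffmpeg",
        "-i", clip_a,
        "-i", clip_b,
        # Per-pixel average of the two video streams (50/50 blend).
        "-filter_complex", "[0:v][1:v]blend=all_mode=average[v]",
        "-map", "[v]",
        "-map", "0:a?",          # keep audio from the first clip if present
        "-c:v", "libx264", "-crf", "16",
        out,
    ]

cmd = blend_command("iris_render.mp4", "proteus_render.mp4", "blended.mp4")
print(" ".join(cmd))
```

To actually run it, pass the list to `subprocess.run(cmd, check=True)`.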
Upscaling with Iris yields more detail than Proteus, but Proteus often gives a softer result because edges are a little more frayed, and I like its noise removal a little better than Iris's. So both have their strengths and weaknesses.
It makes sense to me that grainy source material would fare much better with other models than with Gaia. I only use Gaia on video that already looks close to perfect for enlarging, and I love the way it doesn't change the look.
Thanks for sharing your experience. I will test it over the weekend.
May I ask what your experience is with an old, color-washed-out 480p video, originally shot for VHS and digitized later?
I denoise the video prior to step 1 (Iris) using e.g. Neat Video, correct?
And do I run the color correction with e.g. DaVinci Resolve after the last Topaz step, or better before upscaling with Iris?
Thanks for your time
Sorry, above I meant "Improve Detail", not "Recover Detail".
Using the same value for "Improve Detail" and "Anti-Alias/Deblur" is just a rough guideline that I can recommend as a starting point with Proteus. For Iris, set "Deblur" to half the value of "Improve Detail"; for example, if you have Improve Detail at 30, set Deblur to 15.
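The rule of thumb above is easy to write down. A tiny sketch (the halving rule is just my starting-point recommendation from this thread, not anything official from Topaz):

```python
# Starting-value rule of thumb from the thread:
#   Proteus: Deblur roughly equal to Improve Detail
#   Iris:    Deblur roughly half of Improve Detail
def starting_deblur(model: str, improve_detail: int) -> int:
    if model == "Proteus":
        return improve_detail
    if model == "Iris":
        return improve_detail // 2
    raise ValueError(f"no rule of thumb for {model}")

print(starting_deblur("Iris", 30))     # 15, matching the example above
print(starting_deblur("Proteus", 30))  # 30
```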
You get the best result by exhausting the potential of all parameters. For example, you can achieve a similar effect with "Improve Detail" alone, but then you have to go too high and you get artifacts; when you use both "Improve Detail" and "Deblur", you get a similar effect with lower values and fewer artifacts.
Of course, all parameters have their own limits depending on the source. These are just starting parameters; lower each one a bit, then increase it, and watch whether the result gets worse or better.
P.S. The opponent of "Deblur" is "Dehalo". Dehalo doesn't cancel out the artifacts 1:1, but a similar effect is still there. Use low values of Dehalo (often 1 to 5 fits); this can smooth out "edges" so that the image becomes softer. You lose a little sharpness, but well dosed, you gain more plasticity!
I have no experience with analog VHS material, so I can't say Iris is the right model, but give it a try. I've reconsidered and think it's better not to denoise in a separate pass when you can do it in one go. Use a separate denoise pass only when you have heavily noisy video that needs more than 100% denoise.
With the Iris model, set the upscale to the desired final resolution. After that, first adjust the denoise level in Iris until it fits, then increase the other enhancer parameters.
I think you can skip the Proteus step and instead try a second pass: import the result and use "Artemis HQ" as a finisher.
From what I see, it doesn't matter whether you do the color correction first or at the end. I've done both many times and can't see any difference; it gives me the same result.
I’ve noticed that separate runs with different models often produce different results compared to a single run with a second enhancement added. I would have thought that the reduced FPS from adding a second enhancement was the result of TVAI doing multiple passes on each frame, but apparently that’s not what’s happening.
Yeah, I’ve noticed that as well. I can’t really form a hypothesis as to why TVAI fails so consistently at interpreting colors correctly when the source clip dimensions hover around the common SD resolutions. In old FFmpeg versions (and most players) years ago, it would completely ignore any color metadata in the clip and instead decide on its own what input color space to use, with this very naive and poor rule: “if rez < 720p, set input color to REC.601, else 709”.
This was fixed in ffmpeg years ago, though, and it now uses whatever color metadata exists in the source clip, only falling back to the old behavior if the clip doesn’t have any metadata. I always convert any non-HDR clip to REC.709 before starting any video-processing work; used this way, vanilla ffmpeg never causes any color-related problems. However, TVAI seems to not always respect the color metadata, for some strange reason. The only hypotheses I have are that they’re either using a very old version of ffmpeg (discounted, since the version information indicates they’re using pretty much bleeding-edge builds from the ffmpeg source trunk) or that they’re not using the ffmpeg/av*-libs to load the clip but do the loading on their own. Only the latter seems plausible as an explanation for the observation.
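To illustrate why a wrong guess matters: BT.601 and BT.709 use different luma coefficients, so decoding YUV with the wrong matrix visibly shifts colors. A small sketch of both the coefficient difference and the old resolution-based fallback (my paraphrase of that rule, not ffmpeg's actual source):

```python
# Luma coefficients from the respective ITU-R specifications.
BT601 = (0.299, 0.587, 0.114)
BT709 = (0.2126, 0.7152, 0.0722)

def luma(rgb, coeffs):
    """Y' for an RGB triple in [0,1] under the given matrix coefficients."""
    return sum(c * v for c, v in zip(coeffs, rgb))

# The same pure-green pixel lands on noticeably different Y' values:
print(round(luma((0.0, 1.0, 0.0), BT601), 4))  # 0.587
print(round(luma((0.0, 1.0, 0.0), BT709), 4))  # 0.7152

# Paraphrase of the old resolution-based fallback described above:
def guess_matrix(height: int) -> str:
    return "bt601" if height < 720 else "bt709"

print(guess_matrix(480))   # bt601
print(guess_matrix(1080))  # bt709
```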
Anyway, one workaround I’ve found is to take the TVAI output and do another encode pass with ffmpeg where I force the input color space to e.g. REC.601 and output to 709 (again!), or the inverse (since sometimes they seem to add metadata claiming the TVAI output clip is 709 when in fact they’ve used the 601 coefficients, completely messing everything up for further processing).
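One way to script that workaround is with the `scale` filter's matrix options, which re-interpret the decoded YUV under a forced input matrix. A sketch (the file names are placeholders; swap the two matrices for the inverse case):

```python
# Sketch: build an ffmpeg command that forces the input color matrix on the
# TVAI output, converts to BT.709, and tags the result accordingly.
def fix_matrix_command(src: str, out: str,
                       in_matrix: str = "bt601",
                       out_matrix: str = "bt709") -> list[str]:
    # Force the interpretation of the source's YUV and convert matrices.
    vf = f"scale=in_color_matrix={in_matrix}:out_color_matrix={out_matrix}"
    return [
        "ffmpeg", "-i", src,
        "-vf", vf,
        # Write correct color metadata on the new encode.
        "-colorspace", "bt709",
        "-color_primaries", "bt709",
        "-color_trc", "bt709",
        "-c:v", "libx264", "-crf", "16",
        "-c:a", "copy",
        out,
    ]

print(" ".join(fix_matrix_command("tvai_output.mp4", "fixed.mp4")))
```

As above, run it with `subprocess.run(...)`, or just paste the printed command into a shell.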
PS. Side note: I’m working on a model to detect the original color space via heuristics (image features) instead of metadata, so I can automate the correction of TVAI output. It’s a real PITA and time sink to have to double-check every single clip produced by TVAI manually.