Hi all!
In my experience, when upscaling a blurry signal it is beneficial to downscale it first (this makes Video Enhance think the original footage is crisp).
My question is… what is the right way to work this into my workflow?
Because downscaling the footage externally first clearly throws away information, and the whole approach feels rather ham-fisted and convoluted, tbh.
What's the proper way to do this?
Is there a “blurriness” slider in one of the algos, or something similar?
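For what it's worth, the external downscale pre-pass being described can be approximated with a simple box-filter average. This is just a toy sketch on a grayscale frame stored as a list of lists — it is not Topaz's actual resampler, and in practice you'd use a proper video tool rather than pure Python:

```python
def box_downscale(frame, factor=2):
    """Downscale a 2D grayscale frame by averaging each
    factor x factor block (a plain box filter).

    `frame` is a list of rows of pixel values (0-255).
    Trailing rows/columns that don't fill a block are dropped.
    """
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [frame[y + dy][x + dx]
                     for dy in range(factor)
                     for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out


# A 4x4 frame of four flat 2x2 blocks collapses to one pixel each:
frame = [[0, 0, 2, 2],
         [0, 0, 2, 2],
         [4, 4, 6, 6],
         [4, 4, 6, 6]]
small = box_downscale(frame, factor=2)
# -> [[0.0, 2.0], [4.0, 6.0]]
```

The point of the toy: after a downscale like this, fine blur is averaged away, so the upscaler sees edges that look "native" at the smaller resolution — which is presumably why the trick fools the model.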
I have noticed this too, and I hate doing it for the same reason you do. It also introduces another variable: how much intermediate downscaling and sharpening is right to get the best final result while “throwing away” the least data?
The “deblur” slider in Proteus sometimes works okay, but I still often get better results by doing it as you described.
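The intermediate sharpening step mentioned above is typically some variant of an unsharp mask: blur the frame, then push each pixel away from its blurred value. A minimal pure-Python sketch (generic technique, not what Proteus or its deblur slider actually does; `amount` here is a hypothetical tuning knob):

```python
def unsharp_mask(frame, amount=0.8):
    """Sharpen a 2D grayscale frame: out = frame + amount * (frame - blur),
    where blur is a 3x3 box average with edge clamping."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # 3x3 neighborhood, clamped at the frame borders
            vals = [frame[min(max(y + dy, 0), h - 1)]
                         [min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1)
                    for dx in (-1, 0, 1)]
            blur = sum(vals) / 9.0
            out[y][x] = frame[y][x] + amount * (frame[y][x] - blur)
    return out


# Flat regions are untouched; lone bright pixels get pushed up:
flat = unsharp_mask([[5, 5, 5]] * 3)       # stays all 5.0
spike = unsharp_mask([[0, 0, 0],
                      [0, 9, 0],
                      [0, 0, 0]])
# center: blur = 1.0, so 9 + 0.8 * (9 - 1) = 15.4
```

Cranking `amount` up is the "how much sharpening is right" variable: too little and the downscale just loses detail, too much and you bake halos into the footage before the upscaler ever sees it.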