In my experience, when upscaling a blurry signal it helps to downscale it first (this makes Video Enhance treat the original footage as if it were crisp).
My question is… what is the right way to work with this?
Because downscaling the footage externally first clearly loses information, and the whole approach feels rather ham-fisted and convoluted, tbh.
What is the proper way to do this?
Is there a “blurriness” slider in one of the algorithms, for example?
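To see why the external pre-downscale is lossy, here is a toy sketch in plain Python (nothing Topaz-specific; the box filter and nearest-neighbour resize are illustrative assumptions): averaging samples on the way down cannot be undone on the way back up.

```python
# Toy illustration: downscale a 1-D "signal" by averaging adjacent pairs,
# then upscale by repeating samples. The round trip flattens detail,
# which is the information loss the pre-downscale workflow pays for.

def downscale_2x(signal):
    """Box-filter downscale: average each adjacent pair of samples."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def upscale_2x(signal):
    """Nearest-neighbour upscale: repeat each sample twice."""
    return [s for s in signal for _ in range(2)]

original = [10, 12, 50, 52, 10, 12, 50, 52]
round_trip = upscale_2x(downscale_2x(original))

print(original)    # [10, 12, 50, 52, 10, 12, 50, 52]
print(round_trip)  # [11.0, 11.0, 51.0, 51.0, 11.0, 11.0, 51.0, 51.0]
```

The fine 10-vs-12 variation is gone after the round trip; an upscaler can only hallucinate it back, which is exactly why this workflow is a trade-off rather than a free win.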
Hello, understanding pixels and how they are manipulated in depth will give you the concepts you need to apply them in Topaz AI.
Image size and resolution in Photoshop.
szabo, I know this feeling. Back in 2008 I captured VHS at 720p to get “HD”.
I am not sure of the best way to export it to a lossless file at around 300 px. 16-bit TIFF would be good, but Topaz doesn’t like folder inputs.
Got some stuff in my fail folder… waiting.
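For what it’s worth, one way to produce that kind of lossless intermediate is to have ffmpeg write a numbered 16-bit TIFF sequence. A minimal sketch that only builds the command line, to keep it self-contained (the helper name, filename pattern, and parameter choices are my assumptions; ffmpeg is not actually invoked here):

```python
# Build (but do not run) an ffmpeg command that scales a clip to ~300 px
# tall and writes a lossless 16-bit-per-channel TIFF sequence.
def tiff_export_cmd(src, height=300):
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",  # keep aspect ratio, force even width
        "-pix_fmt", "rgb48le",        # 16 bits per channel
        "frame_%06d.tif",             # numbered sequence, not a bare folder
    ]

print(" ".join(tiff_export_cmd("capture.mov")))
```

A numbered pattern like `frame_%06d.tif` is what sequence-aware importers generally expect, rather than an arbitrary folder of files.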
Are you sure? I use them a lot. Open the first image in the folder and it automatically opens it as a video.
I have noticed this too, and I hate doing it for the same reason you do. It also introduces a new variable: how much intermediate downscaling and sharpening gives the best final result while “throwing away” the least data?
The “deblur” slider in Proteus works OK sometimes, but I still often get better results by doing what you described.
Thank you! This was golden; I was too deep in the drag-and-drop workflow.