Discussion | Recover | Results Showcase

I’ve tried using Recover on different types of images, generally higher than 1024 pixels (I know it prefers lower-res images), people, landscapes, and the edges of objects come out with very hard, aliased edges, even in hair. It’s really unusable, unfortunately. Anyone else seeing this?

You answered your own question.

It’s made for garbage images, to make them look really nice.

Recover-processed images sometimes have large magenta halos and blobs.

I upload a 1000×1000 px, 72 dpi picture and try to use the Recover v2 model, but it doesn’t work; it always gets stuck and I have to force quit the app, whether I want to preview the image or just export.
Is anyone else experiencing the same issue?

No, I am not seeing that, but I am seeing it turn the whole picture BLACK!!!

Same here - V2 turned the entire photo black.

Same here - V2 is not working; it just generates blurry images. I have tried uninstalling and reinstalling everything several times, but nothing changes. MacBook Pro M2.


Hi.

Please contact Topaz Support; there’s an issue with the M2. Here’s the link to contact Topaz Support directly:

Topazlabs Support
support@topazlabs.com


I also just get black. New Windows 11 ASUS laptop with Intel(R) Iris(R) Xe Graphics.

Getting weird banding and overall bad-looking clouds and anything sky-related when using the new Recover v2 model. When you zoom in on clouds, it’s almost like it thinks clouds are skin, and they look really scaly. I really like the Recover model and need to use it, but this is unusable right now, please fix it! I also just downloaded the latest 8.3.4 version and it’s still the same as the last version.


Hello!

For issues with the recently released Recover V2, if you’re willing to report them and help us troubleshoot, we’d like to hear from you and collect details for investigation.

If you generate an unexpected result, please share your logs with us and attach your system profile to a message addressed to help@topazlabs.com.

Providing a screen capture of the before and after also helps a lot.

Thanks :nerd_face:

I installed a GeForce RTX 5070 Ti GPU yesterday and have been experiencing various issues since. In addition to the “Sharpening Standard” issue in Photo AI, there is also an issue with Gigapixel AI producing a pronounced “snakeskin” effect in Recover mode. In Photo AI, the sharpening filter produces lower-quality results in general. I am using the Topaz apps as Photoshop plug-ins.


Hello!

Recover is ideal for images with pixel dimensions of 1024 or less. Using images larger than this will produce an in-app warning and can result in undesirable over-texturizing.

Depending on the source image, pre-downscaling it to fit within 1024×1024 will provide the best experience for local processing and give cleaner results.
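
If you want to script that pre-downscale step, a minimal sketch with Python/Pillow might look like this (Pillow and the file names are just placeholders for illustration, not anything the app requires):

```python
from PIL import Image

# Fit the image inside a 1024x1024 box before running Recover on it.
# thumbnail() only ever shrinks, keeps the aspect ratio, and resizes in place.
img = Image.open("source.jpg")  # placeholder file name
img.thumbnail((1024, 1024), Image.Resampling.LANCZOS)
img.save("source_1024.png")     # save losslessly so no new JPEG artifacts are added
print(img.size)
```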

Depending on your intentions, use our core models to upscale and enhance most images. After importing an image, I recommend starting with Auto mode enabled and building from those settings.

Thanks :nerd_face:


Tyler, since 1024 (or 1024×1024) is 2 to the power of 10, and computing data is generally best handled in powers of 2, and it is often said on these forums that Recover V1 and Redefine (like the realistic NONE option) work best on images 1024×1024 or smaller, can it work BETTER (with the goal of extracting more fine detail) on smaller images of various dimensions?

Or, more specifically, can it be maximized (with the same goal) on images whose dimensions are powers of 2 smaller than 1024, such as 512×512 (2 to the ninth power = 512)?

Lastly, if you have a 1024×1024 image going into Recover V1, does having the resolution set at 72 ppi (versus something like 200 ppi or 300 ppi) make ANY difference in the results at all? Since the file is still 1024×1024…

Thank you!

Sorry, you lost me slightly with the maths talk.

To my knowledge, scale factors of 2x and 4x work best with generative models, versus something like 1.21x or 3x.

I would need to do some tests comparing 512 × 512 with a 4x upscale versus 1024 × 1024 with a 2x upscale to see how the results compare. While both result in a 2048 × 2048 image, the 4x would have more room for interpretation and might turn out to be the better result.

Then there is value in pre-downscaling, or resampling a source into fewer pixels, which condenses detail and can improve the final output as well.

Depending on the source, I would approach every image on a case-by-case basis. If the image ratio isn’t 1:1, my personal recommendation would be to resample so the shortest edge is 1024 px or 512 px and test from there.
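
As a rough sketch of that shortest-edge resample (again using Python/Pillow purely as an example tool; the function name, file names, and targets are placeholders you would swap for your own):

```python
from PIL import Image

def resample_shortest_edge(path, target=1024):
    """Downscale so the shortest edge equals `target` px, keeping the aspect ratio.

    The 1024/512 targets match the suggestion above; this never upscales."""
    img = Image.open(path)
    w, h = img.size
    scale = target / min(w, h)
    if scale >= 1.0:  # already at or below the target size; leave it alone
        return img
    return img.resize((round(w * scale), round(h * scale)), Image.Resampling.LANCZOS)

resample_shortest_edge("landscape.jpg", target=1024).save("landscape_1024.png")
resample_shortest_edge("landscape.jpg", target=512).save("landscape_512.png")
```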

PPI is metadata for printers; it has no effect on image quality and only factors in when your dimensions are set to physical values (in/cm) rather than pixel values. The pixel density of an image can be adjusted at any time, since it is just metadata.
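
To illustrate that point, here is a small Pillow sketch (the 300 ppi value and file names are arbitrary): rewriting the DPI tag changes only metadata, never the pixels.

```python
from PIL import Image

img = Image.open("photo_1024.png")       # placeholder file
print(img.size, img.info.get("dpi"))     # pixel dimensions and current PPI tag, if any

# Re-save with a different PPI tag; the pixel data is untouched,
# only the metadata that print/layout software reads is changed.
img.save("photo_1024_300ppi.png", dpi=(300, 300))

reopened = Image.open("photo_1024_300ppi.png")
print(reopened.size, reopened.info.get("dpi"))  # same pixel size, now tagged ~300 ppi
```
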
:folded_hands: