I’m having a strange issue with Gigapixel. In the preview window, everything looks great — especially areas with people. The faces and details look refined and acceptable. But when I export the final image, it looks very different. The people appear oddly deformed — not just softer, but with strange artifacts or changes that don’t resemble what I saw in the preview.
It feels like the final render is using a completely different model or settings than the preview. I’ve tried various models (Standard, High Fidelity, etc.), toggled face recovery, experimented with scaling, and tested different export formats, but nothing seems to help.
As a very hacky workaround, I’ve resorted to taking a screenshot of the preview area, then pasting that into Affinity Photo to composite over the original image. It’s obviously far from ideal, but it shows how much better the preview looks compared to the export.
Has anyone else seen this or figured out a way to fix it?
Yes, I’ve seen this in the last few days with old monochrome postcards, and I couldn’t find any options in Gigapixel AI 8.3.3 or Photo AI 3.6.0 that bring the export close to the acceptable previews.
My system is a Windows 10 PC with an RTX 4070 Super.
I’ll look for some examples later this evening.
Here’s one example — in theory, we should expect a 1:1 match between the preview and the final output, but as you can see, that’s definitely not happening here.
I’ve never had that issue with the non-generative models, only with the newer generative models (Recover and Redefine). With the non-generative models, the preview matches the output. If you’re having problems with the non-generative models, then you’ll need to raise a ticket with support so they can work through it with you.
Hi mate, I have the same problem with every image. When I do small previews everything is OK, but when I calculate a full preview I get a lot of deformations. Did you find a solution?
Thank you. They have to understand that it’s not a problem of a different seed; it’s a problem with the quality of the AI rendering when doing a full preview. There’s a clear problem, and their response seems like an excuse to me.
I seem to always get the same seed (the exact same result) whenever I try regenerating a preview with either of the generative models. Closing and reopening the program doesn’t fix this.
Sometimes with Recover, if I only select a small portion of the image rather than regenerating the whole thing, it’ll use a new seed; but when I go to export, or try to regenerate the whole image, it’ll still use the old seed.
In fact, with Regenerate Entire Image, it gets halfway through generating, then skips to the end and always reports about the same amount of time, as if it’s copping out and pulling the result from a cache. I don’t know if that’s what it’s doing or if it’s just fast at rendering, but either way the generated image never changes!
It’s good that seeds are saved from preview to export, but I can’t regenerate if I don’t like a result!
I have the same issue. I would expect that an upscaler I pay for would at least offer random seeds the way an open-source model such as Flux does. In fact, I’m considering using Flux/ComfyUI for upscaling now because of this limitation.
As of this writing (current version: v8.3.4), the expected behavior is that seed generation will differ between cropped previews, local export, and cloud render.
This applies to Recover and Redefine.
For example, the preview sizes of Small, Medium, and Large all provide the models with different sampled areas of the image, and the processing of those areas will generate different results than a preview or render of the entire image.
What is the size of your original image? I just posted a fix that may apply to your issue as well. In short, start with a higher-resolution image. I had the problem when my pre-render image was 1024×512, but the issue disappeared when my starting image was 2048×1024. Good luck.
I have been repeatedly frustrated that the preview and final render bear no resemblance to each other. The preview looks great, but the render is overprocessed.
The issue is consistent, but it can be fixed by enlarging the original pre-render image.
Took me a while to figure it out. Might be useful for others who are having similar problems.
I sent a bug/fix report to Topaz with sample images. Unfortunately I don’t know how to add them here.
What is your process for adding pixels to the original image’s dimensions?
If your source image is 1024×512, what are your steps for making it 2048×1024?
A note to all:
We generally do not recommend using Recover or Redefine on images with dimensions larger than 1024×1024.
The differences between preview and export are an expected result.
The preview boxes provide the AI with a crop of the image, which it interprets differently than it would if all of the pixels in the image were being processed.
The development team is investigating possible solutions.
My starting image was 3400×1700. I followed your advice and changed the resolution to 13,600×6,800 using Photoshop. Then I loaded the image into Gigapixel again (Redefine, 1×, Creativity 3) and everything worked much better! Thank you for the workaround; it can help until the team finds a real fix.
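For anyone who would rather script that resize step than do it in Photoshop, here’s a minimal sketch using Python’s Pillow library (the tool choice and file names are my own illustration, not something Topaz documents); it just reproduces the pre-upscale workaround described above:

```python
# Pre-upscale an image before running Gigapixel's generative models.
# This only replicates the "enlarge first" workaround from this thread;
# the paths and scale factor are illustrative assumptions.
from PIL import Image

def pre_upscale(src_path: str, dst_path: str, scale: int = 4) -> None:
    with Image.open(src_path) as img:
        new_size = (img.width * scale, img.height * scale)
        # LANCZOS is a standard high-quality resampling filter; it adds
        # no real detail, it just gives the model more pixels to work with.
        img.resize(new_size, Image.Resampling.LANCZOS).save(dst_path)

# e.g. 3400x1700 -> 13600x6800; then open the result in Gigapixel
# and run Redefine at 1x, as described above.
pre_upscale("original_3400x1700.png", "pre_upscaled.png")
```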