Guidance | [Generative Models] Preview vs Export

Thank you. They have to understand that it is not a problem of a different seed; it is a problem with the quality of the AI drawing when doing a full preview. There is a clear problem, and their response seems like an excuse to me.


When using the Recover model, the small preview gives a better result than the full preview and export.

Small Preview

Full Preview

I seem to always get the same seed (same exact result) whenever I try regenerating a preview with either of the generative models. Closing and reopening the program doesn’t fix this.

Sometimes with Recover, if I only select a small portion of the image rather than regenerating the whole thing, it'll use a new seed; but when I go to export, or try to regenerate the whole image, it'll still use the old seed.

In fact, with Regenerate Entire Image, it'll get halfway through generating, then skip to the end and always report about the same elapsed time, as if it's copping out and pulling the result from a cache. I don't know if that's what it's doing or if it's just fast at rendering, but either way the generated image never changes!

It’s good that seeds are saved from preview to export, but I can’t regenerate if I don’t like a result!

Please fix this!


I have the same issue. I would expect an upscaler that I pay for to at least offer random seeds the way an open-source model such as Flux does. In fact, I am considering using Flux/ComfyUI for upscaling now due to this limitation.

Hello!

As of writing this (current: v8.3.4), the expectation for seed generations is that there will be differences between cropped previews, local export, and cloud render.

This applies to Recover and Redefine.

For example, the preview sizes of Small, Medium, and Large all provide the models with different sampled areas of the image, and the processing of those areas will generate different results than a preview or render of the entire image.

What is the size of your original image? I just posted a fix that may apply to your issue as well. In short, start with a higher-resolution image. I had the problem when my pre-render image was 1024x512, but the issue disappeared when my starting image was 2048x1024. Good luck.

I have been repeatedly frustrated that the preview and final render bear no resemblance to each other. The preview looks great, but the render is over-processed.

The issue is consistent but is fixable by changing the original pre-render image to a larger size.

Took me a while to figure it out. Might be useful for others who are having similar problems.

I sent a bug/fix report to Topaz with sample images. Unfortunately I don’t know how to add them here.



Hello!

I’d like to explore your methods.

What is your process for adding pixels to the original image’s dimensions?
If your source image is 1024×512, what are your steps for making it 2048×1024?

A note to all:

We generally do not recommend using Recover or Redefine on images with dimensions larger than 1024×1024.

The differences between preview and export are an expected result.
The preview boxes provide a crop of the image to the AI which it interprets differently than if all of the pixels in the image are being interpreted.

The development team is investigating alternative solutions.


My starting image was 3400x1700. Following your advice, I changed the resolution to 13,600x6,800 using Photoshop. Then I loaded the image into Gigapixel again (Redefine 1x, Creativity 3) and everything worked much better! Thank you for the workaround; it can help until the team finds a real fix.


My workflow usually involves AI-rendered images at 2048x2048 or close to it. I use FastStone (an awesome free image viewer/editor) and reduce the image by 50%. I would use the same software for resizing smaller images to larger. Pop the 1024x1024 image into Gigapixel and done.

Honestly though, my original images are usually bigger than 1024x1024 unless I am using Recover on a low-res or poor-quality photo.

Thx for explaining why the preview is different from the final. Makes sense now that I think about it.


Hi.

Quick tip: Gigapixel can downscale as well as upscale images, so there's no need to use another application.

Simply open the image in Gigapixel, then look to the right of the Preview Area and you'll see the Side Panel; at the top of the Side Panel is the Navigation Window.

Just below the Navigation Window you'll see Upscale with 1x, 2x, 4x, 6x and Custom, and below that is the Scale Factor with a number on the right.

Click on the number to highlight it, then type 0.50 and press Enter to scale your image to one half its original size, or type 0.25 and press Enter to scale it to one quarter its original size, then apply Redefine or Recover.

Once you're done, simply click on the 1x Upscale option or type 1 in the Scale Factor to enlarge your image back to its original size.

Hope this helps
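The Scale Factor arithmetic in the steps above can be sketched as a small helper. This is hypothetical Python of my own, not part of Gigapixel; it only shows what the 0.50/0.25 factors do to pixel dimensions:

```python
def scaled_size(width, height, factor):
    """Pixel dimensions after applying a Gigapixel-style Scale Factor."""
    return round(width * factor), round(height * factor)

def restore_factor(factor):
    """Scale Factor that returns a downscaled image to its original size."""
    return 1 / factor

# A 2048x2048 AI render at Scale Factor 0.50 becomes 1024x1024,
# within the range recommended for Redefine/Recover in this thread.
print(scaled_size(2048, 2048, 0.50))  # (1024, 1024)
print(restore_factor(0.50))           # 2.0
```

So a 0.25 downscale needs a 4x factor to get back to the original size.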

Hello!

When Recover or Redefine is involved, we definitely recommend resampling the image below 1024px. Often, going even smaller can produce even better results as image details are condensed.
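The below-1024px guideline can be turned into a quick calculation. A minimal sketch (my own helper, not a Topaz API) for picking a downscale factor from the image's longest side:

```python
def fit_below(width, height, max_side=1024):
    """Scale factor that brings the longest side to at most max_side.

    Returns 1.0 when the image already fits, so nothing is upscaled here.
    """
    longest = max(width, height)
    if longest <= max_side:
        return 1.0
    return max_side / longest

# A 3400x1700 image needs a factor of 1024/3400 (about 0.30),
# giving roughly 1024x512 before Recover/Redefine is applied.
factor = fit_below(3400, 1700)
print(round(3400 * factor), round(1700 * factor))  # 1024 512
```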

On the broader topic of cropped previews vs full exports:
It is expected behavior that a cropped preview (small, medium, large) will produce different results from a full render or export. This is due to a small sample of the image being processed and interpreted versus the whole image being “seen” by the AI to create a result.

So is the Upscale option a different process from Redefine? I always thought it upscales with Redefine, not as a separate process.

Hi Richard.

No, Redefine doesn't upscale images; it quite literally redefines the image. It has been designed for very low-resolution, blurry, extremely cropped, or heavily compressed images, AI-generated images, plus anything else you can think of that is of low quality.

The recommended workflow for Redefine is to use, or reduce your image to, one megapixel (1024x1024px) or smaller to get the most optimized result.

Then apply either the Redefine or Recover enhancement before upscaling your image using one of the methods below.

As I've already mentioned, Redefine quite literally redefines the image, adding definition and texture back into all types of images.

Redefine also has a Creativity slider with six levels of enhancement.

Creativity Levels 1 to 2 are best suited for adding definition and texture back into images that are soft, blurry, or low quality, and can also be used with Face Recovery.

With Creativity Levels 3 to 6, Face Recovery is disabled because at the higher levels the image starts to be transformed, generating and adding information that wasn't there before; the higher the number, the more surreal the image becomes, hence the name Creativity.

In addition, from Levels 2 to 6 you'll have access to the Advanced Settings, where a text prompt box is provided for adding information about your image; basically, you can take your imagination to the next level.

Topaz Photo AI's Super Focus, on the other hand, has a similar process when Focus Boost is applied: downscaling the image, applying Focus Boost, then upscaling the image back to its original size.

But unlike Gigapixel, Photo AI will downscale the image, apply Focus Boost, and upscale it back to its original size automatically during the rendering process.

Hope this helps

Andy
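The downscale → enhance → upscale round trip described above can be sketched as simple dimension bookkeeping. This is a hypothetical Python helper of my own, not Topaz code; the enhancement step is only a placeholder comment:

```python
def round_trip(width, height, work_side=1024):
    """Track dimensions through a downscale -> enhance -> upscale pipeline.

    Mirrors the manual Gigapixel steps (and what Photo AI's Focus Boost
    reportedly automates). Images already within work_side are untouched.
    """
    factor = min(1.0, work_side / max(width, height))
    work = (round(width * factor), round(height * factor))
    # ... the Redefine/Recover enhancement would run at `work` size ...
    restored = (round(work[0] / factor), round(work[1] / factor))
    return work, restored

work, restored = round_trip(3400, 1700)
print(work)      # (1024, 512)
print(restored)  # (3400, 1700)
```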

Would it be better if Gigapixel had this option as well? "Unlike Gigapixel, Photo AI will downscale the image, apply Focus Boost, and upscale it back to its original size automatically during the rendering process."

True, that would be the logical solution, and I can't see any reason why some sort of automation couldn't be introduced, similar to Photo AI.

In light of that, Topaz kind of does this already with the different preview windows, because the main preview window is itself only a low-resolution facsimile of the original image; this in turn would have the advantage of the preview render matching the output render.

The current UX makes it feel like the cloud models simply don't work at all (the preview doesn't update in-software). The confusing workflow will lead people to dismiss the cloud models as broken. The software makes no attempt to explain that this section works differently from all the others in terms of workflow.