Gigapixel v5.0.0

I don’t think most people realize this is happening. They see the preview and assume the final output will be the same. Unless they actually compare the two side by side at the same scale, they wouldn’t notice that the full-size image differs from what they saw in the program’s preview.


For almost a year now I’ve been trying to get the developers to fix the upscaling bug…

Is it a bug, though? To me it looks more like a limitation of the process. The “AI” algorithm is practically guessing, so giving it a different input to work on leads to a different result/guess. The preview is just an indication of what your final image will look like, not so much a guarantee.

Personally I take more issue with AI processing being applied when zooming in; that makes things even more confusing, and then the preview differs a lot more from the final output.

Do not break my heart!
If the program can show what it can do, and it is proven that its algorithm can do it, but it then refuses to output that result to the final file, that is a bug. And since it has been proven that for small images it produces a result identical to the preview window, it must be able to deliver the same for large images as well.

For seven years I led a group of designers, programmers, and assorted support staff, and we completed quite a few long-term projects, so I won’t be fooled. Fixing this bug according to the recommendations I left would take one good programmer about a week. He would hardly even need to write code; everything is already in place: cut the image into tiles with overlap, feed each tile to the algorithm, then reassemble the result, smoothing the mismatches with the help of translucent overlap edges. And that’s it !!!
Why do I have to wrestle with third-party programs to get this?
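
For illustration, here is a minimal sketch of that cut-with-overlap approach in Python (numpy only). `upscale_fn` is a hypothetical stand-in for the model call, and the tile/overlap sizes are arbitrary; this is not Gigapixel’s actual code:

```python
import numpy as np

def upscale_tiled(image, upscale_fn, tile=256, overlap=32, scale=4):
    """Cut `image` into overlapping tiles, upscale each tile with
    `upscale_fn`, and reassemble, cross-fading the overlaps so the
    tile seams disappear."""
    h, w, c = image.shape
    out = np.zeros((h * scale, w * scale, c))
    weight = np.zeros((h * scale, w * scale, 1))
    step = tile - overlap

    for y in range(0, h, step):
        for x in range(0, w, step):
            y1, x1 = min(y + tile, h), min(x + tile, w)
            patch = upscale_fn(image[y:y1, x:x1])  # (ph, pw, c), already scaled up
            ph, pw = patch.shape[:2]
            # Triangular ramp: ~1 in the tile interior, falling towards 0
            # at the edges, clipped so only the overlap band actually fades.
            ry = np.minimum(np.arange(ph) + 1, ph - np.arange(ph))
            rx = np.minimum(np.arange(pw) + 1, pw - np.arange(pw))
            ramp = np.minimum(ry[:, None], rx[None, :]) / (overlap * scale)
            ramp = np.clip(ramp, 0.0, 1.0)[..., None]
            oy, ox = y * scale, x * scale
            out[oy:oy + ph, ox:ox + pw] += patch * ramp
            weight[oy:oy + ph, ox:ox + pw] += ramp

    # Dividing by the accumulated weight normalizes the cross-fade,
    # including at the image border where only one tile contributes.
    return out / np.maximum(weight, 1e-8)
```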

But no, they drag things out, and in our support requests we listen to managers’ stories about how we have the wrong file format, not about the program having bugs.

Topaz explained to me that the difference between the preview and the final image results from the attempt to deliver the preview quickly, without having to process the whole image first.

In addition to this, the AI has been trained on small images of up to 500 × 500 pixels; that might be why the preview looks even “better” the more you shrink the preview window. :smiley:

Another reason given was that GigaPixel tries to keep the original look of the whole image; by looking only at the small part in the preview window, it misinterprets the image a bit.


Beta testers have as much right as anybody else to show their images - whether you agree or not.

That might be your experience, and that of others, but it’s certainly not everybody’s. In my case, close inspection shows the output to be marginally poorer than the preview, but I would never have noticed if others hadn’t drawn my attention to it. I’m certainly not seeing what you’ve shown here where the output looks like the original. I suppose it’s possible that I’m the only one without the problem - but somehow I doubt it.


Let it be a surprise for the developers, but users have been complaining about this for several years.
You will be surprised, but there are at least four topics on this forum complaining about the problem.

Read pls.


I don’t think I ever had that problem. I did a test just now, and if anything, in my case the output image looks a bit sharper than the preview. Maybe it’s related to drivers or a specific hardware configuration.

Application & Version: Topaz Gigapixel AI Version 5.0.0

Operating System: Windows 10 (10.0)

Graphics Hardware: GeForce GTX 1060 6GB/PCIe/SSE2

OpenGL Driver: 3.3.0 NVIDIA 451.48

CPU RAM: 16266 MB

Video RAM: 6144 MB

Preview Limit: 5968 Pixels


Is the problem now that the requests never reach the developers? Have they really never seen the users’ complaints because the boss does not know about them?

How did the developers end up not knowing about the problem? Did we file all those requests in vain?

Gigapixel isn’t two years old yet.

No surprise, I’ve read them all. To get a bit of perspective - how many individual users are reporting the issue? In comparison to the total number of users?


You are not looking into the problem; you are trying to prove there is no problem. Choosing an even lower-resolution picture proves nothing. Please first understand the topic and what users are complaining about: the smaller the picture, the less visible the blurriness. That is why you and your example are completely off topic.

Not a problem. How many examples do you need before you understand the full extent of the Gigapixel problem?

Well, I tried it before with very small and heavily compressed pictures, and never since I started using GigaPixel version one have I noticed the problem. And I have upscaled probably 10,000 images or more since then. Maybe my configuration is just not affected by it.


I should note that you have similar problems across the whole development front, in other applications too. Everywhere there is some pain and a misunderstanding of how and where your applications can be used to solve design problems.

I like the ideas behind your AI applications, but the sad thing is that they bypass the problems and the criticism of the lively consumer market outside this forum.

You can stew in the soup of this forum as long as you like and never get real criticism, because the forum is practically dead. I stay here purely out of enthusiasm and the belief that I can get at least some of the errors fixed if I make a long and loud enough statement about them.

Not me - I’m just a user like yourself.


Well, it has been proven that I can do the plumbing for a single water connection in my apartment, but I cannot do it “in due time” for a whole house. It’s a limitation of my current skill set. I can improve on that and extend my functionality, albeit not within a week.

That does not mean that I do not want GP to improve, especially when major revision numbers are published (v4 vs. v5). But we have to acknowledge that its results are more or less guesswork and include random elements at this stage of development.

A small cut-out vs. a larger cut-out is virtually a different image being processed by the algorithm; it is not able to identify the smaller cut-out as part of a larger image. In its current state GP is still hit and miss; the same settings can even lead to very different preview results.




Curiously, the GPU preview is sharper than the CPU one, despite the opposite being true for every final rendering I have ever tried (including this one). The GPU preview is also the one that differs most from the final result: the preview is overly sharp and the rendering overly blurred (GPU output is usually blurred in more areas than CPU output), so the contrast between expectation and result is most pronounced there.

Last but not least, the proximity to the preview window border has a considerable impact on the sharpness of the preview result: closer to the center = more blurred for GPU processing. This also means that a smaller window pushes the window border closer to the center, which may increase the sharpness of smaller preview windows. CPU and OpenVINO previews suffer less from this, but they blur the preview borders in strange streaks instead.


There is one design philosophy issue that would argue against bringing many small areas into sharp detail, and then reintegrating them as a whole again. How would the localized AI know what the larger image intent was? Many of my photographs purposely include areas of nice bokeh. The underlying nature of this look should not be changed during the enlargement. The fact that the AI is primarily meant to create synthetic detail implies that if it’s only working on small patches it will try to do that everywhere, instead of only in the intended area of focus. (That’s what’s currently going on with the preview function.)

I suppose one might pre-scan the entire image for focus/blur and assign differing amounts of automated Noise Suppression and Blur Removal, but that adds complications that would make the process slower and prone to missing the mark. You could have the user mark off the areas of interest to fully enhance by hand and tone down the rest, but that feels slow and tedious, and would not be a frequently used option.
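
As a rough sketch of what such a pre-scan might look like, one could use the local variance of the Laplacian as a focus measure. The window size and threshold here are illustrative guesses, not anything Gigapixel actually does:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_mask(gray, window=32, threshold=1e-3):
    """Rough focus/blur map for a grayscale image scaled to [0, 1].

    High local Laplacian variance ~ in-focus detail worth enhancing;
    low variance ~ bokeh/background whose look should be preserved.
    """
    lap = laplace(gray.astype(np.float64))
    mean = uniform_filter(lap, size=window)
    var = uniform_filter(lap * lap, size=window) - mean ** 2
    return var > threshold  # boolean mask: True where the image is sharp
```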

I think what you’re asking for is valid as an option for special cases, but is the juice worth the squeeze for most people, and typical images?


Sharpness should be at its maximum. You can always combine the bokeh from the original with the sharpness from the processed image to get the best quality.
You can also use processing masks. So far, mask handling in the new plugins is very poor, but someday they will surely provide good interaction with Photoshop’s alpha channels and masks.
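
As a trivial sketch of that combine-the-two idea (all names here are illustrative; this is not a Gigapixel or Photoshop API):

```python
import numpy as np

def composite(original_up, processed, mask):
    """Keep the bokeh of the original, take sharpness from the AI result.

    original_up: the original, plainly resized to the output size
    processed:   the AI-upscaled image, same shape
    mask:        floats in [0, 1]; 1 = use the sharp processed pixels,
                 0 = keep the original look (e.g. bokeh areas)
    """
    m = mask[..., None] if mask.ndim == 2 else mask
    return processed * m + original_up * (1.0 - m)
```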

For the next 2-3 years the algorithm does not need to understand the artistic intent; the technology will not have matured enough before then. What it is required to do is take an image shot, say, on an old camera and deliver, through modern computation, quality that competes with top-end SLR cameras.


How about you open GigaPixel (v5, that is), go to Help -> Graphics info, press Copy, and post the info here please.


No need to investigate further. The development team already explained well what happens and why. I might paste the original info text here if you like …

What everyone needs to realise is that quality in = quality out. In this case it is a still from a Sony camcorder taken a long time ago, and you get out what you put in … 6x: