Gigapixel v5.0.0

I don’t think I ever had that problem. I did a test just now, and if anything the output image in my case looks a bit sharper than the preview. Maybe it’s related to drivers or a specific hardware configuration.

Application & Version: Topaz Gigapixel AI Version 5.0.0

Operating System: Windows 10 (10.0)

Graphics Hardware: GeForce GTX 1060 6GB/PCIe/SSE2

OpenGL Driver: 3.3.0 NVIDIA 451.48

CPU RAM: 16266 MB

Video RAM: 6144 MB

Preview Limit: 5968 Pixels


So is the problem now that our requests never reach the developers? And they never saw the complaints from users, as if the boss simply doesn’t know about it!

How did we end up with the developers not knowing about the problem? Did we file those requests in vain?

Gigapixel isn’t two years old yet.

No surprise, I’ve read them all. To get a bit of perspective - how many individual users are reporting the issue? In comparison to the total number of users?


You are not looking into the problem; you are trying to prove that there is no problem. Go ahead and pick an even lower-resolution picture to prove your point. Please first understand the topic and what users are actually complaining about. The smaller the picture, the less visible the blurriness is, so your example is completely off topic.

Not a problem: how many examples do you need before you understand the full extent of the Gigapixel problem?

Well, I tried it before with very small and heavily compressed pictures, and never since I started using Gigapixel version one did I notice the problem. And I have upscaled probably 10,000 images or more since then. Maybe my configuration is just not affected by it.


I should note that you have similar problems across your whole development effort and in your other applications. Everywhere there is some kind of pain point and a misunderstanding of how and where your applications can be used to solve design problems.

I like the ideas behind your AI applications, but the sad thing is that they sidestep the problems and criticism coming from the vibrant consumer market outside this forum.

You can stew in the soup of this forum as much as you like and never get any lively criticism, because the forum is effectively dead. I stay here purely out of enthusiasm and the belief that I can get at least some of the errors fixed if I make a long and loud enough statement about them.

Not me - I’m just a user like yourself.


Well, it has been proven that I can do plumbing on single water connections in my apartment, but I cannot do it “in due time” for a whole house. It’s a limitation of my current skill set. I can improve on that and extend my functionality, albeit not within a week.

That does not mean that I do not want GP to improve, especially when major revision numbers are published (v4 vs. v5). But we have to acknowledge that its results are more or less guesswork and include random elements at this stage of development.

A small cut-out vs. a larger cut-out is virtually a different image being processed by the algorithm; it is not able to identify the smaller cut-out as part of a larger image. In its current state GP is still hit and miss, and the same settings can even lead to very different preview results.

(Preview screenshots attached: CPU, OpenVINO, and GPU.)

Curiously, the GPU preview is sharper than the CPU one, despite the opposite being true for every final rendering I have ever tried (including this one). The GPU preview is also the one that differs most from the final result, because the preview is overly sharp while the rendering is overly blurred (GPU output is usually blurred in more areas than CPU output). So the contrast between expectation and result is most pronounced here.

Last but not least, the proximity to the preview window border has a considerable impact on the sharpness of the preview result. Closer to the center = more blurred for GPU processing. This also means that a smaller window pushes the window border closer to the center, which may increase the sharpness of smaller preview windows. The CPU and OpenVINO previews suffer less from this, but they blur the preview borders in strange streaks instead.


There is one design philosophy issue that would argue against bringing many small areas into sharp detail, and then reintegrating them as a whole again. How would the localized AI know what the larger image intent was? Many of my photographs purposely include areas of nice bokeh. The underlying nature of this look should not be changed during the enlargement. The fact that the AI is primarily meant to create synthetic detail implies that if it’s only working on small patches it will try to do that everywhere, instead of only in the intended area of focus. (That’s what’s currently going on with the preview function.)

I suppose one might pre-scan the entire image for focus/blur, and assign differing amounts of automated Noise Suppression and Blur Removal, but we’re adding complications to the process that would be slower and prone to missing the mark. You could have the user mark off the areas of interest to fully enhance by hand, and tone down the rest, but that feels slow and tedious, and would not be a frequently used option.
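To sketch that pre-scan idea concretely: a crude focus/blur map can be built from the variance of the Laplacian in each tile, and such a map could in principle decide how aggressively each region gets enhanced. This is only a rough illustration using OpenCV; the tile size and threshold are arbitrary assumptions, and it says nothing about how Gigapixel actually works internally.

```python
import cv2
import numpy as np

def focus_map(path, tile=64, threshold=100.0):
    """Rough per-tile focus map: variance of the Laplacian per tile.
    Tiles whose variance exceeds `threshold` are treated as in focus (True)."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    rows, cols = gray.shape[0] // tile, gray.shape[1] // tile
    mask = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            patch = gray[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            mask[r, c] = cv2.Laplacian(patch, cv2.CV_64F).var() > threshold
    return mask  # True = candidate for full enhancement, False = leave the bokeh alone

# Hypothetical usage: only apply aggressive detail synthesis where the map is True.
# mask = focus_map("photo.jpg")
```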

I think what you’re asking for is valid as an option for special cases, but is the juice worth the squeeze for most people, and typical images?


Sharpness should be at maximum. You can always combine the bokeh from the original with the sharpness from the processed image to get the best quality.
You can also use processing masks. So far masking in the new plugins is very poor, but presumably they will someday implement proper interaction with the alpha channel and with masks from Photoshop.
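As a rough illustration of that blend-by-mask workflow, something like the following keeps the original bokeh where a mask is black and the AI-sharpened result where it is white. The file names, the grayscale mask, and the plain bicubic upscale used to bring the original to the same size are assumptions for the example, not part of any Topaz feature.

```python
import numpy as np
from PIL import Image

# Hypothetical inputs: the AI-upscaled result, plus the original image brought to the
# same size with plain bicubic interpolation so it keeps its soft bokeh.
sharp = Image.open("gigapixel_output.png").convert("RGB")
soft = Image.open("original.jpg").convert("RGB").resize(sharp.size, Image.BICUBIC)
mask = Image.open("focus_mask.png").convert("L").resize(sharp.size)  # white = keep sharp

m = np.asarray(mask, dtype=np.float32)[..., None] / 255.0
blend = np.asarray(soft, dtype=np.float32) * (1.0 - m) + np.asarray(sharp, dtype=np.float32) * m
Image.fromarray(blend.astype(np.uint8)).save("combined.png")
```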

The algorithm does not need to understand the artistic intent for the next 2-3 years; the technology simply has not matured that far yet. What it is required to do is take an image shot, for example, on an old camera and cleanly produce modern, computed quality that competes with top-end SLR cameras.

@profiwork

How about you open GigaPixel (v5, that is), go to Help -> Graphics info, press Copy, and post the info here, please.

1 Like

No need to investigate further. The development team already explained well what happens and why. I might paste the original info text here if you like …

What everyone needs to realise is that quality in = quality out. In this case it is a still from a Sony camcorder taken a long time ago, and you get out what you put in … 6x:


Nothing will change from knowing this data:

Application & Version: Topaz Gigapixel AI Version 5.0.0

Operating System: Windows 10 (10.0)

Graphics Hardware: GeForce GTX 1080/PCIe/SSE2

OpenGL Driver: 3.3.0 NVIDIA 441.22

CPU RAM: 32688 MB

Video RAM: 8192 MB

Preview Limit: 8000 Pixels

For maximum sharpness I found that using Max Quality mode and the Man Made setting in v5.0 gives the best results. When Max Quality is off (in preferences) I don’t see any difference between the Natural and Man Made modes, and they both look like the less sharp Natural version. I also compared a saved portrait and a saved scenic picture to their previews, and both were close to identical. I used GPU with high memory for processing, and my GPU is a Radeon RX 580 4GB on a Windows 10 PC.

One other experiment was to take a small JPEG and run the old DeJPEG program first to remove artifacts. After that I used GP, but almost no sharpening was done. Apparently DeJPEG removed the edges that would otherwise have been sharpened.

Your example shows this bug too, dear AiDon. The problem is that you do not understand that it is a bug, and that a fix has to be requested from the developers.

I already wrote about this, but I will write it again. Only the developers know what resolution the incoming image must have for the algorithm to recognize it as satisfactory and upscale it adequately.

If you dig into the history of Bayer filters and the like, it turns out that the actual resolution of the incoming image is much lower than you think. The camera has already stretched a blurred image, and the algorithm cannot digest such a soft, soapy picture.

If you can reduce the image to its real resolution, where each pixel carries real information, then the algorithm will work much better.
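As a sketch of that "shrink it back to its real resolution first" step: the factor below is a guess you would tune by eye for each image, and nothing here is computed by Gigapixel itself.

```python
from PIL import Image

# Assumption: the camera or messenger app already upscaled/softened the file, so only
# about half of its nominal resolution carries real detail.
TRUE_RESOLUTION_FACTOR = 0.5

img = Image.open("soft_source.jpg")
small = img.resize(
    (int(img.width * TRUE_RESOLUTION_FACTOR), int(img.height * TRUE_RESOLUTION_FACTOR)),
    Image.LANCZOS,
)
small.save("prepared_for_upscaling.png")  # feed this, not the original, to the upscaler
```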

I understand your pain, AiDon. The developers either did not know or did not want to know that nowadays designers get their orders from people who are far removed from graphics. They take pictures on an old iPhone 5 and then send the photo through Viber with compression, after which only 1280 pixels and the wildest mobile-sensor noise are left of the picture. And they are sure they sent a beautiful image, because it still looks fine on their phone, no matter how it looks after Viber’s compression. Not a single AI algorithm is ready for this.

For what it’s worth, your NVIDIA driver needs updating. I just updated mine, and while it might just be coincidental, I tried your image again and Gigapixel worked as well as it does on any other image. I’ve removed my earlier comment accordingly.


I hope you did not consider my image special? My processor computes at almost the same speed as my graphics card, so now I process only on the CPU; it creates fewer artifacts in the final image. Updating the video card drivers therefore makes no difference for me.

Why on earth would I consider it special? I only used it because you did so to demonstrate the poor output.
