Gigapixel AI 4.0.1: "maximum quality" AI models worse than without

I tried this picture with and without maximum quality, and the result lacks proper sharpening and looks nothing like the preview.
Left side: maximum quality enabled; right side: disabled. Rest of the settings unchanged (6x, medium-low noise reduction, high sharpening).


There has been at least one other thread about this. It seems a counterintuitive oddity for "maximum quality" to yield lower quality than when that setting is not used.

FWIW I have yet to update to v4 so have not seen this for myself…but it does seem odd!

I have a similar problem. The preview is beautiful, but the outcome is lousy!
Version 4.0.2

This is probably connected to your GPU. Go to Help → Graphics info, press the copy button, and paste the information here, please.

I just ran a test using v4.0.2 and an old scanned portrait photo at 1197 x 1800 px. With Max Quality set in the preferences, it took 30 seconds to process at 3x using medium-low noise reduction and medium blur reduction. Changing Max Quality to no, which I would call normal, processing took only 14 seconds.

I then compared the two enlarged versions and found that the only difference was more noise with the Max Quality setting. I actually preferred the normal setting because of the lower noise, but otherwise they looked identical. The normal setting also gave a much faster processing time. I'm using a Gigabyte Aorus Radeon RX580 GPU with 4GB of VRAM.

OK, after updating to 4.0.2, here is my graphics card info:
Application & Version: Topaz Gigapixel AI Version 4.0.2
Operating System: Windows 10 (10.0)
Graphics Hardware: GeForce GTX 1060 6GB/PCIe/SSE2
OpenGL Driver: 3.3.0 NVIDIA 419.67
CPU RAM: 16321 MB
Video RAM: 6144 MB Total, 5727 MB In Use
Preview Limit: 7967 Pixels

With 4.0.2 there is no improvement in quality :frowning:
Left side: "high" quality; right side: no "high" quality.

I tried only one picture and you have shown only one as well, so results may vary. I am using the "standard" setting (Max Quality = no). It is 2x faster and I also like the results better. You can retest on future versions.

I have tried all possible combinations and AI Gigapixel has yet to match the preview. I'm running an iMac. I'm also very disappointed…


@Artisan-West I hadn't really thought about changing the Max Quality setting, as my desktop can handle the load. I just ran some tests, though, and as you noted, the processing time is cut in half and the quality is still quite good.

In one of my tests I also used a portrait. The max setting did not have more noise but did have a more realistic skin texture. I can see using both settings in a layered workflow to combine them, and definitely using the normal setting for the first run. I often feel the AI in these apps can be too aggressive for my taste, but this gives more flexibility. Gonna try it with Sharpen AI (dang it, not a choice) and DeNoise AI (not a choice here either).


I’ve been wondering why my results have been grainy, and I think this started after the 4.x upgrade. I turned off Max Quality and things look a lot better, more like what I see in the preview.

Video card is a Radeon RX 570 8GB.

Try Max Quality with CPU vs. GPU (max memory): CPU mode is less grainy. It seems related to the memory available to the CPU vs. the GPU.

I just ran a test using a 10 Mpx photo and three different Gigapixel settings. The file is 1920 x 2560 and I enlarged it 300%. All cases used Reduce Noise = medium and Reduce Blur = low, on the latest version as of 4-16-2019. The GPU is a Radeon RX580 with 4GB of VRAM. Samples are crops viewed at about 82% enlargement.

Setting the preference to GPU and Max Quality = no, the enlargement took 30 seconds; quality was good but some noise was still present.

Setting the preference to CPU and Max Quality = no, the enlargement took 4 min 36 sec; quality was good, the noise seemed identical to the GPU run, but the image was a little sharper.

Setting the preference to GPU and Max Quality = yes took 64 sec. Quality was bad, with a visible pattern over the whole image. I would not use this setting.

Using AI Clear removes the remaining noise but takes an extra step.

Clearly, using the CPU (mine is a Ryzen 5 2600X running at over 4 GHz) takes too long for only a minor improvement. Max Quality on GPU has a pattern issue.

I am also experiencing issues with Max Quality on GPU creating artifacts (AI Gigapixel v4.0.3). Surprisingly, the artifacts are less noticeable if I set Allowed Graphics Memory Consumption to Medium rather than High. I had assumed that this setting would only affect the time taken to process an image, but it actually affects the quality of the output.

Using the CPU (Ryzen 2700X) produces the best results but takes too long. My GPU is an NVIDIA GTX 1060 with 3GB of VRAM.

Have you tried setting the preference to CPU and Max Quality = yes? It gives the best results in my opinion (not for all images, but most). It takes ages, I know…

No, I didn't try that because it just takes too long. However, it is designed for the GPU, so Topaz should make sure it gives the best results.

@nhoward I had my memory usage set to High, so I will try Medium. Note: I tried setting the GPU memory to Medium with Max Quality = on; the result was the same, with a pattern of noise as shown in the second photo.

I am not trying to defend CPU usage in Gigapixel - just testing to pinpoint the problem. I would also like Gigapixel to output the same superb quality through the GPU as it does through the CPU - maybe they will figure out what the problem is. I personally think the problem is in memory usage and the way Gigapixel cuts the image into tiles for upscaling - in GPU mode less memory is used, even on the high setting (lower settings on GPU yield more noise in my opinion), than in CPU mode - maybe if they added more settings with even higher memory usage, the GPU results would be better.
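
To show what I mean by tiling, here is a rough sketch in Python. This is purely my own illustration, not Topaz's code; the hypothetical `upscale_fn`, the tile size, and the overlap values are all made-up assumptions:

```python
# Rough illustration of overlap tiling for memory-limited upscaling.
# NOT Topaz's code: upscale_fn, tile and overlap are made-up placeholders.
import numpy as np

def upscale_tiled(image, upscale_fn, scale=3, tile=256, overlap=32):
    """Upscale an H x W x C image by running upscale_fn on overlapping
    tiles and averaging the results where tiles overlap.

    The smaller the tile (i.e. the less memory allowed), the less context
    the model sees per tile, and the seams/averaging can show up as a
    repeating pattern in the output.
    """
    h, w, c = image.shape
    out = np.zeros((h * scale, w * scale, c), dtype=np.float32)
    weight = np.zeros((h * scale, w * scale, 1), dtype=np.float32)

    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y1, x1 = min(y + tile, h), min(x + tile, w)
            patch = upscale_fn(image[y:y1, x:x1])  # model only sees this tile
            out[y * scale:y1 * scale, x * scale:x1 * scale] += patch
            weight[y * scale:y1 * scale, x * scale:x1 * scale] += 1.0

    return (out / np.maximum(weight, 1.0)).astype(np.uint8)
```

If the tile size really does depend on the memory setting, that would explain why the memory setting changes the output quality and not just the speed.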

I've been having the same issue. I've tried contacting customer service several times and get no response. The issues started after the original v4 update and remain after every update since. The previews look amazing but the exported results are not good, as shown in my example. My computer is a high-end workstation built by Puget Systems, using a 1080 Ti GPU and components far surpassing the requirements for the program. It's definitely a software issue, and it would appear to be because the AI technology is in its early stages and not yet fully reliable. Hopefully the software will get better, but for now it was a waste of money (a "too good to be true" kind of scenario).

Even using CPU and high quality, with more than 20 minutes of runtime :frowning:, didn't give the same quality as the low GPU setting.
More noise and less sharpening with the CPU.
So I will stay with the low-quality GPU setting.
Latest version 4.0.2


I just realized that the noisy outputs I was getting were because I was saving as PNG. Saving the same image as a TIF makes the noise amplification problem disappear.
How can this be? :thinking:
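
If anyone wants to verify this on their own exports, here is a quick check I put together (my own sketch; it assumes imageio is installed, and the file names are placeholders for your own PNG and TIF exports):

```python
# Quick sanity check: do the PNG and TIFF exports actually contain
# different pixels, or is one just written at a different bit depth?
# File names are placeholders - point them at your own exports.
import numpy as np
import imageio.v3 as iio

png = iio.imread("export.png")
tif = iio.imread("export.tif")

print("PNG dtype/shape:", png.dtype, png.shape)
print("TIF dtype/shape:", tif.dtype, tif.shape)

def to_unit_float(a):
    # Scale integer images to [0, 1] so 8-bit and 16-bit files are comparable.
    if np.issubdtype(a.dtype, np.integer):
        return a.astype(np.float64) / np.iinfo(a.dtype).max
    return a.astype(np.float64)

# Assumes both exports have the same dimensions and channel count.
diff = np.abs(to_unit_float(png) - to_unit_float(tif))
print("mean abs difference:", diff.mean())
print("max  abs difference:", diff.max())
```

If the two files turn out to be numerically identical, the extra "noise" is probably just how the viewer renders them; if they differ, the problem is somewhere in the export path.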
