Gigapixel v5.5.2

5.5.x produces a softer image, and in my opinion it’s better. But sometimes GP produces a very low-quality image, especially in the eyes.

1 Like

Good to know! Thx for your thoughts on that.

I think there’s something wrong with the version for MacOS. Maybe a memory leak?

I had this folder with ~500 files, all under 700px in width and 1000px in height. My laptop crashed twice while upscaling the files. I noticed that Gigapixel was using 25 GB of memory at one point, and it kept increasing after each file finished upscaling.

(Screenshot: https://i.imgur.com/TVP8xsP.png)

The only way to clear the memory was to actually close the app as removing the files from the app did nothing.
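Until the leak is fixed, one workaround is to split the folder into smaller batches and quit/relaunch Gigapixel between them so the leaked memory gets released. A rough Python sketch of the batching (the file names are just stand-ins, not real paths):

```python
# Workaround sketch for the apparent memory leak: process the ~500 files
# in small batches, quitting and relaunching Gigapixel between batches
# to release the leaked memory. File names below are hypothetical stand-ins.

def chunk(files, size):
    """Split a file list into fixed-size batches."""
    return [files[i:i + size] for i in range(0, len(files), size)]

files = [f"img_{i:03}.png" for i in range(500)]  # stand-in folder contents
batches = chunk(files, 50)
print(f"{len(batches)} batches of up to 50 files each")
```

With batches of 50, memory should stay well under the ~25 GB peak seen above, at the cost of a few extra app restarts.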

Computer: MacBook Pro 16", 2019, Intel, 16 GB RAM, macOS Big Sur.
Settings: Standard, auto settings, Reduce Color Bleeding ON, Face Refinement ON, model downloading ON, AI Processor: AMD Radeon Pro 5500M, allowed memory consumption: Low.

For macOS, when dragging image files to the Gigapixel app icon in the Dock, it doesn’t open the “Save Image As” window. It just starts saving the image, but it crashes at the end, and nothing gets saved. #photo-beta-testing:gigapixel-beta #topaz-products:product-bugs

Open files via the application; dragging from Finder onto the application opens it in plug-in mode, so it will only update the currently open file. That means you cannot open RAW images and “Save As”.

1 Like

@taylor.bishop (Adding this here as I’m finding more problems, and there’s no reason to assume you randomly trawl through older forum threads after a version’s been released.)

Been a while since I tested this and things seem to have gone backwards for my setup in V5.5.2.
I’m one of the few still using this on a Windows 7 PC with an NVidia GTX 980, and the GPU seems to be ignored.

A 50% Preview of an image that is 684 x 828 pixels takes 87 seconds. I cropped that out of an 8000x4000 360 Panoramic image because I wondered if it was processing the whole thing - or a large section of it - or using more RAM to hold the larger image. It wasn’t.

Settings were Default GPU and Memory Low and ‘Standard’ @ x2 Magnification.
Results were 100% CPU usage and ~18% GPU usage according to MSI Afterburner.
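For what it’s worth, the Afterburner readings can be cross-checked against NVIDIA’s own nvidia-smi tool, which ships with the driver. A small Python sketch (the query flags are standard nvidia-smi options; it degrades gracefully when the tool isn’t on PATH, e.g. on non-NVIDIA machines):

```python
# Cross-check GPU utilization readings using nvidia-smi (bundled with the
# NVIDIA driver). Returns one [utilization %, memory-used MiB] row per GPU,
# or an empty list if nvidia-smi isn't available on this machine.
import shutil
import subprocess

def gpu_utilization_samples():
    if shutil.which("nvidia-smi") is None:
        return []  # no NVIDIA tool on PATH; fall back to MSI Afterburner
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split(", ") for line in out.strip().splitlines()]

samples = gpu_utilization_samples()
print(samples or "nvidia-smi not found; use MSI Afterburner instead")
```

Running this in a loop while a Preview or Export is in progress gives a second opinion on whether the ~18% GPU figure is real work or just screen updates.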

I use those settings because in the past they produced the most usable results. More memory usage = more artifacts. CPU actually produced the best results, but was so much slower than GPU that the relatively small difference wasn’t worth the tradeoff for some images. Then again, some images do have to be reprocessed with CPU, so I’m not sure of the actual total time saved.

Switching to CPU and Memory Low (or Med) gives exactly the same results, so GPU evidently isn’t being used, although it is selectable.

Going back to the earlier version 5.2.0, a Preview takes only 11 seconds, CPU usage is 25% on only half the threads (6 of 12 on a 6-core CPU), and GPU usage is 100%.

Switching to CPU and Low, GPU usage is ~2% - which is effectively Windows’ background usage, so basically zero. CPU usage is ~43% on 9 of 12 threads and it only takes 47 seconds.

I always test with the Suppress Noise and Reduce Blur sliders set to 100%, because having to reduce a slider to remove ‘artifacts’ isn’t what I want it to do - I want more or less of only what the slider does. Results are always better than what Auto guesses too, IMO. YMMV.

5.2.0 took exactly 1 minute to export the Image and told me so.
5.5.2 took 2 min 3 secs to export the image and didn’t tell me - I had to time it. Did I miss a discussion where some prefer not to see the export time? IMO, it’s useful when you’re batch processing because you can plan ahead and pack/prune the batch list accordingly.
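Since 5.5.2 no longer reports the export time, one way to time it externally is a minimal stdlib Python sketch that watches for the output file to appear and stop growing. The demo below simulates the “export” with a background timer so the snippet is self-contained; in practice you’d point it at the real output path:

```python
# Time an export externally by waiting for the output file to appear
# and stop growing. Stdlib only; the demo simulates the export itself.
import os
import tempfile
import threading
import time

def wait_for_file(path, timeout=600, poll=0.5):
    """Return seconds until `path` exists with a stable non-zero size,
    or None if `timeout` elapses first."""
    start = time.monotonic()
    last_size = -1
    while time.monotonic() - start < timeout:
        if os.path.exists(path):
            size = os.path.getsize(path)
            if size == last_size and size > 0:
                return time.monotonic() - start
            last_size = size
        time.sleep(poll)
    return None

# Demo: simulate an "export" that finishes after a short delay.
demo = os.path.join(tempfile.mkdtemp(), "export.jpg")
threading.Timer(0.2, lambda: open(demo, "wb").write(b"x" * 1024)).start()
elapsed = wait_for_file(demo, timeout=10, poll=0.1)
print(f"export took ~{elapsed:.1f}s")
```

The size-stability check matters because the app may write the file incrementally; timing only the file’s appearance would undercount.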

This time the Export GPU usage (although GPU was selected) was ~2% (or zero) again, and CPU pegged at 100% on all 12 cores. It’s a mystery what it was using ~18% GPU on while generating the Preview, unless it was constantly updating the screen the whole time it was calculating it?

Overlaying the two images and toggling between them and pixel-peeping, the 5.5.2 version is slightly sharper, but also noisier and with very slightly more of the usual red/green ‘additional’ pixels at boundaries, so - for me (and other Windows 7 users?) anyway - no obvious improvements.

Finally, the Preview window isn’t the correct size. E.g. at 100% it’s larger by some amount, which means diagonal lines have ‘jaggies’ where lines are doubled up as the image is stretched. Also, there are more red/green pixel artifacts in the Preview than in the final image, which makes judging what settings to use a little trickier.

See attached of a Preview and the final image overlaid on a screengrab, and a closeup showing a detail more clearly.

Sorry, can’t be more positive. :slight_smile:

1 Like

Interestingly, since I’ve exported those test images, some internal settings seem to have been ‘updated’.

Now GPU usage is always zero for Previews whether GPU is selected or not, rather than the mystery ~18% as before, and Previews now take only 27 seconds. Also, CPU is always at 100% whether GPU is selected or not.

But selecting CPU or GPU does change the AI model used, and the results in the Preview window look different.

Exporting is different again: GPU export actually uses ~17% of the GPU and 100% CPU and took 27 seconds (for a different, smaller 391x334 image than before).

CPU export used ~2% GPU and 100% CPU and took slightly longer at 32 seconds.

But both images were pixel identical and also had some new visual artifacts: vertical lines and corruption at the bottom. Maybe the odd width (391 pixels) upset it?

Will play some more with different images.

(Attached screenshot: CPU and GPU artifacts)

Please go to Help menu, select Graphics info and press Copy then paste the info here.

Application & Version: Topaz Gigapixel AI Version 5.5.2
Operating System: Windows 7 Version 6.1 (Build 7601: SP 1)
Graphics Hardware: GeForce GTX 980/PCIe/SSE2
OpenGL Driver: 3.3.0 NVIDIA 456.71
CPU RAM: 12279 MB
Video RAM: 4096 MB
Preview Limit: 6188 Pixels

I’ve also done a quick test of GPU/CPU + High/Med/Low and found that the settings in Preferences aren’t always applied on export unless you do a Preview in between.

E.g. I export, go change the GPU/CPU or RAM usage settings, re-export, and the resulting images are identical. But if I do a Preview and export again, the exported image will have changed. I believe that’s how I got the pixel-identical images before with the vertical lines/artifacts when I swapped between CPU and GPU. Doing it again with a Preview before each export, they are different.

But even doing a Preview isn’t a 100% solution, as I still got one example of identical results with CPU+Low and CPU+High in one test. They were different on the second attempt when I started to go through them all again.

1 Like

Check your GPU driver on the NVIDIA website as it seems to be out of date; if it is, update to the latest Studio driver and see if that helps.

Application & Version: Topaz Gigapixel AI Version 5.5.2
Operating System: Windows 7 Version 6.1 (Build 7601: SP 1)
Graphics Hardware: NVIDIA GeForce GTX 980/PCIe/SSE2
OpenGL Driver: 3.3.0 NVIDIA 471.68
CPU RAM: 12279 MB
Video RAM: 4096 MB
Preview Limit: 6361 Pixels

I’d normally not expect this to make a difference, but after a driver update and reboot, it has made some very slight changes to the outputs; it’s not clear if it’s fixed something, or broken it more than before.

Having exported some files previously, it had stopped using the GPU (~18%) during Previews whether or not it was selected. I swapped between GPU and CPU several times and it never uses the GPU, but now it also does not refresh the Preview when you make changes, and there is a disconnect between any Preference changes and the Preview window.

Moving the window position forces the Update button to appear (I have Auto-Update off). But if I abort halfway through and put the window back where it was before (I use the corners of the small image, 95% of which fits in the window anyway @50%, so it’s easy to place it exactly), it will show the previous Preview image for that position, even if it’s GPU now and was CPU before. It was even able to show a Preview of a different corner done before that one, so possibly it can cache several if they’re small. If so, caching might be the reason for several ‘lags’ when things are changed.

I screengrabbed one as GPU/Low, exited and reloaded the program, swapped GPU to CPU, Previewed the same corner, screengrabbed and pixel-peeped, and they’re identical. The Exports are now identical too, when they were obviously different before the driver update. So, is it more broken or less?

The Preview of the 684x828 image using GPU/Low at 50%, Standard, x2, Noise and Sharpen Sliders at 100% took 106 seconds with no GPU usage, rather than the previous 87 seconds where GPU usage was ~18%. CPU was at 100% throughout.

But, switching to CPU/Low (GPU usage was ~2%/Zero) and the Preview took only 94 seconds. So…CPU only was faster than CPU+GPU?

Exporting behaviour is variably consistent. I exported as CPU/Low right after that - hoping that Previewing would get it to recognise that as the ‘current’ state - and GPU usage was again a steady ~17%.

I’d accidentally pressed Stop on the stopwatch app, so I aborted at about 95% complete. Then I exported it again and GPU usage then dropped to ~2% (or zero) and it took 2 minutes and 4 seconds.

I saved this image for later comparisons.

Switched to GPU/Low, did a Preview (107 seconds and no GPU usage) then Export and GPU usage fluctuated between 17% and 8%, so a bit less steady than before, and it took 2 min 5 seconds. Exported it again, identical results and times, so no real difference to the previous CPU with GPU ~17%.

I then compared the images: overlaying the GPU/Low and CPU/Low images and pixel-peeping, they were completely identical, whereas the tests I did yesterday with the previous GPU drivers showed GPU/Low and CPU/Low were different, with CPU noticeably sharper.

So after the driver update, it’s not clear if it’s now using GPU for both GPU and CPU, or for neither.

The updated drivers’ CPU/Low has about the same increased sharpness as the previous drivers’ CPU/Low or GPU/Low - but is considerably noisier - so I’m going to reinstall the older drivers for now, as we have no way of knowing which drivers this was developed/tuned with, and those older drivers worked well with previous versions of GAI.

The ‘Jaggies’ are still there in the Preview windows and on one of the newer drivers’ GPU Previews it did this…

Update: I exited and restarted the program, set it to CPU/Low, did a Preview, then exported, and it did the GPU ~17% thing again; the exported image looked like that blue one above, in a slightly longer 2 mins 14 seconds. It doesn’t seem to like these new drivers much.

Update: It did it again.

Update: and again and again.

It seems that the first time after loading the program the CPU/Low export will use ~17% GPU and that will provide a ‘Blue’ image.

If I cancel the export in the middle - as I did after accidentally stopping my stopwatch - then start again it will not use the GPU and the resulting export image will be fine.

But…I then do a Preview and export again and it goes back to using the GPU ~17%, but this time the results were fine. Pixel identical to the previous export. Definitely going back to the previous drivers. :wink:

Definitely going back because now 5.2.0 doesn’t use the GPU at all with the newer drivers. :frowning:

Update: had a little panic attack as even with the older graphics driver reinstalled, 5.2.0 refused to use the GPU and Previews were taking 65 seconds whether I turned GPU on or off. :frowning:

But selecting ‘Use Recommended Settings’ and it did the little ‘Calibrating’ dance and started using the GPU (@100%), so super fast Previews again. Phew!

Nothing exists like that in 5.5.2., so I’m stuck with whatever it thinks is the right (wrong) thing to do with the GPU. Why was this option removed from later versions?

Yes, the new preview renderer introduced some time ago produces bad edges on certain lines. I reported this several times too, for each version where it appeared.

Yes, I too have the jagged preview issue ever since the 5.5 version was introduced (the new 5.6 beta has it too). I think it relates to Win10 display scaling settings, but SharpenAI and VEAI don’t display this problem. The last beta of DenoiseAI also had the aliased preview, but I never upgraded to the release version to check that.

Same here. Both GigaPixel and Denoise AI suffer from the jagged preview. :neutral_face:

Any news on Gigapixel 5.6? It’s been a long time since 5.5.

I see. Thanks.

No upgrade since April?

Wake up!

Hi all,

A new version has been released. You can view the release thread here.

Thank you!

1 Like