Gigapixel 4.4.0 CPU vs GPU Different Results

I found that using the GPU is three times as fast, but it doesn’t produce the same level of detail, and the colors of the poppies in my photo are different to a degree that is easily noticeable when the CPU and GPU versions are stacked in Photoshop. A TIFF file showing the differences is here:

Dropbox - File Deleted?

My question: is this a bug or expected behavior? While the GPU enlargement is faster, the CPU enlargement is noticeably better, to the point that the GPU version is of no use to me. I have a GTX 970 with a driver from 2017 that is Microsoft hardware compatible.

You can’t see anything from that picture…
Why not upload those two pictures here?

You have to download the file, open it in Photoshop, zoom to 100% and click the top layer on and off to see the difference. Also, there are several eyedroppers on the photo. You can make sure they are set to Lab color and see the differences in the numbers when the top layer is clicked on and off.

For anyone who wants to look at this, use the download button in the upper right corner of the page. I can see that the CPU image is a little sharper than the GPU one, and I think they should produce the same result. Using Affinity Photo, there is no difference in color or saturation.

The color differences are definitely there. That is why I put the eyedroppers on: to measure the difference in Photoshop. They aren’t earth-shattering, but they were the first thing I noticed when switching the layer on and off. Not sure why Affinity Photo wouldn’t show that. Are you zoomed in to 100%? I don’t think you can see the color differences easily unless you are zoomed all the way in.

I was at 100%. Here are the screen captures of each at 100%:

CPU:

GPU:

You have to have them stacked and click the top layer on and off to see the difference. The color difference is subtle enough that you can’t see it by moving your eye from one image to the other. That is why I put the eyedroppers in. Here is an example of the color difference for color sampler #2:

L* 57 61
a* 70 68
b* 94 91

Using a delta E calculator, that works out to a delta E of 5.4 (a delta E of 1 is roughly the smallest difference in color the human eye can see).
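For anyone who wants to reproduce that number, here is a minimal sketch of the CIE76 formula in Python, taking the two columns above as the readings for the two layers at color sampler #2:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 delta E: the Euclidean distance between two Lab colors."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Readings from color sampler #2, one per layer
print(round(delta_e_76((57, 70, 94), (61, 68, 91)), 1))  # 5.4
```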

If you click on one of the images Ron uploaded, you can then click on the open image to switch back and forth, just like turning a layer on and off. Doing that, I can see pixels changing, and it does seem like the CPU version is a bit sharper as Ron mentioned.

Sorry, I don’t use Adobe programs, so no Photoshop…

I don’t see any color difference, and I don’t know how you can sample the exact same spot with an eyedropper. I understand the sharpness difference, even though it is small. Frankly, I would use the GPU, and if you want additional sharpness just raise the Remove Blur slider.

You sample the exact same spot by putting an eyedropper on the stacked images. Since the images are exactly aligned, when you click off the top layer you get the reading for the underlying layer.
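If you would rather check this outside Photoshop, here is a minimal sketch that reads the same pixel from the two enlargements directly; the file names and coordinates are placeholders, and it only works because the two outputs are exactly aligned and the same size:

```python
from PIL import Image

# Placeholder file names for the two enlargements (assuming 8-bit RGB files)
cpu = Image.open("poppies_cpu.tif").convert("RGB")
gpu = Image.open("poppies_gpu.tif").convert("RGB")

x, y = 1200, 800  # placeholder coordinates of the eyedropper spot
print("CPU layer:", cpu.getpixel((x, y)))
print("GPU layer:", gpu.getpixel((x, y)))
```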

I had to go into the Affinity keyboard shortcuts to add a way to turn a layer off. So I sampled the two layers with an eyedropper and found a 0.26% difference in red and a 2.4% difference in green using RGB. In grayscale there was a difference of 51 vs 53. It is still not visible to me, so I would not be very concerned, but if you contact Topaz support you might pass along this info.

Thanks for checking. I did file a support request. Due to the color change and subtle loss of detail with the GPU version, I will just be using CPU enlargement. The point of the software is to get the best possible enlargement; otherwise I would just use Photoshop upsizing with Preserve Details. While it would be nice to use the faster GPU, my volume is low and I can go have a snack while doing the upsizing.

Just a quick question:

Though the GTX 970 is, in GPU chip terms, a lower spec than, say, the 1050 Ti, it seems to have the same amount of VRAM at 4 GB.

So, I wonder about the Gigapixel preferences settings:
What GPU memory setting did you use: Low, Medium, or High?
Did you set it to use Max AI models?

The reason I wonder is that in the past some folks have reported that using Max AI models does not produce the best results compared to the CPU. And though I don’t recall reading any reports, could the GPU memory usage setting have an influence?

Thanks! I turned off Max AI and used GPU memory Low or Medium and got slightly BETTER results than with the CPU. Of note, using this method did not change the colors when compared to Preserve Details 2.0 in Photoshop (it essentially produced a cleaner version of Preserve Details 2.0). There was more subtle shading of colors in this GPU version than in the CPU version, a subtlety that matched Preserve Details 2.0. So I get a better conversion that is also faster.

The top image has more vibrance/saturation and, perhaps because of that, also seems to have more contrast/separation from the background…

I just ran some speed and quality tests on 4.4.1. GPU memory was set to High, and Max AI was turned on.

GPU______CPU_____Time
Off_______Off______104 sec
Off_______On_______19.5 sec
On_______Off_______13.5 sec
On_______On_______13.0 sec

I then did overlays using layer difference in Photoshop and discovered that the two images with GPU Off were identical to each other, and the two images with GPU On were identical to each other (a scripted version of this check is sketched below).

The two with GPU Off had more contrast than both the original and the ones with GPU On.

The histograms for the two that had GPU On showed that their darkest blue values were shifted slightly brighter, but otherwise their shapes matched the original.

There was also less micro detail, and some visible horizontal banding on the rock faces in the images processed with GPU On. The banding is passable in this case because it looks enough like rock layers (though the slope does not match the strata exactly), but an artifact this noticeable might ruin another image.

Either version is outstandingly better (sharper / more detail) than simply enlarging the original 4x in Photoshop.
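For anyone who wants to reproduce the layer-difference and histogram checks above without Photoshop, here is a minimal sketch with Pillow/NumPy; the file names are placeholders, not the actual test files:

```python
import numpy as np
from PIL import Image

def load_rgb(path):
    """Load an image as an int16 RGB array (int16 avoids uint8 wraparound when subtracting)."""
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)

# Placeholder file names for two runs made with GPU Off
run_a = load_rgb("gpu_off_cpu_on.tif")
run_b = load_rgb("gpu_off_cpu_off.tif")

# Equivalent of Photoshop's "Difference" blend: a max of 0 means the two runs are pixel-identical
print("max per-pixel difference:", np.abs(run_a - run_b).max())

# Blue-channel histograms for the original and a GPU On run (placeholder names);
# the shift showed up at the dark end of the blue channel
for name, path in [("original", "original.jpg"), ("GPU On", "gpu_on.tif")]:
    hist = np.bincount(load_rgb(path)[..., 2].ravel(), minlength=256)
    print(name, "darkest 16 blue bins:", hist[:16])
```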

GPU Processing Off

GPU Processing On

If you click on one of the images to make it full size, you can use the left/right arrows to quickly compare and contrast the two.

Here are a few more results to supplement the previous timings table.

GPU______CPU_____Max_AI_____Memory_____Time
Off_______On_______On__________N/A_________19.5 sec
On_______Off_______On_________High_________13.5 sec
On_______On_______On_________High_________13.0 sec
Off_______On_______Off_________N/A__________10.5 sec
On_______On_______Off_________High__________7.5 sec
On_______On_______Off_________Medium_______5.9 sec

Switching Max AI Off cured the banding issue when the GPU was turned on.

For both GPU and CPU processing, Max AI Off produced an excellent image in about half the time.

I found that in some images turning Max AI quality off worked best, usually for scenic pictures. However, for portraits of people, having it on produced better results. Both observations are for GPU processing. It seems odd that setting GPU memory to Medium ran much faster than using the High memory setting.

It seems odd that setting GPU memory to Medium ran much faster than using the High memory setting.

I agree, but I’ve checked it several times and that’s how Gigapixel AI seems to do it; less memory is faster.

I ran some speed checks on the new version of Sharpen AI that now uses OpenVINO and that behaved the way one would expect.

3456 x 4608 JPG file.
GTX 980 GPU
i7-7700K CPU

Method_____Speed
GPU_Low___23.1 sec
CPU_On____19.8 sec
GPU_Med___17.8 sec
GPU_High___13.5 sec

Makes you wonder whether the GPU memory designations were mislabeled in either Gigapixel AI or Sharpen AI.

The two programs do handle things a bit differently from each other. Gigapixel AI lets you select both CPU and GPU at the same time. In Sharpen AI it’s either one or the other, but not both.

In both programs the auto selection of the optimum processing technique selects the CPU even though it’s not the fastest method on my system.

While GPU processing produces a different result (at pixel level) than using the CPU in Gigapixel AI, in Sharpen AI the results are identical.