I found that using the GPU is three times as fast, but it doesn’t produce the same level of detail, and the colors of the poppies in my photo are different to a degree that is easily noticeable when the CPU and GPU versions are stacked in Photoshop. A TIFF file showing the differences is here:
My question: is this a bug or expected behavior? While the GPU enlargement is faster, the CPU enlargement is noticeably better, such that the GPU version is of no use to me. I have a GTX 970 with a 2017 driver that is Microsoft hardware compatible.
You have to download the file, open it in Photoshop, zoom to 100% and click the top layer on and off to see the difference. Also, there are several eyedroppers on the photo. You can make sure they are set to Lab color and see the differences in the numbers when the top layer is clicked on and off.
For anyone who wants to look at this, use the download button in the upper right corner of the page. I can see that the CPU image is a little sharper than the GPU one, and I think they should produce the same result. Using Affinity Photo, there is no difference in color or saturation.
The color differences are definitely there. That is why I put the eyedroppers on, to measure the difference in Photoshop. They aren’t earth-shattering, but they were the first thing I noticed when switching the layer on and off. Not sure why Affinity Photo wouldn’t show that. Are you zoomed in to 100%? I don’t think you can see the color differences easily unless you are zoomed all the way in.
You have to have them stacked and click the top layer on and off to see the difference. The color difference is subtle enough that you can’t see it by moving your eye from one image to the other. That is why I put the eyedroppers in. Here is an example of the color difference for color sampler #2:
L* 57 vs 61
a* 70 vs 68
b* 94 vs 91
Using a delta E calculator, that works out to a delta E of 5.4 (a delta E of 1 is roughly the smallest color difference the human eye can see).
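For anyone curious how that 5.4 comes out, here is a minimal sketch using the CIE76 delta E formula (simple Euclidean distance in Lab space). I’m assuming that is the formula the calculator used, and which reading belongs to CPU versus GPU is my guess:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 delta E: Euclidean distance between two Lab colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

cpu = (57, 70, 94)  # L*, a*, b* from one layer at color sampler #2
gpu = (61, 68, 91)  # same sampler point on the other layer
print(round(delta_e_76(cpu, gpu), 1))  # 5.4
```

Note that newer formulas like CIEDE2000 would give a somewhat different number, but for a quick "is this visible?" check CIE76 is enough.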
If you click on one of the images Ron uploaded, you can then click on the open image to switch back and forth, just like turning a layer on and off. Doing that, I can see pixels changing, and it does seem like the CPU version is a bit sharper as Ron mentioned.
I don’t see any color difference, and I don’t know how you can sample the exact same spot with an eyedropper. I do see the sharpness difference, even though it is small. Frankly, I would use the GPU, and if you want additional sharpness, just raise the Remove Blur slider.
I had to go into the Affinity keyboard shortcuts to add a way to turn off a layer. I sampled the two layers with an eyedropper and found a 0.26% difference in red and a 2.4% difference in green using RGB. Using grayscale, there was a difference of 51 vs 53. It is still not visible to me, so I would not be very concerned, but if you contact Topaz support you might pass along this info.
Thanks for checking. I did file a support request. Due to the color change and subtle loss of detail in the GPU version, I will just be using CPU enlargement. The point of the software is to get the best possible enlargement; otherwise I would just use Photoshop upsizing with Preserve Details. While it would be nice to use the faster GPU, my volume is low and I can go have a snack while doing the upsizing.
Though the GTX 970 is, in GPU chip terms, a lower spec than, say, the 1050 Ti, it seems to have the same amount of VRAM at 4 GB.
So, I wonder about the Gigapixel preferences settings:
What GPU memory setting did you use: Low, Medium, or High?
Did you set it to use Max AI models?
The reason I wonder is that in the past some folks have reported that using Max AI models does not produce the best results compared to CPU. And though I don’t recall reading any reports, could the GPU memory usage setting also have an influence?
Thanks! I turned off Max AI and set GPU memory to low or medium, and got slightly BETTER results than CPU. Of note, using this method did not change the colors when compared to Preserve Details 2.0 in Photoshop (it essentially produced a cleaner version of Preserve Details 2.0). There was more subtle shading of colors in this GPU version than in the CPU version, a subtlety that matched Preserve Details 2.0. So I get a better conversion that is also faster.
I then did overlays using layer difference in Photoshop and discovered that the two images with GPU Off were identical, and the two images with GPU On were identical.
The two with GPU Off had more contrast than both the original and the ones with GPU On.
The histograms for the two that had GPU On showed that their darkest blue values were shifted slightly brighter, but otherwise their shapes matched the original.
There was also less micro detail, and some visible horizontal banding on the rock faces in the ones processed with GPU On. The banding is passable in this case because it looks enough like rock layers (though the slope does not match the strata exactly), but an artifact this noticeable might ruin other images.
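The Difference-blend overlay check can also be scripted outside Photoshop. Here is a minimal stdlib-only sketch of what that blend mode does, run on toy pixel data (a real comparison would load the full TIFFs with an image library and do the same per-channel absolute difference):

```python
def difference_blend(img_a, img_b):
    """Photoshop-style Difference blend: per-channel absolute difference.
    Images here are flat lists of (R, G, B) tuples; identical images
    yield all-zero pixels, which is how the "identical layers" test works."""
    return [tuple(abs(a - b) for a, b in zip(pa, pb))
            for pa, pb in zip(img_a, img_b)]

# Toy example: two "renders" of the same two pixels (values made up)
render_a = [(120, 80, 60), (200, 190, 40)]
render_b = [(120, 80, 60), (204, 188, 40)]
diff = difference_blend(render_a, render_b)
print(diff)                                  # [(0, 0, 0), (4, 2, 0)]
print(all(px == (0, 0, 0) for px in diff))   # False -> the renders differ
```

If every pixel of the difference comes out (0, 0, 0), the two layers are identical, which is exactly what I saw for the two GPU-Off runs and the two GPU-On runs.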
Either version is outstandingly better (sharper / more detail) than simply enlarging the original 4x in Photoshop.
I found that for some images, usually scenic pictures, turning Max AI quality off worked best. However, for portraits of people, having it on produced better results. Both observations were with the GPU enabled. It seems odd that setting GPU memory to Medium ran much faster than the High memory setting.