I’m on deadline and desperately need a 4.5x enlargement of a heavily compressed, low-res JPEG original.
What I see in the preview is exactly what I need, but none of the outputs match what’s in the preview window. I’ve gone through all the options and every variation I can think of, and not a single one outputs like the preview.
Here’s a screenshot showing the preview window in Giga vs. the output TIF opened in Ps… it’s a HUGE difference!
Thanks Don, I did actually try the Cloud version earlier this afternoon, and the results were basically the same, except that version converted my sRGB to Adobe RGB and added about 20% more saturation to the image. There’s also no NONE setting for either Noise Suppression or Remove Blur, and I don’t want either of them in this case.
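If you want to put a hard number on that saturation shift rather than eyeballing it, here’s a rough sketch (plain Python standard library, nothing Gigapixel-specific; the pixel values below are made-up stand-ins, not data from the actual images) that compares mean HSV saturation between two sets of RGB pixels:

```python
import colorsys

def mean_saturation(pixels):
    """Average HSV saturation (0..1) over a list of (r, g, b) tuples with 0-255 channels."""
    total = 0.0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        total += s
    return total / len(pixels)

# Hypothetical stand-ins for sampled pixels from the original and the Cloud output:
original = [(120, 140, 180), (90, 110, 130), (200, 180, 150)]
cloud    = [(105, 135, 195), (75, 105, 140), (215, 180, 135)]  # visibly punchier colors

shift = (mean_saturation(cloud) / mean_saturation(original) - 1) * 100
print(f"saturation shift: {shift:+.1f}%")
```

In practice you’d feed this sampled pixel data from both files (e.g. exported from Photoshop); a positive shift of roughly the size you’re seeing would confirm the Cloud version really is boosting saturation rather than it being a display/profile illusion.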
I had tried staged scaling before, and while the preview was a little more accurate, the end result was nearly identical to the output I didn’t want. The 4x+ enlargement looks remarkably good in the preview, and that’s the output I need for this image.
Looks like CPU rendering is the winner! The output is much closer to the preview and the results are 100% better than GPU rendering. Is this working as intended? Shouldn’t the output be identical for both renders? Why isn’t there a warning in the prefs about GPU not matching the preview and having worse quality?
Thank you for your time and advice.
The issue with GPUs is that there are NVIDIA, Intel, and AMD cards, NVIDIA splits its drivers into Studio and Game Ready versions, and integrated GPUs use system RAM, so some people do have rendering issues, especially with older and less powerful GPUs. The processing is also completely different: GPUs don’t have to manage I/O tasks, as they only execute parallel computational tasks…
There’s no exact science to it, but rendering is always a lot faster with a GPU, and in some cases the CPU can do a better job precisely because it’s slower, working through the job sequentially and interrupt-driven by the system.
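One concrete, general reason CPU and GPU output can legitimately differ (a generic floating-point illustration, not anything documented about Gigapixel internals): a GPU combines terms of a sum in parallel, so the additions happen in a different order than a sequential CPU loop, and floating-point addition is not associative:

```python
# Floating-point addition is not associative, so the order in which a
# parallel (GPU-style) reduction combines terms can change the result
# compared with a sequential (CPU-style) left-to-right loop.

values = [0.1, 0.2, 0.3]

sequential = (values[0] + values[1]) + values[2]   # left-to-right, CPU-style
tree       = values[0] + (values[1] + values[2])   # pairwise, GPU-style

print(sequential)           # 0.6000000000000001
print(tree)                 # 0.6
print(sequential == tree)   # False
```

The difference here is a single bit in the last place, but in a deep neural-net pipeline millions of such reorderings accumulate, which is why bit-identical CPU/GPU output is generally not guaranteed — though it wouldn’t explain a visually huge quality gap like the one reported above.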
I note that you have “Use maximum AI models” enabled. On the NVIDIA discussion thread, I think I recall some posters (it may have been on another AIG thread?) finding that Gigapixel gives better output if you set “Use maximum AI models” to “No”.
Odd, because when I use the Cloud AI version, I actually get less saturation. Worse, the colors are all off. I’ve attached a 6x conversion (low noise reduction, low blur reduction); notice how the blue sky turns purple and the foliage is pretty bland.