What has the greatest effect on Sharpen AI speed with NVIDIA-based cards?

I currently run an i7-3770 @ 3.4 GHz with 16 GB of RAM and an EVGA GTX 750 Ti SC 2 GB. It is reasonably fast with DeNoise AI and Gigapixel AI, but Sharpen AI is a slog on the higher-resolution 50-megapixel 5DS R images, and to a lesser extent on my 36-megapixel D800 images. A photo can take anywhere from 6 to 11 minutes, but is usually 7 or 8. The GPU is faster than the CPU and is usually 85-90% utilized. So here is the question.

Is it GPU clock speed, CUDA core count, dedicated graphics RAM, memory speed, interface bandwidth, or some other spec that directly affects processing speed in Sharpen AI? I would like to upgrade the graphics card, but I am unsure which spec will translate into speeding up Sharpen AI.
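One way to narrow this down before buying anything is to watch what the current card is doing while Sharpen AI runs. Here is a minimal sketch that polls nvidia-smi (it ships with the NVIDIA driver); the one-second interval and the log format are my own choices, not anything Topaz provides:

```python
import subprocess
import time

# Query fields supported by nvidia-smi, sampled while Sharpen AI is
# processing. If memory.used sits at the 2 GB ceiling while
# utilization.gpu is below 100%, VRAM is the likely bottleneck; if
# utilization.gpu is pegged with memory to spare, it's compute.
QUERY = "utilization.gpu,memory.used,memory.total,clocks.sm"

def sample():
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    util, mem_used, mem_total, sm_clock = (int(x) for x in out.split(", "))
    return util, mem_used, mem_total, sm_clock

if __name__ == "__main__":
    while True:
        util, used, total, clock = sample()
        print(f"GPU {util:3d}%  VRAM {used}/{total} MiB  SM {clock} MHz")
        time.sleep(1)
```

If the SM clock also drops during a run, that points at the power management setting discussed below rather than the hardware itself.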

Also, is there a benchmark site that uses Sharpen AI?

I found a setting in the NVIDIA Control Panel that had a major impact on my setup: NVIDIA Control Panel > Manage 3D settings > Power management mode > change to "Prefer maximum performance" (mine had been on the "power" setting). I then updated the driver (it was only a month old) and ran the "optimize" feature for apps and games in GeForce Experience. From what I had read I was expecting only a handful of seconds of improvement, but it more than cut the time in half. On two photos the times were:

ORIGINAL SETUP / CONTROL PANEL SETTING / OPTIMIZED BY GE
Photo 1: 4 min 36 sec / 2 min 22 sec / 1 min 59 sec
Photo 2: 5 min 56 sec / 2 min 55 sec / 2 min 17 sec
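For anyone who wants the speedup factors from those numbers, here is a small sketch that parses the times above and prints the ratios; the helper names are mine, not anything from Sharpen AI:

```python
# Speedup factors for the two test photos, computed from the table above.
def to_seconds(t: str) -> int:
    """Parse an 'M min S sec' string into total seconds."""
    parts = t.split()
    return int(parts[0]) * 60 + int(parts[2])

runs = {
    "photo 1": ("4 min 36 sec", "2 min 22 sec", "1 min 59 sec"),
    "photo 2": ("5 min 56 sec", "2 min 55 sec", "2 min 17 sec"),
}

for photo, (orig, panel, ge) in runs.items():
    base = to_seconds(orig)
    print(f"{photo}: control panel {base / to_seconds(panel):.2f}x, "
          f"+ GE optimize {base / to_seconds(ge):.2f}x")
# photo 1: control panel 1.94x, + GE optimize 2.32x
# photo 2: control panel 2.03x, + GE optimize 2.60x
```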


I never did find out whether it was RAM, memory bandwidth, CUDA cores, etc., so I took the shotgun approach and upgraded to a GTX 1080 Mini. The new times are 15 seconds on the first photo and 17 seconds on the second.
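Out of curiosity, NVIDIA's published reference specs for the two cards make it possible to see which ratio comes closest to explaining that jump. A rough back-of-the-envelope sketch, assuming these are the same two photos as above (baseline = the GE-optimized times) and that any single spec ratio can stand in for the whole story:

```python
# Reference specs (NVIDIA published figures) for the two cards.
# spec: (GTX 750 Ti, GTX 1080)
specs = {
    "CUDA cores":              (640, 2560),
    "memory bandwidth (GB/s)": (86.4, 320.0),
    "FP32 TFLOPS (boost)":     (1.4, 8.9),
    "VRAM (GB)":               (2, 8),
}

# Observed: ~119 s down to 15 s, and ~137 s down to 17 s, after the upgrade.
observed = [119 / 15, 137 / 17]
print(f"observed speedup: {min(observed):.1f}x - {max(observed):.1f}x")

for name, (old, new) in specs.items():
    print(f"{name:25s} ratio {new / old:.1f}x")
# The ~8x observed speedup is closer to the ~6.4x FP32 ratio than to the
# 4x CUDA-core or 3.7x bandwidth ratios, which suggests raw compute
# (cores x clock) matters most here, helped along by the newer architecture.
```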