6 Minutes per Photo Processing Time

Trying Topaz AI for the first time. I have used the earlier standalone DeNoise and Sharpen apps for some time. I have a set of 170 images, all shot for a gigapixel wall mural. I am running them through Topaz AI to remove noise in the darker areas and sharpen them; I have Enhance Resolution on but set to 1x, so I am NOT resizing the images. I have “apply same settings to all” set to true.
The images were shot with a Canon R5, a 45-megapixel camera. Using Camera Raw, I converted them to TIF format and stored them on my scratch SSD drive. It is taking Topaz AI an average of 6 minutes per image to process.
I am running a PC with an i7-6800K CPU @ 3.40GHz, 128 GB RAM, and a GeForce GTX 970 GPU. Resource Monitor shows Topaz AI’s average CPU usage at about 2.38%. I have Topaz AI set to Auto for Processor/GPU usage.
My question is, why is it taking so long to process each image?

Do you use the Strong Denoise Model to denoise the image?

No, I did not select the Strong setting. However, I did move the slider up a bit from what Autopilot originally set it to. And as I mentioned, I set “apply same settings to all”.

I think the real challenge is that your CPU and GPU are just showing their age a little bit. I’m not trying to just toss that grenade out there and run away like a jerk. I do think, though, that an 8-year-old GPU and a 6-year-old CPU, and maybe DDR3 (???) RAM, create a real challenge, especially when working with 45-megapixel images and applying machine-learning models.

I took my laptop, which is a Mac (so yes, this is way out of the realm of an apples-to-apples comparison, no pun intended) and one of the newer systems Apple released last year, and loaded up TPAI with a whole bunch of 50-megapixel photos I shot on my Sony a1. I set it up to do Denoise as well as Sharpening on the photos.


As I timed this test and watched the system resources being used, it converted from RAW (Sony .ARW format) to DNG, then ran Denoise, then Sharpen, then saved. It was using 70% of the CPU cores (10 cores), 90% of the GPU cores (32 cores), and 80% of the Neural Engine cores (16 cores), and it processed a photo every 15 seconds on average. Pretty freaking fast, relatively speaking… BUT there is a reason for it, I believe.

When I read that it was taking 6 minutes per image on your system, I was totally dumbfounded as well. That seemed impossibly slow. However, the new CPUs and GPUs, as well as Apple’s Neural Engine and its equivalents in recent Intel and AMD CPUs and GPUs, combined with ultra-fast memory, mean that for tasks like this, more “traditional” CPUs and GPUs really struggle compared to hardware from the past 2 or 3 years that has become highly optimized for executing machine-learning models.
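To put that gap in rough numbers (just back-of-the-envelope arithmetic on the per-image times reported in this thread, not a real benchmark):

```python
# Rough throughput comparison using the per-image times reported in this thread.
old_secs_per_image = 6 * 60   # ~6 minutes per 45 MP image on the X99 / GTX 970 system
new_secs_per_image = 15       # ~15 seconds per 50 MP image on the recent Mac laptop
batch = 170                   # images in the wall-mural set

ratio = old_secs_per_image / new_secs_per_image
print(f"Speed gap: roughly {ratio:.0f}x")
print(f"Full batch of {batch}: {batch * old_secs_per_image / 3600:.1f} h "
      f"vs {batch * new_secs_per_image / 60:.1f} min")
```

That is a roughly 24x difference, which turns the full mural set from an all-day job into under an hour.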

What I’d recommend is heading down to one of the smaller computer stores, where they are more inclined to help out with unique requests. Ask if you can install the TPAI demo on one of their demo machines and convert a couple of images to test the speed of a new CPU and GPU.

I’m sure they’ll say yes. Take a look and see how fast or slow it does the work, and then compare the specs of that machine. I truly wish I had some spare Windows PCs at the moment that I could run benchmarks on to show exactly how speed is affected across a few different setups, but I don’t right now.

If you do want to explore this more, maybe post a link to a couple of your photos and ask people on here to test them on their machines, and let’s get some data back that shows fair and accurate speed comparisons across a few different CPUs and GPUs. Now, there could certainly be some setting on your machine holding you back from better performance as well, but I think getting comparison performance data from a few other machines might also help show how much hardware is part of this equation versus some driver or software config.
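If a few people do run this, the comparison is only fair if everyone times things the same way. Here is a minimal timing-harness sketch in Python; the command name and file names are hypothetical placeholders, since I’m not assuming any particular Topaz CLI. Substitute whatever batch invocation your install actually supports:

```python
import statistics
import subprocess
import time

def time_command(cmd):
    """Run one external command and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

def summarize(times):
    """Return (mean, fastest, slowest) for a list of per-image timings."""
    return statistics.mean(times), min(times), max(times)

if __name__ == "__main__":
    # Hypothetical file names and command; replace with your real invocation.
    images = ["mural_001.tif", "mural_002.tif"]
    times = [time_command(["your-topaz-command", img]) for img in images]
    mean, fastest, slowest = summarize(times)
    print(f"avg {mean:.1f}s  min {fastest:.1f}s  max {slowest:.1f}s")
```

Posting the mean plus the min/max spread (rather than a single run) makes results from different machines much easier to compare.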

Good luck @johnfr ! If I can find a few PCs to run this test on I certainly will and will share the results back here when I can.



I have an i7 with a GTX 1050 and don’t have any issues. Try switching your AI Processor to the GTX 970 (I am assuming you have 4 GB or more of VRAM) and see if that makes a difference. Note the GTX 970 has GDDR5 memory and a 256-bit bus width, so it will not be as fast as an RTX 3060, but it should be faster than what you are experiencing.


I think so too.

The i7-6800K is definitely capable of handling everything well; it has 4 memory channels and DDR4.

The GTX 970 is a pure FP32 GPU, which is why it has to run everything in FP32; a GPU with FP16 support would be able to do this about twice as fast.

Unfortunately, it only has 4GB of VRAM.

It may be that when he uses Enhance with either the Standard or High Fidelity model, data gets swapped out to system memory, and the PCIe bus becomes the bottleneck.

My Threadripper 3960X also becomes slower over time when I process a 200 MP image with Standard or High Fidelity.

If he got a 3060, he would be more than twice as fast.

In my video, he can see how a comparable CPU performs with a GPU comparable to the 3060.


Thanks for all your comments. I should have mentioned that the system has an ASRock X99 Taichi motherboard. The i7-6800K only supports 28 PCIe lanes. I learned that a year or so after I built the system, when I wanted to add an SSD. I regret not purchasing a better CPU at the time that had 40 lanes. Plus, my present motherboard only supports PCIe 3.0, so upgrading the GPU probably would not buy me much improvement in performance.

I have been sitting with my finger on the trigger, considering rebuilding the system. In your collective opinion, would the GeForce RTX 3060 12GB yield much improvement when combined with the parts below?

I am looking at using:

Intel Core i9-13900KF

G.SKILL Flare X5 64GB (2 x 32GB) 288-Pin DDR5 5600 (PC5 44800) Desktop Memory, Model F5-5600J3636D32GX2-FX5

ASUS ProArt Z790-Creator WiFi 6E LGA 1700 (Intel 12th & 13th Gen) ATX Content Creator Motherboard (PCIe 5.0, DDR5)

ASUS Dual GeForce RTX 3060 12GB GDDR6 PCI Express 4.0 Video Card, DUAL-RTX3060-O12G-V2


If you consider the cost-benefit factor, the system is very good.

Most photo apps have a CPU bottleneck rather than a GPU bottleneck.

The 12 GB of VRAM can be put to good use, as the apps can require a lot of memory.

64 GB RAM is also enough for most work.

You definitely won’t regret it.

You won’t really need the KF; the plain K will also do.
