Benchmark Denoise AI

Hi,
I’d be interested to know how fast your GPUs are when denoising with Denoise AI.

My Quadro RTX 5000 needs about 8 seconds for the 18 MP images from my Canon 1DX, regardless of the settings.

What does it look like for you? Just post the resolution and the time that Denoise AI displays after denoising.

I would be especially interested in the 5700 XT or Radeon Pro W5700, although they have very little memory.

Kind Regards
Thomas D.

Alienware Aurora R8, 9th Gen Intel® Core™ i9-9900K (8-core/16-thread, 16 MB cache, overclocked up to 4.7 GHz on all cores)
NVIDIA® GeForce RTX™ 2080 Ti

A 24.1 MB image. Getting a preview takes just under 1 second; processing and saving takes 15 seconds.

Thx.

I would need the pixel dimensions; the file size alone tells me nothing.

@TPX I upscaled a 16 MP photo to 18 MP and used that to test in Denoise AI 2.2.4. The preview update took less than 2 seconds (hard to time); saving as a JPEG took 14 seconds. I had my browser running 25 tabs (no video) in the background, but I don’t know what effect that had. I used the auto setting in the Denoise mode.

I’m on a Windows 10 PC with a Sapphire Pulse RX 5600 XT GPU (6 GB). It has the updated fast BIOS.

6000 × 4000

Yeah, thanks. 24 MB is, in theory, about 24 MP for a raw file at ISO 100.
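(Rough sanity check, assuming a 14-bit raw with roughly 2:1 lossless compression at base ISO: 24,000,000 pixels × 14 bit ≈ 42 MB uncompressed, which compresses to around 21–24 MB on disk. Hence the rule of thumb of roughly 1 MB per MP.)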

Thank you. The browser has very little impact. Radeon cards since the Radeon HD 7000 series (GCN architecture) have been very effective at handling async compute through their ACE (Asynchronous Compute Engine) units: they can handle one graphics task and seven compute tasks simultaneously. Your Navi-architecture GPU can handle even more with little slowdown. NVIDIA GPUs have handled async compute reasonably well since Turing, and Ampere (due this August or September) should handle it very well.
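To make that concrete, here is a minimal sketch of async compute using CUDA streams on the NVIDIA side. This is my own illustration, not how Denoise AI is implemented; the `scale` kernel and the buffer sizes are made up for the demo. Two independent kernels go into separate streams, so the GPU scheduler is free to overlap them when compute queues are available.

```cuda
// Async-compute sketch: two independent kernels in separate CUDA
// streams, so the GPU may run them concurrently when it has free
// compute queues. Purely illustrative; unrelated to Denoise AI.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;               // 1M floats per buffer
    float *a = nullptr, *b = nullptr;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float)); // initialize device memory
    cudaMemset(b, 0, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Independent work submitted to different streams: the driver is
    // free to execute these two kernels concurrently.
    scale<<<(n + 255) / 256, 256, 0, s1>>>(a, n, 2.0f);
    scale<<<(n + 255) / 256, 256, 0, s2>>>(b, n, 0.5f);

    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    printf("both streams done\n");
    return 0;
}
```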

Using a Mac Pro 5,1 with 8-core Xeons, 24 GB RAM, and an AMD RX 580 with 8 GB VRAM.

Olympus PEN-F image, ISO 12800, 5184 × 3888, 121.1 MB TIFF

Preview = 3 secs
Denoise to TIFF file = 37 secs