What kind of GPU is the key to speeding up Gigapixel AI?

The texture rate of the GTX 1080 Ti (stock) is 354.4 GTexel/s;
for the RTX 2070 Super (OC’d card) it is 304.8 GTexel/s.
So both texture rate and pixel rate are better on the GTX 1080 Ti.

The RTX 2070 Super ‘only’ has higher base, boost and memory clocks:
Base: 1605 MHz vs 1481 MHz
Boost: 1905 MHz vs 1582 MHz
Memory clock: 1750 MHz (14000 MHz effective) vs 1376 MHz (11008 MHz effective)
Note these numbers are for the OC’d RTX and a stock GTX.
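
For what it’s worth, the quoted texture rates are just boost clock × TMU count, which is why the 1080 Ti comes out ahead despite its lower clocks: it has 224 TMUs against the 2070 Super’s 160 (counts taken from the public spec sheets). A quick Python sanity check:

```python
# Texture fill rate = TMU count x clock; MHz / 1000 keeps the result in GTexel/s.
# TMU counts and boost clocks below are from public spec sheets.
cards = {
    "GTX 1080 Ti (stock)":   (224, 1582),  # (TMUs, boost MHz)
    "RTX 2070 Super (OC'd)": (160, 1905),
}

for name, (tmus, clock_mhz) in cards.items():
    print(f"{name}: {tmus * clock_mhz / 1000:.1f} GTexel/s")
# -> 354.4 and 304.8, matching the figures above
```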

I found this info in the so-called MRender benchmark (the RTX 2070 Super is 49% faster at multi-rendering).

I see from your web link that the MRender benchmark tests Direct3D, but Topaz’s AI software is based on OpenGL performance.
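
As a side note, anyone who wants to check which OpenGL implementation their card actually exposes can do so with a minimal Python sketch like this (assuming the glfw and PyOpenGL packages are installed):

```python
# Print the OpenGL vendor/renderer/version the driver exposes.
# Requires: pip install glfw PyOpenGL
import glfw
from OpenGL.GL import glGetString, GL_VENDOR, GL_RENDERER, GL_VERSION

if not glfw.init():
    raise RuntimeError("could not initialize GLFW")
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)  # context only, no visible window
window = glfw.create_window(64, 64, "gl-probe", None, None)
glfw.make_context_current(window)

print("Vendor:  ", glGetString(GL_VENDOR).decode())
print("Renderer:", glGetString(GL_RENDERER).decode())
print("Version: ", glGetString(GL_VERSION).decode())

glfw.terminate()
```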


That’s what happens when noobs try to participate in technical discussions; my apologies.

Well, I bought an RTX 2080 Ti and it works like a charm.
Denoising and sharpening a 45+ MB Z7 NEF is processed in the blink of an eye.


Could you share your 45MB Z7 NEF and a screenshot of the software settings you used? I’ll test the speed as well. Thank you very much!

@AiDon By the way, if the key to speed is a Topaz trade secret and they can’t tell customers directly, perhaps they could bundle a small benchmarking tool with the software, so that the many users out there could conveniently measure and rank their video cards’ performance. I think it would benefit both Topaz and its users!

I don’t think there is anything proprietary about this. Speed tests are done on many websites, but you have to remember the following:

  • There is an OS dependency
  • There are three different types of GPUs and many different vRAM combinations
  • Some Topaz products test CPU (OpenVINO) processing vs GPU and optimize for the fastest choice.

The primary minimum and recommended requirements are noted in the technical specifications on the Topaz Labs website.

As for utilities that give a ‘score’: where they exist, they typically give a gaming score. There are GPU utilities such as GPU Caps Viewer that will tell you all about your GPU.
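
On NVIDIA cards you can also pull the basics from the command line; for example, a small Python wrapper around nvidia-smi, which ships with the NVIDIA driver:

```python
# Dump GPU name, driver version and total vRAM via nvidia-smi.
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,driver_version,memory.total",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "GeForce GTX 1080 Ti, 418.81, 11264 MiB"
```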

Dear AiDon, I think I understand you, more or less!

No offense intended, but… even when the conditions are clear, how can we make the best choice for our budget and needs?

For example, when the operating system is Win10 x64 and the CPU is an Intel 9900K, Topaz does not know in advance what performance the user is after or how much money they will invest, even though it gives minimum and recommended hardware configurations.

You know, there are some things AI can’t guess, like our human purchase decisions: some people will choose the recommended configuration, and some will choose a 2080 Ti. When people’s budget is limited, they will wonder which of the AMD RX 5500 and the NVIDIA 2060 Super is faster for Topaz; when the budget is ample, they will wonder which of the 2080 Ti and the Radeon VII runs faster.

For Topaz products, if the statement “GPU processing is more efficient than CPU” is true, then for an application like Topaz Sharpen AI a test tool could produce a performance ranking of the video cards on the market, so that we could easily and accurately select the most appropriate hardware according to our budget. This is just a small suggestion, not fully thought through; thank you for your patience.

Don’t worry, I am not offended.

If you wish Topaz to create a tool like that, you would need to put in a Support request, but the requirements on the website are quite specific. For example, these are for Gigapixel AI, and they clearly state the optimal requirements:

I see. Thank you very much for your guidance. I’ll try it!

I would like to see Topaz use Gigapixel and one example picture at 4X and test a list of GPUs: record the process times and post them on the website. The list would range from the Radeon RX 580 and NVIDIA GTX 1060 up to the GeForce RTX 2070 Super and the AMD RX 5700 XT (all with 8GB of VRAM). That would be a list of eight to ten cards and would solve the issue.


I don’t want to offend anybody either, but in all honesty I think these requirements are only correct (while neglecting which CPU) for TL used as a standalone application, not when you use TL as a plugin in PS or LR.
On top of this, looking at the requirements, I think they are too vague.

What I, and I’m sure a lot of folks, would like to see is 3 or even 4 listings of required hardware, including:
CPU, amount of DRAM, GPU, amount and kind of VRAM, screen resolution, and a timing for a 50MB file.
Done for ‘minimal’, ‘recommended’, ‘optimal’ and, let’s say, ‘superb’, this could serve as an indication for people (like I was) looking for info on the required hardware and what to expect from their current setup.

You told me before a GTX 1080 Ti (11GB GDDR5X VRAM) would best the CPU-plus-OpenVINO ‘engine’, but that’s quite a lot more card than ‘a dedicated GPU with 6+ GB RAM’.

Sure, this could only be done with the same file, comparing the results for that very file with the same settings.
Like I said, a (revealing) indication of what to expect from one’s setup, no more, no less.

BTW, these data could be collected by providing the forum members with the same image and having them benchmark it with provided settings. (Just an idea, LOL.)


The short of it is: the bigger and badder, the better. Gigapixel performs trillions of calculations when upscaling an image, and larger images can easily get into the hundreds of trillions. So, as of now, there is no real way around it: the bigger and beefier your card is, the faster Gigapixel and all of our AI products will run.
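
To give a feel for where “hundreds of trillions” comes from, here is a back-of-envelope sketch; the ops-per-pixel figure is purely illustrative, not the actual cost of Topaz’s networks:

```python
# Rough scale estimate for a 4x upscale of a 45-megapixel frame.
# ops_per_out_pixel is a made-up illustrative figure, not Topaz's real model cost.
src_pixels = 45e6        # e.g. a Nikon Z7 frame
scale = 4                # 4x per side -> 16x the pixels
ops_per_out_pixel = 5e5  # hypothetical network cost per output pixel

total_ops = src_pixels * scale**2 * ops_per_out_pixel
print(f"{total_ops:.1e} operations")  # ~3.6e+14, i.e. hundreds of trillions
```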


I believe that was a choice between the 1080 and the 2070, but you subsequently purchased the 2080, which is the 1080’s replacement, as I said to you.

The recommendation was based on the specifications of the two second-hand items you were looking at …


Sorry, I misunderstood this; somehow I got the impression this GTX 1080 Ti would be the first card able to outperform the i9-9900K with OpenVINO.
I stand corrected, my apologies!


According to the techs, the 1080 was able to outperform the CPU/OpenVINO combination. For example, I have a 1050/4GB, but my i7/HD630/OpenVINO outperforms it by a long way.

CPU processing will catch up with GPUs, but you still need a GPU, or WARP (Windows’ software rasterizer), to render.

Okay AiDon, thank you for the heads-up.

Regarding ‘our’ questions about ‘listings and timings’, could we have a sticky, or better yet two (one for Mac, one for PC), in here where we could post our setup plus the related timings for the various TL apps?
It would of course be mandatory to use the same TIF; is it possible to post it in here?
That way people willing to add their results can download it and process it at given, again mandatory, settings, using the app standalone (not as a plugin, and running the app directly after boot).
The listings should, I guess, consist of:
Setup: OS plus graphics driver, which CPU/IGP, amount of DRAM, GPU and amount of VRAM
And:
Which engine is chosen by the TL app: discrete GPU, or CPU/IGP with OpenVINO.

Something like this for example:

Setup: PC
OS: Windows 10 Pro
Graphics driver: R418.U1
CPU: i9-9900K / UHD630 IGP*
RAM: 64GB*
GPU: PNY Quadro P2000, 5GB GDDR5*
Chosen processing engine: CPU/IGP plus OpenVINO
Timing:
DeNoise AI: 15 secs
Sharpen AI (Stabilize): 16 secs

And…

Setup: PC
OS: Windows 10 Pro
Graphics driver: 411.36 Studio driver
CPU: i9-9900K / UHD630 IGP*
RAM: 64GB*
GPU: Gigabyte Aorus RTX 2080 Ti Xtreme Waterforce, 11GB GDDR6*
Chosen processing engine: Discrete GPU
Timing:
DeNoise AI: <2 secs
Sharpen AI (Stabilize): <2 secs

  * If applicable, OC’d at …

While reading the photo fora, and this very one, I’m certain people would love to see this.
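
Should such a sticky happen, a tiny wrapper script would keep the timings more honest than a stopwatch. A minimal Python sketch; the command line is a placeholder, since the Topaz apps don’t document a CLI, so each tester would substitute however they launch the standalone app on the agreed file:

```python
# Wall-clock timing of one benchmark run.
# The command below is a hypothetical placeholder, not a real Topaz CLI;
# substitute the way you launch the standalone app on the agreed test file.
import subprocess
import time

cmd = ["path/to/standalone_app.exe", "benchmark.tif"]  # hypothetical
start = time.perf_counter()
subprocess.run(cmd, check=True)
elapsed = time.perf_counter() - start
print(f"Processed in {elapsed:.1f} s")
```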


Sure, no problem. If you put together a test image with a download link, start a post in Topaz Products explaining how you want people to test, making sure you specify the process parameters and that they have to use the GPU. All you will need is an entry like this from each person:

OS … GPU/vRAM … mm:ss time to process

I will then copy each entry into a single post.

I will also make it a sticky.
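
If entries really come in that one-line shape, collation could even be scripted. A sketch, assuming “ ... ” as the field separator (the real format is whatever the post settles on; sample entries adapted from the timings quoted above):

```python
# Sort benchmark entries of the form "OS ... GPU/vRAM ... mm:ss" fastest-first.
# The " ... " separator is an assumption; adjust to the agreed format.
entries = [
    "Windows 10 Pro ... RTX 2080 Ti/11GB ... 00:02",
    "Windows 10 Pro ... Quadro P2000/5GB ... 00:15",
]

def seconds(mmss: str) -> int:
    minutes, secs = mmss.split(":")
    return int(minutes) * 60 + int(secs)

for entry in sorted(entries, key=lambda e: seconds(e.rsplit(" ... ", 1)[1])):
    print(entry)
```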


I like this idea. I’m going to forward it to Russell to see what he thinks.


AiDon, with all respect, I really think the listings regarding the system used should include the variables I suggested, since they do have an impact on the final results.
Suppose somebody is using a slow GPU with a faster CPU/IGP: publishing the results for only the GPU would be confusing, or at least would not reveal the results one might achieve in real life when the app chooses the CPU/IGP (with the amount of DRAM factoring in).
Graphics drivers are also known to improve results, especially with new GPUs, and so on.

TopazJosh,

Perhaps an even better idea: what if you guys provided us with a TIFF (100MB or so) that represents a medium or high load on the processing engine?
I’m not familiar with how the apps ‘work’, but I can imagine a high-res picture filled with lots of detail taking more time than ‘just a bird on a branch’ with lots of bokeh?
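
Whether subject matter really changes run time is exactly the kind of thing a shared test file would settle. In the meantime, one crude way to quantify how ‘busy’ an image is would be its mean gradient magnitude, sketched here with NumPy and Pillow (the filename is a placeholder):

```python
# Crude "detail" score: mean gradient magnitude of the luminance channel.
# A bokeh-heavy frame scores low; a frame full of fine detail scores high.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("benchmark.tif").convert("L"), dtype=np.float32)
gy, gx = np.gradient(img)
print(f"mean gradient magnitude: {np.hypot(gx, gy).mean():.2f}")
```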
