What kind of GPU is the key to speeding up Gigapixel AI?

I found that Topaz does not seem to address this issue: what criteria should be used to select a graphics card in order to improve Gigapixel AI (GAI) performance?
What are the key indicators to measure performance gains?
I submitted a help ticket but the answer doesn’t seem to be right, so I’m here to ask for help.
When I searched the forum for the keyword “gpu”, I found that the developers had mentioned OpenGL performance as an important measure. But that statement is rather vague; in fact, a card's OpenGL performance depends on a number of underlying specifications.
Here is a picture.


So… GPU frequency / compute unit (CU) count / core count / IOPS performance / single- and double-precision floating-point performance? Or VRAM bandwidth / frequency / capacity, etc.?
What are the key parameters that affect the processing speed of GAI?
If we knew the answer clearly, we could much more easily choose the most cost-effective graphics card for our own needs!


Here are the requirements …

Then get as much dedicated vRAM and as many cores as you can. Note that currently both AMD and NVIDIA are valid options; it depends on your budget.


Thank you very much for your reply.
So I looked into convolutional neural networks and OpenGL, and found that the problem is not so simple.
Let’s start with the convolutional neural network. You mentioned that more VRAM is better, so I guess this relates to the amount of data involved in the convolution calculations. (A rough estimate follows below.)
Question 1: Is bigger VRAM always better? For example, the AMD Radeon VII has 16 GB of VRAM. Does that make it the best choice as far as VRAM is concerned?
Question 2: If multiple graphics cards work together, such as two Radeon VIIs, does that mean that for GAI we effectively have 32 GB of VRAM and two GPUs working together?
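To get a feeling for why VRAM size matters, here is a back-of-the-envelope sketch in Python. The channel count and data type are pure guesses for illustration; they are not GAI’s actual network:

```python
# Rough estimate of the activation memory for ONE convolution layer's
# output on a large photo. Channel count and data type are assumptions
# for illustration only, not GAI's real model.
def activation_mb(width, height, channels=64, bytes_per_value=2):  # FP16 = 2 bytes
    return width * height * channels * bytes_per_value / 1024 ** 2

print(f"{activation_mb(6000, 4000):,.0f} MB")    # ~2,930 MB for a 24 MP source image
print(f"{activation_mb(12000, 8000):,.0f} MB")   # ~11,719 MB at 2x upscale
```

Numbers like these are presumably why the app tiles large images, but they also suggest why more VRAM helps.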
Then let’s discuss OpenGL (Topaz says GAI is based on v3.3).
In past forum posts, Joe also mentioned that there would be a lot of rendering on the GPU, which might involve light spots, shadow locations, depth, and so on.
Benefit of using a Pro (non-gaming-optimized) video card
Assuming that the efficiency of these OpenGL operations depends on the GPU’s floating-point performance, the criteria for choosing hardware would be much simpler and more precise. But which operations are actually GPU-based?

For example, here is a picture:
From the data for these four devices, which one is better, and why? What is the basis for that judgment?

I will try to help a little here. The Vega 64 is supposedly the flagship of the AMD Radeon Vega line and its performance is among AMD’s best, but it has not been able to exceed the even greater performance of NVIDIA’s GTX 1080 Ti.

The RX 5700 XT is the newest flagship of AMD’s gaming line and is a more than satisfactory GPU for graphics (not CAD/CAE), as it has 40 compute units of 64 stream processors each.

In graphics the keys are computation and rendering, so the best choice is usually GDDR6 memory and as many processors as you can get.

Not necessarily (to Question 1); fast vRAM and fast compute units are the better choice.

No (to Question 2), but you can allocate GPUs to specific applications.

If you are a gamer as well, then the GeForce/RX series is probably your best bet. Are you running professional CAD/CAE applications all day long? In that case, you’ll probably want to consider Quadro/RX Vega.

But Quadro/Vega cards have more processing power than the GeForce/RX series, so they are more suited to rendering: they usually have more, and faster, compute units, which gives better floating-point performance.

The link to the other discussion has pertinent comments about Desktop/Notebook and general architecture.

I’m glad to see your reply. Allow me to continue my in-depth discussion here.
First of all, I want to restrict the variables discussed so that we can get closer to the core of the problem more quickly and accurately.

A. I’m not a gamer and I don’t care about professional CAD/CAE applications.
B. Suppose my goal is simply to make Topaz’s GAI run faster!
C. We only discuss desktop graphics cards.

Therefore, I compiled data for 2018-2019 graphics cards with GDDR6/HBM2 memory and no less than 8 GB of VRAM.
Relevant data were collected from: techpowerup.com
The results are as follows:


As you said, fast vRAM and fast compute units are the best choice.
If we accept this criterion, let’s analyze it against the chart above.

  1. Fast vRAM
    Because AMD and NVIDIA graphics cards have different architectures, we can refer to the “VRAM Bandwidth (GB/s)” column. It directly reflects the speed of the VRAM.
  2. Fast compute units (shaders/TMUs/ROPs)
    Regarding this point, however they are combined, I think it ultimately shows up in these five figures:
    “Pixel Rate (GPixel/s)”, “Texture Rate (GTexel/s)”, “FP16 (half) performance”, “FP32 (float) performance”, and “FP64 (double) performance”.
    If I understand correctly, which one (or several) of these has the greatest impact on GAI, given that it depends on OpenGL v3.3? (A rough scoring sketch follows below.)
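To make the comparison concrete, here is a rough Python sketch of the kind of ranking I have in mind. The spec numbers are copied from techpowerup.com (please re-check them), and the weights are purely my own guess about the workload, not anything Topaz has published:

```python
# Rough "value" ranking sketch. Spec numbers are from techpowerup.com;
# the weights are guesses, NOT figures published by Topaz.
cards = {
    # name: (VRAM bandwidth GB/s, FP32 TFLOPS)
    "Radeon VII":  (1024.0, 13.44),
    "RX Vega 64":  (483.8, 12.66),
    "GTX 1080 Ti": (484.4, 11.34),
    "RX 5700 XT":  (448.0,  9.75),
    "RTX 2070":    (448.0,  7.46),
}

# Guessed weighting: assume FP32 throughput matters somewhat more than
# memory bandwidth for the convolution-heavy workload.
W_BW, W_FP32 = 0.4, 0.6

max_bw = max(bw for bw, _ in cards.values())
max_fp = max(fp for _, fp in cards.values())

def score(bw, fp):
    # Normalise each metric to the best card, then mix with the weights.
    return W_BW * bw / max_bw + W_FP32 * fp / max_fp

for name, (bw, fp) in sorted(cards.items(), key=lambda kv: score(*kv[1]), reverse=True):
    print(f"{name:12s} score = {score(bw, fp):.2f}")
```

Swapping the weights (or dividing by price) would change the ranking, which is exactly the part I am unsure about.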

You mentioned that you can assign GPUs to specific applications!
So I wonder: can I do this? Use the iGPU built into my Intel CPU for display output and daily work, while a separate high-performance discrete graphics card is dedicated to the GAI program for image processing. If that is feasible, how do I set it up for NVIDIA and for AMD graphics cards respectively?

There is also some confusion about GAI settings when using AMD/NVIDIA graphics cards.
In GAI preferences, suppose I set: Enable dedicated GPU = yes and Intel optimization = yes.
Does “Intel optimization = yes” still have any effect in that case?
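For what it’s worth, one way to verify which adapter an OpenGL 3.3 context actually lands on is a small script like the sketch below (it uses the third-party glfw and PyOpenGL packages, and only reports what the driver exposes; it says nothing about what GAI itself selects). On recent Windows 10 builds there is also a per-application GPU preference under Settings > System > Display > Graphics settings, and the NVIDIA and AMD control panels offer similar per-program options, though I don’t know how GAI honours them.

```python
# Minimal probe of which GPU an OpenGL 3.3 context is created on.
# Requires third-party packages:  pip install glfw PyOpenGL
import glfw
from OpenGL.GL import glGetString, GL_VENDOR, GL_RENDERER, GL_VERSION

if not glfw.init():
    raise RuntimeError("GLFW failed to initialise")

# Request the OpenGL 3.3 baseline Topaz mentions, with a hidden window.
glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3)
glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3)
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)

window = glfw.create_window(64, 64, "gl-probe", None, None)
if not window:
    glfw.terminate()
    raise RuntimeError("Could not create an OpenGL 3.3 context")

glfw.make_context_current(window)
print("Vendor:  ", glGetString(GL_VENDOR).decode())
print("Renderer:", glGetString(GL_RENDERER).decode())  # iGPU vs. discrete card
print("Version: ", glGetString(GL_VERSION).decode())
glfw.terminate()
```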


In fact, I have just been testing an AI app using OpenVINO, and the combination of an i7/HD 630 with 16 GB of RAM is faster than the GTX 1050/4GB on the same CPU.

The app itself will now choose the appropriate mode when you process after clicking Reset Processing, as it will test and set the optimum processing method … GPU, or CPU with OpenVINO.

So it looks like Intel are actively improving their GPU performance.
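If you want to see which devices OpenVINO can target on your own machine, the OpenVINO Python package can list them. This is a generic OpenVINO query, not something the Topaz apps expose:

```python
# List the devices OpenVINO's inference engine can target.
# Requires the openvino package:  pip install openvino
from openvino.runtime import Core

core = Core()
for device in core.available_devices:
    print(device, "->", core.get_property(device, "FULL_DEVICE_NAME"))
# Typical output includes CPU and GPU (the Intel iGPU); other
# accelerators appear only if their plugins are installed.
```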

Although I am an Intel/NVIDIA fan … from the figures you’re showing above, the best value for money is the RX 5700 XT.

The Nvidia Control Panel provides a setting called “Optimize For Compute Performance”. Does anyone know if this should be on or off for Gigapixel AI?

p.s. I just ran my own test and it did not make a measurable difference.

This setting is intended to provide additional performance to non-gaming applications that use large CUDA address spaces and large amounts of GPU memory when run on graphics cards based on second-generation Maxwell GPUs. Graphics cards based on other architectures do not utilize this setting.

Don’t bother with this setting: it is intended to provide additional performance only to non-gaming applications that use large CUDA address spaces and large amounts of GPU memory, and only on graphics cards based on second-generation Maxwell GPUs. The affected cards are the GTX TITAN X, GTX 980, GTX 980 Ti and GTX 970.

I see. Thank you for your advice.
Talking about OpenVINO suddenly makes me think of Intel’s NCS 2 (Myriad X): the Intel® Neural Compute Stick 2.
I’m curious: can it accelerate GAI? If not, I’ll stick with using a graphics card.


I am not sure, because it is a Vision Processing Unit … and I thought it was only for developing AI inference applications and computer vision.

You can read about it here …

Understood, I got it :blush:
Thanks, AiDon


First-time poster, so please don’t blame me if I shouldn’t post these questions in this thread.
I’m debating which GPU I should buy right now: a used GTX 1080 Ti 11 GB card, an RTX 2070 8 GB, or an RX 5700 XT?
I’m using a Quadro P2000 at the moment, but it is not used by Sharpen AI and DeNoise AI, and although I have a 9900K with 128 GB of RAM, I have the feeling the software should be able to run faster.
Would the AI applications run faster with one of these cards, and most importantly, can I still use these cards in 10-bit mode in PS?
Thanks for reading, and again, my apologies if I happen to have posted this in the wrong thread.

The Quadro P2000 is comparable to the NVIDIA GeForce GTX 1050 Ti (Laptop) so any of the ones you listed are more powerful.

The AI applications don’t necessarily rely on your GPU, because of advances in CPU processing using OpenVINO on Intel CPUs. For example, I have an i7 with an integrated HD 630 that processes the AI apps faster than the GTX 1050/4GB.

I believe that, to process faster than the CPU (latest Intel generations), the recommended minimum is a GTX 1080 Ti.


Thank you, I was expecting this after comparing the three GPUs with the P2000.
Some final questions, if you don’t mind:
Will (or does) the AI software benefit from the new RTX Turing architecture?
Will 8 GB of GDDR6 memory be a better choice than 11 GB of GDDR5X memory?

I am not sure, but I expect so, as OpenGL processing is used.

To put it simply … YES … faster memory is always an advantage, as GPU processing is floating-point maths and, unlike the CPU, the GPU does not have to share its memory with OS operations.


Thank you for your help, Sir.
I searched for my other question regarding the possibility of using 10-bit mode in PS and learned that NVIDIA has released new drivers ‘unleashing’ 10-bit mode for their non-Quadro cards.
Thanks again, much appreciated!


Based on these criteria, I think I would have to opt for a GTX 1080 Ti card (compared with the Gigabyte Aorus 8G RTX 2070 Super, which is faster than the stock version).
Numbers taken from techpowerup.com.
GTX 1080 Ti vs RTX 2070 Super:
Memory bandwidth: 484.4 vs 448 GB/s
Shader units: 3584 vs 2560
Memory bus: 352-bit vs 256-bit
TMUs: 224 vs 160
ROPs: 88 vs 64
Pixel rate: 139.2 vs 121.9 GPixel/s
FP32: 11.34 vs 9.75 TFLOPS
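Putting those figures through a quick ratio check (plain Python, nothing beyond the numbers above) makes the per-metric gap easier to read:

```python
# Percentage lead of the GTX 1080 Ti over the RTX 2070 Super for each
# metric, using the techpowerup figures quoted above.
specs = {
    "Memory bandwidth (GB/s)": (484.4, 448.0),
    "Shader units":            (3584, 2560),
    "TMUs":                    (224, 160),
    "ROPs":                    (88, 64),
    "Pixel rate (GPixel/s)":   (139.2, 121.9),
    "FP32 (TFLOPS)":           (11.34, 9.75),
}

for metric, (gtx, rtx) in specs.items():
    print(f"{metric:25s} GTX 1080 Ti leads by {(gtx / rtx - 1) * 100:5.1f}%")
```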

It looks like the old GTX card is a better choice than the RTX 2070 Super.
Am I interpreting these numbers correctly, or am I neglecting something else more important?
I have no clue LOL (hope you don’t mind me asking)

The only caveats would be the speed of the shader units (the RTX 2070 has much faster multi-rendering) and the fact that the GTX 1080 Ti has been superseded by the RTX 2080 Ti; although the GTX 1080 Ti has been touted as the “Ultimate GeForce”, it is about 20% dearer.

@AiDon
Emmm… you mean “the RTX 2070 has much faster multi-rendering”. Does that refer to “Pixel Rate” or “Texture Rate”?