Detailed Hardware Requirement Guideline

I am currently using a 2019 Mac Pro with a Radeon Pro 580X. The performance is acceptable at best, not much faster than the Intel Xeon, if at all. I am looking at GPU upgrade options so I can preview my edits more easily, but I am having a hard time finding detailed enough information to pick the right one for my requirements.

First, as a starting point, I did my due diligence and found the suggestions for each app. They are a little too high level. From a user-support point of view, I would hope the team could share inference times for realistic test cases on at least several tiers of hardware, ranging from professional-grade NVIDIA/AMD GPUs to gaming-level GPUs, laptop GPUs, and CPU-only. For example, for NVIDIA/AMD I would be interested in the following:

Nvidia

  • Quadro Series
  • GeForce RTX 20-series
  • GeForce GTX 10-series

AMD

  • Radeon Pro Vega II
  • Radeon Pro W5xxx series
  • Radeon Pro 5xxxM series

Second, to be a bit more technical: each of these cards has different compute capabilities across INT8/FP16/FP32/FP64, as well as different memory bandwidth. Right now I am completely in the dark about which compute resources I should actually be buying for. I can only guess that most of the machine learning inference is performed somewhere around FP16 and FP32, and that there will be some performance degradation running Metal on macOS compared to CUDA on Windows. The GPUs I am looking at range from one thousand to three thousand dollars, which could easily top the price of a high-end MacBook Pro, so I don't want to make a sizable investment based on pure speculation.
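For what it's worth, peak FP32 throughput can be estimated from published shader counts and boost clocks. The sketch below uses approximate retail specs from memory (illustrative only, not authoritative), and it says nothing about FP16, whose rate relative to FP32 varies a lot between architectures.

```python
# Rough peak FP32 throughput: 2 ops per fused multiply-add * shaders * clock.
# Shader counts and boost clocks are approximate retail specs, not measurements.
def peak_fp32_tflops(shaders: int, boost_ghz: float) -> float:
    return 2 * shaders * boost_ghz / 1000.0

cards = {
    "Radeon RX 580 (close to the Pro 580X)": (2304, 1.34),
    "GeForce RTX 3090": (10496, 1.70),
}
for name, (shaders, clock) in cards.items():
    print(f"{name}: ~{peak_fp32_tflops(shaders, clock):.1f} TFLOPS FP32")
```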

It would be nice if the team could provide a technical publication or test report covering various hardware, or maybe create a tool that lets the community benchmark these applications on their own platforms and report the results. That would greatly improve the customer experience.

@marinna.a.cole
When you say team, do you mean Topaz or the community here? The performance of the AI programs depends heavily on the size of the picture and the preview window. If you would like some of us to test a typical picture, you can post it here or in a Dropbox. The preview should be at 100%, but sometimes that is very fast, so it might be more useful to ask for the save time and to use Gigapixel AI as the test program, since it reports the processing time. I would also suggest that the image be somewhat large (around 3000 px on the longest side) and a JPG, not RAW. Specify the scaling and settings as well.

I had an RX 580 GPU and upgraded to an RX 5600 XT.

As an example, I upscaled a 2281 x 3070 px picture by 3x. It took 1 minute and 6 seconds. Settings were Auto, Natural, Face Refine on, Max Quality on, Use GPU, GPU memory High. The 100% preview was very fast, at about 1 second.

When I said team, I was expecting Topaz to provide the benchmark numbers, but more realistically I think they should just provide a tool for users to benchmark and submit results to a server to share with the community. A community-maintained performance list is less ideal when it has to be compiled manually.

Manual benchmarking of the hardware can work if the environment is well controlled: just start the app, run the inference without changing any settings at all, then normalize the inference time against the pixel count, as in the sketch below. That gives a ballpark estimate to compare against. The image needs to be big enough that the inference setup cost is negligible.
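As a concrete illustration using the RX 5600 XT numbers above (2281 x 3070 px, 3x upscale, 66 seconds), a minimal normalization could look like this; whether input or output pixels is the better denominator is open to debate.

```python
# Normalizing the RX 5600 XT example above: 2281 x 3070 px, 3x upscale, 66 s.
in_mp = 2281 * 3070 / 1e6        # ~7.0 megapixels in
out_mp = in_mp * 3 * 3           # ~63 megapixels out after the 3x upscale
elapsed = 66.0                   # 1 min 6 s

print(f"{elapsed / in_mp:.1f} s per input megapixel")    # ~9.4
print(f"{elapsed / out_mp:.2f} s per output megapixel")  # ~1.05
```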

That said, manual tests could easily be plagued by various platform issues: maybe system memory is insufficient or differs in speed, or the platform is busy with other tasks, etc. It would be far better to perform these tests programmatically, along the lines of the sketch below.
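A minimal automation sketch, assuming some scripted way to trigger a run exists; `upscale-cli` below is a made-up placeholder, not an actual Topaz command.

```python
# Hypothetical benchmark loop. The command is a placeholder; substitute
# whatever scripted entry point is available on your platform.
import subprocess
import time
from statistics import median

def bench(cmd: list[str], runs: int = 3) -> float:
    """Return the median wall-clock time of several runs, in seconds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        times.append(time.perf_counter() - start)
    return median(times)  # the median dampens one-off system hiccups

# Placeholder invocation:
# print(bench(["upscale-cli", "--input", "test_7mp.jpg", "--scale", "3"]))
```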

Just my 2 cents…

I agree, but I doubt they will do that. They do give minimum and ideal requirements. One problem is that they have sped up the performance a number of times, which makes earlier test results obsolete.

Most of the software projects I have been involved with professionally require regression tests for the release build, so that part is a piece of cake (see the sketch at the end of this post). The cost of buying all these GPU cards might be beyond their operational budget, however.
This is just a wish list. I really, really love the tools, but unfortunately the editing experience is quite stuttery due to the demanding hardware requirements. It is especially painful for me since many of my photos are shot tethered and have 100M+ pixels; it can take a minute or two to render the entire photo.
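For illustration only: a release-gate timing check could be as simple as the pytest sketch below. `run_upscale` and the 12 s/megapixel budget are invented; the point is just to fail the build when normalized inference time regresses.

```python
# Hypothetical performance regression test. run_upscale() and the budget
# value are made up for illustration; wire them to the real build.
import time

SECONDS_PER_MEGAPIXEL_BUDGET = 12.0   # example threshold, not a real target

def run_upscale(path: str) -> None:
    raise NotImplementedError("hook this up to the app's scripted entry point")

def test_inference_time_per_megapixel():
    megapixels = 7.0                   # a fixed ~7 MP reference image
    start = time.perf_counter()
    run_upscale("reference_7mp.jpg")   # placeholder reference asset
    elapsed = time.perf_counter() - start
    assert elapsed / megapixels < SECONDS_PER_MEGAPIXEL_BUDGET
```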

As someone who wants to start upscaling videos for commissions, having some benchmarks would be great. They would allow me to make an informed decision about upgrading from my 1080 Ti to a 2000- or even 3000-series GPU, or just sticking with the 1080 Ti if the performance boosts are negligible.

Just speculating about the type of operations a super-resolution network performs (high-quality networks usually involve very deep residual blocks with larger intermediate volume sizes; see the sketch below), you will likely appreciate a graphics card with more compute units, and no GPU will be overkill for that task.
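To make "deep residual blocks" concrete, here is a generic EDSR-style block in PyTorch. This is an assumption about what a typical super-resolution model looks like, not Topaz's actual architecture.

```python
# Generic EDSR-style residual block (an assumption about typical
# super-resolution models, NOT the actual network the apps ship with).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Two 3x3 convolutions; per-block work scales with channels^2 * H * W,
        # which is why raw compute units matter so much for this workload.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)   # skip connection around the block

# Real models stack dozens of these; that stack is what keeps the GPU busy.
backbone = nn.Sequential(*[ResidualBlock(64) for _ in range(16)])
```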

What I can't say is how much memory bandwidth to expect. Workstation-level GPUs usually charge a premium for high-bandwidth memory, like large amounts of HBM2, rather than for even more compute units. Given that Video Enhance AI doesn't really handle raw formats and should generally be used as the last step of post-processing, I would hesitate to invest more in a server-grade GPU. The prices for some of the Quadro GPUs are just CRAZY (as are the AMD Instinct series). The latest RTX 3090 has 24 GB of GDDR6X; I'd say that $1500 would be a good investment.
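One rough way to sanity-check the bandwidth-versus-compute question is arithmetic intensity (FLOPs per byte moved). The sketch below uses a made-up 64-channel 3x3 convolution on a 1080p-sized tile; the real model and tile sizes are unknown, so this is ballpark reasoning only.

```python
# Back-of-the-envelope arithmetic intensity for one 3x3 convolution at FP16.
def conv_arithmetic_intensity(h, w, c_in, c_out, k=3, bytes_per_elem=2):
    flops = 2 * h * w * c_in * c_out * k * k                 # multiply + add
    bytes_moved = bytes_per_elem * (h * w * c_in             # read input
                                    + h * w * c_out          # write output
                                    + k * k * c_in * c_out)  # read weights
    return flops / bytes_moved

# A hypothetical 64-channel layer on a 1920x1080 tile: a few hundred FLOPs
# per byte, well above a typical GPU's balance point, i.e. compute-bound.
print(round(conv_arithmetic_intensity(1080, 1920, 64, 64)))
```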