AI Acceleration in AMD RDNA 3 cards

I noticed that the new RX 7600 includes “AI Acceleration.” Is this something that will benefit Topaz Video AI encoding? Is it a big deal, a little deal, or no deal at all?

The RX 7600 is a lower-end GPU, and based on Stable Diffusion benchmarks its performance is significantly lower than the last-gen Nvidia 3060 and even the Arc GPUs. I would not recommend it for TVAI.
AMD Radeon RX 7600 review | PC Gamer


Any AI program will run better on an NVIDIA GPU; it is pointless to have an AMD GPU for AI tasks, even for the NCNN versions.


The “problem” at the moment is that, on the software side, AMD has a lot of catching up to do. While Nvidia has been in the compute game for many years and is very good at communicating and supporting this stuff, AMD has only just entered it (if we think about consumer or desktop/workstation stuff outside the CAD realm… AMD has equipped datacenters in the past, but addressing the “smaller market” has not been their focus).

This leads to the very difficult situation that it is hard to see through the jungle of what is possible and what is not, and how fast it is or isn’t… Just look at the mess over at ROCm: basically half of the issues raised are about clearing up misconceptions in the documentation or streamlining the docs. No one knows what works and what doesn’t, officially only a handful of cards (most of them enterprise cards, CDNA, etc.) seem supported, and people supply patches to get regular cards running… It’s horrible… With Nvidia you download one driver that spans everything from Maxwell to Ada, and go…

So there is no easy answer to any of this, sadly…

On the software side, Topaz relies on companies like AMD, Microsoft, Nvidia, etc. to build the stuff “under the hood”. If the runtimes don’t support a certain feature, or can’t load a specific model type for inference, performance is left on the table…

So yes, RDNA 3 and above do have “tensor-like cores”, but as far as I know, software support on Windows is not far enough along to make a reliable prediction on whether it will be performant or not. And it’s not easy to translate benchmark figures over from the datacenter realm: over there, no one cares about a HIP runtime on Windows, a DirectML/ONNX combination, or running inference through SHARK/Vulkan; they write their stuff explicitly on Linux with ROCm for the application at hand… So even if one could get some numbers from a datacenter, they can hardly be used to get a rough feeling for what can be done in TVAI.
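To make the “it depends on the runtime” point concrete, here is a tiny sketch (my illustration, not how TVAI works internally; it assumes the onnxruntime Python package is installed and uses a placeholder model file) showing that an inference session can only use the execution providers the platform actually exposes, e.g. DirectML on a typical AMD/Windows box or CUDA on NVIDIA. Whatever the runtime reports here is the ceiling an application can build on:

```python
# Sketch only: which execution providers does ONNX Runtime see on this machine?
import onnxruntime as ort

available = ort.get_available_providers()
print(available)
# e.g. ['DmlExecutionProvider', 'CPUExecutionProvider'] on AMD/Windows,
# or   ['CUDAExecutionProvider', 'CPUExecutionProvider'] with an NVIDIA build.

# Keep only the providers that actually exist here, in order of preference.
preferred = [p for p in ("DmlExecutionProvider",
                         "CUDAExecutionProvider",
                         "CPUExecutionProvider") if p in available]

# "some_model.onnx" is a hypothetical placeholder model file.
session = ort.InferenceSession("some_model.onnx", providers=preferred)
print(session.get_providers())  # the providers the session actually bound
```

If the GPU-specific provider is missing or can’t load a given model, everything silently falls back to slower paths, which is exactly the “power left on the table” situation above.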

Having said all that, my PERSONAL guess is that AMD will put more effort into ML on the desktop… I think they have realized it is needed now to stay competitive. And the hardware itself has always been quite performant, compute-wise… More and more players are emerging on the software side, so it is not AMD on its own fiddling with something that only a handful of people on the planet need; it has become a thing and almost everybody wants a piece of the cake nowadays… Blender just added RT support in Cycles for AMD and Intel… the cores get used… So I think in a year we will have proper software support on AMD for all the nice cores they offer, and speeds will be comparable (my guess)…

But until then: if you get an Nvidia RTX card, you have an easy entry to good speeds (having said that, I look at my desk and see a few RTX and a truckload of AMD cards…).

While, at the current state of things, I completely agree that Nvidia RTX cards are a much better choice for TVAI…
the statement “any program will run better on an NVIDIA GPU” is not accurate. There are plenty of examples where an AMD GPU equals or outperforms Nvidia for the same money or at the same time of release…

But not to get anybody confused: at the moment, RTX cards are the way to go for TVAI.

For example?

  • older TVAI versions ran faster on Vega cards than on Nvidia cards at the same price
  • quite a few games run faster on AMD than on NVIDIA
  • Vulkan compute runs faster on GCN than on Pascal at comparable price points
  • AMD cards with HBM memory had higher bandwidth than Nvidia GDDR5 cards of the same era
  • many BOINC computations on older GCN2 cards score much higher in FP64 than comparable Nvidia consumer cards
  • any scalar or non-matrix vector FP16 workload runs much faster on GCN 5 and 5.1 than on Maxwell or Pascal
  • mining on GCN 4 cards vs. Pascal at the same price point
  • recent non-raytracing game performance, RDNA 2 vs. 40/30-series RTX cards at the same price

the list goes on…

Don’t get me wrong, there are more examples where Nvidia comes out ahead, especially when you look at modern games with raytracing, applications where tensor cores are supported (like TVAI in the latest versions), or games that have been optimized specifically for Nvidia.

My point was: “any program will run better on Nvidia” is not true. There are cases where AMD clearly wins the performance crown.

Thanks for all the help in answering my Nvidia vs. AMD questions for Topaz AI.

I ran across this article from Feb ’23 that shows the AMD RX 7900 XTX giving team green a run for its money. From the above comments it sounds like this no longer holds true for the current version of Topaz AI?

Puget Systems topaz AI performance

They mention in there that they cannot rerun all the tests for each update, which makes their charts pretty useless. They would need to label which version of TVAI was used for each GPU for the charts to start being useful.

I’m pretty confident that the RX 7900 XTX would beat an RTX 4090 in VEAI 2.6.4; that’s where the high score is probably coming from. All the older cards were probably run pre-TVAI 3.2.0, or whatever the update was that enabled the big speed increase on RTX cards.


Ah, I found the benchmark thread. That makes this post moot. Thanks again to everyone who commented.

https://community.topazlabs.com/t/video-ai-v3-2-x-user-benchmarking-results