What does VEAI performance look like between AMD/NVIDIA?

A compiled spreadsheet has been made.
I don't remember much about how to use spreadsheets anymore, so I don't know whether filtering will work as expected. You'll have to clone it to filter it yourself, though.

There are some observations to be made. But I think some of that has already been said here.

For example, the CPU seems to have quite a performance impact, and within that, Intel seems to dominate AMD due to what appears to be a lack of proper support for Ryzen's multi-chiplet design.

However, AMD GPUs seem to have an edge over Nvidia; even older-generation AMD cards can beat the latest RTX 30 series. But if AI interpolation tools are any indication, converting CUDA code to another backend (OpenCL or similar, in this case) can change the visual output. It could be faster, but it could also look worse.

Though a mix of AMD and Nvidia users would have to agree to a comparison, meaning we'd upscale the same proper high-bitrate 1080p clip to 4K and upload the results. Unless, of course, someone owned both types of card.

You can't make much from the compiled list we have here. For several "tests" we don't have the VEAI version number, I'm not entirely sure what the numbers @ahilecostas quoted mean since "low quality videos" could mean several things, there are quite a few different settings, etc. If anything, we should consider having folks re-run their tests in a more controlled manner: the same video file, the latest VEAI version (with the same set of test parameters for versions older than 1.8.1), and hardware setups matched as closely as possible.

I have a fairly decent selection of hardware, including a Ryzen 7 2700, Ryzen 5 3600, Ryzen 7 5800X, Core i5-3550, AMD RX 570, RX 580, and an RTX 3060 Ti.

I think the next test I will run is pairing the RX 580 with my Ryzen 7 5800X and comparing the results to see whether AMD GPUs really are faster relative to their Nvidia counterparts. I wish I had a faster Intel CPU to test as well, but other than the Ryzen 7 5800X, the AMD CPUs offered too much value to pass up for my needs.

I don't want to compare apples with oranges, but for my part, I have the impression that I get the same smooth type of performance I receive from Magix Vegas 18 (formerly Sony Vegas) using an Intel CPU and an AMD GPU. I do think it's OpenCL-based, with a big push forward from the CPU. Vegas favors AMD GPUs, and I'm not talking about render times but the ease and smoothness of editing regardless of how complicated the project is. For me, After Effects runs horribly even with 64 GB of RAM, except for the final export encoding part that uses the OpenCL Mercury engine.

Low-quality video? 320x240 upscaled to 1920x1080 (450%) using Theia Fidelity v4, I get a constant 0.14 sec/frame.
Same speed for 640x480, 720x480, and 960x720 to full HD… it doesn't get better or worse.
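
For context, here's how a sec/frame figure converts into throughput; just a quick sketch, where the clip frame rate and length are example assumptions, not from my tests:

```python
# Convert a reported sec/frame figure into fps and total processing time.
SEC_PER_FRAME = 0.14   # reported Theia Fidelity v4 speed
CLIP_FPS = 30          # example frame rate (assumption)
CLIP_MINUTES = 1.0     # example clip length (assumption)

frames = CLIP_FPS * CLIP_MINUTES * 60
print(f"Throughput: {1 / SEC_PER_FRAME:.1f} fps")                 # ~7.1 fps
print(f"Processing time: {frames * SEC_PER_FRAME / 60:.1f} min")  # ~4.2 min
```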

Don’t get me wrong. Arguing about which CPU has better overall performance is not necessary in this case.

With the video file provided by @shikuteshi, running the 200% Gaia-CG model in VEAI 1.8.1 on my 8809G @ 4.5 GHz with a 2080 Ti eGPU, I got 0.30~0.31 s/frame, while running the 200% Artemis LQ v9 model I got 0.18~0.20 s/frame, almost the same as your result.

Without a doubt, both the single-core and multi-core performance of a Ryzen 7 5800X with PBO2 enabled will be much, much better than a 4.5 GHz 8809G (which gets almost the same performance as a 7700K at stock speed).

And correct me if I'm wrong, but you got only slightly faster processing with the R7 5800X - 3060 Ti than with the R5 3600 - 3060 Ti. In my experience, though, the processing speed of VEAI 1.8.1 on a 6950X at stock speed versus 4.3 GHz was significantly different, so CPU performance actually does matter.

Maybe a better GPU helped in my case, but still, a 9700F @ 4.5 GHz with an RX 480 could get 0.14~0.2 s/frame on the Artemis LQ model, almost the same speed as my 8809G - 2080 Ti combination.

All these results seem to prove my own opinion again: performance on AMD CPUs might be abnormal when running VEAI. Or maybe it was meant to be like this for some reason, and I'm really curious about it.

In case anyone missed it: suraj, a Topaz developer, has posted a thread about VEAI performance. VEAI Performance

I think everyone should record their computer's configuration comprehensively: hard disk read/write speed, memory frequency, motherboard model, PCIe 4.0 compatibility, power supply, and cooling. These may all affect the overall efficiency of video conversion. Many people focus only on the CPU and graphics card; although those are important, the other factors should be considered too. Thanks!
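
To make collecting that a bit less tedious, here's a minimal report-script sketch, assuming Python with the third-party psutil package installed; any inventory tool would do the same job:

```python
# Minimal system-inventory sketch to attach to a benchmark report.
# Assumes psutil is installed (pip install psutil).
import platform
import psutil

def system_report() -> dict:
    """Collect the basic hardware/OS facts worth posting with a result."""
    return {
        "os": f"{platform.system()} {platform.release()}",
        "cpu": platform.processor(),
        "physical_cores": psutil.cpu_count(logical=False),
        "logical_cores": psutil.cpu_count(logical=True),
        "ram_gb": round(psutil.virtual_memory().total / 2**30, 1),
    }

if __name__ == "__main__":
    for key, value in system_report().items():
        print(f"{key}: {value}")
```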

Thanks for the chart.
Could you include the input resolution and the compression used in the chart?
Also, the RX 480 results seem a little too good to be true…

Everything in the chart is a 2x upscale, run by users here on the video I provided in the original post. Anything higher than 2x becomes more of a VRAM limitation.

I know it's from the video you provided, but you forgot to put the codec and resolution in the post as text info - so one has to actually download the whole video to get this information :slight_smile:

Yes, the numbers are from members, I know - that doesn't change the fact that the 480 numbers are most probably wrong - compare them to other cards and judge the numbers…

I think a lot of that information isn't needed.
Drive speed shouldn't be a problem unless you're outputting near-real-time 4K-8K in a lossless format.
Memory speeds are already being recorded. Asking for more specifics like latency doesn't seem necessary.
We can start tracking motherboard chipsets, since it's been revealed that the PCIe generation apparently has a big impact on performance.
Power supply isn't necessary. I've never seen that actually need to be tracked for anything; if your system isn't being supplied enough power, it will power cycle or shut off entirely.
And generally I hope anyone contributing numbers here is aware of their system thermals and knows whether it's throttling or not. VEAI performance is pretty consistent, so if any results are reported as, for example, ranging between 1.00 and 3.00 seconds per frame, we can probably assume the system is throttling (see the sketch below).
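
A rough way to apply that rule of thumb, assuming you jot down a handful of the sec/frame values shown during a run; the 25% tolerance is my own guess, not a VEAI figure:

```python
# Flag a run whose per-frame times swing suspiciously wide around the mean.
from statistics import mean

def looks_throttled(frame_times: list[float], tolerance: float = 0.25) -> bool:
    """Return True if the min-to-max spread exceeds `tolerance` of the mean."""
    spread = (max(frame_times) - min(frame_times)) / mean(frame_times)
    return spread > tolerance

print(looks_throttled([1.00, 1.80, 2.40, 3.00]))  # True  -> likely throttling
print(looks_throttled([0.14, 0.15, 0.16, 0.15]))  # False -> normal noise
```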

I'm still not sure what you're asking for here.
You'd be downloading the video regardless and getting that information anyway, since for the sake of such a test, consistency using the same source is needed.

And you'd be surprised at how well AMD cards can do AI image processing when the AI isn't built around proprietary hardware. If you look at some AI interpolation benchmarks, you'll see that older-generation AMD cards hold up really well. So I'm personally not that suspicious of the RX 480.

Outputting to an external SMR drive can be a serious bottleneck, so the output drive isn't a bad thing to include.
Chipset is a good idea - gathering the actual PCI Express speed would be even better, since the actual speed can vary from slot to slot depending on the population of cards, SSDs, etc., so the chipset alone doesn't do the trick.
Power supply: I agree, but it wouldn't hurt to include the info, because many who are interested in a setup could use it to judge whether the PSU is sufficient before putting a similar build together.
Throttling: I doubt most people are aware of such a thing, and there are combinations out there which seriously slow down when running too hot - people often comment "it's stable in games," which isn't comparable - so I'd at least add a note telling people to check temps and throttling in HWiNFO.

But of course, it's your chart, you decide :slight_smile:

Probably chipset and PCIe population order, unless you're a user who knows how to check the manual and give an accurate report of what PCIe speed you're running at. As far as I know there's no digital way to check that - maybe some advanced BIOS on motherboards newer than mine can. But if we also go with the argument that someone thinking "stable in games" means a non-throttling PC, then just collecting PCIe population info is probably better than straight up asking what speed it's running at.

Edit: I'd also be terrified if I knew someone was working off an external storage device and wasn't using it strictly as a media vault or data backup. Please don't give me nightmares about this. But yes, that is a good point as well. Just please, nobody do this.

Ok, then don't put the resolution and codec in :slight_smile: Your chart…
I was in the situation of looking this up, and since I have a rough feel for the cards I'm using at the moment, it would have saved me the time of actually downloading the whole thing - compared to the time we've spent discussing it here, it would take 10 seconds to include "1080p, h264" or similar in the chart… Imagine you're looking this up on a mobile device with a small data plan… download the whole video just to get the resolution and codec? Ok - your call :slight_smile:

I am fully aware of AMD's performance with VEAI; I have tested a few dozen cards…

Look at the chart:
Artemis LQ v9:
RX 480: 0.14~0.20 sec/frame…
RTX 3090: 0.12~0.20 sec/frame

See what I mean?

You could also add VRAM and the actual GPU chip to the chart. Stating the card model alone is sometimes unclear; there are many models out there with different chips and VRAM sizes under the same name.

Simply fire up GPU-Z and put the reported speeds in… it literally gives you "PCI-E3/16x"…
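
On Nvidia cards the live link state can also be pulled from the command line; here's a minimal sketch using nvidia-smi's query fields (AMD cards would need GPU-Z or a vendor tool instead):

```python
# Read the current PCIe link generation and width from an Nvidia GPU.
# Requires the Nvidia driver (nvidia-smi on PATH).
import subprocess

result = subprocess.run(
    [
        "nvidia-smi",
        "--query-gpu=name,pcie.link.gen.current,pcie.link.width.current",
        "--format=csv,noheader",
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # e.g. "NVIDIA GeForce RTX 3060 Ti, 3, 16"
```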

Many laptop or Mac users use external devices, and there are many SMR drives out there nowadays - a fact 99% of users aren't even aware of. Copying one big video file onto the drive is totally fine, reading it back too; putting 50 smaller files on it works fine and fast as well. But as soon as you hit a constant load of single files over minutes or hours, the drive starts to bottleneck so hard that it can actually slow down the VEAI encode… Imagine you're cycling through the models, making one or two runs with each - you can easily get to the point where the drive is limiting the whole process…
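
A back-of-the-envelope sketch of why that happens; the 30 MB/frame figure is my own rough assumption for a 4K image-sequence output, not a measured VEAI number:

```python
# Sustained write rate an image-sequence output demands from the drive.
MB_PER_FRAME = 30.0    # assumed size of one 4K output frame (rough guess)
SEC_PER_FRAME = 0.20   # processing speed taken from the chart

print(f"Sustained write needed: {MB_PER_FRAME / SEC_PER_FRAME:.0f} MB/s")  # 150 MB/s
# Once an SMR drive exhausts its CMR cache, its sustained write rate can
# drop well below that, and the drive - not the GPU - paces the encode.
```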

Well, it's your chart, I don't want to be a smarta… :slight_smile: Thanks for compiling it!

I just wrote an entire essay and deleted it because… idk…
Anyway, for now the chart is a general idea. I did make a post asking for a benchmarking utility for Topaz Labs AI that compiles information the way UserBenchmark does. It'd be nice to have a fully automated system like that; see the sketch below.
Though for now this is what we'll have. Also, with confirmation that the RTX 30 series still needs work since those cards aren't being 100% utilized, and that TensorFlow still has optimizations to be made that could bring performance boosts for the RTX 20 series, the current results should be taken lightly - it sounds like these metrics will change dramatically over time.
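
To show the idea, here's a rough sketch of what such a wrapper could look like; the executable name and flags below are hypothetical placeholders, not VEAI's real CLI syntax:

```python
# Rough sketch of an automated "UserBenchmark-style" timing wrapper.
# NOTE: the executable name and flags are hypothetical placeholders -
# substitute whatever command your VEAI version actually exposes.
import subprocess
import time

def seconds_per_frame(cmd: list[str], total_frames: int) -> float:
    """Run one upscale job and return the average seconds per frame."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return (time.perf_counter() - start) / total_frames

if __name__ == "__main__":
    # Hypothetical invocation; adjust to your actual tool and test clip.
    cmd = ["veai.exe", "--input", "test_1080p.mp4", "--model", "alq-9", "--scale", "2"]
    print(f"{seconds_per_frame(cmd, total_frames=1800):.3f} s/frame")
```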

Yes, the fully automatic "like UserBenchmark" tool was something I proposed too. Topaz didn't pick the idea up. The benchmark tool from a few months ago gathered some data but is now obsolete.

Reporting in with VEAI 1.9.0.
I also forgot to note that I updated my drivers, and didn't keep track of when.
But at least with my Pascal GTX 1080 there's been a big performance increase, roughly a 33% boost compared to VEAI 1.8.1.
Again, I did update my drivers, which were probably two months old, so I can't really say which of these updates did it. But I'm inclined to say it's on VEAI's end, since they were aware of an issue with Pascal cards being slower than intended, caused by older Windows 10 versions - mine being 1909. That also seems to be why they've recommended Windows 10 v2004 or newer. (No, I haven't updated Windows. I can only get version 20H2, and I've had issues with my video editors on that version, so I've stayed on 1909 all this time.)

Check these charts!
https://3dnews.ru/1026573
https://3dnews.ru/1028403