What does VEAI performance look like between AMD/NVIDIA?

Thanx for the chart.
Could you include the input resolution and the compression used in the chart?
Also, the RX480 results seem a little too good to be true …

Everything in the chart is 2x upscaling, run by users here on the video I provided in the original post. Anything higher than 2x becomes more of a VRAM limitation.

I know it's from the video you provided, but you forgot to put the codec and resolution in the post as text info - so one has to actually download the whole video to get this information :slight_smile:

Yes, the numbers are from members, I know - that doesn't change the fact that the RX480 numbers are most probably wrong - compare them to the other cards and judge the numbers…

I think a lot of that information isn't needed.
Drive speed shouldn't be a problem unless you're near real time with 4K-8K in a lossless format (see the rough estimate after this post).
Memory speeds are being recorded. Asking for more specifics like latency doesn't seem necessary.
We can start tracking motherboard chipsets, since it's been revealed that PCIe generation apparently has a big impact on performance.
Power supply isn't necessary. I've never seen it actually be necessary to track in anything. If your system weren't being supplied enough power, it would power cycle or shut off entirely.
And generally I hope anyone contributing numbers here is aware of their system thermals and knows whether it's throttling or not. VEAI performance is pretty consistent, so if any results are reported as ranging between, say, 1.00 and 3.00 seconds per frame, we can probably assume that system is throttling.
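For context on when drive speed does start to matter, here's a rough back-of-the-envelope estimate (my own arithmetic, not a chart measurement) of what near-real-time uncompressed 4K output would demand:

```python
# Rough estimate of the sustained write bandwidth needed for
# near-real-time uncompressed 4K output (illustrative numbers only).
width, height = 3840, 2160   # 4K UHD frame
bytes_per_pixel = 3          # 8-bit RGB, no compression
fps = 30                     # "near real time"

frame_size = width * height * bytes_per_pixel   # bytes per frame
bandwidth = frame_size * fps / 1e6              # MB/s

print(f"Frame size: {frame_size / 1e6:.1f} MB")    # ~24.9 MB
print(f"Sustained write: {bandwidth:.0f} MB/s")    # ~746 MB/s
```

Roughly 750 MB/s sustained would saturate any single spinning drive, which is why the output drive only becomes a concern at lossless, near-real-time rates.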

I’m still not sure what you’re asking for here.
You'd be downloading the video regardless and getting that information along the way, since for the sake of such a test, consistency from using the same source is needed.

And you'd be surprised at how well AMD cards can do AI image processing when the AI isn't built around proprietary hardware. If you take a look at some AI interpolation benchmarks, you'll see that older-generation AMD cards hold up really well. So I'm personally not that suspicious of the RX480.

Outputting to an external SMR drive can be a serious bottleneck, so the output drive is not a bad idea to include.
Chipset is a good idea - gathering the actual PCI Express speed would be even better, since the actual speed can vary from slot to slot with the population of cards, SSDs, etc… so the chipset alone doesn't do the trick.
Power supply: I agree - but it wouldn't hurt to include the info, because many who are interested in a setup could use it to judge whether a PSU is sufficient before putting a build together.
Throttling: I doubt most people are aware of it, and there are combinations out there that slow down seriously when running too hot - people often comment things like "it's stable in games", which isn't comparable - so I'd at least add a note telling people to check temps and throttling in HWiNFO.

But of course, it's your chart, you decide :slight_smile:

Probably chipset and PCIe population order, unless you're a user who knows how to check the manual and give an accurate report on what PCIe speed you're running at. As far as I know there's no digital way to check that. Maybe some advanced BIOS on motherboards newer than mine can do it. But if we also go with the argument that someone thinks "stable in games" means a non-throttling PC, then maybe just gathering PCIe population info would be better than outright asking what speed it is running at.

Edit: I'd also be terrified if I knew someone was working off an external storage device rather than using it strictly as a media vault or data backup. Please don't give me nightmares about this. But yes, that is a good point as well - just please, nobody actually do this.

Ok, then don't put the resolution and codec in :slight_smile: Your chart…
I was in the situation of looking this up, and since I have a rough feel for the cards I am using at the moment, it would have saved me the time of actually downloading the whole thing. Compared to the time we've spent discussing this here, it would take 10 seconds to include "1080p, h264" or similar in the chart… Imagine you are looking this up on a mobile device with a small data plan… download the whole video just to get the resolution and codec? Ok - your call :slight_smile:

I am fully aware of AMD's performance with VEAI; I have tested a few dozen cards…

Look at the chart:
Artemis LQv9:
RX480: 0.14~0.20 sec/frame…
RTX 3090: 0.12~0.20 sec/frame

See what I mean?
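To put those numbers in perspective, here's a quick conversion (my own arithmetic; the 10-minute clip is a made-up example) of the best-case sec/frame figures into effective fps and total processing time:

```python
# Convert the chart's best-case sec/frame figures into effective fps
# and the time to upscale a hypothetical 10-minute 30 fps clip.
clip_frames = 10 * 60 * 30   # 18,000 frames

for card, sec_per_frame in [("RX480", 0.14), ("RTX 3090", 0.12)]:
    fps = 1 / sec_per_frame
    hours = clip_frames * sec_per_frame / 3600
    print(f"{card}: {fps:.1f} fps, {hours:.2f} h for the clip")
```

A mid-range card from 2016 landing within about 15% of an RTX 3090 is exactly the kind of result that deserves a second look.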

You could also add VRAM and the actual GPU chip to the chart. Stating the card model alone is sometimes ambiguous; there are many models out there with different chips and VRAM sizes under the same name.

Simply fire up GPU-Z and put the reported speeds in… It literally gives you "PCI-E3/16x"…
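For NVIDIA cards there's also a scriptable route - a minimal sketch, assuming nvidia-smi is installed and on the PATH (AMD users would stick with GPU-Z):

```python
# Query the current PCIe generation and link width via nvidia-smi
# (NVIDIA only; assumes nvidia-smi is installed and on the PATH).
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,pcie.link.gen.current,pcie.link.width.current",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()

for line in out.splitlines():
    name, gen, width = [f.strip() for f in line.split(",")]
    print(f"{name}: PCIe Gen {gen} x{width}")
```

One caveat: the link can downclock at idle on some cards, so it's best to read it while the GPU is under load.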

Many laptop or Mac users use external devices. And there are many SMR drives out there nowadays - a fact 99% of users aren't even aware of. Copying a big video file onto the drive in one piece is totally fine, and so is reading it back… putting 50 smaller files on it works fine and fast, too… But as soon as you hit a constant load of single files over minutes or hours, the drive starts to bottleneck so hard that it can actually slow down the encoding in VEAI… Imagine you are cycling through the models, making one or two runs with each - you can easily get to the point where the drive is limiting the whole process…
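If you want to check whether a drive behaves this way, a sustained small-file write test makes it obvious. A rough sketch (my own test idea; the target path is a placeholder):

```python
# Write a steady stream of frame-sized files and watch the throughput.
# On an SMR drive the rate typically collapses once its CMR cache
# fills; on a conventional drive it stays roughly flat.
import os, time

target = "E:/smr_test"                 # placeholder: point at the suspect drive
os.makedirs(target, exist_ok=True)
chunk = os.urandom(8 * 1024 * 1024)    # ~8 MB, roughly one image-sequence frame

t0 = time.time()
for i in range(2000):                  # ~16 GB total
    with open(os.path.join(target, f"frame_{i:05d}.bin"), "wb") as f:
        f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    if (i + 1) % 100 == 0:             # report throughput per 100-file batch
        t1 = time.time()
        print(f"files {i - 98}-{i + 1}: {100 * 8 / (t1 - t0):.0f} MB/s")
        t0 = t1
```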

Well, it's your chart, I don't want to be a smarta… :slight_smile: Thanx for compiling it!

I just wrote an entire essay and deleted it because… idk…
Anyway, for now the chart is a general idea. I did make a post asking for a benchmarking utility for Topaz Labs AI that would compile information the way UserBenchmark does. It'd be nice to have a fully automated system like that.
Though for now this is what we'll have. Also, with confirmation that the RTX 30 series still needs work since it's not being 100% utilized, and that TensorFlow still has optimizations pending that could bring performance boosts for the RTX 20 series, the current results should be taken lightly - it sounds like these metrics will change dramatically over time.
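Until something official exists, even a small wrapper script could standardize the reports. A rough sketch of the idea - the VEAI command line itself is left as a placeholder, since I don't want to guess at the exact CLI flags:

```python
# Minimal timing harness: runs a user-supplied VEAI command and reports
# average sec/frame plus basic system info. The command and frame count
# are placeholders - fill in your own VEAI invocation and test clip.
import platform, subprocess, time

cmd = ["veai.exe", "..."]   # placeholder: your actual VEAI command line
frames = 1000               # placeholder: frame count of your test clip

start = time.time()
subprocess.run(cmd, check=True)
elapsed = time.time() - start

print(f"OS: {platform.platform()}")
print(f"CPU: {platform.processor()}")
print(f"Total: {elapsed:.1f} s, {elapsed / frames:.3f} sec/frame")
```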

Yes, a fully automatic "UserBenchmark-like" tool was something I proposed, too. Topaz didn't pick the idea up. The benchmark tool from a few months ago gathered some data but is now obsolete.

Reporting in with VEAI 1.9.0
Also, I forgot that I updated my drivers and didn't keep track of that.
But at least with my Pascal GTX 1080 there's been a big performance increase of ~33% compared to VEAI 1.8.1.
Again, I did update my drivers, which were probably 2 months old, so I can't really say which of the two updates did it. But I'm inclined to say it's on VEAI's end, since they were aware of an issue with Pascal cards being slower than intended due to a problem with older Windows 10 versions - mine being 1909. That also seems to be why they've recommended Windows 10 v2004 or newer. (No, I haven't updated Windows. I can only get version 20H2, and I've had issues with my video editors on that version, so I've stayed on 1909 all this time.)

Check these charts!
https://3dnews.ru/1026573
https://3dnews.ru/1028403

Thanx, I wasn't aware of this Russian site - very detailed tests, I'll have to take some time to read them. Did I miss the VEAI GPU benchmarks? I am only seeing CPU benchmarks…

One is the CPU test and the other is the GPU test. The hardware encoding of different brands of graphics cards will also affect processing speed. It seems that NVIDIA's software support and development is better than AMD's.

There is a big gap between AMD and NVIDIA hardware encoding speeds, for example with the ffmpeg H.264 encoding used by VEAI. This is why we can't just look at the hardware and ignore software development and compatibility. And this is AMD's weakness.

No hardware H.264 encoding is done in VEAI; it's pure x264, which is a software encoder.

Ok, too bad they didn't include VEAI GPU performance in the test - that would have been much more important, and I see no reason not to include it if the program is being tested anyway… But anyway, very nicely done tests.


https://3dnews.ru/1028403
Here is the GPU test!

VEAI uses the CPU to decode and encode, not the GPU.

"NVIDIA NVENC" is better than "AMD VCE", but that is irrelevant here, because VEAI is not using a hardware encoder.

It's just like pointing out that the RTX 3000 series beats the AMD RX 6000 series in ray-tracing game benchmarks - meaningless here, because VEAI won't get any benefit from those extra ray-tracing cores.

For more detail, you may refer to the post from the developer.

That's the same link you already posted.
Again: no VEAI testing on the GPU, so no relevance. A nice test, but not relevant to VEAI.