What does VEAI performance look like between AMD/NVIDIA?

Thanks, I wasn't aware of this Russian site - very detailed tests; I'll have to take some time to read them. Did I miss the VEAI GPU benchmarks? I only see CPU benchmarks…

One test is for the CPU and the other for the GPU. The hardware encoding of different graphics card brands also affects processing speed. It seems that NVIDIA's software support and development is better than AMD's.

There is a big gap between AMD and NVIDIA graphics hardware encoding speeds, for example with the ffmpeg H.264 encoder used by VEAI. This is why we can't just look at the hardware and ignore software development and compatibility - and that is AMD's weakness.

No hardware H.264 encoding is done in VEAI; it's pure x264 (software encoding).

OK, too bad they didn't include VEAI GPU performance in the test - that would have been much more important, and I see no reason not to include it if the program is being tested anyway. But still, very nicely done tests.


https://3dnews.ru/1028403
Here is the GPU test!

VEAI uses the CPU to decode and encode, not the GPU.

"NVIDIA NVENC" is better than "AMD VCE", but that is irrelevant here, because VEAI does not use a hardware encoder.

Likewise, pointing out that the RTX 3000 series beats the AMD RX 6000 series in ray-tracing game benchmarks is meaningless here, because VEAI gets no benefit from those extra ray-tracing cores.

For more detail, see the post from the developer.

That's the same link you already posted.
Again: no VEAI testing on the GPU, so it's not relevant. Nice test, but not relevant to VEAI.

@reiner is correct: the benchmarks are great, but there are no GPU benchmarks for VEAI, only CPU. To be honest, I'm not sure how many people here rely on CPU processing for VEAI, although it's strange to see a production app where a Ryzen 5900X is beaten by a 10900K.

Hopefully someone will post GPU tests for the 6800/6900 and the RTX 3000-series GPUs.

AMD Ryzen 9 5950X 16-core, 32GB DDR4-3600, RTX 3070 8GB, Strix X570-E Gaming, SSD 980 Pro 500GB, all PCIe 4.0.
This is the test without overclocking.
0.09 sec/frame, 720x480 to 1920x1080.
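For context, seconds-per-frame numbers like these convert easily to throughput and total runtime. A minimal sketch (the 24 fps source rate and the 90-minute clip length are assumptions for illustration):

```python
def throughput(sec_per_frame):
    """Frames processed per second of wall time."""
    return 1.0 / sec_per_frame

def total_hours(clip_minutes, sec_per_frame, fps=24):
    """Wall-clock hours to process a clip (source fps is an assumption)."""
    frames = clip_minutes * 60 * fps
    return frames * sec_per_frame / 3600

# 0.09 sec/frame, as reported above, on a 90-minute movie at 24 fps:
print(f"{throughput(0.09):.1f} frames/sec")  # ~11.1
print(f"{total_hours(90, 0.09):.2f} hours")  # ~3.24
```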


Ryzen 3950X
128GB 3200
Radeon 6900 XT
1TB 970 Pro

Gaia-CG v5 200%: 0.25s / frame
Theia-Fidelity-v4 200%: 0.28s / frame

I don't mean to be pushy, but where is this chart of user experiences?

Can we change the format of a thread like this to a chart pinned at the top, plus a form where people submit their experience with required fields and nice-to-have fields?

Then we could move past this loosey-goosey info and discuss realistic expectations, time-saving tips, how to improve our systems, and maybe even what to do with our newfound awesome pictures and videos.


In my two systems:

10900K
720p to 2K (200%), Artemis v9
RTX 2080: 0.22~0.25 sec/frame
RTX 3070: 0.22~0.24 sec/frame

10700F
720p to 2K (200%), Artemis v9
RTX 3070: 0.34~0.36 sec/frame
RTX 3080: 0.33~0.36 sec/frame
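A rough reading of the two systems above: the same RTX 3070 is much slower behind the 10700F than behind the 10900K, which suggests the CPU, not the GPU, is the bottleneck. A quick sketch of that comparison, using the midpoints of the reported ranges (my own arithmetic, not an official benchmark):

```python
# sec/frame midpoints from the two systems above, same RTX 3070
sf_10900k = (0.22 + 0.24) / 2  # 0.23
sf_10700f = (0.34 + 0.36) / 2  # 0.35
slowdown = sf_10700f / sf_10900k - 1
print(f"Same GPU, weaker CPU: {slowdown:.0%} slower")  # ~52% slower
```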

So what’s the point of upgrading GPU?

My system is running an AMD 3900X.
GTX 1060: I don't use it for VEAI.
Quadro P4000: 0.17~0.21 sec/frame

My card is half the price of yours and meant for commercial use. The Turing versions of NVIDIA's commercial cards start at $2000, but older Pascal versions like mine can be bought for $500 apiece. I don't know whether you use this software personally or commercially, but right now those cards are more useful for gaming than for commercial uses like video editing.

If I can get it working right, I should be able to run two instances per card. I'll figure out why it's not working correctly.

Just sharing some testing with Handbrake.

The original file was 534MB, and 1.2GB after cleanup with VEAI.

I compressed it with Handbrake using H.265 NVENC (i.e., the graphics card encoder), with the default preset and everything else on auto. I got these results:

Quality 18: 1GB
Quality 20: 837MB
Quality 22: 647MB
Quality 24: 486MB
Quality 26: 384MB
Quality 30: 246MB (at this setting the video was still watchable, but I started noticing the quality drop - hair strands weren't as defined as before)
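One way to read that list is as a size ratio against the 1.2GB VEAI output. A small sketch (sizes taken from the list above; treating "1GB" as 1024MB is my assumption):

```python
veai_mb = 1200  # ~1.2GB VEAI output, as reported above
results = {18: 1024, 20: 837, 22: 647, 24: 486, 26: 384, 30: 246}  # quality -> MB

for quality, mb in results.items():
    # Fraction of the VEAI file size kept at each quality setting
    print(f"Quality {quality}: {mb} MB ({mb / veai_mb:.0%} of the VEAI file)")
```

Each step of +2 on the quality slider cuts the file size by roughly 20-25% in this run.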

I find this very encouraging myself. I have Blu-ray rips of some movies whose file size I couldn't get down much at all. I also recently decided to double the size of my TV shows that are 576p. I was worried about how big the final files would be, but not anymore.

Of course it's okay to play games occasionally, but this machine is obviously built specifically for editing and rendering video. Why not choose a professional graphics card? Because it isn't necessary: although professional cards come with better components and more professional software support, they are almost useless in everyday use unless you're a professional design company. Why choose a 16-core or higher CPU? Video editing and rendering can't rely on single-core performance alone; more cores mean faster processing. Many professional video producers have run tests like these, and there are plenty of related videos on YouTube.
I have tested that this computer can run 5 VEAI instances at the same time, upscaling 480p to 2K. CPU and GPU temperatures sit around 65°C and their load at about 80%. The computer doesn't stutter at all, and you can watch videos online at the same time. It takes 40 hours to process 5 two-hour movies simultaneously, so each movie takes 8 hours on average. Next time I'll try running even more VEAI instances at once.
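As a sanity check, 8 hours per two-hour movie implies a per-frame time in line with the figures posted elsewhere in this thread. A sketch (24 fps is an assumed source frame rate):

```python
def implied_sec_per_frame(movie_hours, processing_hours, fps=24):
    """Back out the effective sec/frame from wall-clock processing time."""
    frames = movie_hours * 3600 * fps
    return processing_hours * 3600 / frames

# 5 concurrent jobs, 8 hours each for a 2-hour movie:
print(f"{implied_sec_per_frame(2, 8):.2f} sec/frame")  # ~0.17
```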

Sweet, it seems you've done lots of testing and have this figured out. When I launched multiple VEAI instances and got them running, before I jacked up my computer, they didn't use any more of the GPU. Instead, they ran slower: from a constant 0.19 sec/frame up to 0.26 sec/frame.

Is there something I need to change so that VEAI uses more than 45% of my GPU?

I'm nearing my wit's end trying to repair my computer, so if I reinstall Windows I want it set up better for this work, because the rest of this year is going to be spent on it. I don't know about next year yet, but this year is all about getting Plex working (even offline) and making 200-plus movies and shows watchable, if not upscaled to look good on 4K TVs one day.

PS: Does anyone have an HDR TV? How does this upscaling look on it?

You got the latest and greatest from AMD. Are those frames-per-second figures a good rate?

I wonder if this software prefers NVIDIA cards?

When I replace my GTX 1060, something like this would be a consideration. I've seen rumors that even computer games were only doing the minimum amount of work to support AMD. I hope that attitude changes with this latest generation of cards.

Seconds per frame will go up when running multiple VEAI instances at the same time; this is also determined by the number of CPU cores. Of course, if conditions permit, I'll use a higher-core-count CPU with dual graphics cards in CrossFire, but that still requires a lot of investment in hardware upgrades. A new version of DirectML was recently announced, which should also increase VEAI's speed. That's good news for any graphics card that supports DirectML.

Too bad there's no Navi 1 card on the list.

The W5700 is very cheap for a workstation card these days.

The Pro Navi 2 cards are also coming, but I haven't heard any rumors about them.

2x RTX 5000, Threadripper 3000-series 24-core, 128GB ECC.

Win 10, all standard settings.

Output: 16-bit TIFF.

VEAI 1.9.0.

Gaia CG v5: 0.23 sec/image
Gaia HQ v5: 0.23 sec/image
Artemis HQ (does not work with dual GPUs atm) → Beta v11: 0.19 sec/image
Artemis MQ v10: 0.19~0.20 sec/image
Artemis LQ v10: 0.18~0.19 sec/image
Artemis AA v9: 0.19~0.21 sec/image
Dione TD v1: 0.19~0.20 sec/image
Theia Detail v3: 0.24 sec/image
Theia Fidelity v3: 0.24 sec/image

Note: the curve is already flattening. Buying two small, inexpensive GPUs with 8GB each, PCIe 4.0, and around 20 TFLOPS FP16 seems like a good decision to me.

Like a 6700 XT or 6800, once prices come down.
