NVIDIA RTX 3090/A6000 vs AMD RX 6900 XT VEAI Benchmark

Tests were done with VEAI 2.5.0b1, the latest GPU drivers, and no overclocking. TIFF output was chosen for stability. Proteus was run with the same settings on both cards.

CPU: AMD 5950X
RAM: 64GB 3600MHz
Storage: Samsung 980 Pro 2TB

NVIDIA RTX A6000 (which scored the same as the RTX 3090, so one column covers both) vs MSI Gaming Z Trio AMD RX 6900 XT. All numbers are seconds per frame, lower is better (NVIDIA left, AMD right).

*1080p => 4K with grain
Artemis: 0.31 vs 0.31
Gaia: 0.43 vs 0.4

Chronos v2 (1080p with 23.976 => 60 and no upscaling): 0.33 vs 0.16
Proteus v2: 0.37 vs 0.37

*480p => 4K with grain
Artemis: 0.21 vs 0.22
Gaia: 0.19 vs 0.18

Chronos v2 (480p with 29.97 => 59.94 and no upscaling): 0.04 vs 0.03
Proteus v2: 0.25 vs 0.24

As you can see, AMD's significantly faster processing speed completely dominates NVIDIA in Chronos, and it leads by almost 10% in Gaia. I didn't expect this from DirectML, which apparently favors AMD over NVIDIA.

+100% performance in the Chronos model and +10% in the Gaia models is what you get from the AMD RX 6900 XT.
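If you want to check my math, here is a tiny plain-Python sketch (nothing VEAI-specific, just the 1080p numbers above; lower seconds per frame is faster, so AMD's lead is the NVIDIA time divided by the AMD time):

```python
# Seconds per frame from the 1080p => 4K runs above: (NVIDIA, AMD).
results = {
    "Artemis":    (0.31, 0.31),
    "Gaia":       (0.43, 0.40),
    "Chronos v2": (0.33, 0.16),
    "Proteus v2": (0.37, 0.37),
}

for model, (nvidia, amd) in results.items():
    # Lower seconds/frame is faster, so AMD's lead is nvidia/amd - 1.
    lead = nvidia / amd - 1
    print(f"{model}: AMD {lead:+.1%} vs NVIDIA")

# Chronos v2 comes out around +106% (the "+100%" above),
# Gaia around +7.5% (the "almost 10%").
```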

The Artemis/Dione models perform nearly identically on both cards. I think you've made up your mind by now, right? Stability? Time will tell, and I will update this topic once I have something to share.

LONG LIVE AMD :crown:

…for now.

4 Likes

Thank you for testing & sharing. :grinning: :+1:
It is very useful for all of us who want to purchase a new GPU.

If AMD RX6900XT >= Nvidia RTX3090,
I think we can assume similar results between: :thinking:
RX6800XT vs RTX3080,
RX6800 vs RTX3070,
RX6700XT vs RTX3060Ti,
RX6600XT vs RTX3060

Might be, but DirectML favors AMD over Nvidia, I guess.

1 Like

So… could we assume from this that 2x 6900XT would --destroy-- a single 3090? Because I have a 3090 that I'm using mostly for DaVinci Resolve at present, but at the prices these are currently going for I could buy 2x 6900XT… and that would certainly make me a happy VEAIer… :smiley:

If you use Gaia, Proteus, or Chronos => then yes
If Artemis, Dione, or other models => then no
Multi-GPU does not scale properly with the Artemis- and Dione-related models.

1 Like

I should probably post a new topic for this question, but I've yet to figure out my ideal methods…

I want to upscale 5.7K from consumer 360 cameras to 8K and 12K. And encoding introduces a lot of mush, because even 12K across the full 360° sphere is only ~1440p (guessing) on a 16:9 segment.
Plus I'm not sure VEAI understands 360 very well… the centre is fine, yes, but unless it fully comprehends why the top and bottom are distorted in the source, it's not going to process them correctly. Haven't seen any -obvious- negative results yet, but it just seems… risky. Heh
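For what it's worth, the ~1440p guess roughly checks out if you treat the equirectangular frame linearly; here's the back-of-envelope version (the 80° field of view is just an assumption):

```python
# How much of a 12K equirectangular 360 frame a flat 16:9 view "sees".
# Linear approximation near the equator; pixel density drops toward the poles.
width_360 = 11520        # 12K equirectangular width spans the full 360 degrees
h_fov = 80               # assumed horizontal field of view, in degrees

window_w = width_360 * h_fov / 360
window_h = window_w * 9 / 16
print(f"~{window_w:.0f} x {window_h:.0f}")   # ~2560 x 1440, i.e. 1440p
```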

Anyway, I have a spare R9 3900X sitting around, while my Resolve PC has the 3090 and a 5950X. I also have a couple of 3080s, but am now thinking maybe swapping them for 6900XTs is a better bet, if they're a match for the model I need. But I don't have a spare AM4 motherboard, so that leaves me with a few conundrums:

  • how much does the CPU matter? mid-range Alder Lake (possibly) vs the 3900X? I see you mentioned a 5% uptick going from a 5800X to a 5950X… that doesn't seem worth it to me :smiley:
  • how much does PCIe matter? 3.0 vs 4.0?
  • and what is the right model for my upscale, do you think? I think I probably need several passes to bring out good detail in e.g. trees, which always look horrible in any video I render. You can check out 'nowandrew' on YouTube to see what I've done so far; ID0032SR in 8K is the VEAI result.

Higher CPU clock speed = better performance. Single-core clock speed also matters.
PCIe 3.0 or 4.0 is just the same for now; no performance impact.
I use Artemis all the time, and I think the Artemis models are the best. If you need customization, then Proteus is a better option for that, but it's still not better than Artemis overall.

And are the RTX 3xxx cards getting full Tensor performance yet? Or hasn't that update happened?

Tensor cores give a very small boost, or no boost at all, to VEAI performance.

But TensorFlow on DirectML is what's given the RX 6xxx cards the edge over RTX, yes?
And that's still to be enabled for RTX? Or did I read that wrong?

Tensor cores just accelerate AI; they are not a requirement to run it. It's hard to know what matters most, but based on my tests, AMD performs much better than Nvidia in many ways. Maybe AMD GPUs are better optimized for DirectML.

1 Like

Maybe it's because AMD GPUs have better FP16 (half) performance than Nvidia GPUs.

For example, FP16 (half) performance:
RX 6900 XT: 46.08 TFLOPS (2:1 FP16:FP32 ratio)
RTX 3090: 35.58 TFLOPS (1:1)
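In case the notation is unclear: (2:1) and (1:1) are the FP16:FP32 throughput ratios, so the FP16 figures follow directly from each card's published peak FP32 rate. A quick sketch:

```python
# FP16 rate = FP32 rate x (FP16:FP32 ratio), per the figures quoted above.
cards = {
    #              FP32 TFLOPS  FP16:FP32
    "RX 6900 XT": (23.04,       2),  # RDNA2 runs two FP16 ops per FP32 lane
    "RTX 3090":   (35.58,       1),  # Ampere CUDA cores are 1:1 without Tensor cores
}

for name, (fp32, ratio) in cards.items():
    print(f"{name}: {fp32 * ratio:.2f} TFLOPS FP16 ({ratio}:1)")

# RX 6900 XT: 46.08 TFLOPS FP16 (2:1)
# RTX 3090: 35.58 TFLOPS FP16 (1:1)
```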

1 Like

Not really, because in most models the performance seems to be the same.

Any comparison of the 6800XT vs the 6900XT? If it's close-ish (i.e. more like the 72 vs 80 that the compute-unit counts suggest, rather than the 2:3 that the prices do), then it's worth putting 2x 6800XT in a machine instead of 1x 6900XT. Best of both worlds, at optimum price efficiency too.

I would say the 6800XT could be 5% slower than the 6900XT. Of course you can use 2x 6800XT, but remember, Artemis doesn't scale across multiple GPUs properly. And power consumption is something you should care about. What I can say is that running VEAI loads the GPU just like crypto mining.

OK, thanks. Yes, but for models that can use mGPU, there should be close to double the performance of a single 6900XT, for about the same price and less than 2x the power draw, so it seems like the optimum config. When using single-GPU models, like the Artemis you've recommended, if the drop is only 5% then unless you really, --really-- need that extra 5% performance, you either save money with one 6800XT or get a second 6800XT to use when the mGPU models are suitable.
Yes, if one is --only-- going to use Artemis --and-- the 5% extra is important, then the 6900XT makes sense… otherwise the 6800XT seems a no-brainer to me :slight_smile:
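A back-of-envelope version of that value argument; every number here is either quoted from this thread (the 5% gap, the 2:3 price ratio) or an assumption (the ~1.9x mGPU scaling), not a measurement:

```python
# Rough speed-per-price comparison of 1x 6900XT vs 2x 6800XT.
PRICE_6900XT = 1.0          # normalize the 6900XT's price to 1.0
PRICE_6800XT = 2.0 / 3.0    # the "2:3" price ratio mentioned above

speed_6900xt = 1.00         # normalize single-6900XT speed to 1.0
speed_6800xt = 0.95         # "could be 5% slower"
mgpu_scaling = 1.9          # assumed: near-2x on Gaia/Proteus/Chronos

single = speed_6900xt / PRICE_6900XT
dual = (speed_6800xt * mgpu_scaling) / (2 * PRICE_6800XT)
print(f"speed per unit price, 1x 6900XT: {single:.2f}")
print(f"speed per unit price, 2x 6800XT: {dual:.2f}")  # ~1.35, i.e. better value
```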

I use the Artemis models most of the time, so mGPU is not for me. And I want to save myself from the electrical surges when peak loads hit, which could go up to 1200W. Then you have to buy a new 1600W PSU, or maybe a 2000W one. So a single GPU is the way to go; mGPU is not worth it in any scenario.

I'll tell you why it's worth it for me.
A 5.7K upscale to 4320p takes about 4s per frame. For even a 10-minute video, that's a couple of days with a single GPU. Any time an mGPU model --can-- be used (satisfactorily), it will save me a day of processing.
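Quick arithmetic behind that estimate, assuming a 30 fps source (at 60 fps everything doubles):

```python
# Rough render-time estimate: total frames x seconds per frame.
fps = 30                 # assumed source frame rate
minutes = 10
sec_per_frame = 4.0      # "about 4s per frame" for the 5.7K -> 4320p upscale

frames = minutes * 60 * fps
hours = frames * sec_per_frame / 3600
print(f"{frames} frames -> {hours:.0f} hours on one GPU")  # 18000 frames -> 20 hours
print(f"with ~2x mGPU scaling -> {hours / 2:.0f} hours")   # -> 10 hours
```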
If you're saying that Artemis is so much better than the mGPU models that it's literally not even worth trying them, then OK, I can't argue there.

Btw, electrical surges you can deal with using a $40 surge protector :))

And then upscale everything again? Or can you not even run it for 5 minutes before the computer or the surge protector shuts off? You can spend money the way you want. But don't come back and tell me why VEAI doesn't save you 50% of the upscaling time, because that won't happen. A 20% time saving for 100% more money + 100% more electricity + 1000% more surge trips or computer crashes, which could even destroy the footage. If it's worth it for you, then OK. I'm here to give advice, not to stop you.