RX 6900 XT vs RTX 4080 Super: benchmark shows double the performance, but real video runs at almost the same speed

Hello everyone!
I “upgraded” my 6900 XT to an RTX 4080 Super. The benchmark shows almost double the performance in favor of the 4080 Super, and I can see similar results in other users’ benchmarks too.

But if I test a real video with Proteus, upscaling 1080p 30 FPS to 4K 60 FPS, I can barely see any difference in performance…
6900XT: 2m42s
4080s: 2m35s
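
Rough math, just as a sanity check (the Proteus 2X numbers are taken from the benchmark logs below; 1080p → 4K is a 2X upscale):

```python
# Real-world speedup vs. benchmark speedup for Proteus
rx_6900xt_s = 2 * 60 + 42   # 2m42s measured run
rtx_4080s_s = 2 * 60 + 35   # 2m35s measured run
print(f"real-world: {rx_6900xt_s / rtx_4080s_s:.2f}x")  # ~1.05x

# Proteus 2X benchmark fps (6900 XT: 9.57, 4080 Super: 16.06)
print(f"benchmark:  {16.06 / 9.57:.2f}x")               # ~1.68x
```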

Why is that so? Am I reading the benchmark the wrong way, or do I have to wipe some cache somewhere?

I appreciate your help :slight_smile:

Topaz Video AI  v5.0.3
System Information
OS: Windows v11.23
CPU: 13th Gen Intel(R) Core(TM) i9-13900K  31.781 GB
GPU: AMD Radeon RX 6900 XT  15.954 GB
GPU: Intel(R) UHD Graphics 770  0.125 GB
Processing Settings
device: 0 vram: 1 instances: 1
Input Resolution: undefined
Benchmark Results
Artemis		1X: 	14.45 fps 	2X: 	07.74 fps 	4X: 	02.67 fps 	
Iris		1X: 	15.98 fps 	2X: 	08.61 fps 	4X: 	02.82 fps 	
Proteus		1X: 	15.21 fps 	2X: 	09.57 fps 	4X: 	03.41 fps 	
Gaia		1X: 	06.34 fps 	2X: 	04.40 fps 	4X: 	02.83 fps 	
Nyx		    1X: 	05.91 fps 	2X: 	05.17 fps 	
Nyx Fast	1X: 	11.64 fps 	
4X Slowmo		
Apollo: 	14.20 fps 	
APFast: 	43.10 fps 	
Chronos: 	09.37 fps 	
CHFast: 	16.15 fps 	
16X Slowmo		
Aion: 	43.65 fps 	



Topaz Video AI  v5.0.3
System Information
OS: Windows v11.23
CPU: 13th Gen Intel(R) Core(TM) i9-13900K  31.781 GB
GPU: NVIDIA GeForce RTX 4080 SUPER  15.671 GB
GPU: Intel(R) UHD Graphics 770  0.125 GB
Processing Settings
device: 0 vram: 1 instances: 1
Input Resolution: undefined
Benchmark Results
Artemis		1X: 	30.31 fps 	2X: 	16.34 fps 	4X: 	03.74 fps 	
Iris		1X: 	28.97 fps 	2X: 	17.73 fps 	4X: 	04.22 fps 	
Proteus		1X: 	28.43 fps 	2X: 	16.06 fps 	4X: 	04.47 fps 	
Gaia		1X: 	10.65 fps 	2X: 	07.34 fps 	4X: 	03.89 fps 	
Nyx		    1X: 	12.67 fps 	2X: 	10.49 fps 	
Nyx Fast	1X: 	22.55 fps 	
4X Slowmo		
Apollo: 	35.93 fps 	
APFast: 	59.86 fps 	
Chronos: 	23.84 fps 	
CHFast: 	30.56 fps 	
16X Slowmo		
Aion: 	ERR fps 	

CPU: i9-13900K [limited to 280W TDP in BIOS, auto OC]
RAM: 32GB 3600MHz CL16 DDR4 Dual Channel [16-19-19-39, XMP 2.0 enabled]
GPU: Sapphire RX 6900 XT Nitro+ SE [no changes]
GPU switched to: MSI RTX 4080 Super Suprim X [no changes]
Mainboard: MSI Z690 Tomahawk DDR4 WiFi
PSU: Seasonic Prime TX 1000

You only converted the fps.

When you enable enhancement, the 4080 should show its muscles.

My 4090 is able to saturate a 16-core 7950X.

And the 4090 is 3.5x faster than a Radeon Pro W6800 (roughly an RX 6800/XT) with Nyx.

Was Proteus at 1X, or did you do any upsizing?

Oh sorry, forgot to mention it… I upscaled 1080p 30 FPS to 4K 60 FPS. It should still run at a higher fps…

Useless reply…

Well, I did limit the CPU to 280W TDP in total, so it can run at 280W all day long. I am using a Liquid Freezer II 420 AIO and it keeps the CPU under 90°C at all times.
Except MSI did a great job of limiting the TDP in the new UEFI again… I set it to 280W and it still goes all the way up to 320W sometimes if it can…

Topaz only uses 170W max while doing its job.

Did you happen to note any difference in performance compared to the not-so-stable unlimited setting?

I did some testing on my 13900K back in the day and ran it with a slight undervolt, trading a little performance for lower temps. My settings were a max TDP of 253W and roughly a -100mV undervolt. I have flashed my BIOS a few times since then, so I have to tune the CPU again. I just picked 288W because I was a little lazy, and it worked fine. Voltage is 1.4V under light load with 5.5GHz on the P-cores and 4.3GHz on the E-cores.
Strangely, when I use Cinebench to stress-test the temps, the voltage drops to 1.26V and I get 5.3GHz P-core and 4.2GHz E-core, yet package power somehow climbs all the way up to 320W…
I honestly do not understand this stupid behaviour of the CPU on the MSI board. If I limit the CPU to 253W, it consumes 200W max in Cinebench. If I set 288W, it goes up to 320W max. If I set it to unlimited, it goes all the way up to 380W+… The unlimited setting was also very stable, by the way; I used it for a few weeks with the new AIO for testing purposes.

I was aware of the HUGE power consumption of this CPU and the likely long-term damage, which is why I limited the power to 240W max and adjusted the voltage back in the day. I use the CPU normally, for office work and some games here and there, with low power consumption and at most 1.3V at 5.5GHz P-core and 4.3GHz E-core… so the CPU should be pretty fine.

RAM was also an issue on this board before. Maybe a UEFI issue, maybe the CPU, maybe just the RAM. I used Corsair Vengeance 32GB 3000MHz CL15; XMP was unstable and had some issues. After getting RipJaws 32GB 3600MHz CL16, it works flawlessly with the new UEFI version, and XMP is stable too.

The Topaz Video AI benchmark also barely uses the CPU at all. The CPU just chills…
I had a 3950X on an Asus TUF X570 before, but I had a ton of issues with drivers, audio, and the GPU; that was the reason for the 13900K, which was kind of a rage-buy. Now that I have switched from the RX 6900 XT to the 4080 Super, a ton of other issues are gone as well… I can finally play 8K 120 FPS videos (with the 6900 XT they would not play at all, or lagged and crashed). I no longer get settings resets in Wattman (there were never any driver issues or errors, but it always reverted to default settings after a PC restart). And I can finally use Topaz Photo AI without ANY crashing (with the 6900 XT it crashed all the time).

I am just curious about such a different result between the benchmark and a real video upscale with Topaz. Maybe it is only an issue when I use upscaling and FPS generation at the same time… I may try that out and put the 6900 XT back in for some more testing. Normally the 4080 should still be faster, even when using two models at the same time.

I am hugely curious to know what effect the new Intel recommendations for CPU stability have on performance, because all the published benchmarks for 13th and 14th generation i9s are based on the “less stable” unlimited settings.

I also can’t help wondering how many of the “TVAI crashed my system” posts by people with i9 systems are caused by this Intel issue and not by TVAI.

A marketing marvel. :rofl:

Okay guys, I did some quick testing…
I should mention that I have now limited my CPU to a 200W TDP and disabled Enhanced Boost. I am getting 5GHz on the P-cores and 3.9GHz on the E-cores, and temps are now around 60°C.

Upscaling 1080p30fps to 4K30fps

6900XT → 4080super

Proteus: 10m5s → 9m52s
Nyx: 12m27s → 6m42s

No enhancing, FPS generation only: 1080p30fps → 1080p60fps

Chronos Fast: 6m3s → 3m52s
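
For what it’s worth, here is the rough math on those runs:

```python
# Speedups from my timed runs (6900 XT -> 4080 Super)
def speedup(old, new):
    secs = lambda m, s: m * 60 + s
    return secs(*old) / secs(*new)

print(f"Proteus:      {speedup((10, 5), (9, 52)):.2f}x")   # ~1.02x
print(f"Nyx:          {speedup((12, 27), (6, 42)):.2f}x")  # ~1.86x
print(f"Chronos Fast: {speedup((6, 3), (3, 52)):.2f}x")    # ~1.56x
```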

It seems like Proteus does not care at all about the new GPU…
Sadly I cannot test any further, since the 6900 XT sold quickly and I no longer own it.

Are you guys getting faster times if you use other formats or specific settings?
I am using these settings:
[screenshot: export settings]

Output quality is more important to me than speed.

You can always get faster processing by lowering the quality.

So I choose a constant bitrate and high quality.

By the way, in the preferences there is a memory slider; I hope you set it to 100% for the 4080.

Well, I do not see any difference in processing time between H.265 LOW quality and HIGH quality. Even if I use ProRes, it all stays the same. Of course there are small variations in time, but that is normal and depends on the current load on the PC.

Yes, the slider is maxed out. There are some speed improvements, but apparently not for everything, as you can see from the Proteus times.

Speed is less of an issue for me; I am just curious why the 4080 Super performs so well in benchmarks but not in real use with Proteus (from what I have seen).

Are you using Proteus Auto Parameters or Proteus Manual Parameters? :thinking:

When you select Auto, it has to analyze the scene every few frames to readjust the parameters. This “estimate” process increases processing time, and that is why it is slower than the benchmarks.

If you want a faster processing speed, you should select Manual and set the parameters yourself.

Also, the benchmark only measures the AI model’s processing speed. It does not include video encode/decode or read/write I/O to the hard disk.
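
As a rough mental model (purely illustrative, not TVAI’s actual pipeline, and all the numbers are made up):

```python
# Why a 2x-faster model can barely change the total time:
# per-frame cost = decode + model + encode, plus a periodic "estimate"
# pass in Auto mode. The benchmark measures only the "model" part.
FRAMES, ESTIMATE_EVERY = 1800, 8                   # hypothetical 1-min 30fps clip, V4 Auto
DECODE_S, ENCODE_S, ESTIMATE_S = 0.04, 0.05, 0.5   # made-up per-frame costs

def total_seconds(model_s):
    total = 0.0
    for i in range(FRAMES):
        total += DECODE_S + model_s + ENCODE_S
        if i % ESTIMATE_EVERY == 0:                # Auto re-estimates periodically
            total += ESTIMATE_S
    return total

print(f"slower GPU: {total_seconds(0.10):.0f} s")  # ~455 s
print(f"2x faster:  {total_seconds(0.05):.0f} s")  # ~365 s, nowhere near 2x overall
```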

Thank you for your reply!
I used Auto for best “comparison” results.

Isn’t “Estimate” a manual, one-time thing whose result is then used for the rest of the video? So it should give faster processing times?

I understand that the benchmark only tests the AI model. But what slows Proteus down SO MUCH? Maybe my 6900 XT was also held back by this issue, and the 4080 Super is being slowed down even further…

As you can see, Nyx had the expected results; the upgrade literally cut the times in half compared to the 6900 XT.

I am also exporting to an M.2 NVMe drive, so it should not be a problem as long as it is not writing more than 500 MB/s long-term :smiley:
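
Rough math on the write rate (assuming, as a ballpark, ProRes 422 HQ at ~1.9 Gbit/s for UHD 60p; the ~9 fps processing speed is also just an example):

```python
# Sustained writes stay far below 500 MB/s because processing
# runs well below real time.
bitrate_gbit_s = 1.9                           # assumed ProRes 422 HQ UHD 60p rate
realtime_mb_s = bitrate_gbit_s * 1000 / 8      # ~238 MB/s at playback speed
processing_fps, output_fps = 9.0, 60.0         # example: ~9 output frames/s processed
actual_mb_s = realtime_mb_s * processing_fps / output_fps
print(f"real-time rate: ~{realtime_mb_s:.0f} MB/s")
print(f"actual writes:  ~{actual_mb_s:.0f} MB/s")  # ~36 MB/s
```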

Yes, you can use the “Estimate” button to set the parameters in Manual mode.
It gives a much faster processing time compared to “Auto” mode, which automatically runs an “estimate” every few frames (every 20 frames in V3, every 8 frames since V4).
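
To put rough numbers on that (assuming, purely for illustration, that one estimate costs about one extra frame of model time):

```python
# Rough overhead of Auto mode's periodic estimate pass.
for version, every in [("V3", 20), ("V4+", 8)]:
    overhead = 1 / every   # extra work per frame, on average
    print(f"{version}: estimate every {every} frames -> ~{overhead:.0%} extra work")
```

If an estimate actually costs more than a single frame, the overhead scales up accordingly, which would match the larger gap you measured.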

OH, it jumped from 9m57s to 7m24s!!! That is really good to know, thank you!
Is the Auto option ever actually useful, then?

Results (Manual on top, Auto on bottom):

[screenshot: Manual vs Auto processing times]


It really depends on your footage. If your source footage has similar quality throughout the video, you can simply use Manual mode (same parameters for the whole video).

However, if your source footage quality keeps changing, such as being very noisy in dark scenes and having very little noise in bright scenes, then you may want to use Auto or “Relative to Auto”. This lets the program adjust the parameters for you from scene to scene.
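
Conceptually, “Relative to Auto” works something like this (a simplified sketch; the parameter names are made up, not TVAI’s real internals):

```python
# "Relative to Auto": manual values act as offsets applied on top of
# whatever the auto-estimator picks for each scene.
def effective_params(auto_estimate, user_offsets):
    return {k: auto_estimate[k] + user_offsets.get(k, 0) for k in auto_estimate}

scene = {"denoise": 40, "sharpen": 10}      # hypothetical auto-estimated values
offsets = {"denoise": -15}                  # "a bit less denoise than Auto"
print(effective_params(scene, offsets))     # {'denoise': 25, 'sharpen': 10}
```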