What hardware increases speed in TVAI

I keep seeing posts stating that ‘this’ or ‘that’ helps increase speed in TVAI. My hope is that we can make a list of what does increase speed by testing such posts. There are near-endless hardware combinations, and even a great number of models and uses in TVAI alone, so testing for all scenarios is impossible. That means making all-encompassing statements is folly—or at least, that there will probably always be exceptions.

My setup:
CPU: AMD Ryzen 9 5900X.
RAM: 32GB at 3200MHz.
GPU: Nvidia RTX 3080 Ti.
Motherboard: X570 Aorus Master (PCIe Gen 4).
Storage: Western Digital 8TB Red (NAS) HDD.
OS: Windows 11 Pro. (No TPM enabled. Might have to test if that makes a difference.)
Room temperature matters, but I have no way to measure that.
Test file: MPEG-4 video (H.264), 1920x1080, 23.976 fps, length 2:10.
Test model: Artemis High Quality, 100% scale, output to PNGs.

For these tests, whatever hardware I change, the test file and model stay the same. I'm only measuring how long the run takes to complete, as timed by a Python script. If I were using the TVAI GUI, I would use a stopwatch or similar.
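The timing script is nothing fancy; here is a minimal sketch of the idea (the command below is just a placeholder, to be replaced with whatever export command is being benchmarked):

```python
import datetime
import subprocess

# Placeholder command: substitute the actual export/encode command being timed.
cmd = ["ffmpeg", "-hide_banner", "-i", "test_clip.mp4", "frames/%06d.png"]

start = datetime.datetime.now()
subprocess.run(cmd, check=True)  # blocks until the job finishes
print("time:", datetime.datetime.now() - start)
```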


Baseline results:
Best time: 0:05:14.55 (After 14 hours of processing other movies.)
Worst time: 0:05:28.33 (right after a restart, once CPU usage had settled down.)

Is there a difference between PCIe gen 3 and gen 4?
In the BIOS I set the PCIe slots to gen 3.
time: 0:05:27.86
Maybe I need more test runs, but that matched the baseline taken right after a restart.

Is there a difference between an NVMe drive and the listed HDD?
I set the test to run on my boot drive. It’s a Plextor 1TB MLC NVMe drive.
time: 0:05:19.11
This was run after hours of TVAI processing and matches the baseline time.

Does RAM speed make a difference?
In the BIOS, I turned off XMP. This set the RAM to 2400MHz.
time: 0:06:07.88
This was done right after a restart, and it's also the only change so far that I would say made a big impact.
Now, I have heard that RAM speed on Ryzen gen 3 makes a big difference, so maybe this is that manifesting. If someone on an Intel CPU were to do a similar test and also see similar results, then I would feel more confident saying RAM speed matters on all systems.

In my experience, a fast GPU matters the most, even if the application in its current state may not utilize the available GPU power well.

I could put my 3080 Ti into my other system that has a GTX 1060, and though I'm sure it would run faster than it does now, I don't think it would get anywhere near the speeds of this system.
These tests were partly inspired by people on these forums reporting unimpressive improvements when going from an RTX 3090 to a 4090. The goal is to find all the other, less obvious ways to get more out of TVAI.

I understand, ForSerious. I am brand new to TVAI and just using it for my own video projects. Nothing fancy, but something I enjoy and want to end up with a good product. I was actually thinking that if it worked well, I might offer assistance to others for a few dollars. I have a brand new MacBook Pro with 16 GB of RAM, an M1 Pro processor, and a 1TB SSD.

It seems to be a great program based on converting 3-minute clips. However, I'm processing my first video, an upscale from 720p to 1080p, and I was disappointed that, with no other TVAI settings selected, no other programs running, all unnecessary background system tasks closed down to bare bones, and even manually reclaiming memory every few hours, my processing time for a 40-minute video was 2 days and 4 hours. I have about 221 videos to convert as part of a project, so at this rate the project is going to take years.

I know this computer is not a Mac Studio, but I never would have expected it to take this long. I have a PC, but it is several years old. I don't know if my license allows me to try TVAI on my Windows computer, but I am wondering if that computer might process faster.

Yes, your license allows you to have a couple of Video AI activations active at once, and they can be on different operating systems. If you have any decent GPU (5700 XT, 6700 XT, RTX 2070/2080/3060 or better) in a 4+ core PC with DDR4 RAM, there's a chance it can run as fast as, or much faster than, the M1 Mac.

The M1s are decently fast at some things, but their clock speeds, GPU performance, and (lack of) drivers leave a lot on the table if you look at genuinely intensive CPU benchmarks such as software video encoding. Apple 'cheats' by including a bunch of extra hardware and silicon to speed certain things up, but the architecture kind of falls flat against chips manufactured on the same fabrication process. See AMD's announcement of the 8-core Zen 4 mobile CPU, with claims of being 20-30% faster than the M1 Pro at the same power in multi-core and very likely single-core compute.

When doing lower-resolution upscales (720p to 1080p), there's also some sort of bottleneck/hard limit. Going from a Vega 56 (AMD) to a 3070 Ti should theoretically give me 2-3x the compute performance, and it did in some cases (Hashcat password cracking was a 2x+ increase, and so was gaming). Yet in TVAI I saw only about a 50-60% speedup, which was much lower than I expected.

For me, anything upscaled or re-mastered to 1080p (30 fps) takes about 3x real time on a Ryzen 5700X with 3600MHz RAM and any new mid-range GPU. Upscaling to 1080p or 720p even hits a brick wall, where getting faster than 0.13-0.15 seconds per frame seems to be really difficult. The additional processing power from going from a 3070 Ti to a 3080 Ti (20-25% more performance in games and compute) led to roughly the same speed at 1080p, even when using GPU video encoding (NVENC H.264/H.265) to reduce the CPU and RAM bottleneck. When I tried upscaling to 1440p, the 3080 Ti was as fast at 1440p as at 1080p, unlike the 3070 Ti, which slowed down massively (about 2x the processing time).
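(As an aside, "seconds per frame" converts into a real-time multiple by multiplying by the source frame rate; a quick sketch of that arithmetic:)

```python
# Convert "seconds per frame" into a real-time multiple
# (1.0x means processing keeps up with playback).
def realtime_multiple(seconds_per_frame: float, fps: float) -> float:
    return seconds_per_frame * fps

for spf in (0.10, 0.13, 0.15):
    print(f"{spf:.2f} s/frame at 30 fps -> {realtime_multiple(spf, 30):.1f}x real time")
```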

Anyways, for everyone else who may have the hardware:
Can we get some DDR5 (6000MHz) Ryzen and/or Intel (e.g., 7700X vs 13600K and 13900K) comparisons with the same-ish GPU at 1080p? If anyone has a 5800X3D and a 6800 XT/3070/3080, could they see how it does at 1080p upscaling, and report whether they have broken past ~0.10 seconds per frame?


sunday.weaver, thanks for the response. My PC has a 3070, so I'll test it today or tomorrow. I think the processor is an Intel i9-10700, if I recall, with 32 GB of RAM, though I'm happy to buy more if that might help; I'll see how memory consumption looks when I run it bare bones with just TVAI. I also have an Elgato HD60 Pro in it, but I have not used it as much as expected, and while I'm okay working with computers, much of the hardware interaction is beyond me.


I'm surprised to hear that about the M1 MacBook Pro. Most everyone on here with an M1 Mac Mini claims to get reasonable speeds. There is something they have said about enabling 100% memory usage. (No idea what they mean, since I have never used a Mac. They make it sound like an OS setting, but there is a setting for that in TVAI too.)

Does a GPU overclock make a difference?
My specific GPU is the EVGA FTW3 Ultra Gaming variant of the RTX 3080 Ti. I used their Precision X1 software tool to scan for a stable overclock. It came up with +200MHz on the VRAM and +72MHz on the GPU clock.

time: 0:05:10.63

That is the best time yet. I was also surprised that the GPU temperature did not increase, as it usually does when overclocking. It might be that the scan works within whatever power limit you have set. Maybe I'll raise that and try again.
Edit: Upped the power limit. This time it found +80MHz on the GPU clock (VRAM was the same +200MHz), but the best time I could get with that was 0:05:13.05.

Watch out with GPU overclocking on the 3080 Ti; it's power hungry as it is, and you may see slightly less consistent performance once it warms up, even at 60C+. For VRAM, you can generally try +500MHz and up to +800MHz at most, but note that you can lose performance by pushing the clocks and/or VRAM frequency past a certain point, both from errors being corrected and from power draw. More VRAM frequency = less power budget for the core (so, lower core clocks).

Personally, I undervolt and have reduced the max frequency of my 3080 Ti, but keep the VRAM at +500MHz since I'm not getting artifacting or weird colors in the output video. I'd watch out for max frequencies (check with GPU-Z) above 1965MHz, as that ramps the voltage way up for basically no performance benefit past ~1920MHz. So, for longer renders, see if you can optimize the 1700-1865MHz voltage range while still being stable in games and other programs; that's basically the stock maximum frequency, I think. It might also be worthwhile to set a custom scale to 1440p rather than 1080p, at least for Artemis, since you'll get the same speed for a higher-resolution video due to the whole bottleneck thing.

That's why I used the scan function in Precision X1. It does a good job of finding something stable that's not going to lose performance. I can run an underclock test, but I imagine it will add more time; I'm thinking of how these results translate to a two-hour movie.
That's also the reason I'm using Artemis at 1080p. None of my hardware goes higher than 1080p, and that's the exact model I will be using on movies I own.
A while back, I found out that 720p to 1080p takes about twice as long as 480p to 1080p. No idea why. That was with VEAI 2.6.4, but I imagine TVAI behaves the same way.

Sorry, I'm trying to understand the bottleneck you're describing, but I don't. If there were a hard bottleneck, I would see no gain from an overclock. Now, if instead you're talking about optimizations that are not implemented in TVAI, I can understand that. I think several people on these forums are having a hard time understanding why TVAI does not appear to utilize their hardware like similar programs built on CUDA or Vulkan. If it's really using the RT cores, well, I don't know that we have tools that can monitor that.


[Screenshot] 720p source to 1080p, plus test downscales to 720p, 360p, and 180p

[Screenshot] 1080p source to 1080p, then 720p, 360p, and 180p
All on the 3070 Ti in my spare PC: 5-second previews, Artemis Medium + Halo, H.265 10-bit NVENC at 4MB/s

This is the 'best' example of the bottleneck issue: depending on your source resolution, you're stuck at a certain processing speed, and lowering the output resolution to get speedups doesn't actually help. For example, on my 3080 Ti, a video from a 720p/1080p source gets enhanced/upscaled at the same speed whether the output is 1080p or 1440p (my 3070 Ti has a big performance penalty going from 720p/1080p to 1440p).

You'd think it would be faster to downscale+enhance at lower and lower resolutions, but they all take the same amount of time, with the GPU seemingly doing less and less work. If you really like waiting, try VP9 encoding, which adds even more time per frame for the compression since it's slow on the CPU. This is also basically why very different tiers of GPU make barely any difference at lower resolutions, which is very frustrating, as I prefer to work at 1080p (as you do).

Not sure if it's still there, but there used to be an issue where the GUI always set the scale of the filter to 0. This resulted in bigger versions of the model being selected. (I haven't seen a Topaz official confirm it, but as far as I can deduce, there are separate models for different input and output resolutions.)
I was able to get the correct model used for Artemis High Quality by changing the scale=0 in the command to scale=2.5. (There are multiple scale parameters in any command; it's the one right after veai_up.) This same trick did not work with every model.
So, if I'm not wrong, the GUI was forcing me to use a model that upscales to 4K and then downscaling the result to 1080p. I can do a little test to see if this is still the case. (Test complete. The issue is definitely still there.)
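Roughly, the edit looks like the sketch below. The model name, width, and height here are only placeholders for illustration; the real filter string is whatever the TVAI GUI/CLI writes out for your preset, and it changes between versions.

```python
# Illustrative only -- model name, width, and height are placeholders.
gui_filter = "veai_up=model=ahq-12:scale=0:w=1920:h=1080"  # as written by the GUI

# Changing the scale parameter right after veai_up from 0 to the real upscale
# factor is what got the correct (smaller) model selected for me:
fixed_filter = gui_filter.replace("scale=0", "scale=2.5", 1)
print(fixed_filter)  # veai_up=model=ahq-12:scale=2.5:w=1920:h=1080
```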

Anyway, I think the issue you're seeing is more like that.

I think I saw that setting in TVAI for Mac. Perhaps I'll increase it to 100%, since it's currently set at 90%.

Does anyone know which would do better in VEAI when a Ryzen and an i5 benchmark about the same?

Example: UserBenchmark: AMD Ryzen 7 5700G vs Intel Core i5-9600K

Thanks!

My guess is the Ryzen 7. I say this because I had an Intel Core i9-9900KF before the Ryzen 9 5900X, and so far the Ryzen has been faster in every way and does not cause the random blue screens of death that the i9 did. (Though that could have been liquid metal dripped onto the motherboard. [Did I mention that the Ryzen runs much cooler?])


My Lenovo laptop (AMD 5800U, 8 cores, 16 GB of 4.1 GHz RAM) is up to 20% faster than my desktop PC (Intel 8700K, 6 cores, 32 GB of 2.6 GHz RAM) in everything except gaming, because the RTX 2070 Super in the PC is about 5 times as fast as the RX Vega 8 in the laptop. Overall, though, the PC is much faster when processing in VEAI/TVAI. :slight_smile:


The new Nvidia-specific beta runs my 3090 and 3090 Ti cards at 100%. (And they really are being utilized; it's hard keeping them under 70C when Topaz is running just a single video.)

The speed increase is very noticeable… I haven't done any A/B testing yet.

That’s good news. Maybe I’ll actually be able to interpolate a full movie to 60 FPS in less than 3 days.


I saw that different generations of Ryzen come in 6- and 8-core versions. The 6-core parts are newer, but I have read that the 8-core parts are more suitable for video editing. Any thoughts on this?