What hardware increases speed in TVAI

sunday.weaver, thanks for the response. My PC has a 3070, so I'll test it today or tomorrow. I think the processor is an Intel i9-10700 if I recall, with 32 GB of RAM, though I'm happy to buy more if that might help; I'll see how memory consumption looks if I run the machine barebones with just TVAI. I also have an Elgato HD60 Pro in it, but I have not used it as much as expected, and while I'm okay working with computers, much of the hardware interaction is beyond me.


I'm surprised to hear that about the M1 MacBook Pro. Most everyone on here with an M1 Mac Mini claims to get reasonable speeds. They have mentioned something about enabling 100% memory usage. (No idea what they mean, since I have never used a Mac. They make it sound like an OS setting, but there is a similar setting in TVAI too.)

Does a GPU overclock make a difference?
My specific GPU is the EVGA FTW3 Ultra Gaming variant of the RTX 3080 Ti. I used their Precision X1 software tool to scan for a stable overclock. It came up with +200MHz on the VRAM and +72MHz on the GPU clock.

time: 0:05:10.63

That is the best time yet. I was also surprised that the GPU temperature did not increase, as it usually does when overclocking. It might be that the scan runs at whatever power limit you have set. Maybe I'll raise that and try again.
Edit: Upped the power limit. This time it found +80MHz on the GPU clock (VRAM was the same +200MHz), but the best time I could get with that was 0:05:13.05.

Watch out with GPU overclocking on the 3080 Ti; it's power hungry as it is, and you may see slightly less consistent performance after a warmup, even at 60°C+. For VRAM, you can generally try +500MHz and up to +800MHz at most, but note that you can lose performance by pushing the core clocks or VRAM frequency past a certain point, due to errors being corrected and the extra power draw. More VRAM frequency means less power budget for the core (so lower core clocks).

Personally, I undervolt and reduced the max frequency of my 3080 Ti, but keep VRAM at +500MHz since I'm not getting artifacting or weird colors in the output video. I'd watch out for max frequencies (use GPU-Z) above 1965MHz, as that ramps the voltage up high for basically no performance benefit past ~1920MHz. So for longer renders, see if you can optimize the 1700-1865MHz voltage range while still being stable in games and programs; that's basically the stock max frequency, I think. It might also be worthwhile to set a custom scale to 1440p rather than 1080p, at least for Artemis, as you'll get the same speed for a better-resolution video due to the whole bottleneck thing.
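If you'd rather log clocks than stare at GPU-Z during a long render, here's a minimal monitoring sketch. It assumes nvidia-smi is on your PATH and that your driver exposes these query fields; it only prints what the tool reports, once per second.

```python
import subprocess
import time

# Rough monitoring sketch (assumes nvidia-smi is available on PATH).
# Logs graphics clock, memory clock, temperature, and power draw once per
# second, so you can watch where the boost clock settles during a render.
QUERY = "clocks.gr,clocks.mem,temperature.gpu,power.draw"

def sample():
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    first = out.splitlines()[0]  # first GPU if there are several
    gr, mem, temp, power = [v.strip() for v in first.split(",")]
    return int(gr), int(mem), int(temp), float(power)

if __name__ == "__main__":
    while True:
        gr, mem, temp, power = sample()
        print(f"core {gr} MHz | mem {mem} MHz | {temp} C | {power:.0f} W")
        time.sleep(1)
```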

That's why I used the scan function in Precision X1. It does a good job of finding something stable that's not going to lose performance. I can run an underclock test, but I imagine it will add more time. I'm thinking of how these results translate to a two-hour-long movie.
That’s also the reason why I’m using Artemis at 1080p. None of my hardware goes higher than 1080p, and that’s the exact model I will be using on movies I own.
A while back, I found out that 720p to 1080p takes about twice as long as 480p to 1080p. No idea why. That was with VEAI 2.6.4, but I imagine TVAI acts the same way.

Sorry, I'm trying to understand the bottleneck thing you're explaining, but I don't. If there were a bottleneck, I would see no gain from an overclock. Now, if instead you are talking about optimizations that are not implemented in TVAI, I can understand that. I think several people on these forums are having a hard time understanding why TVAI does not appear to utilize their hardware like similar programs built on CUDA or Vulkan. If it's really using the RT cores, well, I don't know that we have tools that can monitor that.


^ 720p source to 1080p, and test downscales to 720p, 360p and 180p


1080p source to 1080p, then 720p, 360p and 180p
^All 3070Ti on my spare PC, 5 second preview, Artemis Medium+Halo, H265 10bit NVENC 4MB/s

This is the 'best' example of the bottleneck issue: depending on your base resolution, you're stuck at a certain processing speed, and lowering the resolution to get speedups doesn't actually help. I.e., for me on my 3080 Ti, a 1080p video (720p or 1080p source) gets enhanced/upscaled at the same speed whether the output is 1080p or 1440p (my 3070 Ti has a big performance penalty going from 720p/1080p to 1440p).

You'd think downscale+enhance would get faster and faster at lower and lower resolutions, but they take the same time, with the GPU seemingly doing less and less work. If you really like to wait, try out VP9 encoding, which adds even more time per frame because the compression is slow on the CPU. This is also basically why very different tiers of GPU make barely any difference at lower resolutions, which is very frustrating as I prefer to work at 1080p (as you do).
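To put that "less and less work" point in rough numbers: if the time per frame stays flat while the output resolution drops, the effective pixel throughput collapses. The sketch below just reuses the ~0.14 sec/frame figure quoted in this thread as an assumption, not a new benchmark.

```python
# Back-of-the-envelope: a flat sec/frame across output resolutions means the
# effective pixel throughput falls as resolution falls, i.e. the GPU sits idle
# more and more of the time and something else is the bottleneck.
RESOLUTIONS = {            # width x height of common output sizes
    "1440p": (2560, 1440),
    "1080p": (1920, 1080),
    "720p":  (1280, 720),
    "360p":  (640, 360),
    "180p":  (320, 180),
}

SEC_PER_FRAME = 0.14       # flat per-frame time observed in the thread (assumption)

for name, (w, h) in RESOLUTIONS.items():
    mpix_per_sec = (w * h) / SEC_PER_FRAME / 1e6
    print(f"{name}: {1 / SEC_PER_FRAME:.1f} fps -> {mpix_per_sec:.1f} MPix/s of output")
```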

Not sure if it's still there, but there used to be an issue where the GUI always set the scale of the filter to 0. This would result in bigger versions of the model being selected. (I haven't seen a Topaz official say it, but as far as I can deduce, there are separate models for different input and output resolutions.)
I was able to get the correct model used for Artemis High Quality by changing scale=0 to scale=2.5 in the command. (There are multiple scale parameters in any command; it's the next one after veai_up.) This same trick did not work with every model.
So, if I'm not wrong, the GUI was forcing me to use a model that upscales to 4K and then downscales it to 1080p. I can do a little test to see if this is still the case. (Test complete. The issue is totally still there.)
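For anyone who wants to try the same trick on the exported command, here's a small sketch of the idea. The example filter string is made up purely to show the shape (the model name and other parameters are assumptions); the only point is swapping the scale=0 that follows veai_up for an explicit factor.

```python
import re

def patch_veai_scale(cmd: str, new_scale: float) -> str:
    """Replace the scale=0 that immediately follows the veai_up filter.

    Other scale parameters elsewhere in the command are left alone, which is
    the 'it's the next one after veai_up' trick described above.
    """
    return re.sub(r'(veai_up=[^\s"]*?scale=)0(?![\d.])',
                  rf'\g<1>{new_scale}', cmd, count=1)

# Made-up example, just to show the shape; the real filter string comes from
# the command the TVAI/VEAI GUI builds.
example = '-vf "veai_up=model=ahq-12:scale=0:device=0" -c:v hevc_nvenc out.mp4'
print(patch_veai_scale(example, 2.5))
# -vf "veai_up=model=ahq-12:scale=2.5:device=0" -c:v hevc_nvenc out.mp4
```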

Anyway, I think you're running into an issue more like that.

I think I saw that setting in TVAI for Mac. Perhaps I'll increase it to 100%, since it's set at 90%.

Does anyone know which will do better in VEAI when a Ryzen and an i5 benchmark about the same?

Example: UserBenchmark: AMD Ryzen 7 5700G vs Intel Core i5-9600K

Thanks!

My guess is the Ryzen 7. I say this because I had an Intel Core i9-9900KF before the Ryzen 9 5900X, and so far the Ryzen has been faster in every way and does not cause random blue screens of death like the i9 did. (Though that could have been liquid metal dripped on the motherboard. [Did I mention that the Ryzen runs much cooler?])


My Lenovo laptop (AMD 5800U, 8 cores, 16 GB of 4.1 GHz RAM) is up to 20% faster than my desktop PC (Intel 8700K, 6 cores, 32 GB of 2.6 GHz RAM) in everything except gaming, because the RTX 2070 S in the PC is about 5 times as fast as the RX Vega 8 in the laptop. Overall, the PC is much faster when processing in VEAI / TVAI. :)


The new Nvidia-specific beta absolutely maxes out my 3090 and 3090 Ti cards (and they are definitely being utilized, because it's hard to keep them under 70°C when Topaz is running just a single video).

The speed increase is very noticeable… haven't done any A/B testing yet.

That’s good news. Maybe I’ll actually be able to interpolate a full movie to 60 FPS in less than 3 days.


I saw that different generations of Ryzen come in 6- and 8-core versions. The 6-core parts are newer, but I have read that the 8-core parts are more suitable for video editing. Any thoughts on this?

Operating systems have more background tasks running these days, so the more cores the merrier. Video editors are also now programmed to take advantage of more cores for some of the things you can ask them to do. It is, however, becoming more popular to hand the heavy video lifting to the GPU. Check out Gamers Nexus' YouTube reviews; I'm pretty sure they covered the 5700G, and their review should have graphs for video-editing workloads.


I tested the latest beta on both the 6-core Intel 8700K and the 8-core AMD 5800U. The Intel with the RTX 2070 S needs 12 hours; the AMD with the RX Vega 8 needs over 4 days for the same job, even though the AMD CPU is 20% faster than the Intel. So this version relies heavily on GPU power.


This new update is fun.
New best time for the baseline is:
time: 0:03:09.84
That’s a lot better than
time: 0:05:14.55

Here are some made-up example times for running one movie versus two movies at a time, pretending the movies are about 2 hours long.
Before update:
One at a time processing time: 3 hours.
Two at a time processing time: 4 hours, but two completed.
After update:
One at a time processing time: 2.5 hours.
Two at a time processing time: 4.5 hours, but two completed.
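As a quick sanity check on those made-up numbers, completed movies per hour is the figure that matters:

```python
# Throughput check for the made-up times above: movies completed per hour.
scenarios = {
    "before, one at a time": (1, 3.0),   # (movies completed, wall-clock hours)
    "before, two at a time": (2, 4.0),
    "after, one at a time":  (1, 2.5),
    "after, two at a time":  (2, 4.5),
}

for name, (movies, hours) in scenarios.items():
    print(f"{name}: {movies / hours:.2f} movies/hour")
```

Under these made-up times, two at a time still wins on throughput in both cases, though the update narrows the gap between batching and running movies one after another.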

Yeah, holy crap, this version is a big improvement on the processing-speed gripes.

Big Buck Bunny, 720p (25/30 fps), 5 seconds

3080 Ti with 2.6.4 (H.264, CRF 17)

3070 Ti (OC) with 3.0.x, and the same speed on my 3080 Ti, as both are bottlenecked to 0.13/0.14 sec/frame


0.07 sec/frame (about 14 fps) at 1080p and 1440p(!!!) with Artemis Medium and H.265 10-bit NVENC; literally double, as my GPU can be utilized more :)
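For anyone who wants to turn those sec/frame figures into something tangible, here's a rough conversion; the 2-hour, 24 fps movie is an assumption for illustration, not a measurement from this thread.

```python
# Convert sec/frame into an estimated render time for a full movie.
def movie_render_time(sec_per_frame: float, movie_minutes: float = 120,
                      fps: float = 24) -> float:
    """Estimated render time in hours, assuming a constant sec/frame rate."""
    total_frames = movie_minutes * 60 * fps
    return total_frames * sec_per_frame / 3600

for spf in (0.14, 0.07):
    print(f"{spf} sec/frame ({1 / spf:.1f} fps): "
          f"~{movie_render_time(spf):.1f} hours for a 2-hour, 24 fps movie")
```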

There's a new bug, though: previews are broken for vertical/portrait video, but for normal 16:9 and 4:3 everything seems to work.


No kidding. I haven’t tried my MacBook Pro with the new update yet but my upscaling time on my Windows 10 PC is now about 1:1 with about 28-33 fps. Great change. Love using Artemis to upscale and clarify with very little in the way of artifacting or sound issues.

They made it sound like the speed increase is only for Nvidia RTX GPUs, with other brands getting the same treatment later. Let us know how your MacBook Pro fares either way.