Does More VRAM reduce processing time?

I don’t game and I want to upgrade from my RTX 2060 Super (8 GB VRAM) just for faster Topaz Video upscaling. I am considering the RTX 5070 Ti with 16 GB VRAM; the posts in the benchmarking topic indicate that would approximately double my processing speed.

But the next NVIDIA cards in the 5070 Ti price range, coming out early next year, will have 24 GB VRAM. I am wondering whether waiting for the cards with more VRAM would have any advantage in Topaz Video. When I check GPU-Z while upscaling with my 2060 Super, it shows only 3-4 GB of VRAM in use.

So does that mean more VRAM won’t reduce processing time and I should focus on the speed of the card rather than memory?

It’s not just about processing time. Some current models, and probably a lot of upcoming enhancement models, will require more VRAM just to run at all. Right now 16 GB is the minimum, but I wouldn’t be surprised if we eventually see a model that requires more.

VRAM only holds data; the processing is done on the GPU’s cores, either CUDA or Tensor (via TensorRT), so more VRAM alone won’t speed up processing. I’m using a 5070 Ti based on my budget. The graphics card is the component that dates fastest, with VRAM capacities growing by roughly 8 GB per generation, so building a PC with the future in mind makes more sense. The GPU alone isn’t enough either; all system components directly impact processing speed, so it’s better to do thorough research and build a balanced system.

Thanks. I tried several models (mostly Artemis), and according to GPU-Z all but Starlight use no more than half of my 8 GB of VRAM. And GPU-Z shows total usage, including other running processes, not just Topaz Video.

Starlight maxed it out, though, using the full 8 GB, so it could benefit from more VRAM. But I don’t run Starlight because I normally upscale full movies, 90-120+ minutes, and it is just too slow.
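Side note: if you want to see Topaz’s own VRAM usage rather than the system-wide total GPU-Z reports, `nvidia-smi` can list memory per process. A minimal Python sketch, assuming an NVIDIA driver with `nvidia-smi` on the PATH; the sample process name below is made up for illustration, and the simple CSV split assumes process names without commas:

```python
import subprocess

def parse_compute_apps(csv_text):
    """Parse `nvidia-smi --query-compute-apps` CSV output into
    (pid, process_name, used_mib) tuples, skipping the header row."""
    rows = []
    for line in csv_text.strip().splitlines()[1:]:
        pid, name, mem = [field.strip() for field in line.split(",")]
        rows.append((int(pid), name, int(mem.split()[0])))
    return rows

def per_process_vram():
    """Ask the NVIDIA driver for per-process VRAM usage (needs nvidia-smi)."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-compute-apps=pid,process_name,used_memory",
         "--format=csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_compute_apps(out)

# Example of the CSV shape nvidia-smi emits (hypothetical values):
sample = ("pid, process_name, used_memory [MiB]\n"
          "12345, Topaz Video AI.exe, 3812 MiB")
print(parse_compute_apps(sample))  # → [(12345, 'Topaz Video AI.exe', 3812)]
```

That separates what the upscaler itself holds from what the desktop, browser, etc. are using, which GPU-Z lumps together.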

I haven’t upgraded from my 8 GB GPU yet, because the cost of cloud credits has come down enough to make them practical for my use (short clips, no whole TV episodes or films). But future buys will be 16 GB minimum.

I have run Starlight on a 3080 Ti (12 GB) and a 3090 (24 GB), and processing times were similar. I am now running two PCs with 5070 Tis, and they are about 30% faster than the 30-series cards.

As you noted later, none of the other models use that much VRAM. Also, the 24 GB 5070 Ti and 5080 probably won’t be available until spring (NVIDIA wants to clear out the old cards first).

I wonder if they will put out a 48 GB 5090? It might start eroding sales of the RTX 6000! :smiley:


Hello, could you tell me approximately what is the performance I can expect from Starlight Sharp on the RTX 5070 Ti?

I was also considering a second-hand RTX 3090, but since you mentioned the performance is much better on the RTX 5070 Ti, I would lean towards that one now during Black Friday.

I wanted to wait until next year for the Super cards, but I just read in the news that TSMC is raising its prices, so a price increase is expected soon.

Can you maybe test it with a 640x480 video with 2x and 3x upscale on Starlight Sharp?

My use case is upscaling 640x480 TV shows. On the 3080 Ti, processing a 45-minute program in Starlight (Sharp wasn’t available when I ran these tests) took ~52 hours. I didn’t get a time for the 3090 because I never got a run to complete (I had to cut two partial runs together), but the run time was at least in the same ballpark.

I have two 5070 Tis, one in a system with a 12th-gen i9 and the other in a system with a 9th-gen i9. Starlight takes pretty much the same time on both after overclocking, ~40 hours for the same type of source. Starlight Sharp takes about 2 hours longer. Both GPUs are Asus, one TUF Gaming and one Prime.

Proteus (and I assume most other models) runs MUCH faster on the 12th-gen CPU, roughly twice as fast as on the 9th gen.

I have also recently upgraded the 12th-gen system to 64 GB of RAM, and when running Starlight the system routinely goes over 32 GB in use, generally with >24 GB used as cache, which may influence my runtimes in addition to GPU performance.

Wait, Starlight Sharp should actually be faster than the standard model, about 2x faster if I am not mistaken 🤔

That has never been the case in my experience. Sharp adds a Nyx pass, which by itself takes about 3 hours for a 45-minute video upscaled from 1080 to 2160. The actual Starlight processing time is similar for both variants, unless that changed in the 1.04 release (I have not tried Starlight ‘mini’ since the patch).