NVIDIA RTX30X0 buying advice for Video Enhance AI

I am using an RTX 2080 Ti at the moment to run Video Enhance AI. When upscaling PAL DVD to 400% I see approximately 75% load on the CUDA cores, but the 11 GB of VRAM are completely used. The 8-core/16-thread Ryzen CPU is barely loaded at all and cannot be the bottleneck.

I plan to replace it with an RTX 3080 or RTX 3090 once they are supported.

The 3090 has only about a 20% advantage in CUDA core count for more than double the price. Will the 10 GB of VRAM limit the RTX 3080 so that the 3090 ends up with more than that 20% advantage? Or is the limit I am seeing with the 11 GB 2080 Ti actually in the software stack?
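For reference, a quick back-of-the-envelope check of those numbers with the published core counts and launch MSRPs (8704 cores / $699 for the 3080, 10496 cores / $1499 for the 3090):

```python
# Published CUDA core counts and launch MSRPs (USD) for the two cards.
cores_3080, cores_3090 = 8704, 10496
msrp_3080, msrp_3090 = 699, 1499

print(f"3090 core advantage: {cores_3090 / cores_3080 - 1:.1%}")  # ~20.6%
print(f"3090 price ratio:    {msrp_3090 / msrp_3080:.2f}x")       # ~2.14x
```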

Well, based on the render benchmarks we could see around an 80-120% uplift with the 3090, a much bigger advantage than in gaming benchmarks.

Thank you.
I know that probably nobody has had their hands on either of these cards yet to do a real test.

But rendering software performs very well on the RTX 3080 too, as long as the necessary data fits into its 10 GB of VRAM. If I remember correctly, the RTX 3090 only has around a 10% lead over it in that case.

The real question is: does Video Enhance AI actively use more than 10 GB of VRAM to actual benefit?
I don’t need the performance for anything but Video Enhance AI, and I get nothing out of bragging rights.

There are two cases I can think of right now:

  1. Video Enhance AI is limited by VRAM, and the RTX 3080 will be held to 40-50% of its CUDA core potential → buy an RTX 3090.
  2. Video Enhance AI will allocate more memory when it is available, but performance does not suffer when there is less of it, so the RTX 3090 and the RTX 3080 will both run at around 75% of their theoretical CUDA core performance → buy an RTX 3080, as the 3090 is a waste of money.
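One way to narrow this down, short of owning the card, is to watch CUDA load and VRAM usage side by side while a job runs. Below is a minimal polling sketch, assuming a single GPU and that nvidia-smi is on the PATH (the one-second interval is arbitrary). One caveat: some ML runtimes reserve nearly all VRAM up front whether they need it or not, so a fully-used memory readout alone does not prove case 1.

```python
import subprocess
import time

# Poll nvidia-smi once per second while Video Enhance AI is running, to see
# whether the CUDA cores or the VRAM pool saturates first.
QUERY = "utilization.gpu,memory.used,memory.total"

while True:
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()[0]
    gpu_pct, mem_used, mem_total = (int(v) for v in out.split(", "))
    print(f"CUDA load: {gpu_pct:3d} %   VRAM: {mem_used}/{mem_total} MiB")
    time.sleep(1)
```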

At the moment I cannot get either card anyway, as they are simply not in stock, and I don’t have the time to hunt for one all day. So there is time until one of us gets hold of one and can report back.
I hope some members will find the answer as interesting as I do.

In terms of rendering performance, CUDA scales pretty linearly. But for me that’s a sample size of one, with a 2080 Ti being ~2x the performance of a 1080.

More VRAM, on the other hand, would likely further increase performance by reducing the number of processing passes per frame, if frames are processed the way I think they are. My only point of comparison is the AI interpolation program DAINapp.

In DAINapp’s case, a full 1080p frame will not fit on a ~6 GB VRAM card, so you have to split the image into quadrants. Those tiles get processed individually and stitched back together.
The larger the image, or the lower your VRAM, the smaller the tiles it has to split the image into, which increases the workload.
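To make that concrete, here is a minimal sketch of such a split-and-stitch pass, assuming simple non-overlapping tiles. The `max_tile` budget and the `upscale` callback are illustrative placeholders, not DAINapp’s or Video Enhance AI’s actual parameters:

```python
import numpy as np

def upscale_tiled(frame: np.ndarray, max_tile: int, upscale, scale: int = 4):
    """Split `frame` into tiles of at most `max_tile` pixels per side,
    run the model (`upscale`) on each tile, and stitch the upscaled
    tiles back into a single output frame."""
    h, w, c = frame.shape
    out = np.zeros((h * scale, w * scale, c), dtype=frame.dtype)

    for y in range(0, h, max_tile):
        for x in range(0, w, max_tile):
            tile = frame[y:y + max_tile, x:x + max_tile]
            th, tw = tile.shape[:2]
            # Paste the upscaled tile at the corresponding scaled position.
            out[y * scale:(y + th) * scale,
                x * scale:(x + tw) * scale] = upscale(tile)
    return out

# With a small VRAM budget a 1920x1080 frame needs max_tile=960 (four model
# launches per frame); with enough VRAM, max_tile=1920 does it in one pass.
```

Real implementations also overlap the tiles slightly and blend the seams, since the model sees no context across a hard tile boundary; that detail is left out here.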

But that’s only assuming Video Enhance AI uses a similar method to work around the intense VRAM requirements of AI image processing.