VideoAI - why different models favor CPU vs GPU

System: Ryzen 9 9950X3D (CPU) + RTX 5090 (GPU)

  • Proteus during upscaling uses 70% CPU and 20% GPU = 8-hour total job
  • Gaia during upscaling uses 20% CPU and 90% GPU = 4-hour total job

Is there a bug? I would prefer models to favor GPU (if possible). CUDA/tensor cores are beautiful when engaged.

1 Like

Try using the Manual setting under parameters for Proteus to reduce CPU usage. When it is set to Dynamic (previously called Auto/Auto Relative), the parameters keep changing: the system has to analyze the scene every few frames to adjust the parameter settings, which increases CPU usage.
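
To make the cost difference concrete, here is a purely conceptual sketch — not Topaz's actual implementation — of why a dynamic mode burns more CPU than a manual one: it re-estimates parameters on the CPU every few frames, while manual mode reuses one fixed set. The 16-frame interval and the frame-mean "analysis" are hypothetical stand-ins.

```python
# Conceptual sketch only — not Topaz's code. Shows the cost structure:
# Dynamic re-estimates parameters on the CPU every few frames,
# Manual reuses one fixed set for the whole export.
import numpy as np

ANALYZE_EVERY = 16  # assumed re-analysis interval (hypothetical)

def estimate_parameters(frame):
    # Stand-in for scene analysis; a real estimator would be much heavier.
    return {"sharpen": float(frame.mean()) / 255.0}

def process(frames, mode="manual", manual_params=None):
    params = manual_params or {"sharpen": 0.3}
    for i, frame in enumerate(frames):
        if mode == "dynamic" and i % ANALYZE_EVERY == 0:
            params = estimate_parameters(frame)  # extra CPU work, Dynamic only
        yield frame, params                      # the enhancement itself is omitted

# Example: walk 100 dummy 1080p frames in Dynamic mode
frames = (np.zeros((1080, 1920), dtype=np.uint8) for _ in range(100))
for _ in process(frames, mode="dynamic"):
    pass
```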

I only use manual settings. I kept exactly the same parameters for both exports, and Proteus ran on the CPU (with the GPU pulling maybe 100 W) while Gaia ran on the GPU pulling 580 W.

1 Like

I don’t think I’ve ever seen anything Video AI does push my CPUs to 70%. They usually top out around 30%. What was your source file like?

Don't forget, the OP is using a 5090. If the GPU runs twice as fast, processing double the frames in the same time, the CPU load will also double.

It was the same control file: a 30 GB SDR MKV, upscaled to 4K with HDR (HLG).

My main ask was why some models barely utilize the GPU while others max it out.

I would like to see an option for Proteus to run on the GPU as well.
I have nothing bad to say about Gaia though.

On Proteus, my source file would encode with 70% CPU and 20% GPU at 4 fps.

On Gaia it was the reverse: 20% CPU and 80-90% GPU at 9-10 fps.

Maybe the Proteus model needs work or optimization to take better advantage of the GPU.

2 Likes

Same problem on my PC (7800X3D + RTX 5080 FE)… only the Rhea model pushes GPU usage to 99%. Proteus, even in manual mode, sits at around 55% usage!

DEVs, time to take a closer look?

1 Like

This has always been the case with Gaia. It's built as more of a CUDA-style implementation. If I remember correctly, it did not get a speed improvement when they added Tensor support. Proteus did get a speed boost, and for the first few versions after it, people complained that their GPUs weren't being used enough. In reality, most monitoring tools just couldn't show that tensor cores were contributing to GPU utilization.
I think it's still similar: Task Manager can report tensor core activity as GPU usage, but it won't show 100% tensor core utilization as 100% GPU utilization. (I could be wrong on this.)
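
For anyone curious what those tools are actually reading: most of them sit on NVML, and a minimal sketch of querying it (via the third-party pynvml bindings, GPU index 0 assumed) is below. The "GPU utilization" counter is just the fraction of time at least one kernel was running, so it doesn't separate tensor-core work from CUDA-core work.

```python
# Minimal sketch of reading the counters most monitoring tools use,
# via NVML (pip install nvidia-ml-py). GPU index 0 is an assumption.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# util.gpu = percent of time at least one kernel was executing on the GPU.
# It does not distinguish CUDA-core work from tensor-core work.
util = pynvml.nvmlDeviceGetUtilizationRates(handle)
power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts

print(f"GPU util: {util.gpu}%  memory util: {util.memory}%  power: {power_w:.0f} W")
pynvml.nvmlShutdown()
```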

Aye, I don't need monitoring tools to show me 3D-engine stats; I look at the overall wattage of the card. When I see 50-100 W, I know it's not really being used.

I hope the devs optimize the other models (Proteus included) to actually use the card as intended.
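
If anyone wants to reproduce the wattage check, here is a rough sketch that samples the card's power draw once a second while an export runs (same pynvml package as above; the 1-second interval and GPU index 0 are arbitrary choices):

```python
# Rough sketch: log power draw once per second during an export.
# Stop with Ctrl+C. Assumes pynvml (pip install nvidia-ml-py) and GPU index 0.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    while True:
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0          # mW -> W
        limit = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0  # board power limit
        print(f"{time.strftime('%H:%M:%S')}  {watts:6.1f} W / {limit:.0f} W limit")
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```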

No idea how accurate this site is, but it suggests that the wattage of a tensor load will be lower than a CUDA load. Will it be 50-100 W versus 550-650 W? I have no idea.

Without detailed settings (processing preferences, enhancement settings used, codec export settings), the most anyone can offer is wild guesses.

For example, the processing preferences can have a large effect on processing speed. Having the AI processor set to Auto can be suboptimal. Max processes? Too few or too many? The defaults are not always the best.