Video Enhance AI v1.7.0

Well, if you can run 3 videos at once with no performance penalty, that’s certainly more than I can do :slight_smile:

With 2 x 2080 Ti that’s something I would expect :stuck_out_tongue: But I’m still confused by the GPU load not increasing. It’s always stuck at 50%. With 4 instances the speed dropped a little, but the load stays the same. There’s definitely something wrong on my end.

What does it take to get VEAI to actually USE my GPU???

I’m running a 480p to 4K conversion right now, and the most it will use of my GPU is 8%? Meanwhile it’s using 35% of my CPU. Yes, everything is set correctly. It’s only giving me 0.6 sec/frame. This is ridiculous.

i7-8700 / 2080 Ti


One of the things that’s always helpful in these kinds of reports is to know what tool is being used to report GPU usage. The tools available are not remotely equal or equally accurate. Windows task manager is one of the worst, for example.

Good point. I’m currently using GPU-Z. In v1.6.1 it was reporting 90%.

I also noticed that after adding another instance (currently running 4) the frame time for all instances goes up (0.31), but over time (about 1 h) it drops back to its previous value (0.26).

If I add another instance I’m pretty sure the CPU will be the bottleneck. Currently it’s sitting at 80% usage with 4 instances.

No, it’s really not, and I’m not going to get into that debate, but thanks. If you have something to help with the actual problem, looking forward to it.

I agree with using GPU-Z. In any case, it is clear that with 1.7 they took a lot of pressure off the GPU, while increasing processing speed for most users. That does shift the bottleneck elsewhere, including reading/writing frames, which can’t be discounted.

On RTX cards it can be faster because of the tensor cores, but on older GTX cards it is slower. The same 480p file processed at 200% with Gaia-HQ took 0.23 s/frame before and now takes 0.31 s/frame with v1.7.
And I may have figured out why I get this latency issue in some cases where computing freezes: it happens when I use a virtual frame server from an AviSynth script. This started with v1.6, and it can only come from the main change in v1.6, the completely rewritten video I/O backend. Everything was fine with v1.5 and all prior versions, even with heavy AviSynth scripts, as long as the frame server delivers more frames than VEAI can compute (0.25 s/frame = 4 fps).
So my conclusion is that the new video I/O backend introduced in v1.6 was rewritten badly, because there was no problem using a frame server with prior versions. And as if that weren’t enough, v1.7 adds another layer of slowdown.
Why this rush to add support for RTX 3000 series cards, which aren’t even available in 99.99% of online shops anyway, without checking whether it could slow down older cards?
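
For anyone following along with the numbers, here is a minimal Python sketch of the frame-time arithmetic used above (0.25 s/frame = 4 fps, and the 0.23 vs 0.31 s/frame GTX figures); the clip length and frame rate in the example are just illustrative assumptions, not anything taken from VEAI itself.

```python
# Frame-time arithmetic: seconds per frame <-> frames per second,
# plus total processing time for a clip at a given frame time.

def fps_from_frame_time(seconds_per_frame: float) -> float:
    """0.25 s/frame -> 4 fps."""
    return 1.0 / seconds_per_frame

def processing_minutes(clip_minutes: float, clip_fps: float,
                       seconds_per_frame: float) -> float:
    """Wall-clock minutes needed to process the whole clip."""
    total_frames = clip_minutes * 60 * clip_fps
    return total_frames * seconds_per_frame / 60

if __name__ == "__main__":
    # The two GTX figures quoted above, before and after v1.7.
    for spf in (0.23, 0.31):
        print(f"{spf:.2f} s/frame = {fps_from_frame_time(spf):.1f} fps; "
              f"a hypothetical 10-minute 25 fps clip takes "
              f"{processing_minutes(10, 25, spf):.0f} minutes")
```

By that math, going from 0.23 to 0.31 s/frame means roughly 35% more processing time for the same clip.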

Gaia HQ is much slower here than it was in 1.5.1. When are we going to get it faster for Mac?

This version uses half the VRAM for me too.
But with older versions I was getting crashes and black squares on videos.
I think that was related to VRAM usage.
Does the “All GPUs” setting work with NVIDIA Optimus?
Because it doesn’t change processing speed at all :confused:

I also thought that the more VRAM it uses, the more things would speed up. If with 11 GB it already only uses 5.7 GB at the highest setting, I guess the 24 GB of the RTX 3090 will be completely useless :rofl:
The main reason for the performance decrease must come from that. That’s why users with 6/8 GB cards haven’t noticed much difference, while RTX users have seen an increase because of the tensor cores alone.

Using an Nvidia RTX 2070 Super for encoding HD video 1280 x 720 to full HD 1920 x 1080:

  • used 3.5 GB of 8 GB memory at max setting
  • GPU usage is at 83 %
  • GPU temperature at 73 °C
  • encoding speed in Gaia HQ is 0.34 sec/frame

according to the MSI Afterburner tool. The PC itself uses an Intel Core i7-8700K at stock clock speed, 32 GB of RAM and a fast M.2 SSD. The log folder only contains a few log files, most of them zero bytes. :slightly_smiling_face:

I still haven’t gotten over the fact that you’re able to encode 576p to 200% in 0.08 s/frame. That’s waaaaay above anything I was ever able to achieve. Very impressive!

I finally noticed the feature that automatically downloads additional optimal processing models.

Is this a feature that was first implemented in 1.7.0?
If this is the case, I would have liked to see it in the release notes.
Also, if there are additional models to be downloaded, I would like the application to tell the user the file size and name (type).

By the way, is there a way to check which processing model was used to process the video?


Check GPU usage with Hardware Monitor or RivaTuner. Task Manager is very bad at reporting GPU usage.
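
If you’d rather get the numbers straight from the NVIDIA driver than from a GUI tool, here is a minimal sketch using the pynvml bindings (this assumes an NVIDIA card and that you can pip install pynvml; the device index and polling interval are arbitrary). Run it in a separate terminal while VEAI is processing.

```python
# Poll GPU utilization, VRAM use and temperature via NVML,
# as an alternative to GPU-Z / MSI Afterburner / Task Manager.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust if you have several

try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"GPU {util.gpu:3d}%  "
              f"VRAM {mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB  "
              f"{temp} °C")
        time.sleep(2)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```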

It started at 0.35 sec/frame, then dropped. It’s not using the GPU appropriately.

Yeah, I’m sure another utility is going to tell me it’s using 75% instead of 8%. Nah. Something’s wrong.

Bad coding.

Try 4 seconds per frame on OpenVINO. :smiley:

Yes, the download of models is new with 1.7. What I know about the files is primarily from perusing the main log file. If you look through a log for the string “.tz” you will see where it looks for the optimum one, whether it finds it, and that it downloads it if it can’t find it. The final item will start with the line “Loading time for model file:” followed by the local file name.
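
In case it saves someone some scrolling, here is a throwaway sketch that pulls exactly those lines out of a log; the file name below is just a placeholder, since the actual log location depends on your install, and the two search strings are the ones mentioned above.

```python
# Print the model-related lines from a VEAI main log file.
from pathlib import Path

log_path = Path("veai-main.log")  # placeholder name; point this at your real log file

for line in log_path.read_text(errors="replace").splitlines():
    if ".tz" in line or "Loading time for model file:" in line:
        print(line)
```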