Video Enhance v2.6.4

Have you overclocked the 5800X? I’m interested in which parameters you apply.
I have it set to 4.5 GHz, because when I overclock up to 4.85 GHz the heat can no longer be kept under 90°C and the system shuts down.
However, I observe that regardless of CPU speed, CPU load is less than 40% on average while processing loads the RTX 3060 GPU, and processing time varies by only ±1%.

In fact, I’m replacing my graphics card simply because mine is broken; it gave up the ghost.
In a benchmark, I saw that a 750W power supply is recommended for the RTX 3070, but it doesn’t specify for what overall configuration. My i9-9900K is already at 95W. I don’t know if it is overclocked; it didn’t say so in the datasheet for my tower when I bought it. I suppose there could be a factory overclock, but I’m not sure.
My current power supply has two (2x8-pin) cables, so apparently I can fit two cards, and the power supply still has two free ports.

I forgot to say, my CPU’s base clock is 3.6 GHz.

There are power/wattage calculators on the web that take into account all the components you can have in a computer.
Found one:

This is not necessarily accurate for 30x0/40x0-series cards, as they are known to have very large transients. It probably won’t happen in products like VEAI, where the load is fairly constant, but if you plan to do any gaming as well it can be a problem: the card will momentarily draw 2-2.5x the normal load, and some PSUs will just shut off at that point.

Granted, this mostly happens to people with higher-end cards, but I’d definitely be looking at something like PSU Tier List rev. 16.1A - Cultists Network and picking Tier A PSUs for a PC that’s doing GPU-heavy work.

Intel’s TDP rating marks the 9900K as 95W, which does not include boost clocks (95W is sometimes not even enough to run at the 3.6 GHz base clock under an AVX2 workload). A 9900K running all cores at 5 GHz / 4.7 GHz cache / AVX offset 0 can pull about 180-220W max, depending on silicon quality and the type of workload; casual gaming consumes much less than benchmarks or stress tests.

A 3070 will pull about 220-250W if you have something similar to the reference PCB design. There is a power limit and the card has no way to pull more than that; some cards have a higher or unlocked power limit and you can give them more budget, but how much they actually pull still depends on the workload and silicon quality.

Don’t worry, most prebuilds are specced so that the included power supply is enough, or at least barely enough; some series have more headroom for allowance anyway.

The 30-series wattage spikes are another story. Most power supplies will be fine with them, except in some extreme cases such as that one Gigabyte unit, a very aged power supply, or the New World unlimited frame cap glitch.
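To put rough numbers on the PSU question above, here is a minimal back-of-the-envelope sketch using the figures quoted in this thread (180-220W for a tuned 9900K, 220-250W for a 3070, 2-2.5x transient spikes); the ~100W for the rest of the system and the 1.2x margin are my own assumptions, not manufacturer recommendations:

```python
# Back-of-the-envelope PSU sizing, using figures quoted in this thread.
# The ~100 W "rest of system" figure and the 1.2x margin are assumptions.
cpu_peak_w       = 220   # overclocked 9900K under heavy load (upper bound quoted above)
gpu_sustained_w  = 250   # RTX 3070 near the reference power limit
gpu_transient_x  = 2.5   # momentary 30-series spike factor mentioned above
rest_of_system_w = 100   # assumed: motherboard, RAM, drives, fans

sustained      = cpu_peak_w + gpu_sustained_w + rest_of_system_w
with_transient = cpu_peak_w + gpu_sustained_w * gpu_transient_x + rest_of_system_w

print(f"Sustained draw:       ~{sustained} W")              # ~570 W
print(f"Momentary worst case: ~{with_transient:.0f} W")     # ~945 W
print(f"Suggested PSU rating: ~{sustained * 1.2:.0f} W or better")
```

Which lands in the same ballpark as the 750W recommendation; a good PSU rides out the momentary spikes above its rating, which is where the tier-list advice comes in.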

Can someone here with a high-end GPU tell us whether upgrading to Ryzen 7000 improves their performance a bit in Topaz AI (the current version, or the betas)? I’d love to know if there’s any practical speedup with a 7000X/7900X vs a 5900X/5950X.

May I ask, will 2.6.4 be updated again? When I checked videos processed in many past tests, I found that some had problems with their duration. I hadn’t checked every file before, so this bug went unnoticed. The bug is that after importing into VEAI, the video duration loses dozens of frames, about 1 second, which causes the audio and video to go out of sync. The first part of the processed result is fine, but frames start to go missing at some later point, so in the second half the audio track and video track no longer line up: the video becomes shorter, and the audio appears delayed because it keeps its normal length. I had disabled the audio when processing, so it is not an audio problem, and the reported video duration is indeed shorter than the playback length. The videos are mostly VOB files at 29.97 fps. When I import the same video into 3.0a or 3.0b there are no errors; the video duration and frame count are correct.

If there is an update to fix this bug, I can provide the video file to reproduce it.
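Not a fix, but if anyone wants to confirm the same frame loss on their own files before reporting, here is a quick sketch comparing frame counts with ffprobe (file names are placeholders):

```python
# Compare frame counts of the source and the VEAI output with ffprobe.
# File paths are placeholders; ffprobe must be on PATH.
import subprocess

def count_frames(path: str) -> int:
    """Decode the first video stream and count its frames."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-count_frames", "-show_entries", "stream=nb_read_frames",
         "-of", "default=nokey=1:noprint_wrappers=1", path],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

src = count_frames("source.vob")        # placeholder path
dst = count_frames("processed.mp4")     # placeholder path
print(f"source: {src} frames, output: {dst} frames, missing: {src - dst}")
```

At 29.97 fps, a missing second would show up as roughly 30 frames of difference.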

I also have a similar level of developer experience and completely agree with you. If it ain’t broke, don’t fix it.

VEAI only works properly 100% of the time with Lagarith and Huffyuv AVI files (v3.x may have fixed that).

I import AVC and MPEG2 files via Avisynth using a proper source filter like DGDecodeNV, and do IVTC in Avisynth if needed. 99.99% of my video uses those codecs. I don’t know if DGDecodeNV handles HEVC but there are other proper source filters for that, VP9, and AV1 (someone name them or search doom9.org forums).

I then save SD interlaced with Huffyuv compression, or HD progressive with Lagarith compression, to AVI. Huffyuv is mandatory with interlaced for Dione to work. Lagarith tosses the interlaced flag and marks all video progressive.

1080i is a special case. Best results are to use QTGMC with fast preset, save as Lagarith AVI and use a progressive model in VEAI.

Always process audio separately (demux it out and remux it back in as the final step).
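For the “process audio separately” step, here is a minimal sketch of the demux/remux using ffmpeg stream copies (file names and containers are just examples; any lossless remux tool works the same way):

```python
# Demux the audio before VEAI, then remux it with the upscaled video afterwards.
# Filenames are placeholders; streams are copied, not re-encoded.
import subprocess

# 1. Pull the audio out of the source untouched (MKA holds any audio codec).
subprocess.run(["ffmpeg", "-i", "source.avi", "-vn", "-c:a", "copy", "audio.mka"],
               check=True)

# 2. ...process the video-only file in VEAI...

# 3. Mux the original audio back in as the final step.
subprocess.run(["ffmpeg", "-i", "upscaled.mov", "-i", "audio.mka",
                "-map", "0:v", "-map", "1:a", "-c", "copy", "final.mkv"],
               check=True)
```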

Why the fast preset and not slow for QTGMC?

Hi.
New user here.
Trying to use Chronos Fast v3, and I seem to be stuck on a screen that is telling me ‘Preparing AI Model’
‘The AI Model is being loaded - this may take a few minutes’.
No joke - 15 minutes so far and nothing obvious happening.
What is not right here?
Paid version.
Windows 10 (latest)
11th-gen Intel i7-11700K @ 3.6 GHz with 32 GB RAM
Nvidia GeForce RTX 3060 graphics card

Any suggestions would be wonderful, please, as I don’t get why things are running so slowly here.

Possibly a firewall issue blocking the download of the models?

Does anyone know if “Auto-detection model parameters” in VEAI 2.6.4 does the same thing as “Auto”/“Relative to Auto” in 3.0.x?

No, it’s not. The “Auto” detection in VEAI 2.6.4 is the same as Proteus/Estimate in 3.0: it’s a one-frame analysis, whereas Auto in 3.0 does an analysis/adjustment on each frame of the video.

Nope.
It turned out to be the default Slomo setting of NaN and manually setting this to 100% started things off. I had previously ignored this as I stupidly assumed it meant ‘Not Applicable’.

However, I now have a more serious problem that makes no sense - the converted files are running too fast! If I take the source at 2:00:59:12 and run it to 24 fps in Chronos, it ends up 18 seconds too short compared to the original file, and if I run it to 23.976 fps it ends up 8 seconds too short.
Why is this happening - and more importantly, how do I stop it and get an output file the same length as the source file?

Anyone get their hands on the new Nvidia 4090 card? What kind of performance gains did you see?

Result from EposVox:

Looking at that graph it seems it would be better to just build a second PC in a Dan A4 H20 and just let it go 24/7 in a room or outside if you own a home.

Stuff like this is why I’m really considering adding an A770 to go along with my RTX 3070 Ti, but I really hope that Video Enhance AI gets Intel XMX support like Photo AI has, to help speed things up a lot. It’s actually crazy that the $350 Intel GPU is a little faster than a 3090/3080, which still go for $600+ on eBay, and you get good HW AV1 encoding on top.

There’s also a new Nvidia driver today, with CPU-related optimizations. Time to do another run or two of 2.6.4 and 3.0-7b/8b for a before and after?

Edit: Did some tests.
No differences for me on my 3080 Ti.

Driver 512.x / 517.x:
Topaz Video Enhance AI
2.6.4: RTX 3080 Ti (83% power limit, UV + RAM OC, 3600 MHz DDR4 + 5700X OC)

0.15-0.16 spf

3.0-6b:
0.14-0.15 spf

Both with ‘MP4’ output, 4 Mbps for 3.x, CRF 17 for 2.6.4

Driver 522.25:

2.6.4:
0.15 spf (holding here longer / less fluctuation?)

3.0-06b:
0.14-0.16 spf (basically no difference) = 21 s for the 5 s preview

3.0-08b: same thing, 21 s for the 5 s preview
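
As a sanity check on the spf figures (seconds per frame), total processing time is just frame count × spf; assuming the 5 s preview is roughly 30 fps (my assumption, the clip’s frame rate isn’t stated above):

```python
# Rough sanity check for the spf (seconds-per-frame) figures above.
# Assumption (not stated in the post): a 5 s preview clip at ~30 fps.
preview_seconds = 5
assumed_fps = 30          # hypothetical; the actual clip fps is not given
spf = 0.14                # seconds per frame reported for 3.0-06b

frames = preview_seconds * assumed_fps   # ~150 frames
processing_time = frames * spf           # ~21 s, matching the quoted figure
print(f"{frames} frames -> ~{processing_time:.0f} s of processing")
```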