Topaz Video AI 5.5 + 5.5.1

Interesting finding: the VFR is not generated under macOS, only under Windows. I have an older cMP 3,1 with an RX 580 running Monterey, and the output had a constant frame rate, as it is supposed to. So I ran the same test on a newer cMP 5,1 with a similar RX 580: same constant frame rate. Conclusion: only Windows has this issue.
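If you want to check your own output, a quick sketch with ffprobe (the file name is a placeholder): dump the video packet timestamps and see whether the deltas between them are constant.

    # constant deltas between timestamps = CFR; varying deltas = VFR
    ffprobe -v error -select_streams v:0 -show_entries packet=pts_time -of csv=p=0 output.mp4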

1 Like

At least with VFR, you’re not flying blind.
:crazy_face:

Update: pricing is now corrected, and no users were affected by cloud renders during the period of inaccurate pricing. Credits will be refunded to match the correct amount.

7 Likes

And a deal-breaker issue it is (at least enough to put my renders on hold until it’s solved). ‘Repairing’ VFR is a bitch, honestly, and it can never really be done properly (though usually well enough that you won’t effectively notice).

Please, don’t forget this one.

1 Like

Got the same problem with dropped frames since v5.4.
Install v5.3.6 and everything works fine, AND it’s much faster (3x).

You should read up on undervolting and overclocking with MSI Afterburner. My 3080 currently benches higher than your 4090 on the interpolation models, but is about 50% slower on the enhancement models.

A 4090 should be able to render the enhancement AI models 88% faster than a 3080. Interpolation should be faster too, but not by 88%, since it apparently uses a different part of the GPU that doesn’t differ as much in specifications.

Oh well, if so, then I am not sure why this has to be differentiated with _1 and _2. I now have to remove the _2 from every output video :frowning:

Exactly the reason why I am here, so no, not fixed at all. I wasted hours trying to find out what I was doing wrong.

1 Like

Frame Interpolation to 60 fps in 5.3.6 still has variable output for me.
It’s been that way for much longer, maybe since the FFmpeg 7 implementation, though I haven’t had time to test that theory yet.

Also, even though I set 60 fps output, I sometimes end up with 59.94 fps (when actually counting the frames), though the metadata still incorrectly states 60 fps.
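For anyone who wants to check their own files: ffprobe can count the real frames instead of trusting the header (a sketch; the file name is a placeholder, and -count_frames decodes the whole stream, so it takes a while).

    # compare the declared rate(s) against the actual decoded frame count
    ffprobe -v error -select_streams v:0 -count_frames -show_entries stream=r_frame_rate,avg_frame_rate,nb_read_frames -of default=noprint_wrappers=1 input.mp4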
Slightly unrelated, but like FPS metadata, passthrough of HDR metadata light levels is also incomplete.

Between the variable frame rate and the wrong FPS metadata, handling these files is problematic for a lot of applications.

If anyone is having trouble matching up audio tracks after doing additional editing: you can extract the timecodes from the original (the TVAI interpolated output) using mkvextract (part of MKVToolNix; download gMKVExtractGUI.exe and drop it in the same folder if you want a GUI), then open MKVToolNix, drop in your edited content, select the video track, and add the extracted timecodes.
Output will now be in sync, assuming your edited content contains the same number of frames as the TVAI output.
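The command-line equivalent looks roughly like this (file names are placeholders, and I’m assuming the video is track 0; older MKVToolNix versions spell the mode timecodes_v2 instead of timestamps_v2):

    # extract the timestamps of the video track from the TVAI output
    mkvextract tvai_output.mkv timestamps_v2 0:timestamps.txt

    # remux the edited file, applying those timestamps to its video track
    mkvmerge -o synced.mkv --timestamps 0:timestamps.txt edited.mkv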

As for fixing the FPS metadata, that can be done without re-encoding, OR it can be done during a re-encode in FFmpeg.
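A rough sketch of both routes for MKV files (file names and the 59.94 target are placeholders; for the no-re-encode path I’m using mkvmerge here, since that’s the cleanest container-level fix I know of):

    # without re-encoding: remux and tag the video track as 59.94 fps
    mkvmerge -o fixed.mkv --default-duration 0:60000/1001fps input.mkv

    # or during a re-encode: force a constant 59.94 fps stream
    ffmpeg -i input.mkv -vf fps=60000/1001 -c:v libx264 -crf 18 -c:a copy fixed.mkv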

1 Like

I thought empirical data published here by various users (including myself) had already debunked those claims?

I love stability. I even power-limited my RTX 4090 a bit (to 108% nominal = ~485W). I am always fearful the thing will burn out anyway when I’m running something like an 8-hour Rhea process. :slight_smile:

Fully understandable, my friend! AI rendering uses the GPU in a way standard benchmarks and stability tests don’t.

I had to do a lot of trial and error until I found a voltage and clock speed that keep power hovering around 95% +/- 4%. Too much voltage punishes clock speed heavily; a slightly too high clock, or not enough voltage, will cause a crash.

After hours of testing different input formats, resolutions, and combinations of models, as well as a lot of crashes, I found the sweet spot.

My 3-year-old off-brand HP RTX 3080 beats all the 4090 scores I’ve seen on the interpolation models (except for one, which also had a considerably higher score than the other 4090s, which means that owner has also been min-maxing his card).

For the enhancement/upscaling AIs, a stock 4090 renders about 50% faster than my 3080. However, if you compared a stock 3080 to a stock 4090, the 4090 should be 88% faster going only by the relative specifications that impact AI enhancement models, such as tensor cores.

At 100% load and 95-98% power usage, my temps stay well within the range for minimal degradation. An overnight render won’t bring core, hot spot, or memory junction temps within 10-15C of their limits, and that’s with a very conservative but optimized fan curve to lower noise. The fans average about 50-55% to hold those temps for 12+ hours.
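If you want hard numbers on that during a long render, nvidia-smi can log them (a sketch; the interval and file name are just examples):

    # poll core temp, power, clocks and utilization every 10 s into a CSV
    # (core temp only; hot spot / memory junction need tools like GPU-Z or HWiNFO)
    nvidia-smi --query-gpu=temperature.gpu,power.draw,clocks.sm,clocks.mem,utilization.gpu --format=csv -l 10 > render_log.csv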

1 Like

Speaking of temperatures… what do you guys see on a 4090 under “full load”? Like processing a video with Rhea, which seems to be the most demanding model. (Rhea XL is still in its “baby phase” IMO, since I see worse results compared to the original Rhea.) I have a liquid-cooled machine and the temperatures get as high as 68-70C. Is this normal/safe?

Another thing I noticed about the models, Rhea in particular, because I haven’t really tested the other ones thoroughly recently: if I use Rhea, I get ~4.8 fps processing speed. If I activate Focus Fix (even though I know it is not meant for 1080p or lower inputs), the speed is 3x faster (~15.5 fps), and in a lot of cases it also produces better results.

Focus Fix downsamples your source before upsampling, so if you start with a 1080p file, Rhea is actually upsampling a smaller video.

1 Like

Interesting stuff. :slight_smile: Thanks.

So, how did your 4090 become faster than mine, then? I set mine to use max 108% power, which is 485W (IIRC, you could go up to 550W). And yours, at 95-98% power usage, is faster?

I also have mine set to the ‘silent’ BIOS option. (I figured 1 or 2 extra percent in a game really doesn’t offset the rather steep increase in noise.)

I have my RTX 4090 locked at max 85C (+1C). It rarely gets to that, though, as I have it power-limited (to 486W max), so mine will hit the ‘Pwr’ limit (in GPU-Z) before it can get too hot.
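By the way, the same cap can be set from the command line with nvidia-smi if you ever want it outside a GUI tool (a sketch; 486 just mirrors my current limit, the command needs to run elevated, and the setting may reset after a reboot):

    # show the valid power-limit range for the card
    nvidia-smi -q -d POWER

    # cap board power at 486 W
    nvidia-smi -pl 486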

Rhea is the model which has the highest VRAM usage by far, correct?

In general, a +1000-2000 MHz memory overclock is fine on an RTX 4090.

Overclocking memory is a lot less prone to crashes than the core clock. It’s also not always better after a certain point. I can’t remember the exact reason (likely the memory’s error detection kicking in and forcing retransmissions), but say a +3000 MHz memory overclock might throttle other GPU loads, frame generation or something along those lines, while +1500 MHz can increase performance a lot.