Video Enhance AI v1.2.1 - OpenVino V3 and Bug Fixes

Just installed it for the first time, on Mojave 10.14.6, and it crashes on launch:

Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY

Termination Reason: DYLD, [0x3] Wrong version

Application Specific Information:
dyld: launch, loading dependent libraries

Dyld Error Message:
Library not loaded: @rpath/libgobject-2.0.0.dylib
Referenced from: /Applications/Topaz Labs/Topaz Video Enhance AI/Topaz Video Enhance AI.app/Contents/Frameworks/libopencv_videoio.4.2.0.dylib
Reason: Incompatible library version: libopencv_videoio.4.2.dylib requires version 6201.0.0 or later, but libgobject-2.0.0.dylib provides version 5801.0.0

Gigapixel AI runs fine.
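For anyone hitting the same dyld error: the crash log says the bundled libopencv_videoio dylib is resolving a libgobject-2.0.0.dylib that is older than the one it was linked against (version 5801 found, 6201 required), which usually means another copy (e.g. from Homebrew or MacPorts) is being picked up first. A minimal diagnostic sketch, assuming the default install path shown in the log and that the Xcode command-line tools (for otool) are installed, to see which libgobject each bundled dylib references:

    import pathlib
    import subprocess

    # Default install location taken from the crash log above.
    FRAMEWORKS = pathlib.Path(
        "/Applications/Topaz Labs/Topaz Video Enhance AI/"
        "Topaz Video Enhance AI.app/Contents/Frameworks"
    )

    for dylib in sorted(FRAMEWORKS.glob("*.dylib")):
        # otool -L lists the libraries a Mach-O binary links against,
        # including the compatibility/current versions it expects.
        out = subprocess.run(
            ["otool", "-L", str(dylib)], capture_output=True, text=True
        ).stdout
        for line in out.splitlines():
            if "libgobject" in line:
                print(f"{dylib.name}: {line.strip()}")

Launching the app binary from Terminal with DYLD_PRINT_LIBRARIES=1 set can also show which copies dyld actually loads (unless the app's code-signing settings block dyld environment variables); an older libgobject appearing from /usr/local/lib or /opt/local/lib would explain the version mismatch.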

They did provide some users with a 1.1.0 download link; if you use the search engine you’ll find it. So far that’s the only link they’ve provided in the forums, and there’s at least one user who got a ‘1.2.6 beta’ link, but that was private, I guess.

OK, thanks for the info

I’d like to see them use the NVENC encoder if possible. If that takes away from performance because the GPU is too busy, then a CPU-based encoder is fine too… whatever it takes to speed it up with great results.

It won’t load for me; I get an AI Engine Failure. I’m trying to use CPU mode, as my graphics card (Tesla C2075) is too old. Looking at the logs, it fails when trying to allocate memory. I have 180GB of memory, so size is not the issue.

Some people mentioned this might be a problem with version 1.2.1, and that version 1.1 does not have this bug. Could someone post a 1.1 version? I can’t find any download links for it.

I found a download link by searching the forum; for others, it is here: Video Enhance AI 1.2.0 doesn't work

An NVENC encoder would be useless in such a program. On the contrary, the current approach allows classic encoding with the slowest presets and settings for the best-quality output. If, for example, you get the same overall speed with different x265 presets (because the AI processing is the bottleneck, not the encoder), it would make no sense not to choose the slowest preset while encoding.
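To make the trade-off concrete, here is a rough sketch of the two encoding paths being discussed: x265 on the CPU with a slow preset versus HEVC NVENC on the GPU. Filenames, frame rate, and quality settings are illustrative, and it assumes ffmpeg is on the PATH and the upscaled frames were exported as an image sequence:

    import subprocess

    SRC = "upscaled_%05d.tif"  # hypothetical image sequence exported by Video Enhance AI

    # CPU path: libx265 with a slow preset -- best quality per bit, but ties up CPU cores.
    subprocess.run([
        "ffmpeg", "-framerate", "25", "-i", SRC,
        "-c:v", "libx265", "-preset", "slower", "-crf", "18",
        "-pix_fmt", "yuv420p",
        "x265_slower.mp4",
    ], check=True)

    # GPU path: HEVC NVENC -- much faster, but it competes with the upscaler for the GPU
    # and is generally less efficient at a given bitrate.
    subprocess.run([
        "ffmpeg", "-framerate", "25", "-i", SRC,
        "-c:v", "hevc_nvenc", "-preset", "slow", "-rc", "vbr", "-cq", "19",
        "-pix_fmt", "yuv420p",
        "nvenc_slow.mp4",
    ], check=True)

Since the AI upscaling step is usually far slower than either encoder, encoding is rarely the bottleneck, which is the point above: a slow x265 preset costs little extra wall-clock time.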


Hi,

After a lot of testing, the best result I get is a first pass in 1.1.1 with LQ at 100% (Denoise/Deblock), then a second pass in 1.2.2 with Gaia CG at 200%. The result is then very good (but not the calculation time :frowning: ).

1.2.2? Isn’t the latest stable version 1.2.1?

Sorry, it was 1.2.1.

I think this program has resource management issues. When I start one instance and run on CPU, it uses 60-70% of the CPU at approximately 20 frames/min. Why does it not put the CPU at full load? When I open a second instance, usage goes to 80-90% and they both run at 18-19 frames/min. When I run a third, CPU usage is 90-98% and all three run at around 15 frames/min. Does this not sound off? I would say so. The first issue is: why does one instance of Video Enhance not push the CPU to full load? Also, why are there dips in the CPU graphs in Task Manager? Why does opening a second instance only add about 20% load? And why does the frames/min figure decrease only a little with each instance you open? Something isn't right here.

The CPU in question is a Ryzen 3900X.
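Working through the numbers reported above (taking rough midpoints), the aggregate throughput keeps rising as instances are added even though per-instance speed barely drops, which is consistent with each instance being limited by something internal (a serial stage or a fixed thread count) rather than by total CPU capacity:

    # Approximate figures from the post above (frames/min per instance).
    runs = {1: [20], 2: [18.5, 18.5], 3: [15, 15, 15]}

    for instances, rates in runs.items():
        total = sum(rates)
        print(f"{instances} instance(s): ~{total:g} frames/min aggregate")
    # ~20 -> ~37 -> ~45 frames/min: total throughput grows with each added instance,
    # so a single instance is clearly not saturating the 3900X's cores.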

Do you notice dips in CPU usage when scenes change or when a complex frame is being processed, or is it completely random?

No, I don't think it's frame-dependent. Even if it were, the issues I described should not be happening. The first instance should put the CPU right at full load, and if it doesn't, the second instance should for sure. It shouldn't take four instances to fully load the CPU when only one puts it upwards of 70%.

Hi, I checked what you described, and here is my result:

I used a 1280x720 file (small), cropped on the right and left. I have an i7-7700 (Intel HD Graphics 630) + 32GB of RAM and an Nvidia GT640. My monitor is on the Nvidia card via its VGA output (I would prefer the motherboard/iGPU output but can't because of the VGA).
The preference is set to 100% of resources and CPU is checked.

Result:

  • CPU use does not go above 20%.
  • RAM use is around 9 to 10GB.
  • The software makes a call to the iGPU every 3 or 4 seconds.
    I got a render time of 6s per frame.
    Very few programs access the iGPU when the video output is not plugged into it,
    but it seems Video Enhance AI does!? (Another program I use that can do it: Vegas Pro 16.)

I used the CG preset to upscale to 1080p.

I tried a second render, closing the software and running it again, but CPU use was about the same, between 15 and 20%. Maybe it's because I have an iGPU built into my CPU, unlike your Ryzen, which has no GPU in it?

Oh god, a GT640 and a quad-core CPU. That must be painful to watch. It must take you like a full day to upscale a 1-minute video… lol


Yes, but I don't have another choice for now. Yes, it's painful lol, that's why I'm a bit jealous of the people here who complain with an RTX 2070 or 2080…
So I do it clip by clip and scene by scene (I'm currently restoring a 14-minute short film shot on DV -> DVD (768x576) for a release in Full HD). Yeah, “painful” is the word lol (of course it's interlaced and in MJPEG too lol).
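For interlaced DV/DVD-type sources like this, it can help to deinterlace before the AI pass rather than feeding interlaced frames to the upscaler. A minimal sketch, assuming ffmpeg is installed and the source is 25i PAL material (filenames are illustrative): it deinterlaces with yadif to 50 progressive frames per second and writes a lossless FFV1 intermediate for Video Enhance AI to pick up:

    import subprocess

    SRC = "short_film_dv.avi"        # hypothetical interlaced 768x576 source
    OUT = "deinterlaced_50p.mkv"     # lossless intermediate for the AI pass

    subprocess.run([
        "ffmpeg", "-i", SRC,
        # yadif=1 outputs one frame per field (25i -> 50p), preserving motion.
        "-vf", "yadif=1",
        # FFV1 is lossless, so no extra compression artifacts before upscaling.
        "-c:v", "ffv1", "-level", "3",
        "-c:a", "copy",
        OUT,
    ], check=True)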

Maybe those of us with faster cards can help with some videos. Just upload them to Google Drive and call for encoding help. :smiley:


I have a 2080S. While I don't complain about the speed with it, I do complain about the CPU upscaling speed, as this application clearly needs better optimization.


Wow, that would be lovely ahaha (I had an Atari 520STF and a 1040STF in the '80s). Actually, I'm doing the CGI scenes. It's a French Star Wars fan film, low budget (0€ lol), that I did the soundtrack for in 2005 (I'm a musician), but I'm taking care of the restoration / upscaling / widescreen conversion / deinterlacing ( :-1: ) and Full HD conversion.

Yes, I completely agree, but so far I think the result is pretty outstanding for what it offers. But it's like any software, really: we're still the beta testers. I think the programming team is fully aware of most of the issues, and fixes will certainly come with time… (I hope) lol. As someone reported, we're on 1.2.1, which is pretty good, and one user already got a 1.2.6 beta.