Video Enhance AI v2.3.0

How would that work? As far as I know, to train an AI you need the high-quality reference result along with the thing you want to reconstruct or improve, and normally users only have the latter. And even if users had both the original high-quality source footage and the bad one, why would they want to waste their resources on training an AI? Isn’t this why we pay for this software - to have other people deal with it? Why, as a paying customer, would I have to increase my electricity bill to train the AI?
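For what it’s worth, here’s a minimal sketch of why that reference matters (the tiny PyTorch model and random tensors are placeholders, nothing like Topaz’s actual training code): the loss is computed against the high-quality reference, so without it there is nothing to train toward.

```python
# Toy illustration only - NOT Topaz's training pipeline. Assumes PyTorch.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, kernel_size=3, padding=1)   # stand-in "enhancer"
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

degraded = torch.rand(1, 3, 128, 128)    # what users normally have
reference = torch.rand(1, 3, 128, 128)   # pristine source, usually missing

for _ in range(100):
    opt.zero_grad()
    # The loss compares the output to the reference; no reference, no training.
    loss = nn.functional.l1_loss(model(degraded), reference)
    loss.backward()
    opt.step()
```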

3 Likes

Not sure if this has been discussed before, but I’ve noticed that Proteus doesn’t deinterlace SD videos. The tutorial makes it seem that Proteus can be used on SD video right away, but judging by the interlacing I don’t think it can.

Perhaps the next update can include deinterlacing in Proteus the way Dione does so nicely.

Dione is still my go-to model for SD videos, though I haven’t seen a noticeable improvement in this model for many months.

None of the models appear to be able to improve significantly on Super 8 mm film or 1980s-1990s VHS transfers.

Thanks for the new version of Video Enhance.

1 Like

Well, it is the cleanest-looking release, but on a 2010 Mac (12-core, 48 GB memory, macOS 10.14.6) with a Radeon RX 590 8GB the app is stable yet will not run on either CPU or GPU. The error message is always “Unable to load the selected model. If this error persists, try lowering your VRAM usage in the preferences.” It doesn’t work even when selecting CPU or no VRAM. Previous versions would run on CPU, albeit rather slowly. Sigh.

I also wonder why Proteus doesn’t offer a permanent auto mode that chooses the best settings for each frame individually during processing. :slightly_smiling_face:

5 Likes

It looked like a great release, but I think the introduction of presets messed up the persistence of individual video configurations. What I mean is: if I set a video to Gaia-HQ, then add another one to the queue and set it to a different model, say Dione-DV, when I go back to the first video it does not remember its previous setting, and Dione-DV stays set for the entire queue. In other words, I’m not able to set different models for different videos. Is anybody else experiencing this?

1 Like

I thought about it too. But that would cause flickering issues when the first frame doesn’t match the quality of the second frame, i.e. if the second frame is… bad.

My concern about Proteus is that the auto settings are based on the frame I choose in the preview, not the whole video. Each frame produces different settings for the video, which is… kinda meh for me. Auto could be good for a short video, but for a movie or a long video Proteus is not a good idea, unless Topaz can “enhance” it and let the AI scan the whole video to adjust the quality meters itself.

2 Likes

Can Chronos interpolate and smooth animated content, especially hand-drawn content like Japanese anime (i.e., make the animation more fluid by inserting new frames)?
This was possible with RIFE in Flowframes via deduplication of frames (rough sketch of the idea below)…
Will this be added in future updates?
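In case it helps explain the request: anime is usually animated on twos or threes, so the same drawing repeats for several frames and a naive interpolator sees no motion between them. Deduplication drops those repeats first. Here’s a generic sketch of the idea (a simple mean-difference check with OpenCV; not how Flowframes or RIFE actually implement it, and the threshold is made up):

```python
# Generic duplicate-frame filter - an illustration of the dedup idea,
# not Flowframes' or RIFE's actual code. Assumes OpenCV and NumPy.
import cv2
import numpy as np

def unique_frames(path, threshold=1.0):
    """Yield only frames that differ noticeably from the last kept frame."""
    cap = cv2.VideoCapture(path)
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Keep the frame only if it differs enough from the previous kept one.
        if prev is None or np.abs(gray - prev).mean() > threshold:
            prev = gray
            yield frame
    cap.release()
```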

I haven’t tried it myself, but I hope not. This bug really shouldn’t be back; it’s very annoying.

I did a test on Windows 11 (RTX 3070) and forgot to switch to the Studio drivers, but it looks like this release works with the Game Ready drivers that came preinstalled (471.11).

2 Likes

While it is free, I tried running it, but it errored out without producing an interpolated video… something on my end, sure, but at least Chronos works, as I can see the progress and increase the frames up to 2K!

It’s still a magnificent free program and an excellent alternative, and thanks for the link; I just like the Chronos model so much better!

For one thing, it completely replaces the optical flow setting in overpriced Adobe rent-ware and does away with the horrid waving artifacts in both slo-mo and higher frame rates. Secondly, it unfortunately renders my beloved SVP Pro software mostly obsolete, as it not only restores old 15 fps video into smooth, unfettered 60-144 fps, but also turns 320p videos into HD!

This program is worth its weight in gold in any video editor’s/animator’s pipeline, and I’m incredibly grateful to Topaz Labs for offering a perpetual license, as rent-ware is a pox on the software industry and doesn’t deserve my money no matter how great the product!

1 Like

Thanks for playing mega beta tester :wink: Would you be willing to write to me privately and tell me what you think of W11 so far? (as it’s off-topic here!)

Thanks for the big update to v2.3.0!
This is one of the most impressive updates since the launch of VEAI.

The new Proteus model is quite impressive and very helpful.
The export times for Proteus, however, are far too long in comparison.
I hope a future update makes this model's exports faster.

My system: Windows 10 64-bit, 64 GB RAM, Intel i7-10700, RTX 2070 Super.

Cut your video into scenes, use Proteus to its full effect on each, and reassemble. An extra step, sure, but that’s certainly no reason to toss Proteus out.
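For anyone who wants to try that, here’s a rough sketch of the split-and-reassemble part using plain ffmpeg from Python (the timestamps and file names are placeholders; the actual enhancement of each segment still happens in VEAI with per-scene Proteus settings):

```python
# Sketch of "split into scenes, enhance each, reassemble" with ffmpeg.
# Cut points and file names are hypothetical; adjust to your own clip.
import subprocess

cut_points = "120.0,417.5,903.2"  # scene-change timestamps in seconds

# 1) Split the source losslessly at the chosen timestamps
#    (with -c copy the cuts snap to keyframes).
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-f", "segment", "-segment_times", cut_points,
    "-reset_timestamps", "1", "-c", "copy",
    "scene_%03d.mp4",
], check=True)

# 2) Enhance each scene_NNN.mp4 in VEAI with its own Proteus settings.

# 3) Concatenate the enhanced segments back together.
#    list.txt contains one line per segment: file 'scene_000_enhanced.mp4'
subprocess.run([
    "ffmpeg", "-f", "concat", "-safe", "0",
    "-i", "list.txt", "-c", "copy", "output.mp4",
], check=True)
```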

I wish it were that simple. Proteus adjusts the meters differently frame by frame, so scene by scene is not enough. And for movies, that would be time-consuming for nothing.

I ended up writing down several auto numbers from across the whole video clip and then calculating an average value for each parameter to use on the whole clip.
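Something like this, in other words (the slider names and numbers below are just placeholders for whatever auto readings you jot down; it’s my manual workaround, not a VEAI feature):

```python
# Average a handful of Proteus "auto" readings into one set of values
# for the whole clip. Names and numbers are made-up examples.
samples = [
    {"revert_compression": 22, "recover_details": 35, "sharpen": 18,
     "reduce_noise": 12, "dehalo": 5},
    {"revert_compression": 30, "recover_details": 28, "sharpen": 25,
     "reduce_noise": 20, "dehalo": 8},
    {"revert_compression": 26, "recover_details": 31, "sharpen": 21,
     "reduce_noise": 15, "dehalo": 6},
]

averaged = {key: round(sum(s[key] for s in samples) / len(samples))
            for key in samples[0]}
print(averaged)  # e.g. {'revert_compression': 26, 'recover_details': 31, ...}
```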

1 Like

You just described why the software won’t ever be able to do it either.

Foundry Nuke has done that, and it works, but it is not hardware-friendly; it requires beefy hardware. Nuke is a node-based video compositing application in which you can create a “CopyCat” node that uses AI: for example, you use standard Nuke tools to retouch a face on one frame, then link that to the CopyCat node, and it tries to learn what you did and apply it to the other frames, training itself in the process. Once it’s done, the training can be shared for similar scenes or with other users. (A rough sketch of the idea follows the link below.)

Here is a demo of how it might work.

CopyCat Quick Start | Machine Learning in Nuke

It would be nice to see that eventually in Video Enhance as a way to customize the process for specific usage and to allow the community to share trained models with each other.
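For the curious, here’s a toy sketch of that workflow (definitely not Nuke’s actual CopyCat code; just a small PyTorch loop that overfits on one before/after frame pair and then applies the result to the rest of the shot):

```python
# Toy "retouch one frame, apply to the shot" loop - illustration only,
# not CopyCat's implementation. Assumes PyTorch; all data is random.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

original = torch.rand(1, 3, 256, 256)   # the untouched frame
retouched = torch.rand(1, 3, 256, 256)  # the same frame after manual retouching

# Learn the retouch from the single hand-made example.
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.l1_loss(net(original), retouched)
    loss.backward()
    opt.step()

# Apply the learned retouch to every other frame in the shot.
shot = [torch.rand(1, 3, 256, 256) for _ in range(48)]
with torch.no_grad():
    processed = [net(f) for f in shot]

# The trained weights could then be shared for similar shots or other users.
torch.save(net.state_dict(), "copycat_style_weights.pt")
```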

2 Likes

I noticed VEAI 2.3.0 can crash without any warning when I try to “test” a second video with Proteus auto settings. That sometimes happens (probably when the videos have different codecs or containers) after doing the same with the first video within the same session. All on an i9-7940X, 64 GB of RAM, and an RTX 3090 with the 471.11 Studio driver.

If it happens again, I’ll send the logs. Despite being set to use the GPU, VEAI always uses the prap-v1-fp32-ov.tz model file to determine the Proteus settings for the selected frame - it’s probably set up this way to increase precision?

1 Like

Nah. Knowing roughly how the Auto function works, I’m sure it’s possible. It might take time, but it is 100% possible. With the current direction the Topaz team is following, I’m sure they can and will do it.