Video Enhance AI v2.2.0

Try this one: just rename it back to a .json file, put it in the models folder, and load it in Topaz. It should load as DV1. I made it progressive by switching the interlacedFrames flag to 0.

dione-dv-1.txt (6.7 KB)

Thanks. I think I'm going to have to name it as one of the other four so I can load it from the command line, since those CLI option names seem hardcoded.

… oh wait, there is already a dione-dv-2.json; somehow I didn't notice that before. Do you recommend this over that?

(edit again). OK it looks like this JSON is the same as the native DDV2 but with interlacedFrames set to 0. Cool.

[quote="taylor.bishop, post:40, topic:23770"]
…
We ran a poll in the Facebook community
…
[/quote]

But as I hate Facebook:
They will not take me back to test, no matter what. Sorry…

Do you still use it? Could I send you 1-2 short clips to clean? Full HD progressive, PF25.

Sure. Use a file hosting website and post the link.

Thanks for posting this.

If nothing else, it tells me I have a lot to learn.

How can I preserve the audio during / after upscaling?
I don't have audio after upscaling, even though it is always enabled in the settings.

Is there a fix for this?

Thank you very much.

I've noticed a drop in quality when checking old files from last year: GHQ was sharper before GHQ-5.

I save the processed video to images, then load those into VirtualDub, an old but free video tool. Extract the sound of the original file with a tool of your choice, save it to .wav, and let VirtualDub merge the image files and the sound file together. :slight_smile:
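If you'd rather script the whole thing instead of using VirtualDub, the same extract-and-merge can be done with ffmpeg. This is only a sketch: the filenames, frame rate, and image pattern below are placeholders you'd adjust to your own material.

```shell
# 1) Extract the original audio to an uncompressed .wav
#    (-vn drops the video; pcm_s16le is plain 16-bit WAV audio)
ffmpeg -i original.vob -vn -acodec pcm_s16le audio.wav

# 2) Mux the upscaled image sequence and the audio back together
#    (-framerate must match your source; prores_ks writes Apple ProRes)
ffmpeg -framerate 25 -i "frame_%06d.tif" -i audio.wav \
       -c:v prores_ks -c:a copy merged.mov
```

The `-c:a copy` just passes the WAV audio through untouched, so nothing is re-encoded on the sound side.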


Thanks for the reply / help, but if VEAI can't handle the sound after upscaling, I have software called Shutter Encoder which can do all of the extraction and merging too; it's all in one.

Thanks a lot, Imo.


Oh, okay! I use VirtualDub not only to merge in the original sound. Earlier versions of Video Enhance lost quality if I did not choose the option to save to images! Since VE AI can save the video file to Apple ProRes, this issue might be gone. :slight_smile:


Thanks; I'm going to try that suggestion and save to Apple ProRes next time.

Thanks for the tip.

According to the Nvidia website:

(snip) NVIDIA Studio Drivers Supercharge Creative Apps

(snip) ā€¦each new Studio Driver has significant bug fixes and enhancements for creative apps over the prior release.

(snip) New AI features in top creative apps, running on [NVIDIA RTX GPUs]ā€¦ are changing - and accelerating - the way we create. The April NVIDIA Studio Driver provides optimal support for the latest AI-powered features in creative applicationsā€¦

(snip) Creative apps supportedā€¦

Topaz Labs DeNoise AI

Topaz Labs Gigapixel AI

Topaz Labs Sharpen AI

Topaz Labs Video Enhance AI

There are lots of different ways to deal with this problem. Mine is probably one of the more complicated ones, because Iā€™m trying to automate everything.

That said, AviSynth is hard to use. It took me many, many hours to figure out how to do all this stuff, and there's still much I have to learn. And I'm a professional computer programmer! VEAI fills a niche that really needs filling, since most people don't have the ability to figure out all these low-level tools. But unfortunately it still doesn't do some of this stuff as well as hand-rolling it.


Hybrid is basically a front end for it, and itā€™s designed to install and use without having to set up additional stuff. Freeware. Handles AviSynth x86 and x64, as well as Vapoursynth.

Hybrid (or similar things like Staxrip) are probably the right answers for a lot of people.

AvsPmod is what I have been using for live script changes to AviSynth scripts, and it is a good middle ground in that you have previews of the output, which can then be fed directly into TVEAI as the script file. The only issue is that if the script is intensive - such as the one you use that I tried this weekend - it almost always causes crashes periodically.

I ran it as a straight export to images first to take it easy on TVEAI, and it took four attempts, moving the starting frame each time, as it crashed four times while trying to export.

This is also why I avoided using Staxrip - when a crash occurs there, you lose everything, and it was basically guaranteed to crash. At least using images as an intermediate avoided most of the damage, since you could start off where you left off.

On a side note, I am not used to using the CLI and ffmpeg directly, and I have not attempted to assemble the VFR MKV before. I was able to tease out that I think this is ultimately the command you use for the assembly of the images, but as you definitely have a better handle on the CLI, is this correct? (excluding path addresses)

ffmpeg -i "D:\Video\Voyager\S01E03 Upscaled VFR%06d.tif" -c:v libx265 -profile:v main10 -pix_fmt yuv420p10le -preset slower -crf 18 -vframes 66476 "D:\Video\Voyager\S0103.mkv"

The frames total has come from Avisynth reported frames running the TFM/TDecimate process on it.

I'm not passing in the -vframes argument at all. (Is it needed to tell it how many frames there will be? It seems to work fine without it.) Otherwise that looks pretty similar to what I'm doing for the video. The timecodes are merged in with mkvmerge as another step after this for VFR. (It might be that ffmpeg can also merge the timecodes, but I couldn't figure out how, and it's trivial with mkvmerge.)
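For reference, the mkvmerge step really is a one-liner. The filenames and the track ID below are assumptions; the timecode file is the one the TFM/TDecimate pass writes out (mkvmerge's v2 timecode format):

```shell
# Attach a v2 timecode file to video track 0, making the MKV truly VFR
mkvmerge -o output_vfr.mkv --timestamps 0:timecodes.txt input.mkv
```

Because mkvmerge only rewrites the container, this step takes seconds; no video data is re-encoded.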

I'm doing other stuff with ffmpeg too. I'm also passing the original VOB as a second input for the purpose of reintegrating the audio and subtitle streams. (Note that while this works for Voyager, that's lucky, because there's no offset on the audio stream, but there often is with VOBs. I'm planning to switch the merging of the audio & subtitles to mkvmerge as well, and will use the offset that DGIndex reveals for situations where it matters, just because it's simpler than ffmpeg.)

What I like about using the CLI for everything is I have complete control over each step and can introduce a lot of resilience, as well as the ability to optimize parallelizing work to eventually be able to process stuff as fast as possible. And of course everything is configuration driven, each step knows where to find its inputs and where to put its outputs without me having to do anything on various iterations.

For example, when you start the VEAI step with my CLI frontend, it will automatically check for the last image that was output and start VEAI at the next one, so it will automatically resume where it left off if it crashed or was canceled.
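The resume check can be sketched in a few lines of shell. The numbered-TIFF layout (000001.tif, 000002.tif, …) is an assumption to match the workflow above, and VEAI's actual start-frame option isn't shown here, just the index calculation:

```shell
#!/bin/bash
# Return the next frame index to render, given a directory of numbered
# outputs like 000001.tif, 000002.tif, ... An empty directory starts at 1.
next_frame_index() {
    local dir="$1" last
    # strip the path and the .tif extension, then take the highest number
    last=$(ls "$dir"/*.tif 2>/dev/null | sed 's|.*/||; s|\.tif$||' | sort -n | tail -1)
    if [ -z "$last" ]; then
        echo 1
    else
        echo $(( 10#$last + 1 ))   # 10# stops bash treating 000008 as octal
    fi
}
```

The `10#` prefix matters: zero-padded names like 000008 would otherwise be parsed as invalid octal numbers by bash arithmetic.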


That flag was there because I couldn't entirely find your command-line setup in the docs links, so I reverted to a second guide someone else did for using ffmpeg to reassemble the MKV for VFR, also using Star Trek. I just copied your settings from the files I could find but used their line format overall.

This was that guide, as a reference: ds9-upscale/guide.md at master · queerworm/ds9-upscale · GitHub

I actually have no idea if it's required, but if you don't use it, it's probably fine without.

I did note that the H.265 settings definitely take a considerable time to encode. The H.264 encodes usually take around 20 minutes; this command (above), when I used it, was estimating 9 hours. I have seen that some H.265 encodes are able to use hardware acceleration, but I don't know if it is possible yet on my rig, or at all.
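On the hardware-acceleration question: ffmpeg builds with NVENC support can encode HEVC on the GPU via the hevc_nvenc encoder. It's much faster than libx265, though quality per bit is generally lower, so it isn't a drop-in replacement for `-preset slower -crf 18`. The paths and quality values below are just placeholders:

```shell
# GPU HEVC encode of an image sequence (needs an NVENC-capable NVIDIA card)
# -rc vbr -cq 22 is NVENC's rough analogue of CRF-style quality targeting
ffmpeg -framerate 24 -i "frame_%06d.tif" \
       -c:v hevc_nvenc -preset slow -rc vbr -cq 22 -pix_fmt p010le out.mkv
```

You can check whether your ffmpeg build has the encoder at all with `ffmpeg -encoders | grep nvenc`.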

And that same support is also in the Game Ready drivers; you just also get bleeding-edge fixes for newly released games, which may cause problems until that driver has been out a while and then becomes the next Studio Driver. That didn't contradict a single thing I said.