Though it has issues, TVAI 3.0.0 is the best thing that's happened to my workflow

My use case: Enhance my small collection of DVDs to Blu-ray (FHD) quality to play on my TV through Roku by means of my Plex server.
Issue 1: some DVDs need trimming, all DVDs need frame rate correction, and some DVDs need deinterlacing.
Solution: ffmpeg pass.
Issue 2: I usually get the best results from Proteus, but only on a cleaner source.
Solution: Run an Artemis pass at 100% scale first.
Issue 3: image sequences are the only output option that is truly lossless.
Solution: Run a final ffmpeg pass to encode to H.265 using libx265. Best quality-to-file-size ratio, and it streams over Wi-Fi with my Plex and Roku setup.
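
To make that concrete, here is roughly what the first and last ffmpeg passes look like when my script runs them. Treat this as a sketch: the deinterlacer (yadif), the ProRes intermediate, the trim times, and the CRF value are stand-ins for whatever fits your source and quality target, not the exact commands the script builds.

```
import subprocess

# Pass 1 (sketch): trim, deinterlace, and fix the frame rate in one ffmpeg call.
# The intermediate codec (ProRes) is just an example of a high-quality mezzanine.
subprocess.run([
    "ffmpeg", "-ss", "0", "-t", "00:06:34.753", "-i", "disc_title.mkv",
    "-vf", "yadif", "-r", "23.976",
    "-c:v", "prores_ks", "-profile:v", "3", "-an",
    "part1_clean.mov",
], check=True)

# Final pass (sketch): encode the upscaled image sequence to H.265 with libx265.
subprocess.run([
    "ffmpeg", "-framerate", "23.976", "-i", "frames/%06d.png",
    "-c:v", "libx265", "-crf", "18", "-pix_fmt", "yuv420p",
    "part1_final.mkv",
], check=True)
```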

Okay now try doing all of that by hand with VEAI 2.6.4.
I have to make a batch file for the first ffmpeg pass. I have to wait for it to complete. Then I can start VEAI going. I have to wait for it. Then I can start the next VEAI pass. I have to wait for it. Then I can start the final encoding. Once it's done, I have to merge all the parts, sound, and subtitles back into the final file, ready to be served. Deleting image files on Windows without a script can take 20+ minutes, just to move them to the recycle bin. With a script, that 20 minutes becomes more like 1.
And that is issue 4: so much dead time waiting. I cannot always be there the moment a step finishes to start the next pass.
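
(Side note on that cleanup: the deletion script is nothing clever. It just removes the frame folder outright instead of letting Explorer shuffle a hundred thousand PNGs into the recycle bin; a sketch of that part, with an example path, is about all there is to it.)

```
import shutil
from pathlib import Path

def clean_frames(folder: str) -> None:
    """Delete an image-sequence folder outright, skipping the recycle bin."""
    path = Path(folder)
    if path.is_dir():
        shutil.rmtree(path)

# clean_frames(r"D:\tvai_work\movie_frames")  # example path, not from the real script
```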

Out comes TVAI 3.0.0 with CLI.
I started a little Python script to automate the steps—now that I can call TVAI through the command line.
It quickly turned into a monster, but it’s awesome!
Now all I have to do is create a settings guide file for each movie I want processed, hit run and it does the rest. I do still have to join the final clips, but I could probably automate that too eventually. No dead time waiting.
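
The core of it is almost embarrassingly simple: every pass is just a command, and Python starts the next one the instant the previous one exits. A stripped-down sketch of that loop; the commands shown are placeholders (in particular, the TVAI step here is not real CLI syntax, the real script builds it from the settings guide file):

```
import subprocess

def run_pipeline(name: str, steps: list[list[str]]) -> None:
    """Run each pass back to back so there is no dead time between them."""
    for cmd in steps:
        print(f"[{name}] running: {' '.join(cmd)}")
        subprocess.run(cmd, check=True)  # abort the chain if a pass fails

# Built from the movie's settings guide file; placeholder commands only.
steps = [
    ["ffmpeg", "-i", "movie.mkv", "clean_placeholder.mov"],  # trim/deinterlace/fps
    ["tvai_ffmpeg_placeholder", "artemis_then_proteus"],     # the TVAI passes
    ["ffmpeg", "-i", "frames/%06d.png", "final.mkv"],        # libx265 encode
]
# run_pipeline("movie", steps)
```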

An added bonus: Python multiprocessing. It's faster to run three of the first pass at a time, but only two instances of TVAI at a time. And the final pass is fastest doing one file at a time, on my computer anyway. So I have it all set to run the right passes with the right number of instances at a time. Also sorting: I make sure it takes the longest files first. This makes a difference because I'm splitting files into parts. If I have three long parts and three short parts, running two at a time, when the third long part starts, the first short part starts, and usually all the short parts can finish while the long part is going. Without sorting, it often took extra time because a long part would start last and run alone for most of its processing time.
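
The scheduling itself is only a few lines with concurrent.futures. This is a sketch of the idea, not the code in the zip; the stage names, the instance counts, and the duration estimate are all things you would tune for your own setup:

```
import subprocess
from concurrent.futures import ProcessPoolExecutor

# Instances allowed per pass (example numbers; tune for your hardware).
WORKERS = {"pre": 3, "tvai": 2, "final": 1}

def run_job(cmd: list[str]) -> int:
    """Run one command for one part and return its exit code."""
    return subprocess.run(cmd).returncode

def run_stage(stage: str, jobs: list[tuple[float, list[str]]]) -> None:
    """Run one pass over all parts, longest parts first.

    jobs holds (estimated duration in seconds, command) pairs. Sorting
    longest-first keeps a long part from starting last and running alone.
    """
    ordered = sorted(jobs, key=lambda job: job[0], reverse=True)
    commands = [cmd for _, cmd in ordered]
    with ProcessPoolExecutor(max_workers=WORKERS[stage]) as pool:
        list(pool.map(run_job, commands))
```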

Added bonus 2: I have more control over what colorspaces and models get used because I can remove unneeded or unwanted things from the commands. If they improve the command generation with updates, the GUI may become better than what I have, but it’s not right now. Again, only for my very specific use case.

I have timers throughout the whole thing. I’m pretty sure it was faster with the 3.0.0.8 beta, but I’ll keep an eye on the times when the new updates come in. I added all the sorting logic after 3.0.0 came out.
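
The timers are nothing more than perf_counter wrapped around each pass, roughly like this (a sketch, not the exact code):

```
import time
from contextlib import contextmanager

@contextmanager
def timed(label: str):
    """Print how long a block took; handy for comparing TVAI versions."""
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {time.perf_counter() - start:.1f} s")

# with timed("proteus pass"):
#     subprocess.run(tvai_command, check=True)
```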

Right now it's taking about 5 hours and 40 minutes to run two 45-minute files.

I wonder if having libx265 compiled into the TVAI ffmpeg would save time by not needing to output to image files and encoding directly into the final format.

4 Likes

That’s really great to hear! Would you be willing to share any more details or files/scripts you’ve made to help others with similar workflows?

The script is huge and very specific, but it could be a good starting point for anyone wanting to do something similar.
FR.zip (4.1 KB)
First thing: I hard-coded the paths. You're going to have to set those up for your computer.
I also hard-coded the number of processes to run at a time. Those will need to be adjusted according to your machine's ability.
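
To give you an idea before you open it, the top of the script looks roughly like this; the paths and counts below are made-up examples, not what ships in the zip:

```
# Globals at the top of the script -- edit these for your machine (example values).
FFMPEG_PATH = r"C:\tools\ffmpeg\bin\ffmpeg.exe"        # build that includes libx265
TVAI_FFMPEG_PATH = r"C:\Program Files\Topaz Labs LLC\Topaz Video AI\ffmpeg.exe"
WORK_DIR = r"D:\tvai_work"

MAX_PRE_PASSES = 3    # first ffmpeg pass instances at once
MAX_TVAI_PASSES = 2   # TVAI instances at once
MAX_FINAL_PASSES = 1  # final libx265 encodes at once
```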

It wants a version of ffmpeg that has libx265, separate from the TVAI ffmpeg.
The flow goes: it looks for a file that lists out all the folders you want to process (example included in the zip). Once that is loaded, it looks at all the mkv, mov, or mpeg files, and if a file has a txt file with the same name, it adds it to be processed. The text file lists out all the options you want to run on it. Example:
-ff True —de-DVD pass/split into parts. It can only split into two parts right now.
-ss null —Set start time.
-t 00:06:34.753 —Set part one duration
-r 23.976 —Set frame rate.
-amq True —Do AMQ pass at 100% scale?
-prot True —Do the hard-coded Proteus setting pass?
-ahq null —Run AHQ pass. This will run before prot if you choose to do both, but it was not meant to do both.
-type png —Input type for the prot pass. Not really helpful, but you could put mov here if you don't have -ff or -amq set to True.
-fin True —Encode result to H.265
-pt2 True —Is there a second part?
-ss2 00:07:21.198 —Second part start
-t2 00:35:21.084 —Second part duration
-clean True —Delete all image files when done?
Anyway, you don't have to include all of the options or even put True; it just checks for null, and anything else is treated as a go-ahead.
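
If it helps, the parsing really is that dumb; something along these lines (a sketch, not the exact code from the zip):

```
def load_options(txt_path: str) -> dict[str, str]:
    """Read '-flag value' pairs from a movie's options file."""
    options: dict[str, str] = {}
    with open(txt_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0].startswith("-"):
                options[parts[0]] = parts[1]
    return options

def wants(options: dict[str, str], flag: str) -> bool:
    """A missing flag or the value null means 'skip'; anything else means 'do it'."""
    return options.get(flag, "null").lower() != "null"
```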

Enjoy.

5 Likes

I'm kind of digging it. It definitely fits well for batching many videos. For experimentation it's kind of a drag.

Previewing is kind of a step backwards while offering more flexibility. I just wish I didn't have to right-click to stop them. There's so much room for a cancel button in that area.

The best thing that could happen to my workflow, since I work on a variety of sources, is better previewing: just a frame sample from several models, queuing up the models in the background (take advantage of RAM!), and the ability to split-screen multiple previews.

Yeah, this is the other end of it: when you've settled on what settings you want to run on a whole lot of things.

FRMuchUpdated.zip (7.5 KB)

Here is my most up to date script.
I made things a little easier, but all of it probably needs a well-made tutorial video on how to use it. I do not know how to make a tutorial video well.

The very basics:
At the top of the script are all the global variables. Change the paths and number of instances as needed.
I have included an options file as an example of how I set it up for a general show.
This options file tells it to use VapourSynth via -vpy true. You'll have to install VapourSynth and everything QTGMC needs if you want to use that. You will also need a version of ffmpeg that can open VapourSynth scripts as input.
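
For reference, a minimal QTGMC .vpy script looks something like this; the source filter, preset, and field order below are assumptions you would match to your own install and your discs:

```
# deinterlace.vpy -- VapourSynth scripts are just Python.
import vapoursynth as vs
import havsfunc as haf  # provides QTGMC

core = vs.core
clip = core.ffms2.Source(r"D:\tvai_work\part1.mkv")  # any source filter you have installed
clip = haf.QTGMC(clip, Preset="Slower", TFF=True)    # TFF/BFF depends on the disc
clip.set_output()
```

An ffmpeg build with VapourSynth support can then read the .vpy file as input; otherwise you can pipe it in with vspipe.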
It also has -mergecommand. To create this, I make a small 10-second clip of the show, load the mkv and mka files into MKVToolNix, set it up how I want it to merge, then copy the command into a text editor, change the paths to rxx variables, and replace spaces with ~.
rxx3 is the final output, rxx1 is the mka file, rxx2 is the mkv file, and rxx4 is the title if you choose to add one.
Sorry, I could have made that a lot easier to understand, but this was made for me. Anyway, the same goes for -appendcommand, except rxx1 is now the part-one file and rxx2 is the part-two file.
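
Under the hood it is just string substitution on that template before the command runs; a simplified sketch (the real script handles quoting and a few other details differently):

```
def expand_command(template: str, rxx: dict[str, str]) -> str:
    """Expand a -mergecommand/-appendcommand template into a runnable string.

    In the template, '~' stands in for spaces, and rxx1..rxx4 are placeholders
    for the real paths/title (for -mergecommand: rxx3 = output, rxx1 = mka,
    rxx2 = mkv, rxx4 = title).
    """
    command = template.replace("~", " ")
    for key, value in rxx.items():
        command = command.replace(key, f'"{value}"')  # quote paths that contain spaces
    return command

# subprocess.run(expand_command(merge_template,
#                {"rxx1": "show.mka", "rxx2": "show.mkv", "rxx3": "final.mkv"}),
#                shell=True)
```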

I know for sure I’m leaving out a lot of information and details. Please ask away.
Would it be a good idea for me to put this up on GitHub? I could slowly fill out details and make it into something more useful to more people that way.

Either way, as always, use this however you will. At the very least, it should give you an idea of what such a script can do and maybe be a good starting spot, if you want to make your own.

1 Like

Looks interesting :+1: I will take a closer look at it later.
Question: Why move deleted files to the recycle bin instead of deleting them once and for all?
You can use FFV1, the FFmpeg lossless encoder (v3.4), instead.
https://trac.ffmpeg.org/wiki/Encode/FFV1
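
For example, something like this writes an FFV1 lossless intermediate instead of a PNG sequence (a sketch; it assumes your ffmpeg build includes the ffv1 encoder, and -level 3 just selects the current version of the format):

```
import subprocess

subprocess.run([
    "ffmpeg", "-i", "upscaled.mov",
    "-c:v", "ffv1", "-level", "3", "-c:a", "copy",
    "lossless.mkv",
], check=True)
```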

Yeah… I didn’t know about that encoder when I started making the script. I still don’t think it would work with how I do interpolation. I count on being able to change the frame rate without losing frames.

It’s not moving the files to the recycle bin. It’s deleting them.