Every month or so I learn something new that I have been doing wrong, and suddenly I need to go back and reprocess the movies that drove me to buy VEAI in the first place.
Here is my list of all the steps I usually have to take and why:
·ffmpeg to convert 29.970 FPS to the original FPS. Usually 23.976, but I have a bunch that seem to be random. If I don’t, the MP4 output from VEAI comes out jerky.
·ffmpeg to deinterlace. That way I can use any model I want in VEAI.
·ffmpeg to cut the original source into clips. If I use VEAI to do this, the audio gets seriously degraded.
Those three can be done in one pass.
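In case it helps anyone, here is a sketch of what that combined pass could look like. The filenames, clip times, and CRF value are placeholders, and the fieldmatch/decimate filter chain assumes telecined NTSC material; adjust for your sources.

```shell
#!/bin/sh
# Sketch of the combined pre-processing pass. run() echoes each command
# instead of executing it; remove the echo to actually run ffmpeg.
run() { echo "$@"; }

SRC="movie.mkv"   # hypothetical 29.970 fps telecined source
VF="fieldmatch,yadif=deint=interlaced,decimate"   # inverse telecine back to 23.976

# One pass: restore the film rate, deinterlace any frames that still show
# combing, and cut out one clip (-ss/-to are placeholder boundaries).
# Audio, subtitles, and chapters are copied so they survive for the final mux.
run ffmpeg -i "$SRC" -vf "$VF" \
    -ss 00:00:00 -to 00:05:00 \
    -map 0 -c:v libx264 -crf 14 -c:a copy -c:s copy clip01.mkv
```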
·VEAI to upscale or interpolate. Output must be image files; if not, dark scenes get lots of gray blocks no matter the CRF value for MP4. I can’t speak for ProRes, because nothing else I own uses it, so I have never tried it. This might fix the jerkiness mentioned in step one on its own, but I keep that step just in case.
·ffmpeg to convert the image files back into a movie file. MKV containers work best. Usually H265 format, since everything I own can handle it.
·MKVToolNix to take the audio, subtitles and chapters from the first ffmpeg pass and put them in the final video.
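For reference, the last two steps look roughly like this. The filenames, CRF, and the `frame_%06d.png` numbering pattern are all hypothetical; match whatever VEAI actually wrote out.

```shell
#!/bin/sh
# run() echoes each command instead of executing it; remove the echo to run.
run() { echo "$@"; }

# Encode the PNG sequence back to an H.265 MKV at the restored frame rate.
# frame_%06d.png is a placeholder pattern for the VEAI image output.
run ffmpeg -framerate 24000/1001 -i frame_%06d.png \
    -c:v libx265 -crf 18 -pix_fmt yuv420p video_only.mkv

# Mux the audio, subtitles, and chapters from the first-pass file into the
# final video; --no-video tells mkvmerge to take everything but the video
# track from firstpass.mkv.
run mkvmerge -o final.mkv video_only.mkv --no-video firstpass.mkv
```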
When I look at all these steps, they could all be automated into one script. Too bad someone removed the command line version of VEAI… But I hear that’s coming back!
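Until the CLI comes back, the ffmpeg/mkvmerge halves can at least be wrapped in one script with a manual pause for the VEAI step. A rough skeleton, with all clip names hypothetical and the real commands substituted for the echoes:

```shell
#!/bin/sh
# Skeleton only: swap the echo bodies for the real ffmpeg/mkvmerge commands
# from the steps above. Clip names are placeholders.
preprocess() { echo "steps 1-3: retime + deinterlace + cut -> $1.mkv"; }
encode()     { echo "step 5: PNG sequence -> $1_x265.mkv"; }
remux()      { echo "step 6: mux audio/subs/chapters -> $1_final.mkv"; }

for clip in clip01 clip02; do preprocess "$clip"; done
echo "(run VEAI on the clips by hand here - no CLI yet)"
for clip in clip01 clip02; do encode "$clip"; remux "$clip"; done
```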
I’m curious what steps other people have to do to get use out of VEAI.
My workflow is the following: I open the video file and then look at random single frames across the video to compare the original frame with my new settings. At the same time I continuously adjust the settings, mainly the Proteus mode. When I’ve found suitable settings, I let the application render the video to single frames. To speed things up I run two instances of the application simultaneously.
I’ll usually start by creating an AviSynth script (.avs file) to handle any issues the input footage may have. This includes problems like incorrect framerates, blended frames from bad deinterlacing, cropping out black bars, as well as lowering the resolution in case it’s already been increased before. It’s best to have your input footage appear as close to the original source as possible. Sometimes I’ll add a Reverse() command in order to reverse the footage, which can give some benefits with temporal tools such as VEAI.
Anyway, then I drag the .avs file onto VEAI which simply treats the script as a video. Doing it this way means I don’t have to render the script beforehand. Lately I’ve been using Proteus and Artemis Dehalo the most in VEAI, but I still give the others a try. If I’m doing a very long video clip, I’ll render it as a PNG sequence since they can be paused, resumed, and recovered in case of crashes. If it’s a smaller video clip, then I’ll go with ProRes. (Having tons of available disk space is essential in the video world but thankfully, huge mechanical hard drives are very cheap.)
If there’s nothing else to do, then I’ll encode the video however I need to, usually with StaxRip. But often I’ll bring the VEAI output footage into After Effects in order to deal with artifacts. VEAI can sometimes alter the colors in ways I don’t like, so I can easily fix that by taking the color from the original footage and applying it to the VEAI output (with layers and blending modes).
Another thing I’ll do with After Effects is make it so that VEAI output is only visible on the hard edges within the image, basically with the Find Edges effect applied as an alpha channel. This helps get rid of those weird repeating patterns that can sometimes show up in plain, flat areas. So even if the original footage is much lower quality and very compressed, it can still be used to further improve VEAI output after the fact.