Pre-process strategies

I would like to see and learn how people pre-process their video before VEAI.

My strategies vary: sometimes something works, sometimes it doesn't.

Deinterlacing works well or badly for a ton of reasons, so I try different approaches scene by scene.
I start with deinterlacing, using two different methods:

The second method gives you a 50 fps video (doubling the frames to avoid the risk of wasting data from the fields), which I sometimes blend down to 25 fps with Resolve's optical flow (far better than Adobe's solution, and far faster too).
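To make the frame arithmetic concrete, here is a minimal Python/NumPy sketch of the naive version of that 50p-to-25p step (plain frame-pair averaging, not the optical-flow retiming Resolve does; `frames` is just a list of image arrays):

```python
import numpy as np

def blend_pairs(frames):
    """Naive 50 fps -> 25 fps: average each consecutive frame pair.

    A crude stand-in for optical-flow retiming, shown only to illustrate
    that bob-deinterlacing doubles the frame count and blending halves it.
    """
    assert len(frames) % 2 == 0, "expect an even number of frames"
    return [((a.astype(np.uint16) + b.astype(np.uint16)) // 2).astype(np.uint8)
            for a, b in zip(frames[0::2], frames[1::2])]
```

Optical flow instead warps pixels along motion vectors before mixing, which is why it looks so much cleaner than a plain blend on moving shots.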

Sharpening halos and other artifacts
I need strategies here: if someone knows how to remove them without wasting too much of the video, I would like to learn. I found various videos and plugins, but most of them also crush the small details that get recognized as halo defects.
Sometimes I got good results by rescaling slowly: from 576p down to 480p (shrinking the halos), then 480p to 720p, 720p to 1080p, 1080p to 2160p, and 2160p back down to 1080p at the end.

I've observed that sometimes reducing a clip's resolution lets VEAI work better: some clips at the original 720x576 (I'm in PAL land) don't give me good results, while scaling them to 640x480, or 640x400 for 16:9 footage, gives better ones.
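One thing worth keeping in mind when picking rescale targets: PAL's 720x576 storage frame uses non-square pixels, so the square-pixel equivalents are what the image "really" is on screen. A tiny Python sketch of that arithmetic (the function name is just for illustration):

```python
from fractions import Fraction

def square_pixel_width(storage_height, display_aspect):
    """Square-pixel width for footage shown at a given display aspect ratio."""
    return round(storage_height * display_aspect)

print(square_pixel_width(576, Fraction(4, 3)))    # 768  -> 4:3 PAL is 768x576 square-pixel
print(square_pixel_width(576, Fraction(16, 9)))   # 1024 -> 16:9 PAL is 1024x576 square-pixel
```

That may partly explain why some non-native sizes (like 640x480, which is exactly 4:3 in square pixels) behave better than the raw 720x576 storage size.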

Color pre-process
Most old video codecs oversaturate the image, so I reduce saturation and vibrance to let VEAI see the small details hidden in saturated pixels.
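That desaturation step can be pictured as mixing each pixel toward its own luma. A rough NumPy sketch of the concept (using Rec.601 luma weights; this is not what Resolve or any particular plugin does internally):

```python
import numpy as np

def reduce_saturation(rgb, factor=0.8):
    """Scale saturation by mixing every pixel toward its Rec.601 luma.

    factor=1.0 leaves the image unchanged, factor=0.0 is full grayscale.
    """
    f = rgb.astype(np.float32)
    luma = f @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    out = luma[..., None] + factor * (f - luma[..., None])
    return np.clip(out, 0, 255).astype(np.uint8)
```

Pulling `factor` below 1.0 leaves more headroom in the chroma channels, which is the effect described above: detail trapped in clipped, saturated colors becomes visible again.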

I've also observed that I sometimes get good results from scaling footage up and then back down:
from 640 to 1920, from 1920 to UHD, then down again from UHD to FHD for the final master, because it gives me a more pleasing result.

Please share your thoughts and your strategies.


Hey Carlo!
First of all - deinterlacing. I used Re:VisionFX FieldsKit a lot, but it often lacks quality compared to AEFX native deinterlacing (interpreting the PAL footage, 768x576). Since I don't like struggling with QTGMC and command lines at all, I am very much looking forward to the upcoming deinterlacing feature of VEAI! :slight_smile:

Sharpening halos: I use "Detect Edges" on my footage and use the result as a mask for slightly blurring or brightening the dark sharpening halos (caused by our Sony PD-150 camera).
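For anyone who wants to try the same idea outside After Effects, here is a rough NumPy sketch of the concept: build an edge mask, feather it, and blur only where the mask is hot. The thresholds are made up, and real halo removal is more selective than this, so treat it as a starting point only.

```python
import numpy as np

def box_blur(img, r=1):
    """Tiny separable box blur (radius r) using edge padding."""
    p = np.pad(img, r, mode="edge").astype(np.float32)
    k = 2 * r + 1
    h = sum(p[:, i:i + img.shape[1]] for i in range(k)) / k   # horizontal pass
    return sum(h[j:j + img.shape[0], :] for j in range(k)) / k  # vertical pass

def soften_halos(luma, edge_thresh=40):
    """Blur only near strong edges, where oversharpening halos live."""
    f = luma.astype(np.float32)
    gx = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))
    gy = np.abs(np.diff(f, axis=0, prepend=f[:1, :]))
    mask = ((gx + gy) > edge_thresh).astype(np.float32)
    mask = box_blur(mask)                       # feather the mask edges
    out = f * (1 - mask) + box_blur(f) * mask   # blend blurred pixels in
    return np.clip(out, 0, 255).astype(np.uint8)
```

The key property, same as the "Detect Edges" mask trick: flat areas pass through untouched, so you only pay the softening cost right where the halos actually sit.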

Upscaling: Currently Gaia HQ gives me the best results when scaling from 768x576 to 1920x1440.

Color correction: I use After Effects for all color correction and compositing. It is important to apply texture management to upscaled footage: denoise with Neat Video and re-grain with Red Giant Renoiser. The results look just like film footage.

See examples here:


Hi @carlo3 and @Marc_Potocnik
We are working on a deinterlacing model. It's likely going to be ready for release at the end of this year or early next year. If you're interested in testing and getting a sneak peek, we have a beta group here:


Hi Emily, thanks a lot. I'm already in the beta group and I'll be happy to test newer versions, especially the deinterlacing models.
I have many videos to test from different sources, from VHS captures to miniDV to HDV and more.


Hi Marc, thanks a lot for the suggestion. In the past I used FieldsKit but I didn't like the results; Red Giant gives me better results on static frames vs. partial motion. But I always check and test in different ways.
I use Resolve for color correction and re-graining, but my problems come before that, during the preparation of clips for VEAI.
Last night I upscaled some terrible videos, 640x480 shot with a bridge camera in 2005, and the results are astonishing…

[please ignore the foreground, but look at the quality of the cat's fur]
At the moment I'm trying to restore an old job, interlaced DVCAM, but the results are not so good. I deinterlaced in different ways (Resolve, After Effects, Red Giant Frames, and more), denoised with Neat Video before export, kept the original pixels, changed resolution, but the result is nothing more than a common upscale: no miracles, no details recognized.
I'm doing something wrong but I don't understand what.
Does VEAI expect a certain resolution? Some frame size? Some color sampling? What kind of source works best for VEAI to activate one AI model or another and recognize shapes and details?
Am I overthinking this?
If someone can give me more info, I'll be happy to learn.


QTGMC in Hybrid does an amazing job with deinterlacing, cleaner than anything else I’ve seen. I couldn’t make it work on Mac, so I use it in Windows (Bootcamp). No command line knowledge is required, although admittedly it is a clunky interface.

I see you are using Artemis LQ. In my experience, this model will produce astounding results with one video and fail completely with another. There is something it likes but I haven’t been able to figure out what it is.

Hi Johnny, follow the link in my first post: there is a ready-made package for using Hybrid on Mac. I'm also not that skilled with Hybrid, but a saint did an amazing job packaging it and writing tutorials for the Mac version.
Me too: sometimes I see miracles, sometimes nothing works, and I would like to understand more about what kind of source the AI expects in order to perform these miracles :smiley:

Thank you, I will check out the Hybrid link.

Maybe someone will be interested: a new alternative to Dainapp. It should be much faster.
RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation

A free Windows app with integrated RIFE interpolation:


I do all my preprocessing in AviSynth (Windows):

  • deinterlace with QTGMC(“very slow”), if source is interlaced
  • if any aliasing/jaggies still remains, reduce it with Santiag(1,1, nns=4)
  • Crop() to active image only, if any letterboxing/pillarboxing is present
  • stabilize chroma with CNR2(“ooo”), if video came from the analog domain
  • further denoise chroma with FFT3DFilter(sigma=5, plane=3, bt=5), if analog source with lots of chroma noise - adjust sigma value to match chroma noise intensity
  • adjust luma levels with SmoothLevels(hq=true), to make sure all useful luma information is captured within the 16-235 Y range
  • convert to 8-bit RGB (aka RGBP8, aka RGB24) using either ConvertToRGB24() or z_ConvertFormat() - I often use the latter because it gives me more control over matrix/primaries/transfer characteristics

I’ve found that luma denoising should be avoided if using Theia models, because those already have denoising built in so denoising it beforehand will just unnecessarily soften the picture before VEAI gets a chance to do its magic.

Luma denoising does seem to help with noisy sources processed with Gaia models because otherwise they might end up boosting noise by accident.


Oh, and if you want to serve Avisynth scripts directly into VEAI without creating an intermediate video file first, the combination of Pismo File Mount and AVFS plugin works very well. It’s particularly helpful if you ever need to use 32-bit Avisynth plugins, since PFM+AVFS can read 32-bit Avisynth scripts even if VEAI can’t.


I'm extremely curious to see the VEAI deinterlacer!
If it's done right, it should be a selling point for VEAI on its own.


Thanks for the tip! I haven’t tried this yet, but I’ve added it to my toolkit. It looks very appealing for those plugins that only have 32bit modules.

Not sure if you're aware, but VEAI can read AVS scripts. It's not available in the UI, but if you enter *.* (star dot star), or the full .avs file path, in the file browser (and hit enter), you can then select AVS files. The plugins all have to be 64-bit and you need 64-bit AviSynth+ installed, but it works great. If something in the chain isn't 64-bit, I've noticed you get a red window…


I just figured out an alternative way to pipe A/V output of 32-bit AviSynth scripts into 64-bit applications that doesn’t require Pismo.

TCPDeliver (download) is a plugin that used to ship with classic AviSynth but was moved into an external plugin when AviSynth+ was released. It allows one AviSynth process to send A/V frames to a local network port, and another AviSynth process (running on the same or a remote computer) to read those frames over TCP. The functions are documented here.

Let’s say that source32.avs is the script which uses 32-bit plugins. Add TCPServer() to the end of that script and load it in 32-bit VirtualDub2. (You probably don’t need to use VirtualDub specifically; I’m sure other 32-bit video apps would work too, but I tested it in VDub.)

While keeping VDub open, create another script, let’s call it client64.avs, and add just a single line to it (TCPSource is the client-side counterpart of TCPServer in the TCPDeliver docs):

    TCPSource("localhost")

That’s it. When you load client64.avs in VEAI, 64-bit AviSynth will request the A/V frames from localhost over TCP, and the 32-bit AviSynth instance running inside the 32-bit VDub process will serve the requested frames from source32.avs.


Hello everyone! Interesting solutions have been given here. I also use a Mac, hence I can’t use AviSynth (and that Hybrid package is too confusing for me), but I use FFmpeg, which is kind of similar (via the FF-Works GUI, which saved my life).

I deinterlace with FFmpeg using the Bob Weaver method, which gives me the cleanest field results, and I used to slightly denoise the footage with Neat Video in Premiere. However, I realized that the footage would come out of VEAI much softer than if I fed it in without the Neat denoise (VEAI has a built-in denoiser anyway, and this way it reads the noise/footage information better).
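For anyone else on Mac wanting the same route from the command line: "Bob Weaver" is the bwdif filter in FFmpeg, so an invocation along these lines should bob-deinterlace (doubling the rate) and upsample the chroma in one pass. This is a sketch with placeholder filenames, not the poster's exact settings:

```python
# Build the ffmpeg command as an argument list (e.g. for subprocess.run).
bwdif_cmd = [
    "ffmpeg", "-i", "input.vob",
    # bwdif in send_field mode bob-deinterlaces (25i -> 50p);
    # the format filter upsamples 4:2:0 chroma on the way out
    "-vf", "bwdif=mode=send_field,format=yuv422p10le",
    "-c:v", "prores_ks", "-profile:v", "3",   # ProRes HQ
    "output.mov",
]
print(" ".join(bwdif_cmd))
```

Doing the chroma upsample in the same pass as the deinterlace avoids a second generation of re-encoding before the footage reaches VEAI.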

With the latest versions of VEAI, Artemis v9 has been my go-to, since it’s the fastest and most visually satisfying solution. With the recent 1.9 update I was totally blown away by the new deinterlacing models included! The DV and TV models give much sharper results and convert to 60p smoothly; I guess this is what QTGMC+AviSynth users have been enjoying all along, but as a Mac user I hadn’t had that experience. I also noticed it gives much more detailed quality than deinterlacing with Bob Weaver, which, while amazing, gives a blurrier result.

I asked this recently in another topic but it seems it wasn’t the right place, so I’ll ask here again: I have some SD (DVD) footage that I want to upscale, and when I feed it into VEAI, while it works great, I get the “common” colored deinterlacing lines in high-motion areas. I’ve seen this happen before with the FieldsKit, Red Giant, etc. plugins; Bob Weaver in FFmpeg removes them perfectly, but as I said, it gives a blurrier image compared to feeding the new VEAI models directly.

A user told me this happens because of YUV 4:2:0 subsampling and a color-space mismatch, and recommended upsampling to 4:4:4. I tried that (rendering in FFmpeg to YUV 4:4:4, color space Rec.709, in a ProRes container) but no luck… I still get those annoying red/yellow/greenish lines in some motion areas for a few frames.
Does anyone have ideas on how to get rid of them while keeping the footage interlaced so VEAI can do all the work, OR… a better deinterlacing method that’s sharper than Bob Weaver?

Thank you very much in advance!