VEAI New Features Request

Here is a list of features I would love to see in one of my new favorite tools, VEAI v1.71.

Adobe Premiere/After Effects and OpenFX Plug-in [Top Request]
The ability to use the powerful features of VEAI without leaving my favorite editing and vfx software would be a gift from the heavens.

Save and modify presets
The ability to name and save custom presets as well as modify Artemis/Gaia presets would be an amazing time-saver and increase user control.

Save projects
Saving and naming projects would let users revisit previous projects when adjustments to the media are needed, without keeping extensive notes of Topaz Video Enhance AI settings.

De-interlace model
Topaz Video De-interlacing AI with adjustable settings.

Bit-rate instead of Compression Factor
Bit-rate gives more accurate control of the total bit-rate/final file size than Compression Factor. Add a bit-rate/quality control to the QuickTime (.mov) export as well.
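The file-size argument here comes down to simple arithmetic: given a target size and duration, the required video bit-rate follows directly. A sketch (the 192 kbit/s audio allowance is an illustrative assumption):

```python
# Sketch: derive the video bit-rate needed to hit a target file size.
# The default audio allowance (192 kbit/s) is an illustrative assumption.
def video_bitrate_kbps(target_mb, duration_s, audio_kbps=192):
    total_kbits = target_mb * 8192  # 1 MB = 8192 kbit
    return total_kbits / duration_s - audio_kbps

# e.g. a 700 MB target for a 2-hour (7200 s) video leaves ~604 kbit/s for video
```

This is exactly the control a Compression Factor slider cannot give you, since the factor-to-size mapping depends on the content.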

Clip duration
An accurate reading of the video frame count between the In and Out points.

OpenEXR/DPX import/export
CGI artists will rejoice at the addition of these VFX formats.

Keyboard shortcuts
Time-saving hotkeys.

In/Out points jump-to buttons
Clicking the brackets outside of the In/Out frame count would move the cursor to that In/Out point along the timeline.

Video refresh control
The ability to disable video refresh during preview/batch-process rendering, via a Caps Lock or checkbox toggle, would decrease render time.

Video Enhance AI Timeline
Adding a video/audio timeline to the batch video area for user control of the following:

The ability to set keyframes along the timeline, allowing the user to apply different video enhancement settings to any part of the media.

Video containers with multiple audio tracks
Adding multiple audio tracks to video exports for increased language and audio channel versatility.

Modify audio settings and file export
Audio encoder, bit-rate, channel, and sample-rate modification, plus the ability to export separate audio files (.wav/.mp3).
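As a stop-gap, separate audio export is already possible with FFmpeg outside of VEAI. A minimal sketch of the call such a workaround would use (filenames are placeholders):

```python
# Sketch of a workaround: extract the audio track to a separate file
# with FFmpeg (-vn drops the video stream). Filenames are placeholders.
def extract_audio_cmd(src="input.mp4", dst="audio.wav"):
    return ["ffmpeg", "-i", src, "-vn", dst]
```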

Subtitles/Closed caption
Subtitle/closed-caption import would be ideal for hard-coded (burned-in) video export.


VEAI basically just uses FFmpeg/libx264 to encode the video, but perhaps there could be an option to customise the command line used to encode the video/audio.

Mind you, bitrate-based encoding tends to work better when you use 2-pass encoding, and I'm not sure this would work well with VEAI, which upscales the video on a frame-by-frame basis. It would have to output the frames as PNG images as an intermediary step, doing the first pass during the upscaling process and the second pass after all the frames have been upscaled.

Otherwise, you could do the 2-pass encoding yourself after outputting as PNG.
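The do-it-yourself route above boils down to two FFmpeg invocations over the exported PNG sequence. A sketch (the frame pattern, frame rate, and bit-rate are assumptions, not VEAI's actual output naming):

```python
# Sketch of 2-pass encoding over an exported PNG sequence: pass 1 only
# writes the stats log (-f null, no output file), pass 2 produces the
# final file. Frame pattern, rate, and bit-rate are illustrative.
def two_pass_cmds(pattern="frame_%06d.png", fps="23.976",
                  bitrate="8000k", out="upscaled.mp4"):
    base = ["ffmpeg", "-framerate", fps, "-i", pattern,
            "-c:v", "libx264", "-b:v", bitrate]
    pass1 = base + ["-an", "-pass", "1", "-f", "null", "/dev/null"]
    pass2 = base + ["-pass", "2", out]
    return pass1, pass2
```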

When I process large video files, a movie for example, I save to images so I can stop and continue, or just continue if the application crashes unexpectedly. I can only use JPEG, which unfortunately loses noticeable quality compared to what I see in the VE preview window, because when I save PNG files instead, VirtualDub freezes when I try to load them. Maybe the memory consumption, even with 32 GB of RAM, is still too much for that many PNG files, so I would like to see the JPEG files saved at a higher quality setting!


I am all for Topaz giving us as many options as possible, but like many here I use all sorts of workarounds and other software to pre- or post-process. In the case of needing higher-quality JPG files, I would just produce them myself: save as PNG, then use IrfanView to batch-process to JPG at whatever quality level I want.
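FFmpeg can do that same PNG-to-JPG batch step, if you'd rather skip a second tool. A sketch with placeholder filenames (-q:v uses the MJPEG encoder's 2-31 quality scale, where 2 is near-best):

```python
# Sketch of an FFmpeg alternative to the IrfanView batch step: re-encode
# a PNG sequence to high-quality JPEGs. -q:v uses mjpeg's 2-31 scale,
# where 2 is near-best quality. Filenames are placeholders.
def png_to_jpeg_cmd(src="frame_%06d.png", dst="frame_%06d.jpg", quality=2):
    return ["ffmpeg", "-i", src, "-q:v", str(quality), dst]
```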

That is what I did before VE. I extracted the video to PNG images and let Gigapixel improve them. But that takes far more time than just using VE.

I found that using Gigapixel on videos created too much flicker - the big advantage of VEAI when it came along was that it paid at least some attention to the frames before and after the one it was working on, while Gigapixel had no such knowledge.

The single biggest win for me would be the ability to switch models within a video. Old clips, Artemis-LQ; new material, Artemis-HQ; CGI, GAIA-CG. Just let us zoom in on the timeline and set start points for the given model. We don't even need an end point: "Model X" starts here, and if we don't set another start point elsewhere, it continues until the video is done.


I haven’t seen a need for that, but if I did I would just do two recordings and concatenate them.

I’d like to be able to output to an mkv container rather than an mp4 container.

MP4 supports a more limited set of audio codecs. So it would be easier to just copy the audio stream directly into an MKV container rather than having VEAI re-encode it to AAC to make it work with the MP4 container.

But MKV also tends to work even when the file is incomplete: you can start playing it almost immediately after writing begins, which is useful for checking that the output looks the way you'd like.
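Until native MKV output exists, the same result can be had by remuxing the finished MP4 afterwards. A sketch with placeholder filenames (-c copy rewraps both streams without re-encoding anything):

```python
# Sketch: rewrap an MP4 into an MKV container without re-encoding
# either stream (-c copy). Filenames are placeholders.
def remux_to_mkv_cmd(src="input.mp4", dst="output.mkv"):
    return ["ffmpeg", "-i", src, "-c", "copy", dst]
```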


I would like no gamma or color shifts when upscaling dslr 1080p > UHD

What we need above all, in my opinion, is a way to completely customize the encoding parameters. This should be fairly easy, since VEAI relies on FFmpeg; it's only a matter of exposing the command line. Sure, it's not user-friendly, but you can make it an advanced option for all I care.

VEAI wants to be professional-grade software, but the encoding is very poorly handled, and it's not a tool I would dare use in a professional setting. I would hesitate to encode a video using diamond motion estimation and a single reference frame… There are a ton more settings needed for this software to have any chance of being more than a gimmick targeted at casual users: codec tune and preset, colorspace, an actual cropping tool, etc.
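To illustrate the kind of control being asked for, here is the sort of libx264 parameter set an exposed command line would permit; the specific values are examples, not recommendations:

```python
# Sketch of the libx264 controls an exposed command line would allow:
# preset, tune, and x264's own parameter string for motion-estimation
# method and reference-frame count. Values are examples only.
def x264_args(preset="slow", tune="film", me="umh", refs=4):
    return ["-c:v", "libx264", "-preset", preset, "-tune", tune,
            "-x264-params", f"me={me}:ref={refs}"]
```

`me=umh` and `ref=4` address exactly the diamond-search/single-reference complaint above.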

Deinterlacing is superfluous in software that is already compatible with the AviSynth scripting engine, by the way. You can open your script with a filter like QTGMC and feed that to VEAI.
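The workflow described is roughly: write a small .avs script and point VEAI at it as the source. A minimal sketch, assuming QTGMC is installed in AviSynth (the source path, field order, and preset are placeholders):

```python
# Sketch: generate a minimal AviSynth script that deinterlaces with
# QTGMC, ready to be opened as a source. Path, field order (AssumeTFF),
# and preset are assumptions for illustration.
avs_script = '''AviSource("capture.avi")
AssumeTFF()
QTGMC(Preset="Slower")
'''
with open("deinterlace.avs", "w") as f:
    f.write(avs_script)
```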

NEED output mov format by command-line, plz

So I feel like this would be a heavy lift and I don’t know how much of a help it would be given the amount of input data I assume is needed to train models here, but here goes…

Would it be possible to make some sort of module whereby users can contribute to model training? I'm thinking of something where you input an SD source along with an actual HD remaster of the same material (along the lines of Star Trek: TNG), and you sort of crowd-source the model making. Given the number of old movies and TV shows out there that were actually remastered, this could potentially be more useful than training on stock footage of different resolutions, especially for people who own digital copies of the material and have GPUs capable of doing the work. While experimenting with VEAI on TNG and comparing the results to the actual remaster (yes, I own both legally), I estimate it would offer ~10 million frames to train on.


I wonder if anything could be done to reduce any edge enhancement/sharpening before resizing.

Most of the time it's not too noticeable, but high-contrast areas really show it up.

As for deinterlacing: even if QTGMC is indeed the best option out there, especially for badly deinterlaced progressive footage, it still remains a traditional method. Just as AviSynth offers the best traditional video processing tools, I believe an AI-powered deinterlacer could potentially make a huge difference, for instance by deinterlacing without the need to apply optimized smoothing, denoising, or post-processing.
So I do not agree that deinterlacing would be superfluous. But of course this only makes sense if the AI deinterlacer is implemented inside VEAI and not made as a separate tool.

I use QTGMC all the time, both input type 0 (for true interlace) and input type 2 (for badly deinterlaced progressives). What I don't see is how they could grab the output as it occurs for AI processing. I'm not sure there is demand for, say, checking a deinterlace option and having to wait until that completes before any AI work is done.

Deinterlacing done by a neural network (I know the name of the software has AI in it, but we know that's marketing mumbo jumbo, so let's make sure we're talking about the same thing here) wouldn't be much better, in my opinion, than an intrafield interpolator like QTGMC, considering it already does a lot of analysis to predict the missing lines, as well as compensating with motion estimation. (Smoothing and denoising can also be controlled, and they are not necessary for a good result.)

Now of course I'd be interested in seeing the results of a deinterlacer that used a GAN to perform in-painting to restore the missing pixels, although I don't expect a massive improvement over QTGMC. But that's a whole different project that I don't expect to ever be part of VEAI.


Yes, I agree AI is marketing, but on the other hand it helps categorize the approach the software uses.

As for the subject itself, you are right: if Topaz Labs actually manages to train efficient de-interlacing models that can recreate detail lost in the process (like what QTGMC already does with a "traditional algorithmic approach", via options such as SourceMatch and Lossless), or even find and remove interlacing artifacts that QTGMC fails to detect or address (I have quite a few of these in some old family footage that a family member "ripped" before throwing the Hi8 tapes away…), chances are high that they would release it as a separate tool.

I can't imagine them integrating it into VEAI at all, because I believe it requires extensive development work (the fact that they started working on it a while back without a single release or teaser yet is a strong indication of how difficult it must be, or maybe it just doesn't work at all!).

@infernoproductionz
I also wanted to react to this: VEAI is not meant to be a complete video editing suite.

Most of the things you are asking for would indeed be nice, but I think multiple audio tracks and subtitles are somewhat off-topic, in the sense that all we'd really need is for VEAI to pass them through and not re-encode or edit them. That would let us use a proper encoder with all the required options once the video has been enhanced.

That being said, options to save presets, projects, and processing queues, to choose ProRes quality options, to control framerate, and to let us use different models for different "scenes" would indeed be a major improvement.


As a happy VEAI customer, I believe the prosumer market is the perfect niche for the software, and AI deinterlacing would be a much-needed saver of both time and storage. I have over 100 hours of DV footage I would love to upscale in one render session; at 3-5 seconds per frame to denoise, sharpen, deinterlace, and upscale, as opposed to doing the conversion with Premiere and Magic Bullet at 8-12 seconds per frame, the decision to use VEAI is easy.

Agreed, more encoding options would be welcome.