Video Enhance AI thoughts and requests

It’s written in the newsletter:

✓ Upgraded AI Models
✓ DeNoise/Deblock processing
✓ Audio track preservation (if output mp4)

It also indicates the GPU VRAM usage when processing

Some additional info on how the program handles certain video is really needed for professional users. Mainly:

  • The program will now open interlaced video. What deinterlacing algorithm is used?
    This actually isn’t true… most DVDs are actually progressive but flagged as interlaced. It would have been nice if Topaz had communicated here.

  • The program will open anamorphic video (i.e., DVD) and desqueeze 720x480 or 720x576 according to the metadata (for example 720x576 at 1.78 = 1024x576). What sort of algorithm is used for the desqueezing?

  • In looking in the tldb directory, I can see there are 1x, 2x and 4x training data. How does the program determine what data is used? For example, if I scale 1024x576 to 1920x1080, that’s 188%. Is it using the 2x model data? At what percent does it decide on which training data? If the final result is 202%, does that mean it switches to 4x?

  • For final resolutions that are not 2x or 4x, what scaling algorithm is used to get to the final resolution? For example, if I scale 480x360 to 1920x810, that’s 2.25x. Does that mean it uses the 4x training data, then downscales the result, or the 2x training data, then upscales the result? If so, what downscaling or upscaling algorithm does it use (lanczos, spline36, spline64, etc.)?
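As an aside for anyone checking their own jobs against these thresholds: the percentage arithmetic is easy to script (a sketch; which model Topaz actually picks at a given percentage is exactly the unanswered question above):

```shell
# Integer scale percentage between a source and target height
# (576 -> 1080 here, i.e. PAL DVD to 1080p)
src_h=576; dst_h=1080
pct=$(( dst_h * 100 / src_h ))   # integer math truncates: 187, not 187.5
echo "scale factor: ${pct}%"
```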

03/22/2018 - Questions still not answered. Why won’t Topaz communicate? :frowning:


It’s clear that Nvidia has a larger market share than AMD, but with the RDNA architecture that trend is changing and AMD’s share is growing.
Just out of curiosity, it would be good to know why AMD GPUs aren’t supported. Do Nvidia and Intel implement something that AMD doesn’t? Is this specific to Video Enhance AI, or is it general to the AI engine in all the Topaz apps?


I’m sure there are many things being worked on to improve Video Enhance AI, so maybe the concerns I express here will be nonexistent after a few more versions come out.

After processing a clip with Video Enhance AI, I’m not able to do any further processing of this clip in my video editing software. I’m using Power Director 16. A Video Enhance AI clip can be placed into the timeline just like any other clip, but I can’t use any of PD 16’s tools to do any of the many things it allows me to do to an unprocessed clip. I can’t even edit the length of the clip. At this point, anything I need to do to a clip needs to be done before it even gets imported into Power Director 16.

It works okay on small videos. Anything more than 1 hour of processing and it just crashes without an error. Tried 2 PCs with different setups. It’s simply too crashy to be considered a professional tool.

Faulting application name: Topaz Video Enhance AI.exe, version:, time stamp: 0x5e612cd7
Faulting module name: ucrtbase.dll, version: 10.0.18362.387, time stamp: 0x4361b720
Exception code: 0xc0000409
Fault offset: 0x000000000006db8e
Faulting process id: 0x26b8
Faulting application start time: 0x01d5f75b35964d06
Faulting application path: C:\Program Files\Topaz Labs LLC\Topaz Video Enhance AI\Topaz Video Enhance AI.exe
Faulting module path: C:\WINDOWS\System32\ucrtbase.dll
Report Id: b8da3011-11f4-4021-ad42-cbb85f7d9b38
Faulting package full name:
Faulting package-relative application ID:

It barely uses half of my mid-range GPU, which is a good sign that it’s not optimized.

Would buy a couple of copies for work, but considering the product can’t remain open long enough to finish a decent-sized job, it’s not worth the aggravation and wasted time.


Hi, if you are using Windows 10, please raise a support request at the main website. I take it you meet the technical requirements, which are:

  1. Needs an Nvidia GPU with >4 GB VRAM to run fast (it can run on the CPU, but quite slowly).
  2. Cannot handle “interlaced” video directly; footage needs to be de-interlaced first.
  3. Windows 10 platform ONLY, or Mac OS 10.12 or higher.

I tried a DVD with interlaced PAL. The VOB files were remuxed (not re-encoded) into an mkv container using mkvmerge. Checked with MediaInfo, the video stream in the mkv is definitely interlaced. The mkv opens and deinterlaces in Video Enhance AI. So I think you should clarify what you mean by “Cannot handle ‘interlaced’ video directly, footage needs to be de-interlaced first,” because what you’re saying doesn’t appear to be true. Edit - I was wrong; the DVD I used was only flagged as interlaced but was encoded as progressive, so Video Enhance AI handled it. It would have been nice if someone from Topaz had communicated here.

terryleemartin13, as a workaround you could save / export as png in Video Enhance AI, if you have the disk space. That way, if it crashes, you should be able to restart the program and pick it back up at the exact frame where it crashed.
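If it does crash, you can check which frame it got to before restarting (a small sketch, assuming the filename_000001.png style of numbering used below; adjust the prefix to your own output name):

```shell
# Print the highest-numbered frame PNG, i.e. where to resume from
last_frame() {
  ls "$1"_*.png 2>/dev/null | sort | tail -n 1
}
# Usage: last_frame filename
```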

To encode the individual png frames into video:

ffmpeg -fflags +genpts -framerate 25 -f image2 -i filename_%06d.png -c:v libx264 -profile:v high -level 4.0 -preset veryslow -crf 10 -pix_fmt yuv420p filename.mp4


%06d matches a six-digit, zero-padded number at the end of the file name (filename_000001.png, filename_000002.png, etc.).

“-framerate 25” is very important; it needs to be changed to the exact frame rate of the original video (29.97, 30, 24, etc.). Note that it goes before the -i input option, so the images are read in at that rate rather than resampled afterwards.

Finally, “-crf 10” is the constant rate factor, which controls quality; you can change this as you please. 23 is the default, and 17 or 18 is considered “visually lossless,” so 10 is a very high quality that is suitable for further processing until you get your final result. If the output from Video Enhance AI is the final result, you might change this to 18 or 23.

Alternatively, you could split your source video up into say 15 minute segments, process those separately in Video Enhance AI, then join them back when finished.

For source video:

ffmpeg -i input.mp4 -c copy -map 0 -segment_time 900 -f segment output%03d.mp4

“-segment_time 900” is the size of your segments in seconds, 15 minutes in this case.

Process output000.mp4, output001.mp4, etc. in Video Enhance AI. When finished, join the results with ffmpeg’s concat demuxer (the “concat:” protocol only works for MPEG-TS style streams, not MP4). First make a text file, list.txt, with one line per segment:

file 'output000.mp4'
file 'output001.mp4'
file 'output002.mp4'

then run:

ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4

Note that the above ffmpeg commands for splitting and joining do NOT re-encode anything; the streams are bit for bit the same, just split / combined.
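Rather than typing every segment name by hand, the list file for ffmpeg’s concat demuxer can be generated with a loop (a sketch, assuming the output*.mp4 naming from the split command above; like the commands above, the join copies streams without re-encoding):

```shell
# Write one "file '...'" line per segment, in order, for the concat demuxer
make_list() {
  for f in output*.mp4; do printf "file '%s'\n" "$f"; done > list.txt
}
# Usage: make_list
```

Then join with: ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4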


In my testing of anamorphic DVDs, I found that Video Enhance AI produced far superior results if the content is NOT desqueezed first. If you open an original anamorphic DVD in Video Enhance AI with the 16:9 metadata left alone, Video Enhance AI will desqueeze first, THEN scale. I can tell it’s a “fast” desqueeze. The results don’t look bad, but the program then doesn’t seem to do nearly as much “enhancement” as it does if the source video is NOT desqueezed first. So the program should really do a 2x or 4x scale on the source video at 1:1, overriding any 16:9 flags and treating the source as 1:1.

To force 1:1, simply import the source DVD VOB files into MKVToolNix GUI and, under “Video properties”, set “Display width/height” to match the source DVD (usually 720x480 for NTSC and 720x576 for PAL). Then import the mkv into Video Enhance AI; it will come in as 1:1 and not stretch or desqueeze. You can then process the result with ffmpeg or Hybrid (AviSynth) to the final dimensions of 1920x1080 (or 3840x2160, etc.) using your scaler of choice (Lanczos, Spline144, etc.).
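The final desqueeze/resize step can be scripted with ffmpeg, too (a sketch wrapped as a small shell function; the 1920x1080 target, Lanczos scaler, and CRF 18 here are example choices, not anything prescribed by this workflow):

```shell
# Desqueeze/resize a 1:1-processed clip to final 16:9 dimensions with Lanczos
desqueeze() {
  ffmpeg -i "$1" -vf "scale=1920:1080:flags=lanczos" \
    -c:v libx264 -preset veryslow -crf 18 -c:a copy "$2"
}
# Usage: desqueeze veai_result.mp4 final.mp4
```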

But Video Enhance AI should really do this on its own for anamorphic content, since scaling such content at 1:1 FIRST, then desqueezing and resizing to the final dimensions, produces superior results.

Video Enhance AI could use pro formats like ProRes, DNxHD and DNxHR, as well as uncompressed video in MOV and AVI containers. AI processing could be done on the active frame. Features from DeNoise AI, Sharpen AI, Adjust AI and Gigapixel AI could be added too. Video editor programs like Pinnacle Studio use public codecs like FFmpeg. Perhaps Video Enhance AI would work better as an effect plugin for Adobe or Vegas 17. The standalone Video Enhance AI could read the metadata to denoise and enhance footage.

I’m rather new to processing video. I’ve done a lot of re-encoding and simple processing, but for the most part, this is pretty new to me. I’m having a blast and learning lots of things that most people probably learned years ago…

I’ve been struggling with a single video sourced from DVD. It’s stored in an MKV container with all the video, audio, subtitles and chapters untouched. Every attempt has failed at sending this video through Video Enhance AI – the audio is ALWAYS out of sync.

Through the support forums, I found this was a symptom of interlaced video. A small sample encode made me think I was right, but when I let the whole thing finish the audio was so far off at the end that it was unusable.

I have a hunch that my problems are related to some assumptions that the program is making with FFMPEG, both extracting the frames and putting them back in the video.

I think my problems, and many others could be solved by allowing more access to the parameters that are passed to FFMPEG. I would be happy even if the options weren’t available in the GUI – maybe a config file or something. Let me specify the default commands for the different operations, such as source framerate, encoder, and codec.

I’d also like to see the app be able to take input from AviSynth. I spent quite some time trying to get my video deinterlaced, and found that AviSynth worked great. It’s not difficult to set up, but understanding it can be, and it took me a couple of tries to get it running. If the app could take input from AviSynth, you could do pre-processing (deinterlace, inverse telecine, etc.) all in one go.
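For what it’s worth, ffmpeg builds compiled with AviSynth support can already read .avs scripts directly, which gets partway to this (a sketch; deinterlace.avs and its contents are hypothetical, and QTGMC needs its own plugin chain installed):

```shell
# deinterlace.avs (hypothetical) might contain:
#   FFVideoSource("input.mkv")
#   QTGMC(Preset="Slower")   # bob deinterlace, 25i -> 50p
encode_avs() {
  ffmpeg -i "$1" -c:v libx264 -preset veryslow -crf 16 "$2"
}
# Usage: encode_avs deinterlace.avs deinterlaced.mp4
```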

I’d also love to see a command line mode. I would prefer to process each frame independently, and I know you can do that with Video Enhance AI, but it’s time consuming to load them all in. All of the problems that I’ve had with the program would actually be solved if it had a command line batch processing mode…

I could pass the mkv to AviSynth to deinterlace, handing it off to ffmpeg to split the video into frames. When that job was done, I could pass each frame to Video Enhance AI (via command line!) and get an upscaled, optimized version. After all files were processed, I could feed them back to ffmpeg to reassemble the video… all that work with a single script driving it.

Awesome program, though… I’ve been happy with the video results, and I just need to fix my audio issues (which are in essence video issues – playing too fast due to lost timing data).

Some thoughts:

What is the framerate on the original DVD? And, what is the frame rate on the output from Video Enhance AI? They should match exactly. If they do, it’s weird you’d get the audio going out of sync.

You could try processing ONLY video, not audio, with Video Enhance AI. Deselect everything but video in MKVToolNix. Then remux using MKVToolNix with the new video stream from Video Enhance AI.

Check frame count on source and output and see if they match. On mkv from DVD:

ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 input.mkv

This is slow but should give an accurate frame count. It counts full frames, not interlaced or “half” frames (you would double the result for that). Run the same command on the Video Enhance AI output. The numbers should be the same.
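This is easy to wrap as a helper so you can run it on both files (a sketch around the ffprobe command above):

```shell
# Print the decoded frame count of the first video stream
count_frames() {
  ffprobe -v error -count_frames -select_streams v:0 \
    -show_entries stream=nb_read_frames \
    -of default=nokey=1:noprint_wrappers=1 "$1"
}
# Usage: count_frames input.mkv ; count_frames output.mp4
```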

If they are, you could try (if you have the drive space) extracting every frame as a png from the Video Enhance AI output, then recombining them with the exact frame rate from the DVD.

To extract all frames:

ffmpeg -i "Output.mp4" Output%06d.png

To re-encode all frames at the exact frame rate of the source:

ffmpeg -framerate 24 -f image2 -i Output%06d.png -c:v libx264 -profile:v high -level 4.0 -preset veryslow -crf 18 -pix_fmt yuv420p Output2.mp4

The “-framerate 24” is important; make sure to change it if needed to match the source DVD exactly, and note that it goes before the -i input option. “CRF” is the quality setting: 23 is the default, and 17 or 18 is considered visually lossless. The veryslow preset should give the best result.
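One more sanity check worth doing: frame count divided by frame rate should equal the source duration. For example, with a hypothetical count of 43200 frames at 24 fps (integer rates only; 29.97 needs floating-point arithmetic):

```shell
# Duration in seconds = frame count / frame rate
frames=43200; fps=24
echo "duration: $(( frames / fps )) seconds"   # 43200 / 24 = 1800 s = 30 min
```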

Some freeware tools you will probably find useful (hope these URLs are OK to post here):


(Hybrid is a front end for both AviSynth and VapourSynth. Just install and run; then you can select AviSynth filters and options through a graphical user interface.)

Here are my suggestions for improvements:

My platform/hardware: MacBook Pro (2018), macOS 10.15.3, Razer Core X Chroma eGPU with PowerColor Radeon RX 5700 XT “Red Devil”.

  1. Please add eGPU/AMD support! Top priority for me as Mac user. There seem to be a lot of efforts from Apple to improve GPGPU/eGPU support, CoreML, PlaidML etc.

  2. Choice of codecs for export: ProRes, H.265, etc., and maybe adjustable bitrate as well?

  3. BONUS FEATURE: I ran into a noise problem after digitizing VHS video: there is a pattern noise that I can’t get rid of. Even DeNoise AI fails at removing this pattern and increases its strength instead. It would be awesome to remove this pattern (viewed from above, it looks like little sand dunes: wavy little stripes across the image).


Meanwhile I tested the new version with a lot of my home video footage, mostly the same footage I also tested with the earlier versions.
In the end, the old version enhanced the videos much better, even though the old version damaged faces and small structures. But the denoising and deblocking now work much better.
The main problem with the new version: it adds structures, sometimes smaller, sometimes bigger, to the footage. So in the end, the new version doesn’t really help with enhancement.
The only advantages of the new version:

  • it often preserves faces much better than the old version
  • rendering speed has improved significantly
  • audio is now also rendered

Unfortunately it isn’t possible to add images to the posts here in order to show where the problem with the added structures/lines in the new version lies :frowning:

You can post the images on Imgur, then post links to them here.

ok, thx

Meanwhile, the best and most impressive results I get are when I scale the footage up or down to 640x480 and then use the CG render mode at 200%.
I guess the AI training doesn’t cover resolutions below or above 640x480.
This should be fixed in time.


In addition: converting all footage to 640x480 AND 50fps before using it in Video Enhance AI mostly preserves structures, increases sharpness, and mostly prevents the generation of new structures that weren’t part of the original footage.
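That pre-conversion can be done in one ffmpeg pass (a sketch wrapped as a function; yadif=1 bob-deinterlaces, so 25i becomes 50p, and the CRF 16 quality setting is an example choice):

```shell
# Bob-deinterlace (doubling the field rate to full frames) and scale to
# 640x480 before feeding the clip to Video Enhance AI
prep_for_veai() {
  ffmpeg -i "$1" -vf "yadif=1,scale=640:480" \
    -c:v libx264 -preset slow -crf 16 -c:a copy "$2"
}
# Usage: prep_for_veai tape_capture.mkv pre_veai.mp4
```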


Seeing the comments about interlaced video, I will definitely try this in my next test (I had assumed you could not use interlaced video). Quick question: does anyone know whether, if I import 25i video, the exported video will be 50p? Currently I encode interlaced videos with bob deinterlacing and double the frame rate to keep the same motion. Does/will Video Enhance do this automatically?


Most of the footage I work on with Video Enhance AI is interlaced, e.g. old VHS and S-VHS footage.
Before I use Topaz Video Enhance AI, I consistently render it to 640x480 MP4 at 50fps.
Video Enhance AI itself has problems with interlaced footage; the results are not useful.
VEAI also doesn’t increase 25i to 50p by itself, so that’s the reason why I deinterlace and go to 50fps before using VEAI.


OK, the settings I named before work pretty well in post-production for footage of, e.g., towns.
For landscape footage, as I’m now finding in testing, no settings and no up-/downscaling before using VEAI help:
VEAI consistently adds small structures to trees and grass. The results are never useful.

@Topaz: are there any plans to fix that?
