Video Enhance AI v1.6.1

I’m speechless; it’s an amazing result compared with mine. I was extracting those files with MakeMKV and then deinterlacing them with Handbrake. I’ll start working with VapourSynth and then enhance with Gaia HQ to achieve those amazing results. Thank you so much, you helped me a lot!
Only one question: should I denoise before Gaia HQ, or avoid that?

Been reading your posts today with great interest, and those are some pretty amazing outcomes. Owning the Voyager DVDs myself, I can attest that as a source they aren’t very good, so getting those results is mighty impressive.

I’m still pretty new at all this, and previously just used Handbrake for deinterlacing only. I’ve downloaded everything needed for VapourSynth, but I’m a bit lost on how to get going with it. Are there any guides for beginners you could link to? Is there anything else worthwhile beyond deinterlacing that VapourSynth can do to potentially help with upscale quality?

I’ve done extensive testing on that, and I feel that no, you should NOT denoise first: you will degrade the Gaia-HQ result. I use these QTGMC settings:

QTGMC(Preset="Placebo", InputType=0, TR2=0, Sharpness=0.2, SourceMatch=3, Lossless=0, MatchPreset="Placebo", MatchPreset2="Placebo", ediThreads=2)

TR2=0 means NO denoising. SourceMatch=3 is the best SourceMatch option. Finally, I do NOT recommend Lossless. It’s not really lossless, and most of the time it puts back in the very kinds of interlacing artifacts that you are trying to get rid of.

If you find noise in the upscaled results and you want to denoise, you can run the above QTGMC line again, changing InputType=0 to InputType=1. This means the input video is progressive and you only want to denoise. The denoise settings are TR2=1 through TR2=6: the higher the number, the more denoising, but the more fine detail is lost.

Side note: QTGMC also supports InputType=2 and InputType=3. These are for progressive videos that have already been deinterlaced and still have interlacing artifacts that you would like to try to remove. Sometimes you can actually use InputType=0 for this as well (InputType=0 means actual interlaced input), and it might give better results. Just be careful if you do that, because running InputType=0 on progressive input can sometimes cause jaggy artifacts of its own.
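For anyone new to VapourSynth, the QTGMC call above would live in a small .vpy script along these lines. This is a sketch only: it assumes the havsfunc script and the ffms2 source plugin are installed, and the file names are illustrative. The ediThreads option from the AviSynth-style call is omitted here, as it may not exist in the VapourSynth port.

```python
# deint.vpy -- sketch; requires VapourSynth plus havsfunc and ffms2,
# none of which ship with plain Python.
import vapoursynth as vs
import havsfunc as haf

core = vs.core
clip = core.ffms2.Source("source.mkv")   # hypothetical input file

# Pass 1: deinterlace (InputType=0 = interlaced input), no denoising (TR2=0).
clip = haf.QTGMC(clip, Preset="Placebo", InputType=0, TR2=0, Sharpness=0.2,
                 SourceMatch=3, Lossless=0, MatchPreset="Placebo",
                 MatchPreset2="Placebo")

# Optional pass 2 (only if the upscale turns out noisy): progressive input,
# denoise only. TR2 ranges 1-6; higher = more denoising, less fine detail.
# clip = haf.QTGMC(clip, Preset="Placebo", InputType=1, TR2=1)

clip.set_output()
```

You would then pipe this through vspipe (or load it in Hybrid) to produce the progressive video that gets fed to Gaia HQ.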

OK, so I won’t denoise the video. One last thing: I’m having trouble installing and using VapourSynth with QTGMC. I’m trying to use it to deinterlace, but without luck. Any advice?
Thank you very much for the patience!

Use Hybrid. It’s a completely self-contained front end. Since you are new to this, expect to spend some time on the learning curve.

OK, I’m testing with Hybrid. Let’s see what happens.
Thank you very much for the patience!

Dear mikaljan, thanks for your desire to help, but this is not the reason. I have three TITAN RTXs and run three VEAI instances, processing three videos at the same time. In version 1.5, all three TITAN RTXs are loaded to 70% and the CPU to 40%. In version 1.6, all three TITAN RTXs are loaded to 30-40% and the CPU to 90%. I thought it was just my problem; it turns out it is not. I think there may be a problem with the new VEAI engine. Maybe someone from the developers will explain.
And if it’s not too much trouble, write how much GPU and CPU load you have in version 1.6 and, if you remember, in version 1.5.


Thank you for your detailed explanation. If it’s not too much trouble, can you post a picture of the QTGMC settings you use in Hybrid? I mainly use VapourSynth in Hybrid because I upscale a lot of anime/cartoons and I like the Anime4K/Waifu2x Vulkan filters. I’m trying to upscale a cartoon, but it has slight grain/noise, and I thought of cleaning it before using Gaia CG to upscale it. If you have a different denoise/degrain method, by all means share it as well, because I’m no expert at all.

Hi @andrey19604,

I am running 2x RTX 2080 Ti in parallel, and I’ve noticed something strange too, which I mentioned in my previous post: https://community.topazlabs.com/t/video-enhance-ai-v1-6-1/16587/85?u=mikaljan

I discovered that when running more than one instance of VEAI, the GPUs need to be assigned and started in a specific order for best performance. (For me, GPU#1 is chosen first and starts processing, then GPU#0 is chosen and started as the second VEAI instance.) If the GPUs are used in the other order, GPU#0 first and then GPU#1, one instance slows down a lot!

I find this strange and assume it’s probably a bug in VEAI or NVIDIA’s driver. Can you try to confirm this with your setup?

@Varbit

Depending on your video source, you should assess the problems and decide what kind of pre-processing is needed. A rule of thumb is to avoid any pre-processing that will “destroy” detail before feeding the video into VEAI, denoising being one of them. However, other things, such as chroma noise, need to be addressed before processing with VEAI, because VEAI can’t deal with them. Things like contrast, dynamic range, gamma, etc. should also be addressed before the video is fed into VEAI. Just experiment a lot with short footage and you will gain an instinct for how to deal with various types of video and how to process them properly.
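As an illustration of that kind of targeted pre-processing, here is a VapourSynth sketch that cleans chroma noise without touching luma detail and adjusts levels. This is not the poster’s script: the KNLMeansCL plugin, the ffms2 source filter, and all parameter values here are assumptions to be tuned per source.

```python
# pre.vpy -- sketch; needs VapourSynth with the ffms2 and KNLMeansCL plugins.
import vapoursynth as vs

core = vs.core
clip = core.ffms2.Source("input.mkv")    # hypothetical input file

# Chroma-only denoise: clean the chroma planes (which VEAI reportedly
# cannot fix itself) while leaving luma detail intact for the upscaler.
clip = core.knlm.KNLMeansCL(clip, d=1, a=2, h=1.2, channels="UV")

# Example levels/gamma fix-up on the luma plane; numbers are placeholders
# and depend entirely on the source.
clip = core.std.Levels(clip, min_in=16, max_in=235, gamma=1.0,
                       min_out=16, max_out=235, planes=0)

clip.set_output()
```

The point is that each filter targets one specific defect; anything that would also remove real detail is left for VEAI to handle.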

I think I also answered your question in this post: https://community.topazlabs.com/t/video-enhance-ai-v1-6-1/16587/136?u=mikaljan

Dear mikaljan, I already wrote above that when running a single VEAI instance the situation is the same. Render time for a 10-minute clip:
1.5.1: 42 minutes, GPU 71%, CPU 11%
1.6.1: 61 minutes, GPU 40%, CPU 30%
If it’s not too much trouble, write how much GPU and CPU load you have in version 1.6 and, if you remember, in version 1.5.

I can confirm this. I have 2x 2080 Ti and run them via NVLink. I must assign the bottom card (GPU#1) first, then open another window and assign the top card (GPU#0), for the frame times and performance to be correct. Used this way, I get about 19-20 s per frame; if I do it the other way around, I get 74-80 s per frame. I should also mention that running two cards crashes the program most of the time, no matter the resource setting (50%-200%).


I’ve only encountered crashes when resources were not released by previous sessions; that’s a simple fix by doing what I mentioned in this post (no reboot needed):

https://community.topazlabs.com/t/video-enhance-ai-v1-6-1/16587/123?u=mikaljan

** My setup is Ryzen 3950x (16 cores), 64GB DDR4, 2x RTX 2080Ti NVLinked, 10TB NVMe

Hi,

I only have the numbers for VEAI v1.6.1; I don’t remember what they were for v1.5.1. However, for me, the processing time is about the same regardless of version.

Within Task Manager, I’m seeing a CPU load for VEAI of around 20% per instance and a GPU load of around 10% per instance. However, this is inaccurate! You need to use GPU-Z to see the true GPU load: when checked with GPU-Z, the GPU load is actually around 85% for each card!

So in conclusion, when running two instances in parallel on my machine, I’m using about 40% CPU load and 85% GPU load on both cards!

OK, so I have to avoid processes like denoising to keep all the detail, and I have to work with the source in mind. I was thinking of extracting the files from the DVD with MakeMKV, using Hybrid to deinterlace the video with QTGMC on the Placebo preset, and then feeding it into Gaia HQ, all of this with DVDs as the source. Is that right, or am I missing something?
Thank you very much for your help!

I have not used MakeMKV or Hybrid, so I cannot give recommendations on those, but I’m sure fellow members can give you feedback. Though if MakeMKV does any transcoding at all, rather than simply copying/muxing your DVD/Blu-ray into an MKV container, I wouldn’t recommend it. The reason you want to learn VapourSynth/AviSynth is so that you have full control over what you’re doing to the video.

The idea is that you don’t want any more transcoding in the pre-processing step than is needed, and you should use lossless encoding whenever possible to avoid degrading the video. Like I said before, depending on your video, various filters might be needed to get it into proper condition before feeding it to VEAI.
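As a concrete example of that lossless intermediate step, here is a small helper that builds an ffmpeg command line for a mathematically lossless x264 intermediate. The file names are placeholders, and the choice of x264 with `-qp 0` is my assumption; any truly lossless codec (FFV1, Ut Video, etc.) would serve the same purpose.

```python
def lossless_intermediate_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command for a mathematically lossless H.264
    intermediate, so no quality is lost between pre-processing and VEAI."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-qp", "0",              # QP 0 makes libx264 lossless
        "-preset", "veryslow",   # smaller file, identical (lossless) output
        "-c:a", "copy",          # pass audio through untouched
        dst,
    ]

cmd = lossless_intermediate_cmd("deinterlaced.mkv", "for_veai.mkv")
```

You would run the resulting command after deinterlacing and point VEAI at the output file.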

OK, I understood everything. Your help was very useful, so I really thank you very much for the patience and for the tips.

Dear mikaljan, it’s about the same for me in version 1.5. I’ll try 1.6.1 again with your GPU ordering and will report back.

I installed 1.6.1 again and tried all the options for GPU ordering. The result is the same. I’ve returned to 1.5 and will wait for updates.
Anyway, thank you very much for your participation and help.
