Topaz Video AI v3.4.1

Can you post your logs and system profile?

Yes, this is a known issue with all v3.x builds; however, it is not consistent, and some files (models and resolutions) increase the mismatched nature of the frames.

As a workaround, I tend to use an external player when running previews.

We are working to resolve the preview issues!

It would be nice if we could also use a scale lower than 1x for the intermediate resolution with the second enhancement. The background is that some interlaced progressive videos deinterlace better when scaled to half resolution first.

I did try setting the intermediate scale to 0.5 and using the CLI, but the output is identical to scale 1, so this is definitely not supported currently.

Hi, Fumio!

Maybe the script isn’t yet adjusted for the newer versions of VEAI? It probably uses the videoai.dll library (through ffmpeg) to query what your GPU can do in hardware, and downloads the optimal models based on the results.

It makes perfect sense to me how this works, but maybe my way of thinking is not typical? I would have made it work exactly this way.

You pick your desired output resolution. (DVD to FHD).
You try a preview of just Proteus.
It’s not quite good enough.
You decide that you want another model pass after Proteus, like Artemis HQ.
The intermediate resolution will be what resolution the first pass uses.
By changing the intermediate resolution scale, I can control which passes do the upscaling and by how much. Either way, the final output resolution will be FHD.


Since DLSS 3.5 is not publicly available yet, Topaz Labs probably doesn’t have access to it or can’t comment on it.

But even without a comment from Topaz Labs, I’m going to assume that DLSS probably doesn’t work for what Topaz Labs Video Enhance AI is designed for. See below:

For reference, DLSS 3.5 is made up of a few parts.

  1. DLSS Super Resolution (Upscaling) - For DLSS Super Resolution to work properly, it requires jittered pixels, motion vectors, and a depth buffer. Motion vectors and depth buffers can be estimated with AI, since most cameras can’t record that data, but the jittered pixels are going to be difficult. You can turn off jittering in DLSS, but in that case DLSS Super Resolution only works when something is moving, and I’m not sure what kind of results you’d get with it.
  2. DLSS Frame Generation (Frame interpolation) - For DLSS Frame Generation to work, it needs the current and previous frame, the motion vectors between them, and possibly the depth buffer. Frame Generation takes the two frames, creates its own motion vectors using the optical flow accelerator, compares that optical flow pass to the other motion vectors, and decides which to use for frame interpolation. As mentioned earlier, most cameras can’t record motion vectors, so they would have to be generated with AI. However, the motion vectors generated by that AI would be better than the ones from the optical flow accelerator or a game engine, so you could just skip DLSS Frame Generation and use those motion vectors to interpolate the video directly. But even then, the motion vectors don’t account for non-linear motion, something which Apollo from TVAI apparently can do?
  3. DLSS Ray Reconstruction - There hasn’t been much explained about it, but it seems to integrate with DLSS Super Resolution (so it probably needs the same information) to offer denoising plus upscaling for ray-traced effects as well as the rest of the scene. Denoising is useful for other things, like removing noise from video shot on cameras with limited light. But since DLSS Ray Reconstruction PROBABLY requires the same information as DLSS Super Resolution, or more, you face the same issues as with DLSS Super Resolution.

You can set the scale to lower than 1.0 with the command line and two pass enhancement, but you can’t in the UI at the moment. If you would like the feature in the UI, you can suggest it here: Ideas - Topaz Community

Both of these examples work. (Note: I’ve used placeholders like HALF_INPUT_WIDTH, model=DEINTERLACE, and vram=X. Swap these out with the values you want to use for your video.)


Scale to 0.5x → Deinterlace → Upscale
Here’s the filter that you would use in the command line:

-filter_complex scale=w=HALF_INPUT_WIDTH:h=HALF_INPUT_HEIGHT:flags=lanczos:threads=0,tvai_up=model=DEINTERLACE:scale=1:device=X:vram=X:instances=X,tvai_up=model=UPSCALER:scale=0:w=OUTPUT_WIDTH:h=OUTPUT_HEIGHT:device=X:vram=X:instances=X,scale=w=OUTPUT_WIDTH:h=OUTPUT_HEIGHT:flags=lanczos:threads=0,scale=out_color_matrix=bt709

This filter is made of a few parts:

Scale to 0.5x: scale=w=HALF_INPUT_WIDTH:h=HALF_INPUT_HEIGHT:flags=lanczos:threads=0
Deinterlace:   tvai_up=model=DEINTERLACE:scale=1:device=X:vram=X:instances=X
Upscale:       tvai_up=model=UPSCALER:scale=0:w=OUTPUT_WIDTH:h=OUTPUT_HEIGHT:device=X:vram=X:instances=X
Scale and pixel format "corrector": scale=w=OUTPUT_WIDTH:h=OUTPUT_HEIGHT:flags=lanczos:threads=0,scale=out_color_matrix=bt709

Deinterlace → Scale to 0.5x → Upscale
Here’s the filter that you would use in the command line:

-filter_complex tvai_up=model=DEINTERLACE:scale=1:device=X:vram=X:instances=X,scale=w=HALF_INPUT_WIDTH:h=HALF_INPUT_HEIGHT:flags=lanczos:threads=0,tvai_up=model=UPSCALER:scale=0:w=OUTPUT_WIDTH:h=OUTPUT_HEIGHT:device=X:vram=X:instances=X,scale=w=OUTPUT_WIDTH:h=OUTPUT_HEIGHT:flags=lanczos:threads=0,scale=out_color_matrix=bt709

This filter is made of the same parts as before, but in a different order:

Deinterlace:   tvai_up=model=DEINTERLACE:scale=1:device=X:vram=X:instances=X
Scale to 0.5x: scale=w=HALF_INPUT_WIDTH:h=HALF_INPUT_HEIGHT:flags=lanczos:threads=0
Upscale:       tvai_up=model=UPSCALER:scale=0:w=OUTPUT_WIDTH:h=OUTPUT_HEIGHT:device=X:vram=X:instances=X
Scale and pixel format "corrector": scale=w=OUTPUT_WIDTH:h=OUTPUT_HEIGHT:flags=lanczos:threads=0,scale=out_color_matrix=bt709
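If you build these commands often, a small helper can assemble the filter string from the same parts listed above. This is just a sketch in Python, not an official Topaz tool; the DEINTERLACE/UPSCALER/X values are the same placeholders as in the examples, to be swapped for your real model names and device settings.

```python
def build_filter(in_w, in_h, out_w, out_h,
                 deint_model="DEINTERLACE", up_model="UPSCALER",
                 device="X", vram="X", instances="X",
                 scale_first=True):
    """Assemble the -filter_complex string for the two variants above.

    scale_first=True  -> Scale to 0.5x -> Deinterlace -> Upscale
    scale_first=False -> Deinterlace -> Scale to 0.5x -> Upscale
    """
    # Scale to 0.5x with Lanczos
    half = f"scale=w={in_w // 2}:h={in_h // 2}:flags=lanczos:threads=0"
    # Deinterlace pass (tvai_up with scale=1, i.e. no resizing)
    deint = (f"tvai_up=model={deint_model}:scale=1:"
             f"device={device}:vram={vram}:instances={instances}")
    # Upscale pass to the explicit output size (scale=0 means use w/h)
    up = (f"tvai_up=model={up_model}:scale=0:w={out_w}:h={out_h}:"
          f"device={device}:vram={vram}:instances={instances}")
    # Final scale and pixel-format "corrector"
    tail = (f"scale=w={out_w}:h={out_h}:flags=lanczos:threads=0,"
            f"scale=out_color_matrix=bt709")
    head = [half, deint] if scale_first else [deint, half]
    return ",".join(head + [up, tail])

# Example: PAL DVD source to FHD
print(build_filter(720, 576, 1920, 1080))
```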

Me too

Is there an explanation on what ‘second enhancement’ is and how it works?

The second enhancement feature lets you run two enhancement passes without having to process the video once, then import it back into TVAI and process it again (the way you had to before the feature was introduced).

For example, you can use Dione to deinterlace your video, then Proteus to upscale. Or some other combination of models.

I personally use Proteus to denoise then use Gaia to upscale.


To use it, open the settings, go to the Application section, and enable Show Second Enhancement Control; the controls for it will then appear in your UI.

There is another option in the settings called Show Intermediate Resolution Control that ties into the second enhancement feature.

To use it:

  1. Import a video.
  2. Select your desired output resolution at the top like normal.
  3. Select your “first pass” using the “Enhancement” section like normal.
  4. Click on the “Add Second Pass” option at the bottom of the enhancement section and select the second filter you want to run.
  5. When you export your video, TVAI will process each frame with the first then the second pass.

Note: The Intermediate Resolution control, which also needs to be enabled in the settings, sets the resolution of the first pass. So if you have a 960x540 video and set the output resolution to 3840x2160 (4x), this is what happens for each “intermediate resolution” option.

  • 1x - The video will be enhanced from 960x540 to 960x540 with the first pass, then upscaled to 3840x2160 with the second pass.
  • 2x - The video will be upscaled from 960x540 to 1920x1080 (2x the original) with the first pass, then upscaled from 1920x1080 to 3840x2160 with the second pass.
  • 4x - The video will be upscaled from 960x540 to 3840x2160 (4x original) with the first pass, then enhanced from 3840x2160 to 3840x2160 with the second pass.
  • Auto - Automatically picks between 1x, 2x, 4x based on your final output resolution.
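The options above can be sketched as simple arithmetic. This is just my reading of the described behavior, not official code, and the rule for “Auto” (picking the largest of 1x/2x/4x that doesn’t exceed the output) is my assumption based on the description:

```python
def pass_resolutions(in_w, in_h, out_w, out_h, intermediate):
    """Resolution after each pass for a given intermediate option.

    intermediate: 1, 2, 4, or "auto".
    Assumption: "auto" picks the largest of 1/2/4 whose result
    does not exceed the output resolution.
    """
    if intermediate == "auto":
        factor = max(f for f in (1, 2, 4) if in_w * f <= out_w)
    else:
        factor = intermediate
    first = (in_w * factor, in_h * factor)    # first pass output
    second = (out_w, out_h)                   # second pass always hits target
    return first, second

# The 960x540 -> 3840x2160 example with the 2x option:
print(pass_resolutions(960, 540, 3840, 2160, 2))
# → ((1920, 1080), (3840, 2160))
```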

Thank you so much for your reply. Very helpful so I will try and get to grips with it.

Kind regards,
Martin.

I COMPLETELY forgot that they use vector variables for all of this, including denoise. I think my brain turned off when I read that NVIDIA incorporated their deep AI network for denoising, but yeah, it still has to have vector data. What a great summary; it was really nice to read!


Oh snap!
That solves the problem I mentioned in my comment above about the sharpening artifacts when you scale up the video from the first pass!

I had no idea what “Intermediate resolution” meant or how it worked, so I didn’t think to turn it on!

Thank you so much for clarifying!!


Thanks!

(Update) The issue below (see from “(Original)” on) came from running TVAI via the checkbox at the end of the 3.4.1 installer. I exited and used the desktop icon instead; a dialogue appeared, did a big bunch of stuff, and I finally got an activated window. Perhaps don’t offer a “Run” checkbox in the installer if it may not produce a functional instance?

(Original) After I installed this it came up in Trial Mode (not uncommon, though I’ve never owned software that regularly reverts out of licenced mode as a security measure). Clicking “Trial Mode” then “Activate” says “Opening Browser”, but nothing opened anywhere. I logged into my Topaz account and have been looking around, but I can’t see how to activate it. It subsequently said “Sign in with Browser” after I’d written this, but clicking again gives “Opening Browser” (the browser is open already, and it still says to click the now-replaced “Activate”) and still nothing happens. Any ideas pls?
(Win10 Pro 22H2 x64 Chrome is the default Browser - this has worked many times before)

Ok, so after a bit of experimentation with the “intermediate resolution” feature, here are my results and a really easy-to-implement request.

4x gives the best results by far!

But it comes at a heavy price: it takes more than 5 hours to convert a 22-minute 640x480 video, because it temporarily upscales to 2560x1920 before downscaling to the final resolution of 1020x680 that I set as the target.

My request is to add a 3x intermediate resolution option to the drop-down menu, which is currently missing!

The result is still great, but it takes almost 40% less time than 4x!
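A rough pixel count supports that timing difference. Assuming processing time scales roughly with the number of pixels the first pass has to produce (a simplification, not a measured model), here is the arithmetic for the 640x480 example:

```python
def first_pass_pixels(w, h, factor):
    # Pixels the first pass must produce at a given intermediate scale
    return (w * factor) * (h * factor)

px4 = first_pass_pixels(640, 480, 4)   # 2560x1920
px3 = first_pass_pixels(640, 480, 3)   # 1920x1440
saving = 1 - px3 / px4
print(f"3x processes {saving:.0%} fewer first-pass pixels than 4x")
# → 3x processes 44% fewer first-pass pixels than 4x
```

That ~44% pixel reduction lines up with the “almost 40% less time” observed.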

Now that developer Beta 6 of Sonoma is out, the Iris bug is still present in all TVAI 3.4.x versions.

I may be wrong, but I think each named AI model has three versions: 1X, 2X, and 4X. They would need to make new 3X models for each named model, and if they did, the results would look different from the 4X results you like.

The 1x/2x/4x is just an upscale factor. It literally takes the original video resolution and multiplies it by 1, 2, or 4; it’s not model-specific.

They just need to add one more option that multiplies the resolution by 3.

And I know the results are good because I tried this by upscaling a video 3x in Handbrake (another program) and then gave that to Topaz to denoise and enhance.

I have to disagree. If that was the case, the results I got would not be so obviously different.
