Personal review and first impressions of Topaz Video AI

The first time I tried TVAI was at the beginning of 2024. At that time I had a portable Windows computer that I bought in December 2023: a 6-core i3 CPU, 16 GB 3.2 GHz RAM and a 512 GB NVMe SSD. It has integrated Intel graphics, and the performance was very poor but enough to evaluate TVAI. I tried to enhance an old, bad-quality TV rip with 3 enhancers on: upscale from 288p to 1080p, the Dione TV filter and Deblur. The rendering speed was 0.14 FPS, and a 10-minute episode took 28 hours (!) to render. That's crazy, but I did it for the experiment, and it was worth it. Now I have a serious machine: an Nvidia Quadro A4000 GPU, an AMD Ryzen 7 7800X3D CPU, 32 GB 5.6 GHz RAM, a 2 TB SSD and an 8 TB HDD. I installed the Nvidia Studio drivers, optimized TVAI, and now the program flies like a rocket! But I am still on the demo version, because I want to be sure that I can manage TVAI and that it can give me the results I expect. This weekend I spent a good amount of time testing. The reference material is short videos of various genres (live action and animation) and quality (poor and very poor). In this post I will describe what I observed and the conclusions I drew (supported with screenshots). For the experiment I use only FFV1 8-bit 4:4:4 videos (except the original), and all shots were taken inside TVAI.
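As a quick sanity check of that 28-hour figure: the arithmetic works out if the source runs at roughly 24 fps (the actual frame rate of the TV rip isn't stated here, so that's an assumption):

```python
# Rough render-time check for the numbers quoted above. The ~24 fps source
# frame rate is an assumption; the 0.14 FPS processing speed is from the post.
def render_hours(duration_min: float, source_fps: float, render_fps: float) -> float:
    """Wall-clock hours to process a clip at a given rendering speed."""
    total_frames = duration_min * 60 * source_fps
    return total_frames / render_fps / 3600

hours = render_hours(duration_min=10, source_fps=24, render_fps=0.14)
print(f"{hours:.1f} hours")  # roughly 28.6 hours
```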
From what I read here and from my personal experience, I understood that it is not a good idea to run many enhancers simultaneously. And that's right, because with this approach, especially on poor-quality videos, the end result looks like CG animation. Currently I am not able to post such shots, because I am on the laptop and do not have access to those videos, but I do have the videos from the next experiment, which I conducted the previous day. Yesterday I tried to put into action the approach of several renderings of one piece of material with a small number of enhancers, instead of one rendering with many enhancers. I tried various combinations. For the source material I used an episode from the 2000s cartoon "Baby Looney Toons" (the same bad-quality TV rip; see above).

For the first part of the experiment I separated the two main stages of enhancing: improving detail and cleaning noise (and other disturbances). First I upscaled the video from 288p to 1152p without any filtering; all parameters were set to 0. The results were, of course, bad. On the second pass I switched on Proteus, but the filter did nothing. I made a second attempt with the 1152p upscale, this time with the Rhea filter, but with only the detail-improving parameters enabled (fix compression, improve detail and sharpen). The results were better (more details), but again, when I tried to apply a filter on the second run, it could not clean the image. My explanation for this observation is that when you improve details without cleaning noise, the program treats the disturbances as useful information; they become deeply baked into the image, and the filters are then incapable of recognizing them. So I tried cleaning the video without upscaling on the first pass, then upscaling. With Proteus on at the original 288p resolution the image does get cleaned, but due to the low resolution it is still of poor quality, and it is difficult to tell what is lost detail and what is noise.
On the second run, with 4x upscale and the Rhea filter (all parameters on), there are visible artifacts. Here are examples. The first shot is the original, the second is Proteus at native resolution, and the third is the second run with Rhea and 4x upscale.




After those disappointing results, I tried running the original again with Proteus, but this time with 2x upscale (from 384x288 to 768x576). On the second run I applied Rhea and an additional 2x upscale (from 768x576 to 1536x1152), and that was the first time with noticeable progress (there are still artifacts, but fewer). The first image is 2x upscale + Proteus; the second image is the second run.


With Proteus on and 2x upscale, there is a balance between detail and noise reduction. With 2x upscale + Rhea, this time the results are significantly better in comparison with the first attempt, where I used upscaling only on the second run. Conclusion: with extreme settings, good results are not obtained even with more than one run. Certain frames are even worse with multiple enhancers on in one run. It is not good to have too much detail enhancement without filtering, nor filtering alone at very low resolutions.
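For clarity, the resolution math behind the two approaches compared above (a 384x288 source, whole-number scale factors) can be sketched like this:

```python
# Resolution after each pass of a multi-pass upscale. The 384x288 source
# is the TV rip from the post; the factors are the TVAI scale settings.
def upscale_chain(width, height, factors):
    """Return the (width, height) pair after each upscale pass."""
    sizes = []
    for f in factors:
        width, height = width * f, height * f
        sizes.append((width, height))
    return sizes

print(upscale_chain(384, 288, [2, 2]))  # two 2x passes -> [(768, 576), (1536, 1152)]
print(upscale_chain(384, 288, [4]))     # one 4x pass   -> [(1536, 1152)]
```

Both routes end at 1536x1152; the practical difference is that in the two-pass route the filters get to clean an intermediate 768x576 image first.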
The next big step in my experiment is the use of Motion Deblur. It is useful for any kind of blurriness. For the first portion I tried 4x upscale with Deblur on. Here's the result:
Original

Improved


I expected a bigger difference. On the second run I turned on Proteus, but the results are as poor as in the first tries, where I improved only details and then filtered. Weird artifacts also pop up (see the leaves).

What I learned: Deblur is useful only when it runs together with a filter (at least on video of such bad quality). Next attempt: Proteus + Deblur and 2x upscale. Here's a shot with Baby Bugs's rocket.
Original

2x upscale, only Proteus

2x upscale, Proteus and Deblur on

There is a difference between shot 2 and shot 3, but it is subtle and visible only in an A/B comparison. I hope you guys can see it, because the resolution is not very high. It is very difficult to capture the exact same frame several times in KMP or another player, because their minimal seek resolution is in seconds. So I took the shots in TVAI, where the preview is not full screen and therefore at a lower resolution.
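One way around the seek-granularity problem, if you have FFmpeg installed, is to extract a single frame at an exact timestamp instead of screenshotting a player. This is a generic FFmpeg sketch, not a TVAI feature; the file names are just examples:

```python
# Build an FFmpeg command that writes exactly one frame at a precise
# timestamp, so the same frame can be grabbed from every render pass.
import subprocess

def frame_grab_cmd(video, timestamp, out_png):
    """argv for extracting one frame; timestamp like '00:01:23.456'."""
    return [
        "ffmpeg", "-y",
        "-ss", timestamp,   # seek to the timestamp before decoding
        "-i", video,
        "-frames:v", "1",   # stop after writing a single frame
        out_png,
    ]

# Example (requires ffmpeg on PATH and a real input file):
# subprocess.run(frame_grab_cmd("pass2.mkv", "00:01:23.456", "shot.png"), check=True)
print(frame_grab_cmd("pass2.mkv", "00:01:23.456", "shot.png"))
```

Run it once per rendered version with the same timestamp and the shots line up frame for frame.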
I tried the same combination at native resolution, but the results are similar, so not interesting.
Next step: applying Deblur at a later stage. The most successful recipe (at this point) is Proteus with 2x upscale, then a second run with 2x upscale, Rhea and Deblur on. Here are the results:
2x upscale, Proteus then 2x upscale, Rhea, Deblur

The effect of Deblur is not felt at all, even combined with other filters (look at the faces). For comparison, here is the same frame processed only with Rhea.

The only subtle difference between the two frames is the brighter red of Marvin's shirt in the first picture.
Conclusion 1: Deblur is useful combined with a filter
Conclusion 2: Deblur is useful on the first run
So the best algorithm for this particular video is 2x upscale, Proteus, Deblur, then again 2x upscale plus a filter suitable for final polishing. All the algorithms I tried up to this point, even the successful ones, gave me decent but not perfect results. I know this video is an extreme case and not something I will feed to Topaz every day, but it was just for the experiment! Finally, I tried to clean "War of the Weirds" a bit more, and for that purpose I experimented with Artemis. Artemis LQ gave me the best results. With MQ and HQ there is almost no difference, and with Aliasing/Moire the results are weird and destructive. Actually, Artemis gave slightly better results than Rhea, and I experimented with different combos.
Original

First run: Proteus + Deblur, native resolution
Second run: Artemis, 4x upscale

First run: Proteus + Deblur, 2x upscale
Second run: Artemis, 2x upscale

4x upscaling in one run looks a bit fuzzy compared to 4x upscaling over two runs (look at the flowers on the bush). So with 2x upscale, Proteus and Deblur, plus a second 2x upscale with Artemis LQ, I got near-perfect results (not perfect in particular shots), and with that I'll end for today. I hope this post is useful, or at least entertaining. In the next post I will discuss frame interpolation.


This is great. You've done a lot of work detailing your process. Would you consider a grid-based summary of your conclusions?

I was thinking about that. Yes, but after additional tests with other videos and after refreshing my Excel knowledge.