Model Usefulness [Progressive Scan]

Currently there are six AI models for progressive scan videos, with the following descriptions:

  • Proteus (Fine-Tune/Enhance): General enhancement model that allows fine-tuning several parameters for optimal quality. Great for denoising low to medium quality footage.
  • Iris (Face Enhance/LQ-MQ Video): [No description]
  • Nyx (High Quality Denoise): [No description]
  • Artemis (Denoise/Sharpen): General enhancement model that offers a good balance of improved detail and reduced noise + artifacts. Includes variants trained for low, medium, and high-quality source footage with different problems like halos or aliasing.
  • Gaia (Upscale HQ Video): Improves already high-quality and computer-generated input videos.
  • Theia (Details/Fidelity): Sharpen and add additional detail to your input video.

I am starting this thread to get a better understanding of when each model should be used and how the community has gotten the best results from each one. I have worked with a wide variety of videos using every model the program has to offer (progressive, interlaced, and interlaced-progressive sources), and I’ve found certain models to be useful across many kinds of input and others not to be beneficial at all.

For example, Proteus has been the all-around best model for me with progressive videos. Iris has been very useful with low-quality or damaged videos. Until recently, I thought that Artemis HQ added a nice touch to recordings that were already very high quality. But after watching some AHQ videos on a much larger screen, I saw that Artemis produced a lot of errors.

Gaia’s description as being trained for computer-generated content and animation is appealing, but I have yet to see Gaia upscale a computer-generated or animated video well. Artemis tends to make recordings look computer-generated, so I thought that Artemis might be a good choice for animation, but it likewise has not done well with animation. Similarly, I have not seen Theia noticeably improve any videos.

So, how have you been able to make the various video AI models work for you? Have you found certain models to work better with certain input types? In my usage, both recordings and animation/CGI have turned out best with Proteus. Iris can work wonders with videos that are exceptionally poor quality. Can you provide some examples showing how you have used the different models?


Hi, I think it all depends on the video processing we were able to do before sending it to VAI. For my SD sequences, I use Hybrid first, but depending on the filters I use in Hybrid, I don’t use the same models in VAI. Sometimes it’s better with Iris, sometimes with Artemis High + Iris, or even Artemis Medium + Iris. I have already found good enough parameters in Hybrid that the videos I then process with Artemis High + Iris or Artemis Medium + Iris (with the same settings) come out almost identical (no difference in quality between the two).

I’ve been saying Gaia doesn’t do much more than a Lanczos resize filter. I was wrong. It really does do a lot more than that. Honestly I think it’s the best for not introducing artifacts. It seems to be able to enhance what it can and leave the ‘unknowns’ alone. (If the quality is bad enough, there will be little difference between it and a Lanczos upscale.) Having used it on some cleaner videos, it did an amazing job.
The biggest issue it has is that it can bake noise into solid textures. For example, one of the videos I was trying to enhance had a white sign. It was supposed to be pure white with words. After enhancing with Gaia, it had little spots on it.
If you can clean out the noise first, I imagine that will become a nonissue.
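The “noise baked into solid textures” effect is easy to reproduce with a toy unsharp-mask sharpener. This is only a stand-in to illustrate the reasoning, not the internals of Gaia or any TVAI model; the function names are hypothetical:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)

def unsharp(img, k=1.5):
    # Crude stand-in for an "enhance/sharpen" model pass:
    # boost the difference between the image and its local average.
    return img + k * (img - uniform_filter(img, size=3))

# A "pure white sign": flat 1.0 patch with mild sensor noise.
sign = np.ones((32, 32)) + rng.normal(0, 0.02, (32, 32))

enhanced_dirty = unsharp(sign)                          # noise gets amplified into spots
enhanced_clean = unsharp(uniform_filter(sign, size=3))  # denoise first, then enhance

print(enhanced_dirty.std(), enhanced_clean.std())
```

The dirty pass has a noticeably higher standard deviation on what should be a flat white surface, which is exactly the speckling described above; denoising first keeps the patch flat, so a denoise pass (or a cleaner source) before Gaia should make the spots a nonissue.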

Just trying Nyx + Gaia now 🙂 it looks amazing indeed on a clean video.

I can only agree with ssbroly that it’s totally dependent on the type of video and the post-processing. That said, my method is a bit different, so I also use these models differently: I combine different models with each other by using Adobe Premiere to layer their outputs with different opacity settings.

I often use Artemis LQ/MQ as the base layer. On top of it I put a little Proteus for more detail, and now with Iris I also mix that in, to make things “pop” a little more. On top of that I mostly add a Gaia HQ and CG layer, since it gives a somewhat more natural look. Sometimes, when that turns out not to be convincing enough, I also add a little Artemis AA, as a kind of “mix with original” option.

BTW, I know this method is crap and way more processing-intensive, but I am lazy and don’t like doing as much comparing in TVAI. Comparing in Premiere is also way better, since you can toggle a layer off and on instantly, so the contrast between the different layers is more visible.
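For anyone curious what that Premiere layer stack does mathematically, it’s just repeated normal-mode alpha compositing. A minimal NumPy sketch (the function, frame names, and opacity values here are hypothetical illustrations, not TVAI or Premiere APIs):

```python
import numpy as np

def blend_layers(base, layers):
    """Composite model outputs over a base frame, bottom-to-top.

    base:   HxWx3 float array in [0, 1] (e.g. an Artemis LQ/MQ render)
    layers: list of (frame, opacity) pairs, each frame HxWx3 in [0, 1]
    """
    out = base.astype(np.float64)
    for frame, opacity in layers:
        # Normal blend mode: out = opacity*top + (1 - opacity)*out
        out = opacity * frame.astype(np.float64) + (1.0 - opacity) * out
    return out

# Example: 30% Proteus and 20% Gaia layered over an Artemis base.
artemis = np.full((2, 2, 3), 0.5)
proteus = np.full((2, 2, 3), 0.9)
gaia    = np.full((2, 2, 3), 0.1)
mix = blend_layers(artemis, [(proteus, 0.3), (gaia, 0.2)])
```

One design note: because each layer is composited over the result so far, the effective weight of lower layers shrinks as you stack more on top, which is why a low-opacity Artemis AA layer at the very top acts like a gentle “mix with original”.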

One area where Artemis has struggled on several of my videos is with concrete:

concrete_AHQ

It leaves these diagonal lines across the concrete. Sometimes they run across the entire screen. Any idea why this would happen with Artemis? I haven’t encountered this problem with other models.

What order are you doing them in, Nyx first and Gaia second, or the other way around?

Nyx first and Gaia second 🙂


It’s always done that on my videos. Trees and grass are just as likely to get those spots added.
I think it’s less likely to happen if your source is 720p or larger.

Source is 1080p 🙁

Well, never mind then.
I know for sure it does not happen when it’s run at 1× scale on 1080p sources. I’ve only been using it like that for denoising. Now that we have the Nyx model, I plan to switch over to that.