Fixing common video damage -- bad interlacing and bad upscaling

I’ve noticed two common types of damaged video that Topaz Video AI doesn’t handle well.

The first is interlaced video that was scaled, over-compressed as progressive, or de-interlaced poorly. This is video where you’ve got combing or wavy edges on moving objects that doesn’t improve (and often gets worse) when run through Topaz Video AI. This is true whether you use a progressive model or an interlaced one like Dione.

The second is simply upscaled video. If something was shot at 480p and naively upscaled to 720p, there is often stair-stepping encoded in the image that Topaz Video AI isn’t able to deal with properly. It treats the stair steps as image details and enhances them, resulting in a poorer quality image.
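To make the stair-stepping concrete, here is a toy numpy sketch (my own illustration, not anything from Topaz’s pipeline): a nearest-neighbor upscale turns a smooth diagonal edge into hard pixel-block steps, which then look like “detail” to an enhancer.

```python
import numpy as np

def naive_upscale(img, factor):
    """Nearest-neighbor upscale: each source pixel becomes a factor x factor block."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A diagonal edge in a tiny 4x4 patch, standing in for a 480p source.
src = np.array([[0, 0, 0, 1],
                [0, 0, 1, 1],
                [0, 1, 1, 1],
                [1, 1, 1, 1]], dtype=float)

up = naive_upscale(src, 2)
# The diagonal is now a staircase of 2x2 blocks -- stair-stepping baked
# into the image, which a detail enhancer will sharpen instead of smooth.
print(up)
```

A real naive upscale uses bilinear or similar rather than pure nearest-neighbor, but the effect is the same in kind: the staircase is encoded into the pixels, so a later enhancer has no way to tell it from genuine edges.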

If Topaz Video AI had models to correct these kinds of common damage, it would be pretty amazing. I realize these are both complex problems, but I think that’s exactly the kind of magic a program like Topaz Video AI should tackle.

There is no magic sauce of the kind you’re imagining.

A very poor source needs a LOT of knowledge and manual intervention.

Models that could come close are many years away.

I’d have thought the same thing about the “revert compression” slider on the Proteus model, but it’s blown me away time and again: very poor sources come out looking much better. I’m not expecting a bad source to look like pristine HD, but reducing some of the most noticeable artifacts would be very useful.

Specifically: I would be surprised if a model trained to fix broken interlacing couldn’t do a decent job. It’s a narrow but common problem, and it’s easy to create plenty of training data too: just scale up or down any of the millions of interlaced videos that already exist. I’m not sure why that problem would be less tractable than over-compressed sources, which are handled very well already.
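The training-data idea above can be sketched in a few lines of numpy (a hypothetical illustration, not how Topaz actually trains): weave two motion-offset progressive frames into one interlaced frame, then apply a vertical filter as a stand-in for scaling it as progressive. The filter mixes the two fields, baking combing into the result, and the clean source frame is the ground truth for free.

```python
import numpy as np

def weave_fields(frame_a, frame_b):
    """Weave: even lines from frame A, odd lines from frame B (one interlaced frame)."""
    out = frame_a.copy()
    out[1::2] = frame_b[1::2]
    return out

def progressive_vertical_blur(frame):
    """Stand-in for scaling interlaced video as if it were progressive:
    a vertical 1-2-1 filter that mixes adjacent lines, i.e. mixes the fields."""
    padded = np.pad(frame, ((1, 1), (0, 0)), mode="edge")
    return (padded[:-2] + 2 * padded[1:-1] + padded[2:]) / 4.0

# Two clean frames with motion: a bright bar shifts right between them.
h, w = 8, 16
frame_a = np.zeros((h, w)); frame_a[:, 2:6] = 1.0
frame_b = np.zeros((h, w)); frame_b[:, 6:10] = 1.0

interlaced = weave_fields(frame_a, frame_b)       # combing at the moving edge
damaged = progressive_vertical_blur(interlaced)   # fields smeared together

# Training pair: (damaged, frame_a) -- generated automatically from any
# progressive source, which is why this kind of data is cheap to make.
```

Run at scale over existing footage, pairs like this seem like exactly what you’d need to teach a model to recognize and undo this specific, narrow kind of damage.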
