Enhancing a video: video vs. photo application?

I know it may sound strange to some. Topaz VEAI processes a video frame by frame, very much like a photo application would process a batch job, one image at a time.
It remains my opinion that there is a lack of control over the output from VEAI. Sometimes you get over-sharpening or extra blur, sometimes you would like to restore more detail…
This has pushed me to turn to photo applications, where you have more control over the settings and the output. I convert my video to a format with a constant frame rate and then export the footage as an image sequence.
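For reference, a minimal sketch of that preparation step with ffmpeg (the file names and the 25 fps target are my assumptions; use whatever rate matches your source — PAL VHS, for instance, is 25 fps):

```shell
# Re-encode to a constant frame rate (25 fps assumed here), keeping the audio as-is.
ffmpeg -i input.mp4 -vf fps=25 -c:a copy cfr.mp4

# Export every frame as a numbered PNG for processing in a photo application.
mkdir -p frames
ffmpeg -i cfr.mp4 frames/frame_%06d.png
```

The constant-frame-rate pass matters because the numbered image sequence carries no timing information of its own; the frame rate has to be re-applied when the video is rebuilt.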
I started with Topaz Denoise AI, which produces interesting results on slightly blurry videos. Denoise AI does sharpen, sometimes better than Topaz Sharpen AI (fewer monsters).
After playing around with Topaz products, I tried competitors. DVDfab Photo Enhancer AI impressed me with its sharpen and denoise tools. The ‘sharpen’ feature produces smoother output than the Topaz applications: where Topaz produces monsters, DVDfab Photo Enhancer AI manages to go an extra mile on sharpening without producing one.
Moreover, the output generally shows less grain, giving better picture quality.
But its batch capabilities are limited, and the vendor does not seem to want to work on this.
It is my belief that the more control a user has, the better the result will be and the happier the user will feel.
Because every AI model is trained differently, I also recommend looking at the competition. Naming DVDfab Photo Enhancer AI was meant as a constructive comment. A competitor’s model may suit certain types of source better simply because it went through a different training path.

  • Bertrand (I work on analog sources: VHS/LaserDisc)

Hi, I tried DVDfab but I was not convinced. It does deinterlace videos, but very badly; the picture was very ugly in my case, and it does not double the frames, which makes the video ultimately unusable.

@ssbroly You sound like you are talking about the video application and have not read the post.

My mistake! I didn’t know there were other AI applications for photography. :slight_smile:

According to the docs, the VEAI models don’t actually just use the frame they’re enhancing as input, but also other nearby frames.

That’s fairly easy to confirm if you have a video with a pan to a full stop, where nothing happens afterwards and the frames are identical. If it’s something the model isn’t too good at enhancing on the first try, so to speak, the output will kind of ramp up and get sharper over several frames.

I’ve actually had one recording that was 30 fps, but from a 5 fps source, so a lot of frames were repeated. It was also of very bad source quality, so the AI enhancement was kind of “flickering”: progressively sharpening, then getting another bad frame.

That said, I did find that the Gigapixel AI models often produce way more impressive results out of very poor input quality. It does also seem to hog way more system resources, and it doesn’t make sense for most of the stuff I’m trying to restore: the output has a tendency to look a bit “organic”, because while the frames are great, each of them is unique and different and doesn’t really belong to one sequence. It’s more like a flip-book comic someone drew by hand: close, but wonky.

There is another feature I’m missing from the Topaz products, and I haven’t seen any commercial products that do this either: there’s no way to selectively “specialize” or “focus” any of the models. One of my colleagues actually raised this when we were talking about our post-production flows: he has a studio he does most of his shoots in, and there aren’t that many different light configurations or scene dressings he would use, because he wants his model to shine, not random background elements. So he was really hoping for a product with these well-trained models, where he could feed in a first pass of thousands of pictures of the same setting, over years of minor changes, to “specialize” the model to this particular studio and scene type.

I haven’t really seen a commercial product that does that at all, at least not for video; it’s more common with audio processing. And I’ve seen some open-source image tools that encourage it, but they don’t have any well-trained models to start with, so while they’ll then get the locale right and detailed, they won’t really help with anything else. It sometimes looks like a weird organic photo frame with an out-of-focus picture of the model in question in it, since that’s the extent of enhancement the sample model could provide and it only got fed scenery. xD

With a good model to start from, it would probably create very impressive results that retain the vibe his place has at all times. Huh, let me cross-post that as an FR for Gigapixel; it might not be that hard? :slight_smile:

So, at least right now, I’ve seen that when you process the frames individually, they are individually of higher quality, but when put together as video the result is weirdly glitchy in many cases… needing another video pass to deflicker, which doesn’t always preserve all of the enhancements in each frame.
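As a hedged aside: ffmpeg ships a `deflicker` filter that performs exactly this kind of temporal smoothing pass. A sketch (the 5-frame window, frame rate, file names, and encoder settings are my guesses, not something tested on this material):

```shell
# Smooth frame-to-frame brightness over a 5-frame window to reduce the
# flicker introduced by enhancing each frame independently, then encode.
ffmpeg -framerate 25 -i frames/frame_%06d.png \
       -vf "deflicker=mode=am:size=5" \
       -c:v libx264 -pix_fmt yuv420p -crf 18 deflickered.mp4
```

A larger `size` smooths more aggressively but, as noted above, risks averaging away some of the per-frame enhancement.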

Hi Maggie, with a photo application you cannot analyze nearby frames. No glitches for me when I recompile to video.
As you say, results can be more impressive with Gigapixel (or Denoise AI in my case).
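For completeness, recompiling an enhanced image sequence back into a video might look like this (frame rate, file names, and encoder settings are assumptions; match them to your source):

```shell
# Rebuild the video from the numbered PNGs at the original frame rate.
ffmpeg -framerate 25 -i frames/frame_%06d.png \
       -c:v libx264 -pix_fmt yuv420p -crf 18 video.mp4

# Mux the original audio track back in without re-encoding it
# (cfr.mp4 here stands for the constant-frame-rate source file).
ffmpeg -i video.mp4 -i cfr.mp4 -map 0:v:0 -map 1:a:0 -c copy final.mp4
```

Using the same frame rate as the constant-frame-rate conversion keeps the rebuilt video in sync with the original audio.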