Reduce motion blur model

Would love a model meant to reduce motion blur.

So maybe a low quality video could be processed first with a general model such as Proteus/Artemis, then deblurred, and lastly run through frame interpolation.

I would love this option as well! I think there are several research papers on this topic.

It would bring a lot more clarity to lower FPS videos if we can first reduce blur and then do frame interpolation.


Not sure what the etiquette is here, but I just noticed a separate thread for an almost identical feature request: Add a AI model for motion blur removal

Maybe they can be merged by an admin?

I love that one too. To generalize this model, it would be great to not only reduce motion blur but to adjust it arbitrarily (even increase it, e.g. set it to a specific shutter angle). Since the AI models already do optical flow very well, it should be possible to adjust the shutter angle based on the motion vectors.
Since Avatar 2 we have all seen the impact of motion blur on the cinematic look, and movements do look a lot smoother with motion blur, so I would file this as an enhancement too, right?
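To make the shutter-angle idea concrete: given per-pixel motion vectors (as an optical-flow model might estimate them), synthetic motion blur can be approximated by averaging copies of the frame sampled backwards along a chosen fraction of each pixel's motion path. This is only my own rough numpy sketch of the concept, not how Topaz implements anything; the function name and nearest-neighbour warping are illustrative choices.

```python
import numpy as np

def apply_motion_blur(frame, flow, shutter_angle=180.0, samples=8):
    """Approximate motion blur by averaging the frame sampled along its
    per-pixel flow vectors. shutter_angle=360 blurs over the full
    inter-frame motion; 180 (the classic film look) over half of it.
    `flow` is an (H, W, 2) array of per-pixel (dy, dx) motion."""
    h, w = frame.shape[:2]
    scale = shutter_angle / 360.0
    acc = np.zeros_like(frame, dtype=np.float64)
    ys, xs = np.mgrid[0:h, 0:w]
    for i in range(samples):
        t = scale * i / max(samples - 1, 1)  # fraction of the motion path
        sy = np.clip(ys - flow[..., 0] * t, 0, h - 1).round().astype(int)
        sx = np.clip(xs - flow[..., 1] * t, 0, w - 1).round().astype(int)
        acc += frame[sy, sx]                 # nearest-neighbour backward warp
    return (acc / samples).astype(frame.dtype)
```

A single bright pixel with a uniform horizontal flow of 4 px turns into a short streak of equally weighted samples; total brightness is preserved because each sample is a pure shift.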


We are currently working on a motion deblur option, but it is still in a very early alpha state and will take some time until it arrives. :slight_smile:


Thanks! I believe that is very challenging, but it looks very promising!

I guess increasing blur will be much easier, so it would be great to consider this feature in future releases.
Standard tools like Resolve or RSMB often apply blur to both moving objects and the static background. Since Topaz does a great job separating different moving objects from the background in the frame interpolation models, I'm sure it could do a much better job of applying motion blur.
Looking forward to any update :-)


Will this help with out-of-focus videos where the blur is not caused by motion? I am stunned at how well Sharpen AI can recover a focused shot from when the camera was too busy focusing on a lamp instead of the subjects.

I dunno. :eyes:

Well, it depends on how the model is trained, but I guess the motion blur will mainly be reduced based on optical flow, i.e. the movement of objects as analyzed in the previous and next frame(s). As defocus is not the result of lateral movement, that won't work here. However, the motion blur mode in Sharpen AI (for photos) works for defocus as well, since there the motion is estimated from a single frame. If that approach becomes part of the VEAI model, you'll probably be lucky here.
Right now I personally recommend using Proteus with the Sharpen and Detail sliders for defocus. It does a great job.

SD footage, especially footage shot with old DV cameras, often has excessive motion blur compared to current HD and UHD cameras.
This can become a bottleneck when enhancing quality, because while still scenes appear sharp, things become excessively blurry during movement.

Topaz already has an algorithm inside Sharpen AI that fixes excessive motion blur in still photos, and I would suggest adapting it to VEAI.
This could be a separate model or, more conveniently, a slider on the other models that controls the amount of motion blur removal prior to upscaling.
Incorporating it into the Chronos models would also be a good fit, because higher frame rates usually mean higher shutter speeds.


Revisiting this one to cast my vote and add an example of what can be done. I have run into this problem with a lot of footage. Sometimes I have something where I filmed at 30 fps (1/30 seconds per frame) but the shutter was clearly set to something like 1/60 seconds. This has always been a standard recommendation for “natural” looking footage, because it leaves some motion blur in each frame when a subject moves, like waving their hands around or swinging a baseball bat. Looks great in the original footage, but when you want to slow things down 15 or 20 years later (as with FlowFrames or Topaz Chronos/Apollo) the interpolator just interpolates the blur and it looks very unnatural.
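For reference, the "1/60 s shutter at 30 fps" recommendation above is exactly the classic 180° shutter: the shutter angle is just the fraction of the frame interval that the sensor is exposed, out of 360°. A tiny helper (my own naming) makes the arithmetic explicit:

```python
def shutter_angle_degrees(exposure_s, fps):
    """Shutter angle = fraction of the frame interval the shutter is
    open, expressed out of 360 degrees."""
    return 360.0 * exposure_s * fps

# 1/60 s exposure at 30 fps -> the classic 180-degree shutter,
# i.e. half of each 1/30 s frame interval collects motion blur.
angle = shutter_angle_degrees(1 / 60, 30)
```

The same rule gives 180° for 1/48 s at 24 fps, which is why that pairing is the traditional film default.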

Even worse with camera shake.

I recently found a paper for a tool called CDVD-TSP (GitHub: csbhr/CDVD-TSP, the official implementation of their CVPR 2020 paper "Cascaded Deep Video Deblurring Using Temporal Sharpness Prior"). It uses information from nearby frames to identify sharp versions of blurred features in order to maintain maximum sharpness and clarity in video clips.

I was able to get their model running, and with their pretrained weights I get pretty good results on certain pieces of footage (some are too far gone, though). Their training seems to have focused on hand-held footage, so it is great for that type of camera shake, but it does poorly if you have footage of, say, a baseball game where the camera was on a tripod and all the blurring is due to subject motion. It seems to do nothing in that case.
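For anyone curious about the "temporal sharpness prior" idea: it boils down to preferring content from whichever temporally nearby frame is sharpest. Here is my own greatly simplified toy sketch of that selection step (not the paper's actual pipeline, which also warps frames via optical flow), using Laplacian variance as a no-reference sharpness score:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a discrete Laplacian response: higher means sharper."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def sharpest_neighbor(frames, i, radius=1):
    """Index of the sharpest frame within `radius` of frame i."""
    lo, hi = max(0, i - radius), min(len(frames), i + radius + 1)
    return max(range(lo, hi), key=lambda j: laplacian_variance(frames[j]))
```

This also hints at why a locked-off tripod shot defeats it: when every nearby frame is equally blurred by subject motion, there is no sharper neighbor to borrow from.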


Ask if you can become a beta tester.

That's what @nipun.nath is looking for.