Topaz Video AI Alpha 3.1.0.0.m - Motion Blur Reduction

Hello everyone,

We have a very early alpha version of an AI model for reducing motion blur. Currently, it works ONLY on Windows with a GPU, and speed is NOT optimized. Please test it and let us know your feedback on the following:

  • The overall performance of the model on reducing motion blur.
  • What areas need improvements?
  • What does the model do well?

Download - Windows GPU.
Please share your test videos - Submit to Dropbox.


What does this model do?

  • This model is supposed to reduce motion blur caused by camera or object movement. Some examples are shown below:
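
For intuition, motion blur can be modeled as each frame being convolved with a kernel stretched along the motion path, and the model's job is to invert that degradation. Below is a minimal NumPy sketch of the forward (blurring) process only; the kernel length and direction are illustrative assumptions, not the model's actual parameters:

```python
import numpy as np
from scipy.ndimage import convolve

def horizontal_motion_blur_kernel(length: int) -> np.ndarray:
    """A 1 x `length` averaging kernel: simulates horizontal camera/object motion."""
    return np.ones((1, length)) / length

# Grayscale frame with one sharp vertical edge.
frame = np.zeros((64, 64))
frame[:, 32:] = 1.0

# Convolving smears the edge across `length` pixels; this is the
# degradation a deblurring model has to undo.
blurred = convolve(frame, horizontal_motion_blur_kernel(9), mode="nearest")
```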


Note:

  • Other filters, as well as changing the output resolution and FPS, are intentionally turned off.
  • Please provide your feedback ONLY on the new AI model. Thanks.
9 Likes

How many CPU cores is it designed to use?

I only ask because I need to decide on a processor, and I want to account for testing as well as my own work.

1 Like

There is no hard limit on the number of CPU cores, but, as always, more is better.

2 Likes

That’s good to know; on my 8-core CPU it’s no fun at almost 7 seconds per frame (spf).

But I still have a bigger machine with 24 cores.

The problem of missing test material keeps coming up as well; I would have to create some myself (see the sketch at the end of this post).

Visual material at the beginning of the thread (before/after pictures) would also be good, so you can see what to look for.
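
One way to solve the missing-test-material problem is to synthesize it: render a moving pattern and average several sub-frame positions per output frame, so the clip carries known, reproducible motion blur and you keep a sharp ground truth for comparison. A rough sketch using OpenCV; the frame size, scroll speed, and sub-sample count are arbitrary choices:

```python
import numpy as np
import cv2  # opencv-python

W, H, FPS, SECONDS = 1280, 720, 30, 4   # arbitrary test parameters
SUBSAMPLES = 8                          # sub-frame positions averaged per frame

def checkerboard(width: int, height: int, square: int = 40) -> np.ndarray:
    """Sharp checkerboard pattern used as the moving test subject."""
    ys, xs = np.mgrid[0:height, 0:width]
    return (((xs // square) + (ys // square)) % 2 * 255).astype(np.uint8)

pattern = checkerboard(W * 3, H)  # wide enough to scroll for the whole clip
out = cv2.VideoWriter("blur_test.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      FPS, (W, H))

for i in range(FPS * SECONDS):
    # Average SUBSAMPLES horizontally shifted copies of the pattern to bake
    # known motion blur into each frame.
    acc = np.zeros((H, W), dtype=np.float64)
    for s in range(SUBSAMPLES):
        x = int((i + s / SUBSAMPLES) * 12)  # 12 px of travel per frame
        acc += pattern[:, x:x + W]
    frame8 = (acc / SUBSAMPLES).astype(np.uint8)
    out.write(cv2.cvtColor(frame8, cv2.COLOR_GRAY2BGR))
out.release()
```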

Here are my first impressions:

  • I often have trouble either getting the processing to start (nothing happens), or it starts but I struggle to find the preview.
  • Processing a 4-minute 720p30 video takes 4 hours, and waiting for the first preview image takes 10 minutes or so (still waiting for …). That works out to about 2 seconds per frame; see the quick check after this list.
  • The function to play the preview at a reduced speed does not work at all.
  • In the first test video I cannot see any difference between the before and after images.
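
For reference, here is the back-of-the-envelope calculation behind that 2-seconds-per-frame figure:

```python
# Back-of-the-envelope check of the 720p30 report above.
minutes, fps = 4, 30
frames = minutes * 60 * fps        # 7,200 frames in the clip
total_seconds = 4 * 60 * 60        # 4 hours of processing
print(total_seconds / frames)      # -> 2.0 seconds per frame
```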

I tried this model; it’s very disappointing because it’s really slow. And I don’t see any difference between before and after.

Thanks for the feedback. The model is supposed to reduce motion blur caused by camera shake or object movement. Can you please make sure the video contains such motion blur? Thanks.

2 Likes

@TPX, @Imo, @TomaszW: Some before and after examples have been added to the post. When you test a video, please make sure it contains motion blur due to camera shake or object movement.

Thanks for testing and commenting.

2 Likes

The problem is that this model is very slow when exporting video. Not fast at all. :pensive:

I question the use case for the model.

Among the 213 smartphone videos I have, I found maybe 2 where I could use it, and given the poor quality of my smartphone’s videos, those clips only have sentimental value.

Now the question is whether the Motion Blur Reduction model could be helpful here, since I am already applying the stabilization model together with Proteus to the same video.

I ask because the other two models smear moving objects.

A difficult matter.

I’m looking at an amateur filmmaker’s video and have just asked whether I could have it for internal testing purposes.

The thing is that these amateur filmmakers run into problems that would never occur for me, because my equipment is of higher quality.

This is about testing a new feature, and it’s stated that it’s slow… Speed will improve with time.

5 Likes

Yes, and I hope it does. In any case, there should be sharpening in areas with no motion blur.

Motion blur removal will be very useful when planning an FPS increase with Chronos or slow motion with Apollo.
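
The intuition behind that pairing: frame interpolators synthesize in-between frames from their neighbours, so any blur baked into the inputs is carried straight into the generated frames and never removed. A toy sketch with naive midpoint blending (a deliberately crude stand-in for Chronos/Apollo, which are far more sophisticated):

```python
import numpy as np

def midpoint(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Naive interpolated frame: a plain average of the two neighbours."""
    return (frame_a + frame_b) / 2.0

# A 1-D edge at two positions: an object moving right by 8 pixels.
sharp_a = np.zeros(32); sharp_a[8:] = 1.0
sharp_b = np.zeros(32); sharp_b[16:] = 1.0

# The same signals with baked-in motion blur (edge smeared over 8 px).
blur = lambda f: np.convolve(f, np.ones(8) / 8, mode="same")

# Blur in the inputs passes straight through into the in-between frame;
# interpolation never removes it, so deblurring first yields sharper results.
print(midpoint(sharp_a, sharp_b))
print(midpoint(blur(sharp_a), blur(sharp_b)))
```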

1 Like

Looks very promising! I haven’t been able to produce a preview or output yet. I have tried H.264, H.265, and DNxHR source files. The preview loaded the model but wouldn’t process. Export then flashed the preview window to black for a frame or two but didn’t create an output. AMD RX 6900 XT, Windows 10, Ryzen 9 5950X.

  • Other filters, as well as changing the output resolution and FPS, are intentionally turned off.
  • Please provide your feedback ONLY on the new AI model. Thanks.

Yeah, this is not gonna be a problem for me in the least, as the resulting output will fit quite well into my current pipeline. I’m just excited about working with this new model!

It will be some real competition for the Apollo model!

EDIT: Scratch that, it will work magnificently alongside the Apollo model!

It only uses my CPU. Should that be normal?

I do wonder where the sweet spot generally is; I mean, when do we start seeing diminishing returns? It would be cool to add a benchmark tool to Topaz on the “longer term” feature list. People/YouTubers using Topaz as a benchmark would give you guys more publicity as well, I imagine.
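
Until something official exists, a crude seconds-per-frame benchmark is easy to hand-roll around any frame-processing step. In this sketch, `process_frame` is a hypothetical placeholder for whatever filter you want to time, not a real Topaz API:

```python
import time
import numpy as np

def process_frame(frame: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for the filter under test."""
    return frame  # swap in a real processing call here

# Thirty synthetic 720p frames of random noise as a workload.
frames = [np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
          for _ in range(30)]

start = time.perf_counter()
for f in frames:
    process_frame(f)
elapsed = time.perf_counter() - start
print(f"{elapsed / len(frames):.3f} seconds per frame")
```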

  • The overall performance of the model on reducing motion blur.

I’m doing some really intense tests, so going forward, know that I’m running the resulting De-M-Blur model video through the VAI beta with frames interpolated up to 120-240, so I can see/simulate as many frames in transition as possible with the models.

The reason is that I’m testing more localised motion blur. For example, on a multi-patterned cloth worn by a fast-moving subject with isolated movement, it works amazingly well: the patterns accurately follow the motion at least 75% of the way across the simulation!

Compare that with a chequerboard pattern run through fast Chronos only (with no De-M-Blur model treatment), and you’ll end up with areas of the pattern not matching across the simulation; with Apollo, the details of the patterns themselves are rendered well and translate the motion with minimal pattern de-stitching, staying as close to the original as possible (at least 60% better than fast Chronos).

But when Apollo is used in conjunction with the De-M-Blur model, the results are amazing: the chequerboard pattern stays 80% stitched together as it moves, the residual blur between frames is 90% non-existent, and the pattern stays consistent with the motion across the video footage!

  • What areas need improvements?

I’d say the processing time: a 15-second video takes 15-20 minutes and a 3-minute video takes 1-2 hours, depending on the severity of the source video’s patterns/motion complexity, etc… but that’s due to the AI having a tonne of calculations to do, and I’m fully confident that will be shortened over time!

About the only other improvement would be to finish and refine the model to work well alongside the Apollo model, as the two used in conjunction will help you immeasurably with your work and become an invaluable tool in your pipeline!

  • What does the model do well?

It isolates the pattern while simultaneously eliminating the blur between frames when following the motion, and restores/generates a non-blurred pattern in its place. Couple that with Apollo, which already isolates and renders the pattern, and you’ll get not only roughly 90% elimination of the blur but also restoration of the patterns across the entire simulation!

This is some serious tech you guys have been working on, and I’m so proud to be even a minuscule part of its development!

3 Likes

Should I be seeing a watermark? I’ve logged out and back in, but it still appears anytime I preview.

No, you should not, especially since you’ve already successfully logged out and back into the program!