We have another alpha version of AI models for reducing motion blur. Currently, it works ONLY on Windows with a GPU, and speed performance is NOT yet optimized. Please test and let us know your feedback on the following:
The overall performance of the models on reducing motion blur.
@ Nipun: I hope you read my last post; I’ll try a different methodology and upload a video of my tests this time!
EDIT: I keep getting this type of error with every output video format I’ve tried:
Last FFmpeg messages:
Output V:*CUST DIR*\Topaz Video AI ALPHA\OUTPUT/Anyvideo_n_tvai.mov same as Input #0 - exiting
FFmpeg cannot edit existing files in-place.
So I’m just going to export my test vids as TIFF until this issue can be addressed. I’ll upload a video as soon as I’m done with the current test that I’m running…
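FFmpeg refuses to overwrite its own input, so the error above fires whenever the generated output path collides with the source file. As a workaround until it’s fixed, the output name just has to differ from the input. A minimal sketch of that idea (the `_tvai` suffix mirrors the log above, but this is illustrative, not Topaz’s actual naming logic):

```python
from pathlib import Path

def safe_output_path(input_path: str, suffix: str = "_tvai") -> str:
    """Return an output path that never collides with the input,
    appending a numeric counter if the suffixed name is taken."""
    p = Path(input_path)
    candidate = p.with_name(p.stem + suffix + p.suffix)
    counter = 1
    while candidate == p or candidate.exists():
        candidate = p.with_name(f"{p.stem}{suffix}_{counter}{p.suffix}")
        counter += 1
    return str(candidate)

print(safe_output_path("Anyvideo_n.mov"))  # Anyvideo_n_tvai.mov (if no clash on disk)
```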
Looks super promising from what I tested, but very, very slow.
7 minutes to render a 30-second clip of SD 640×480 video on an RTX 3090 (about 0.41 s/frame).
I can’t even imagine the times with HD inputs.
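For anyone wanting to compare their own runs, the per-frame cost is just render time divided by frame count. A quick sanity check on the numbers above (assuming ~30 fps, which lands close to the quoted 0.41 s/frame; the exact figure depends on the clip’s real frame rate), plus a naive pixel-count extrapolation to HD, which is an assumption rather than a measured result:

```python
def seconds_per_frame(render_seconds: float, clip_seconds: float, fps: float) -> float:
    """Render cost per frame, given total render time and clip length."""
    return render_seconds / (clip_seconds * fps)

# Numbers from the post above: 7 minutes for a 30 s SD clip.
sd = seconds_per_frame(7 * 60, 30, 29.97)
print(f"{sd:.2f} s/frame at 640x480")

# Scaling naively with pixel count, 1080p would cost roughly:
scale = (1920 * 1080) / (640 * 480)
print(f"~{sd * scale:.1f} s/frame for HD, all else being equal")
```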
I tested it just now. The application spends 100% of its time on “generating preview…”. When processing finishes, there is no saved file to be found. The preview via the previews menu shows something after processing completes, but I cannot compare it with the original footage because of the lack of comparison views. While I greatly appreciate the time and effort put into this fantastic new option, I am also getting sick of continuously staring at “generating preview…” with no control image at all and no comparison view available. Holding the mouse button in the preview after processing does not change anything either. But I feel like I’m beating a dead horse here.
@ Nipun:
Just uploaded a lossless cat-attack video; other vids available upon request if it doesn’t fit the criteria for showcasing the AI… I have vids that showcase the AI better, and while there’s no nudity (it’s dancing), it’s still not necessarily safe for work despite being allowed on YouTube…
EDIT: The dancing I’m talking about is belly dancing, which I understand to be NSFW…
Seems like it will be better for this to run before frame interpolation; whether before or after image enhancement I am less sure, probably after.
This being slower highlights the need for an easy way (via the UI, a file format, or both) to specify between which times we want which filters active. For something this slow, maybe we’d manually specify the time ranges where this filter runs, sandwiched between image enhancement and frame interpolation, and at other times just enhance + interpolate, etc.
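The “file format” half of that suggestion could be as simple as a list of time ranges mapped to filter chains. A hypothetical sketch of what such a schedule might look like as plain data; the filter names (“deblur”, “enhance”, “interpolate”) are illustrative, not actual Topaz Video AI identifiers:

```python
# Each entry: (start_seconds, end_seconds, filter chain for that range).
schedule = [
    (0.0, 12.5, ["enhance", "deblur", "interpolate"]),  # run the slow deblur only here
    (12.5, 30.0, ["enhance", "interpolate"]),           # cheap path everywhere else
]

def filters_at(t: float, sched=schedule) -> list:
    """Return the filter chain active at timestamp t (seconds)."""
    for start, end, chain in sched:
        if start <= t < end:
            return chain
    return []

print(filters_at(5.0))   # ['enhance', 'deblur', 'interpolate']
print(filters_at(20.0))  # ['enhance', 'interpolate']
```

For what it’s worth, plain FFmpeg already supports per-filter timeline editing via the `enable` option (e.g. `enable='between(t,0,12.5)'`), so the plumbing for this kind of scheduling exists in the engine the app is built on.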
I think the devs are more focused on teaching the AI model how to best eliminate blur across a myriad of real-world scenarios than on UI/program changes to improve speed; speed will come later once the model learns more!
There are at least three types of the model to choose from, which may vary in versatility depending on the situation; they just need more development, as the model is in the alpha stage after all.
The speed should take care of itself as the model learns its job of eliminating motion blur. Motion blur affects both the subject and the subject’s clothing/hair, which are difficult beasts to master in their own right!
I tried it several times, but it’s not working. I also removed it using Revo Uninstaller and reinstalled, but that didn’t work either… I cannot get rid of this watermark.
I guess much of the time when I talk about motion blur, I am in fact talking about ghosting.
Will the model be trained to handle this as well? I see it in non-TV content too; I think it’s due to the camera’s capture rate being too low for the motion being recorded, or something silly done to the video in post-processing.
Pasting something related to that here:
“Ghosting and Interlacing are not related in first case! However Ghosting may be the result of using a bad Deinterlacer (e.g. “Blending”), when converting from Interlaced to Progressive.” - this may be the cause of what appears to be “motion blur” in some cases as well.
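The blend-deinterlacing point is easy to see numerically: blending averages the two fields of a frame, and if the subject moved between field captures, the average contains two half-brightness copies of it. A deliberately simplified one-dimensional illustration (a single “scanline”, not a real deinterlacer):

```python
# A bright object (255) sits at position 2 in the even field and has
# moved to position 3 by the time the odd field is captured (~1/60 s later).
field_a = [0, 0, 255, 0, 0]
field_b = [0, 0, 0, 255, 0]

# "Blend" deinterlacing: average the two fields per pixel.
blended = [(a + b) // 2 for a, b in zip(field_a, field_b)]
print(blended)  # [0, 0, 127, 127, 0] -> two faint copies instead of one sharp object
```

That doubled, semi-transparent edge is the ghost, and it is baked into the progressive frames, which is why it can read as “motion blur” even though the camera never blurred anything.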