- The overall performance of the model at reducing motion blur.
I’m running some really intense tests, so going forward, know that I’m feeding the resulting De-M-Blur output video into VAI BETA and interpolating it up to 120-240 fps, so I can see/simulate as many frames in transition as possible with the models (rough stand-in sketch of that step just below).
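To make that inspection step concrete, here’s a minimal sketch. To be clear, this is NOT the VAI BETA / De-M-Blur pipeline itself — just ffmpeg’s generic minterpolate filter driven from Python, with placeholder file names, showing what “interpolate up to 120 fps so you can step through the transitions frame by frame” looks like in practice.

```python
# Generic stand-in for the interpolation step (not the VAI BETA models).
# File names below are placeholders, not actual project files.
import subprocess

def interpolate_for_inspection(src: str, dst: str, target_fps: int = 120) -> None:
    """Motion-interpolate `src` up to `target_fps` so per-frame transitions
    (e.g. how a cloth pattern tracks the motion) are easier to step through."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,
            "-vf", f"minterpolate=fps={target_fps}:mi_mode=mci",
            dst,
        ],
        check=True,
    )

interpolate_for_inspection("de-m-blur_output.mp4", "inspection_120fps.mp4", 120)
```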
The reason is that I’m testing more localised motion blur. For example, on a multi-patterned cloth worn by a fast-moving subject with isolated movement, it works amazingly well: the patterns accurately follow the motion at least 75% of the way across the simulation!
Compare that with a chequerboard pattern run through fast chronos alone (with no De-M-Blur model treatment), and you’ll end up with areas of the pattern that don’t match the rest of the pattern across the simulation. With apollo, the details within the patterns themselves are rendered well and translate the motion with minimal pattern de-stitching, staying as close to the original as possible (at least 60% better than fast chronos).
But when apollo is used in conjunction with the De-M-Blur model, the results are amazing: the chequerboard pattern stays roughly 80% stitched together as it moves, the resulting blur between the frames is about 90% non-existent, and the pattern stays consistent with the motion across the video footage!
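Those percentages are my own eyeball estimates, not measurements from the developers. If anyone wants a rough, repeatable proxy for “how well the pattern stays stitched”, one hypothetical approach is averaging frame-to-frame structural similarity (SSIM) over the clip — purely my sketch, assuming OpenCV and scikit-image are installed, and the clip file names are made up for illustration.

```python
# Hypothetical sanity-check metric, not an official De-M-Blur/apollo measurement.
import cv2
from skimage.metrics import structural_similarity as ssim

def mean_frame_to_frame_ssim(path: str) -> float:
    """Average SSIM between consecutive grayscale frames; higher values suggest
    the pattern/texture is carried more coherently from one frame to the next."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    scores = []
    while ok:
        ok, curr = cap.read()
        if not ok:
            break
        scores.append(
            ssim(
                cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY),
            )
        )
        prev = curr
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

# e.g. compare the same clip with and without the De-M-Blur pass (placeholder names):
# mean_frame_to_frame_ssim("chequerboard_apollo_only.mp4")
# mean_frame_to_frame_ssim("chequerboard_apollo_plus_demblur.mp4")
```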
- What areas need improvement?
I’d say the time it takes: a 15-second video simulates in 15-20 minutes and a 3-minute video in 1-2 hours, depending on the severity of the host video’s patterns, motion complexity, etc. (quick maths below). But that’s down to the AI having an absolute tonne of calculations to do, and I’m fully confident that will be shortened over time!
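For perspective, here’s the back-of-envelope on those figures (the input values are just the ranges I reported above, not developer benchmarks): the short clip runs at roughly 70x slower than realtime, while the longer clip is closer to 30x, so longer footage is actually proportionally quicker per second of video.

```python
# Back-of-envelope on the processing times reported above.
def slowdown_factor(clip_seconds: float, processing_minutes: float) -> float:
    """How many times slower than realtime the simulation runs."""
    return (processing_minutes * 60) / clip_seconds

print(slowdown_factor(15, 17.5))   # ~70x realtime for a 15 s clip (15-20 min)
print(slowdown_factor(180, 90))    # ~30x realtime for a 3 min clip (1-2 h)
```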
About the only other improvement would be to finish and refine the model to work well alongside the apollo model, as the two used in conjunction will help immeasurably with your work and become an invaluable tool in your pipeline!
- What does the model do well?
It isolates the pattern while simultaneously eliminating the blur in between frames as it follows the motion, and restores/generates a non-blurred pattern in its place. Couple that with apollo, which already isolates and renders the pattern, and you get roughly 90% elimination of the blur as well as restoration of the patterns across the entire simulation!
This is some serious tech you guys have been working on, and I’m so proud to be even a minuscule part of its development!