Hey,
I’m developing new video-editing software (a personal initiative, nothing professional) and would like to integrate Topaz Video into it to ease my workflow. Basically, during export, the user can choose to add Topaz features (enhancement, interpolation, …) to the pipeline if they have a valid license.
Looking into how to do that, I saw that Topaz Video is based on ffmpeg filters: tvai-up (upscaling) and tvai-fi (frame interpolation).
tvai-up takes several parameters (model, model parameters, and target resolution), but then the original ffmpeg scale filter is also used to “upscale” to the desired resolution. In other words, it looks like the enhancement feature works in two steps:
- first, improve the video stream, leaving the resolution at its original size;
- then, upscale from the original size to the new desired size.
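For context, the filtergraphs I’m reproducing look roughly like this. This is only a sketch: it assumes Topaz’s bundled ffmpeg build (the filter registers as `tvai_up`, with an underscore, in the builds I’ve seen), the model name and parameter values are illustrative, and the file paths are placeholders:

```shell
# Illustrative only: requires Topaz Video's own ffmpeg build and a valid license.
# tvai_up enhances the stream (model name and parameters are examples, not authoritative),
# then the seemingly stock scale filter brings it to the target resolution.
ffmpeg -i input.mp4 \
  -vf "tvai_up=model=prob-3:scale=0:w=3840:h=2160,scale=3840:2160" \
  -c:v libx264 output.mp4
```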
And I said it “looks like”, because when I test this I do see the original resolution between the tvai-up filter and the scale filter, and the desired resolution after the scale filter — yet the final result contains interpolated pixels, not the “duplicated” pixels I get when using the ffmpeg scale filter alone. Comparing Topaz’s scale filter with the original ffmpeg scale filter, I can’t see any difference between the two. And using the tvai-up filter without the scale filter simply doesn’t work.

So I don’t really understand how this works, technically speaking, at the ffmpeg level. I’m not interested in the tvai-up filter implementation itself — I fully understand that’s Topaz’s property. I’m just wondering how the tvai-up filter can have the same input and output size, followed by an upscale through what looks like the original ffmpeg scale filter, and still produce interpolated pixels that were computed in the previous filter.
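For what it’s worth, the way I checked the intermediate resolution was by inserting the stock showinfo filter between the stages; it logs each frame’s dimensions (`s=WxH`), so the first showinfo reveals what tvai-up actually emits before scale touches it. Again, this assumes Topaz’s ffmpeg build, and the model/parameter values are placeholders:

```shell
# showinfo prints frame metadata (including size) at each point in the chain,
# so the first occurrence shows tvai_up's real output resolution.
ffmpeg -i input.mp4 \
  -vf "tvai_up=model=prob-3:scale=0:w=3840:h=2160,showinfo,scale=3840:2160,showinfo" \
  -f null -
```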
Thanks!