I’ve not considered this a “bug” so much as a frustrating software behavior. I wanted to use the Dione models for 2x de-interlacing, with no upscaling. What I’m finding is that the Dione models require approximately 5x the encoding time compared to running the same video through a QTGMC bob de-interlace outside of TVAI (a sketch of that workflow is below). I don’t know whether either method of de-interlacing is superior; however, for whatever small quality differences may exist, I’d rather save the time and use QTGMC. Has anyone else experienced this?
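For reference, this is a minimal sketch of the kind of QTGMC bob (double-rate) de-interlace script used outside TVAI, assuming VapourSynth with the havsfunc script and its plugin dependencies installed; the input file name and the TFF field order are placeholders for your own source.

```python
# Minimal QTGMC bob de-interlace sketch (VapourSynth + havsfunc assumed).
import vapoursynth as vs
import havsfunc as haf

core = vs.core

# Load the interlaced source (ffms2 is one common source filter).
clip = core.ffms2.Source('input_interlaced.mkv')

# FPSDivisor=1 keeps both fields, doubling the frame rate
# (e.g. 29.97i -> 59.94p), i.e. 2x de-interlacing with no upscaling.
deinterlaced = haf.QTGMC(clip, Preset='Slower', TFF=True, FPSDivisor=1)

deinterlaced.set_output()
```

With a recent VapourSynth the script can then be piped straight into an encoder, e.g. something like `vspipe -c y4m qtgmc_bob.vpy - | ffmpeg -i - -c:v libx264 -crf 18 output.mkv`.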
For me, QTGMC is always much superior to the VAI models, and actually faster.
Thanks. That makes me feel better about my workflow. I don’t have the patience to wait a day for Dione to de-interlace a movie-length video when QTGMC can complete the job in 5 hours.
No problem. I’ve tried lots of them, and QTGMC is the best because it is fully configurable for spatial and temporal noise, and it de-interlaces better without leaving artifacts the way Dione TV or Dione DV can.
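As a hedged illustration of the noise configurability mentioned above (again assuming VapourSynth + havsfunc), these are the kinds of QTGMC parameters involved; the values shown are illustrative, not recommendations.

```python
# QTGMC bob with its built-in noise handling enabled (illustrative values).
import vapoursynth as vs
import havsfunc as haf

core = vs.core
clip = core.ffms2.Source('input_interlaced.mkv')

deinterlaced = haf.QTGMC(
    clip,
    Preset='Slower',
    TFF=True,
    FPSDivisor=1,
    EZDenoise=2.0,   # overall denoise strength (0 = off)
    NoiseTR=2,       # temporal radius used for noise analysis
    DenoiseMC=True,  # motion-compensate the denoising pass
)

deinterlaced.set_output()
```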