When dealing with progressive footage, I have certain model choices that I have found work wonders - either as intermediate steps or in one step - models such as Artemis and Nyx, for example.
I have, however, tons of interlaced PAL footage (and a couple of interlaced NTSC discs as well) and my model choices are far more limited. Given the excellent “intermediate” results I have gotten with Artemis and Nyx, I would like to use these models on the SD interlaced footage before I move on to the actual upscaling (for which I use Proteus - I just consistently get the best results with the fewest artefacts - but that is just me). Or at the very least, I’d like to experiment to see whether the difference is worth the added processing time and effort.
But it seems there is no option in Topaz to simply de-interlace and nothing else, so I am guessing I am going to have to find an alternative of at least equal quality to Topaz’s excellent de-interlacing. Furthermore, I would want to output to something like FFV1 lossless so that I am not losing anything at all before passing that de-interlaced footage back into Topaz.
I am guessing my option is probably the FFmpeg command line, but before I go on a long experimentation process, has anybody been there and done this and achieved top-tier results?
Thanks
This should get you most of the way there. It is not that hard, once you go through the pain of installing a vanilla FFmpeg build.
$ ffmpeg -i "${infile}" -filter:v "bwdif=mode='send_field':parity='auto':deint='all',setparams=field_mode='prog'" -codec:v 'ffv1' -level 3 -pix_fmt:v 'rgb48le' -color_range:v 'pc' -codec:a 'copy' -f 'nut' "deinterlaced_rgb48le.nut" -y
That FFv1/RGB/NUT output file can be previewed in MPV or VLC. It will be big, but it will have the full temporal resolution (25>50 or 29.97>59.94) and will be encoded with a lossless codec.
The command uses bwdif, deinterlacing each temporal field into a separate frame, and outputs the rgb48le full-range pixel format. The intermediate RGB pixel format is just to skip a pixel-format conversion step later and make TVAI’s life easier in the final enhance. It encodes as FFv1 level 3 lossless and stores it in a NUT container.
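If you want to sanity-check the intermediate before handing it to TVAI, something along these lines (ffprobe ships with the same FFmpeg build; the filename is just the output from the command above) will confirm the doubled frame rate, progressive frames and pixel format:

$ ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,pix_fmt,r_frame_rate,field_order "deinterlaced_rgb48le.nut"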
Alternatives are the QTGMC deinterlacer, which is reported to be slightly better than bwdif, and nnedi, which is slower. bwdif is a fine compromise between speed and quality, but AI it is not.
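If you want to try nnedi, the drop-in change would look something like the line below - just note that FFmpeg’s nnedi filter needs the nnedi3_weights.bin file on disk (the path here is only a placeholder), and the rest of the command stays as above:

$ ffmpeg -i "${infile}" -filter:v "nnedi=weights='nnedi3_weights.bin':field='af':deint='all',setparams=field_mode='prog'" -codec:v 'ffv1' -level 3 -pix_fmt:v 'rgb48le' -color_range:v 'pc' -codec:a 'copy' -f 'nut' "deinterlaced_rgb48le.nut" -y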
That said, internally the Topaz GUI just uses bwdif for deinterlacing anyway - once you dig deep into the FFmpeg command that TVAI uses - so I’m not sure you are saving much by deinterlacing via a preprocessing step in this case. The advantage of a preprocessing step is that you can do all sorts of FFmpeg tweaks beforehand (finer control over bwdif’s send_field vs send_frame modes, hqdn3d temporal denoise for digitized sources, and tagging colorspace/primaries/transfer characteristics for colour accuracy). But for just deinterlacing? Topaz will be using bwdif internally anyway.
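To illustrate the kind of tweaks I mean, a preprocessing pass might look roughly like this - the hqdn3d strengths and the PAL colour tags are just example values for illustration, so adjust or drop them for your own sources:

$ ffmpeg -i "${infile}" -filter:v "bwdif=mode='send_field':parity='auto':deint='all',hqdn3d=2:1:3:3,setparams=field_mode='prog':color_primaries='bt470bg':color_trc='gamma28':colorspace='bt470bg'" -codec:v 'ffv1' -level 3 -pix_fmt:v 'rgb48le' -color_range:v 'pc' -codec:a 'copy' -f 'nut' "deinterlaced_denoised_rgb48le.nut" -y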
[I’m sure Topaz are not trying to be disingenuous by not exposing in the dropdowns/GUI that FFmpeg’s bwdif is the real deinterlace filter - I’m sure it comes from a good place, and the “deinterlace-optimized” models probably do perform *some magic* that may be tuned to reduce noise from the deinterlacing step - but bwdif is still the actual deinterlace filter under the hood.]
Thank you very much for this - it will be a great help. I am not necessarily committing to this workflow - I simply wish to experiment and compare end results. There is only one reason for wishing to de-interlace outside of Topaz: with some footage I have (which is 576p), I have had interesting (and very good) results using Nyx without changing the resolution and then using Proteus to upscale the Nyx (FFV1) lossless output to Full HD. It is not a good workflow if the source material is fairly good SD quality to begin with, but a lot of what I have is pretty bad. The Nyx AI may have its own drawbacks, but in some cases those drawbacks are far more palatable than the drawbacks in the original material.
But Nyx of course can only handle progressive footage and that is the only reason for wanting to try this. As I say, my tests have worked quite well with 576p footage so I am interested in turning the 576i into progressive so I can test it in the same way.
That makes sense and sounds like a perfectly reasonable use case. I did not realize that you could not use bwdif + Nyx in the GUI. That would be all the more reason for bwdif deinterlace to be a separate processor in the app, independent of any enhancement. If you care, post an enhancement ticket for something along the lines of…
"Move deinterlacing (bwdif) and inverse telecine (fieldmatch) into a separate “Deinterlace / IVTC” section of the app (before Enhance > Frame Interpolation > Stabilization > Motion Deblur > Grain), so that a user has granular control of the deinterlace and inverse telecine filters. This would empower a user to be able to use bwdif deinterlace with TVAI filters which cannot currently be used with interlaced footage (such as Nyx).
Maybe you’ll get some votes. Who knows?
Alternatively, Field & Frame Enhance could be a section which covers all of the temporal functions…
- Deinterlace
- Inverse Telecine
- Frame Interpolation
- Slowmo
Personally, I only use the GUI to test a workflow, but I do the real tvai_up processing with Topaz’s custom FFmpeg command line, where I can create custom filter-chains.
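For what it’s worth, such a chained command looks roughly like the line below. This is only a sketch: tvai_up is Topaz’s proprietary filter, the model ID and scale here are placeholders for whichever model you want (check the exact option names and model IDs against the ffmpeg binary that ships with your TVAI install), and it has to be run with that bundled ffmpeg so the tvai filters and models are actually available:

$ ffmpeg -i "deinterlaced_rgb48le.nut" -filter:v "tvai_up=model=prob-3:scale=2" -codec:v 'ffv1' -level 3 -codec:a 'copy' -f 'nut' "upscaled.nut" -y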