Interlaced Progressive Specific

To date there is no way to actually fix interlaced progressive content and restore it as though it had been properly deinterlaced. It’s a twofold issue, and so far it has never been treated as such. The first issue is trying to reclaim image quality. Some models do an OK job at this, but none address the main problem interlaced progressive content brings: the motion vectors have been destroyed. A model for fixing interlaced progressive content should also double the frame rate, and there is no good motion interpolation for this purpose currently. All Topaz motion interpolation is geared toward unblurring fast motion, whereas if I simply deinterlaced the content the motion would still be blurry, just smoother. I would love a specific model for interlaced progressive content that addresses both of these issues.

the framerate dropdown will offer deinterlaced*1 and *2 as soon as you mark the content accordingly.

Unfortunately, since there are no top or bottom fields left in interlaced progressive content, attempting to deinterlace it will do nothing except repeat the frames.

i don't have any such footage at hand. can you upload a short clip? i would like to take a look at it, maybe there is something i can do.

Sure, I can provide you some clips that are interlaced progressive to play with. I don’t know your level of knowledge with video editing, but just so you know what interlaced progressive is and where it comes from: pretty much any content that was created before the 00s and posted on the Internet will be interlaced progressive, because the NTSC standard used to be 29.97 fps interlaced. TV receivers and VCRs were built with that standard in mind and automatically deinterlace it. When you take the content to a computer, the computer doesn’t know to deinterlace it, and you see the interlaced frames. At this point there’s no problem: it can easily be deinterlaced, and 59.94 progressive frames per second can be obtained from the 29.97 interlaced, because the interlaced frames still store the field data and motion vectors. The problem arises when the interlaced content is run through an encoder that treats it as progressive. The field and motion vector data is lost, and now it’s 29.97 awful-looking progressive frames per second with no field data or motion data left to reassemble it from.
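To make the field relationship above concrete, here is a toy numpy sketch (purely illustrative, not any Topaz or AviSynth internal): two fields captured 1/59.94 s apart are woven into one interlaced frame, and a naive “bob” deinterlace splits them back out and line-doubles each one, which is why 29.97i legitimately yields 59.94p.

```python
import numpy as np

H, W = 8, 8
field_a = np.full((H // 2, W), 10, dtype=np.uint8)   # even scanlines, time t
field_b = np.full((H // 2, W), 200, dtype=np.uint8)  # odd scanlines, time t + 1/59.94

# Weave: even rows come from field A, odd rows from field B.
interlaced = np.empty((H, W), dtype=np.uint8)
interlaced[0::2] = field_a
interlaced[1::2] = field_b

# Bob deinterlace: separate the fields and line-double each into a full frame.
frame_t0 = np.repeat(interlaced[0::2], 2, axis=0)  # full frame for time t
frame_t1 = np.repeat(interlaced[1::2], 2, axis=0)  # full frame for time t + 1

# One interlaced frame in, two progressive frames out: the frame rate doubles.
assert frame_t0.shape == frame_t1.shape == (H, W)
```

Once an encoder treats the woven frame as plain progressive, scales it, and compresses it, the `0::2` / `1::2` rows no longer line up with the original fields, and this clean separation becomes impossible.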

Never mind, it seems you can only attach images here, not video clips?

you can upload zip archives afaik

Interlaced (13.2 MB)

i will take a look later on, may take a bit

AviSynth is how I currently handle them.

that's evil.

this footage looks like it has been interpolated to a higher resolution right after the original footage was simply declared to be progressive. thus the distance between combed lines is not evenly distributed, and the lines are contaminated by remains of neighboring lines, put into them through the interpolation process. finally, the low bitrate has killed any chance of coming back from this :frowning:

i think i’d be limited to damage control.
what do you do to clean this mess up?

i played around a bit and think i found a way to reverse these crimes against intelligence in a somewhat productive manner… check my uploaded files…
2-Interlaced (29.9 MB)
2-Interlaced (22.2 MB)

To my eyes, what you’ve done looks very fake and smeared; I like a natural look. I’d actually prefer the original footage to your solutions.

In AviSynth I use this script:
SetFilterMTMode("QTGMCp", 3)
FFmpegSource2("", atrack=1)
# Pass 1: treat the frames as progressive with residual combing (InputType=2)
t = QTGMCp( Preset="Placebo", InputType=2, SourceMatch=3, Lossless=2, NoiseProcess=2, GrainRestore=0.4, NoiseRestore=0.15, Sigma=1.8, NoiseDeint="Generate", StabilizeNoise=true )
# Pass 2: same settings with the alternate field assumption (InputType=3)
b = QTGMCp( Preset="Placebo", InputType=3, SourceMatch=3, Lossless=2, NoiseProcess=2, GrainRestore=0.4, NoiseRestore=0.15, Sigma=1.8, NoiseDeint="Generate", StabilizeNoise=true, PrevGlobals="Reuse" )
# Combine the two passes to clean up residual artifacts
Repair( t, b, 1 )

That produces 29.97 usable frames per second, which I’ll attach as fix 1.

Interlaced progressive fix (15.0 MB)

From there I use aaa9 and apf2, and I add noise back in at 3 to retain the original look of the content. Again, in my opinion, if someone can look at it and say something looks wrong or fake, I don’t want it. This footage is part of an entire 3-hour race broadcast, and this is only 30 seconds of it. I can make this piece look almost perfect with Iris v2, but Iris v2 doesn’t work with the actual race footage, and the same goes for Apollo Fast: it attempts to deblur motion that should remain blurred, making it look completely unnatural. Which is why I say we need a specific model to fix interlaced progressive content, because currently nothing does, and I have a ton of it.

Interlaced progressive (39.2 MB)

our step1 is nearly identical (i use vdub or hybrid), but in step2, i let iris/proteus run over it for a cleanup and for 1440p@60fps.

the problem with a special model will be that it (probably?) cannot realise how it will have to downscale the footage first, and by how much, to get back to something it can then use to deinterlace it…

Yes, but 1440p is definitely too heavy-handed on footage like this; it’ll never look good if you overdo it. That’s not really how AI works, though: it doesn’t need to know what’s wrong with footage to fix it, it just needs to be trained on footage with these problems. Topaz has never trained any models to deal with interlaced progressive. It’s as simple as taking interlaced footage, feeding one side of the model properly deinterlaced content and the other side this awful interlaced progressive content, and it’ll train itself to make one look like the other. Trust me, it’s in their power lol

(raises the question of what they have done so far for implementing the “interlaced progressive” option… this is not a rant, i'm just clueless)

My opinion is that Topaz markets itself as enhancement software, and what I’m asking for is repair. They don’t really have any “repair” models; they’re all designed to enhance usable footage. That’s why my first step with most footage is AviSynth. I only use Topaz for upscaling and very minor touch-up. However, I think they could implement an actual interlaced progressive model if they wanted to, and I for one would love it.

Dinesh - I agree with you. Based on the age, and that it is US content, that is probably 480i that has been upscaled to 720p (without either deinterlacing or interlace-aware scaling). It may have even been 240i if it was digitized from a VT.

It is either going to be 240i (VT), 480i (NTSC/ATSC) or 1080i (ATSC), but it is unlikely to be 720i, since that’s an oddball combo. And it was unlikely to be 1080i.

I would assume that the best way of recovery with FFmpeg (or AviSynth) is to scale back down to 480 with scale=size=ntsc (which isn’t going to be perfect and will lead to pixel blur, but should at least put the interlace lines close to the original capture), followed by bwdif=mode=send_field:parity=bff:deint=all, and let bwdif (or AviSynth QTGMC) do its magic, possibly followed by hqdn3d, exporting to FFV1 lossless before running it through Iris or Artemis back up to 720p. I wouldn’t push it further than 720p.

If the assumption is correct - The irreversible damage has already been done when it was scaled from 480i to 720p without deinterlace. I don’t think it is as much of an “untagged interlace” or field-picture vs frame-picture problem as a scaling issue.

You’re correct, that’s the exact issue. I’d love Topaz to have a model to fix this. You’ve outlined the best way to get 29.97 decent fps back, but getting it to 59.94 fps, as though it had been properly deinterlaced to begin with, is the real problem. And in footage like this, which is auto racing, 29.97 is extremely choppy. Any available motion interpolation just attempts to unblur the motion of the cars, making it look extremely unnatural.

I don’t know how you could train a generic model for scaled-interlaced content since the corpus of broken files and pure, unmolested originals would be very hard to gather - unless someone creates a series of broken interlace+scale files from a progressive source.

A non-model based approach would be an algo that looks for interlace-looking lines (combdetect?) and then determines a downscale factor that results in the lines being only 1px tall in the luma (since chroma planes can be subsampled in 4:2:0). This feels like a pre-process technique that should be done prior to Topaz.

I ran the original file through the following idet (interlace detect) filterchain and idet detected the highest number of interlaced frames when it was down-scaled to qntsc (240) - so I’m therefore concluding that this was digitized/captured from VHS VT. A geq filter was used so that the idet analysis was only performed on the luma plane, since chroma is likely to be subsampled. Someone may be able to come up with a better scale factor, or challenge the hypothesis.

$ ffmpeg -an -sn -dn -i ./Interlaced\ progressive.mp4 -vf "geq=lum_expr='lum(X,Y)':cb=128:cr=128,scale=size=qntsc,idet" -f null -

I’m therefore assuming that the following is going to get as close to the original digitization from VHS VT, as if it were captured and deinterlaced to 240p. That is, of course, only a guess and others may have better suggestions.

$ ffmpeg -i ./Interlaced\ progressive.mp4 -vf "scale=size=qntsc,setparams=field_mode=bff,bwdif=mode=send_field:parity=bff:deint=all" -c:v ffv1 -level 3 -c:a copy out.mkv

It is worth trying both send_field and send_frame.

I’m too stupid to work out how this can be performed iteratively in order to identify the best downscale factor for any given content - and I’m assuming that a human would need to make an educated guess (NTSC vs PAL, age of the content, frame-rate, color-space, likely source of the content) to identify how to restore it back to one-scanline combing before deinterlace.
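For what it’s worth, the iterative search can at least be sketched. The toy Python below is a stand-in for an ffmpeg idet loop, not production code: it uses a crude comb metric (adjacent-row difference minus two-apart-row difference) and nearest-neighbour vertical scaling on synthetic data, then tries candidate heights and keeps the one where single-line combing is strongest. All names here are made up for illustration.

```python
import numpy as np

def comb_metric(img):
    """Mean adjacent-row difference minus mean two-apart-row difference.
    High when scanlines alternate every single row (1-px combing)."""
    d1 = np.abs(np.diff(img.astype(float), axis=0)).mean()
    d2 = np.abs(img[2:].astype(float) - img[:-2].astype(float)).mean()
    return d1 - d2

def nn_resize_rows(img, h_out):
    """Nearest-neighbour vertical resize (crude stand-in for ffmpeg scale)."""
    idx = np.arange(h_out) * img.shape[0] // h_out
    return img[idx]

# Synthetic stand-in for the broken footage: a 240-line interlaced frame
# (two very different fields woven together) naively upscaled to 720 lines.
rng = np.random.default_rng(0)
field0 = rng.integers(0, 50, (120, 64))
field1 = rng.integers(200, 255, (120, 64))
src240 = np.empty((240, 64), dtype=np.int64)
src240[0::2], src240[1::2] = field0, field1
broken720 = np.repeat(src240, 3, axis=0)          # "scaled without deinterlacing"

# Iterative search: the height that maximizes combing is the likely capture size.
candidates = [120, 240, 360, 480, 720]
best = max(candidates, key=lambda h: comb_metric(nn_resize_rows(broken720, h)))
print(best)  # -> 240, the original capture height
```

On real footage the upscale was interpolated rather than line-repeated, so the peak would be blurrier, but the idea of scoring each candidate downscale and picking the strongest comb response is the same.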

You’re thinking about it wrong. You can take any interlaced footage, scale it up without deinterlacing, and turn it into the footage I’ve provided. So as long as I have the untouched interlaced footage, I can simply deinterlace it and train the model that way. It’s easy to recreate bad footage to train with.
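As a sketch of that training-data idea (a hypothetical pipeline, not anything Topaz ships): take clean progressive frames, weave pairs of them into interlaced frames, upscale without deinterlacing to produce the “bad” inputs, and keep the clean frames as targets. In toy numpy form:

```python
import numpy as np

def degrade_pair(frames_p60):
    """frames_p60: list of clean progressive frames (H, W), e.g. 59.94 fps.
    Returns (bad, target): one degraded 29.97p frame per pair of clean
    frames, built by weaving their fields and then scaling the woven frame
    as if it were progressive (here a naive 3x nearest-neighbour repeat)."""
    bad, target = [], []
    for a, b in zip(frames_p60[0::2], frames_p60[1::2]):
        woven = np.empty_like(a)
        woven[0::2] = a[0::2]          # even lines from frame t   (field 1)
        woven[1::2] = b[1::2]          # odd lines from frame t+1  (field 2)
        bad.append(np.repeat(woven, 3, axis=0))   # upscaled w/o deinterlacing
        target.append((a, b))          # the model should learn to give both back
    return bad, target

frames = [np.full((8, 8), v, dtype=np.uint8) for v in (10, 200, 10, 200)]
bad, target = degrade_pair(frames)
assert len(bad) == 2 and bad[0].shape == (24, 8)
```

A real pipeline would also re-encode the degraded frames at a low bitrate so the model sees the compression damage too, but the pairing logic is this simple.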