Very old video, multiply mangled (but guessable): low-framerate cuts cause interlaced messes

I have a set of very old videos, which come to me through lots of mangles. Based on what I know, I’m fairly confident the provenance is roughly this, in this order:

  1. 8mm or 16mm reels, ~10 fps, maybe 12, but definitely less than 15.
  2. TV signal (video tape), probably American (29.97 fps, interlaced)
  3. Digitized to who knows what format originally
  4. Who knows how many encodes/transcodes
  5. But finally: 320x240p, 30 fps RealMedia (you heard me)

Now, for the most part, I can use Topaz Video AI to do some marvelous things to this. But there is one category of artifacts I cannot persuade the models to deal with.

It was probably literally videotaped from an actual reel-to-reel projection, rather than telecined on a dedicated machine, so the original ~10 fps film frames and the 29.97 fps interlaced video frames naturally don’t line up. In particular, there are “cuts” in the original that show up as inter-cut, interlaced frames in the stream I have. The “correct” fixed output should simply drop these frames, in favor of the true frames on either side.

Any thoughts on how to accomplish this in a semi-automated way?

When I get a video that was originally 23.976 fps but was incorrectly re-encoded at 30 fps, I just use a converter of my choice and save it back at 23.976 fps to get rid of the falsely introduced duplicate frames. You could try converting it back to 10 fps with a regular video converter, if that was the original frame rate.
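
For instance, if the duplicates sit at a fixed position in each cycle, the Avisynth version of this is tiny. A rough sketch, assuming a clean 23.976-in-30 pattern and an FFMS2-style source filter (the filename is a placeholder):

```
FFVideoSource("input.avi")   # hypothetical file; any source filter works
SelectEvery(5, 0, 1, 2, 3)   # keep frames 0-3 of every 5, dropping the dupe
AssumeFPS(24000, 1001)       # restamp the result as 23.976 fps
```

If the dupe position drifts over the clip, TDecimate from the TIVTC plugin can find and drop the most duplicate-like frame of each cycle automatically.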


So maybe use ffmpeg to decimate the video, and manually tune the offset to find a phase where it “misses” the in-between frames most or all of the time, do you think?
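
Something like this, perhaps, translated into Avisynth terms (which the replies below lean on); the filename and the offset of 1 are just placeholders to tune:

```
FFVideoSource("clip.avi")   # hypothetical ~30 fps source
# 30 fps -> 10 fps: keep one frame in every three. The offset (second
# argument) is the knob to tune: try 0, 1, and 2, and keep whichever
# phase misses the blended frames most often.
SelectEvery(3, 1)
AssumeFPS(10)
```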

I run into similar situations sometimes, where the encoded frame rate is clearly not an even multiple of the source frame rate. The process I go through then is as follows:

  1. Identify the original frame rate (the temporal resolution) by frame-stepping the clip (e.g. in VirtualDub), watching for dupe frames and skips.
  2. Decide on a decimation strategy to get rid of as many dupes as possible while sacrificing as little actual motion as possible. There’s seldom a perfect decimation, so I tend to err on the side of removing some good frames to get rid of all the dupes.
  3. Interpolate away skippy motion. Looking at the now-decimated, low-fps clip (e.g. from Avisynth’s SelectEvery filter), check for any skips/jumps in the motion. These indicate that frames were destroyed/dropped either earlier in the clip’s lineage, before I got it, or by the aggressive decimation in the previous step. I identify a cycle, such as 3 skips every 7 frames, or whatever watching the video reveals, then interpolate that many new frames per cycle between the frames with the most motion and their respective preceding ones (see the sketch after this list).
  4. Now I have a non-stuttering clip that I can use for further processing, such as running TVAI on it.
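
Here is the sketch mentioned in step 3, covering steps 2–3. It assumes the dupes land on a stable cycle and that MVTools2 is installed for the motion interpolation; the cycle numbers and filename are made up for illustration:

```
FFVideoSource("clip.avi")                 # hypothetical ~30 fps source
dec = SelectEvery(7, 0, 3, 5)             # aggressive decimation: keep 3 of every 7
sup = dec.MSuper()                        # prepare the motion-analysis clip
bv  = sup.MAnalyse(isb=true)              # backward motion vectors
fv  = sup.MAnalyse(isb=false)             # forward motion vectors
dec.MFlowFps(sup, bv, fv, num=12, den=1)  # motion-interpolate to an even 12 fps
```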

TVAI’s motion filters really hate uneven motion, so running through a process like the above before bringing the clip into TVAI is a must.

I’ve been doing this for a decade (sans TVAI), so of course I’ve automated it (extracting motion information, then identifying the optimal cycle, decimation factor, and interpolation with a simple statistical/ML model), but the process is basically the above. Sometimes I still revert to the manual process when the automated approach fails to produce a satisfactory result, as no ML solution is 100% perfect.

Edit: As for interlaced videos, I just tend to run QTGMC over the clip before I start the process. If it fails to produce a good progressive set of frames, such as when the source is badly combed in progressive, then some additional, time-consuming steps are required: flagging each frame as interlaced or not, slicing out the segments that contain a run of “good” combing for isolated QTGMC deinterlacing, and then just dropping the segment-transition frames, as they’ll be adequately fixed automatically in step 3 above. I’ve never yet had a clip that is a complete jumble / random in terms of combing, but that would be the worst nightmare scenario I can think of.
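
For reference, the basic QTGMC pass is short. A sketch assuming the QTGMC script and its dependencies are installed; the field order and filename are assumptions to adjust:

```
FFVideoSource("capture.avi")   # hypothetical interlaced source
AssumeTFF()                    # assumed top-field-first; use AssumeBFF() if not
QTGMC(Preset="Slower")         # double-rate progressive output
SelectEven()                   # drop back to single rate if double rate isn't wanted
```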

Not sure I fully understand the context you’re trying to describe. But if I interpret it as simply “How can I replace a frame with a dupe of the most similar adjacent frame?”, then that’s easy. You can do it with Avisynth’s built-in filters: use YDifferenceFromPrevious and YDifferenceToNext to find which neighbouring frame is most similar, then use ConditionalSelect to choose which of the two you want to replace the frame with.

Likewise, if you’re trying to find which frames this scheme should apply to, you have a host of nice metrics available to decide that. E.g. if you’re aiming at scene transitions, using one of the above with a threshold in exactly the same fashion would let you trigger the behavior or not: say, if motion is > some threshold, do the frame replacement, else keep the original frames.
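
Concretely, here’s a minimal sketch of that replacement scheme using only built-ins (Avisynth 2.6+). The filename is a placeholder, and as written it replaces every frame, so in practice you’d gate it with a threshold as described:

```
src  = FFVideoSource("clip.avi")                           # hypothetical source
prev = src.DuplicateFrame(0).Trim(0, src.FrameCount - 1)   # frame n-1 at index n
nxt  = src.Trim(1, 0)
nxt  = nxt.DuplicateFrame(nxt.FrameCount - 1)              # frame n+1 at index n
# Per frame, pick whichever neighbour is more similar in luma:
# index 0 selects prev, index 1 selects nxt.
ConditionalSelect(src, "YDifferenceFromPrevious() <= YDifferenceToNext() ? 0 : 1", prev, nxt)
```

Gating on the threshold just means passing the original clip as an extra source and having the expression return its index whenever the difference is below your cutoff.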


Generous response, thanks a ton!

The “correct” fixed output should simply drop these frames, in favor of the true frames on either side.

What I mean by this is that if the true frame rate is 10 or 12, and the captured frame rate is 30 (or nearly 30), then with extremely few exceptions, every original frame will appear in its entirety in one or more of the encoded frames. So it should be possible to select, for every original frame, one of the true frames, and reject any “interframe” interlaced crossovers between original frames.
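
Roughly, I’m imagining something like this sketch: build the three possible 10 fps phases and fall back between them whenever the chosen frame looks combed. It assumes TIVTC is installed for its IsCombedTIVTC() runtime check, that the crossover frames actually show combing, and that the filename is a placeholder:

```
src = FFVideoSource("clip.avi")   # hypothetical ~30 fps capture
a = src.SelectEvery(3, 0)         # the three candidate 10 fps phases
b = src.SelectEvery(3, 1)
c = src.SelectEvery(3, 2)
# Prefer phase a; if its frame is combed, fall back to b's, then c's.
ab = ConditionalSelect(a, "IsCombedTIVTC() ? 1 : 0", a, b)
ConditionalSelect(ab, "IsCombedTIVTC() ? 1 : 0", ab, c)
```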

But maybe I’m dreaming. :)

Would it be possible to share a 30-second piece of the clip, in order to run tests and better advise you?