Missing 2x Deinterlace (29.97 → 59.94 fps) Option Without Frame Interpolation

In previous versions of Topaz Video AI, I was able to perform a simple 2x deinterlace conversion from 29.97 to 59.94 fps without enabling Frame Interpolation or any AI models. In the new Topaz Video software, this workflow is no longer possible because frame rate control is now tied exclusively to the Frame Interpolation setting, which forces the use of an AI model. Even with the “Default Image Sequence FPS” set to 59.94 in preferences, exported video files remain at 29.97 fps unless Frame Interpolation is enabled. When I enable it, processing speed drops by 80–90%, making this feature unusable for standard deinterlace workflows. I rely on 2x deinterlacing for legacy footage restoration, and without the ability to double frame rate independently, the new Topaz Video software is not viable for this purpose. Please consider restoring the previous ability to deinterlace to 59.94 fps without invoking Frame Interpolation or AI-based processing.


To change from 29.97 to 59.94 fps, you will have to engage the Frame Interpolation model.

There is a telecine option in the video input settings that can help shift some of this without turning on the Frame Interpolation model.

Thanks for responding, Kyle. I think there may be some misunderstanding about the workflow I’m referring to.

In previous versions of Topaz Video AI, I could perform a simple 2x deinterlace from 29.97i → 59.94p without enabling Frame Interpolation or any AI models. The new version no longer allows this: frame rate control is now tied exclusively to the Frame Interpolation setting, which forces the use of models like Apollo, Chronos, or Aion.

I’ve already tried enabling Telecine under Input Settings, but that only changes the base frame rate to 23.97 or 47.95, which isn’t relevant for my 29.97i sources. What I need is a straightforward deinterlace that doubles the frame rate, no interpolation, no motion estimation, just field-based 2x deinterlacing.

Can you please confirm whether this workflow is still possible in the current version? If not, I’ll have to continue using the legacy version, as this limitation makes the new software unworkable for restoration projects that depend on true 59.94p deinterlacing.

In the older versions of Video AI the app was still running a frame interpolation step to provide the deinterlacing and change the frame rate for your output. It just was not shown in the UI as a second model.

Do you remember which version of the app you were using before that you had this behavior in? I want to have the product team do a deep dive back and see what the actual workflow was in the backend of the app for clarity.


Keep in mind that converting 29.97 interlaced into 59.94 progressive requires either interpolation or duplication. A 29.97i video already contains 59.94 fields per second, each of which is half of a full frame. The only way to convert that to 59.94 progressive is either to combine each pair of half-frame fields into a single full frame and then duplicate it, or to interpolate the extra frame.

Duplication technically yields only 29.97 unique frames per second, each frame woven from two fields and then repeated, with no effect on the visual playback. Interpolation results in true 59.94 fps and smoother motion.
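In FFmpeg terms, the two options might look something like this (a sketch only; the file names are placeholders, and I'm assuming the standard yadif, fps, and minterpolate filters):

```shell
# Option A: weave-deinterlace to 29.97p, then duplicate frames to reach 59.94p
# (yadif in send_frame mode emits one frame per field pair; fps then repeats frames)
ffmpeg -i in.mov -vf "yadif=mode=send_frame,fps=60000/1001" out_duplicated.mov

# Option B: motion-compensated interpolation to synthesized 59.94p (much slower)
ffmpeg -i in.mov -vf "yadif=mode=send_frame,minterpolate=fps=60000/1001" out_interpolated.mov
```

Option A costs almost nothing at playback quality but adds no new motion; Option B is where the heavy processing time goes.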

The real question is: What are you trying to accomplish?

I understand that may be the case, where some interpolation was happening internally in older versions, but there seems to be a clear disconnect here. In previous versions, running a 2x deinterlace from 29.97 to 59.94 fps had zero impact on processing speed — it was just as fast per frame as 1x output.

I upgraded from a nine-year-old Mac to a new one specifically for faster frame processing, and in Topaz Video AI (up to 7.1.5) that worked perfectly. Now, in the new Topaz Video, because 2x output is locked behind Frame Interpolation models, performance has dropped 80–90%. No combination of settings restores the old speed.

That’s the disconnect — same workflow, same goal, but drastically slower results. Why was 2x deinterlacing efficient before, yet now it’s unusably slow?

That’s not quite accurate. In a 29.97 interlaced source, each field represents a unique point in time, offset by 1/59.94 of a second, so there are indeed 59.94 distinct motion samples per second, not duplicates. When properly deinterlaced, each field becomes its own progressive frame, resulting in true 59.94 fps playback with every frame showing different motion.

This isn’t interpolation or duplication — it’s simply reconstructing the original temporal resolution that’s already present in the interlaced signal. Converting only to 29.97 progressive combines each pair of fields into a single frame and effectively discards half of that motion information, which is why 59.94p deinterlacing produces visibly smoother and more accurate motion.

Sorry, but I have to disagree. In an interlaced video, each field contains only half the scan lines. In digital terms, that means each field of a 640x480 video contains only 240 lines. Whether a FRAME of video starts on an odd or even numbered row is determined by the “top first” or “bottom first” tag on the video. It takes two fields to make a frame. When deinterlaced, the two 240-line fields are combined into a single progressive frame.

Bottom line, the interlaced source only contains 29.97 frames per second. To have more frames you need to ‘create’ them. To achieve what you describe, you need to fill in the other 240 lines of information for each existing field of 240 lines. That is interpolation.

You’re correct that each field only contains half the vertical resolution, but that is not the issue I’m addressing, and it has nothing to do with what I’m trying to accomplish. The problem I’m describing is entirely about temporal motion, not line count or field structure.

In this type of studio video from the 1980s and 1990s, each field was recorded at a different point in time, exactly 1/59.94 of a second apart. When that material was later digitized, those 59.94 temporal samples per second were preserved in the video stream. Each field represents a unique moment, not a duplicate or static half-frame.
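The cadence arithmetic is easy to sanity-check (a tiny illustration, not tied to any particular tool):

```shell
# NTSC interlaced video carries 60000/1001 ≈ 59.94 fields per second,
# so consecutive fields were sampled roughly 16.683 ms apart.
awk 'BEGIN { fps = 60000/1001; printf "%.3f fields/s, %.3f ms apart\n", fps, 1000/fps }'
```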

When properly deinterlaced, each field can be reconstructed into a full progressive frame using spatial interpolation only for the missing lines, without touching the timing between them. The goal is not to create new frames, but to restore both temporal samples that already exist in the interlaced source.

So while the vertical resolution of each field is 240 lines, the true frame rate of motion within the signal is 59.94. Deinterlacing to 59.94 progressive maintains that original temporal cadence and delivers motion that matches how the footage was originally recorded and broadcast.

Your focus on the vertical resolution overlooks this key point. The interlaced source is not limited to 29.97 unique moments per second; it contains 59.94 distinct motion samples, and deinterlacing to 59.94p is the only way to preserve that original motion fidelity.

We are in violent agreement! Earlier you posted…

My only point was you can’t go from interlaced to progressive with each field becoming a frame without interpolation. SOMETHING has to supply the missing information for each field. That means you are ‘creating’ information through interpolation.

You still don’t have a real solution to the problem. You clearly don’t understand the type of source video I’m working with, and you haven’t explained why the previous Topaz software could perform a 2x deinterlace to 59.94 fps much faster than the new version.

I can double the frame rate using a basic tool like HandBrake with no AI involvement. It simply doubles the frame rate before upscaling, which is exactly what the older Topaz versions did quickly and efficiently. I’ve been using Topaz Video AI for over three years and never once selected an AI model when performing a 2x deinterlace — I would simply choose 59.94 fps (2x Deinterlaced) and process the file.

Even Topaz staff confirmed that something under the hood handled this before; whatever that was, I don't care. The point is it worked and it was fast. Now the same workflow forces the use of Frame Interpolation models, and the speed has dropped by 80 to 90 percent.

You’re not addressing the performance issue, and your focus on interpolation theory doesn’t fix the fact that the same process now runs drastically slower. If that was your only point, then you’ve made it, but you haven’t solved my problem. Please move on and stop contributing further if you can’t provide an actual solution.

I’m surprised no one has clarified this, but 2x deinterlacing is (was?) being provided by the bwdif filter in FFmpeg. No frame interpolation models needed, just the enhancement model to handle the spatial reconstruction (you have to enable Enhancement, process the video as interlaced, and select Iris, Proteus or Dione). Then you can select 2x deinterlacing in the Frame interpolation section without enabling a frame interpolation model.
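For reference, doing the same thing with bwdif directly in FFmpeg might look like this (a sketch; the file names are placeholders):

```shell
# bwdif in send_field mode emits one progressive frame per field,
# turning 29.97i into 59.94p using spatial reconstruction only,
# with no motion interpolation and no AI model involved
ffmpeg -i tape_capture.mov -vf "bwdif=mode=send_field" out_5994p.mov
```

That is exactly the cheap field-rate doubling the OP is describing: every output frame comes from a real temporal sample, so it runs at roughly deinterlace speed rather than interpolation speed.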

I’ve not tried the studio version of Topaz Video to see if anything has changed, but the devs should know this.


I’ve noticed that too. I work with video files digitized from Hi8 and Digital 8 tapes. I used to test Proteus and select x2 to go from 25 fps to 50 fps, but that’s no longer possible for some reason. With the newer versions, you have to use interpolation, which significantly slows down performance and gives worse results than before, since it probably used deinterlacing without AI.
