AI Model for Correcting Field‑Blended or Mis‑Encoded Interlaced Sources

Hi Topaz Labs,

I would like to submit a feature request regarding Topaz Video AI and its handling of problematic interlaced sources. Some of my archival videos were originally interlaced, but at some point in their history they were incorrectly re‑encoded using a field‑blending process.

As a result, the files no longer show the classic “combing” artifacts of interlacing; instead, each pair of lines is duplicated (for example: line 1 = line 2, line 3 = line 4, etc.). This produces a very visible “double‑line” pattern and effectively destroys the original field structure.
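To make the pattern concrete: a frame damaged in this way can be identified by checking how many even/odd row pairs are identical. This is only an illustrative sketch (a toy frame as nested lists, and a hypothetical `looks_field_blended` helper, not anything from Topaz Video AI):

```python
def looks_field_blended(frame, threshold=0.95):
    """Return True if most even/odd row pairs are identical.

    frame: list of rows, each row a list of pixel values.
    A genuinely progressive frame has independent rows; a
    field-collapsed one repeats each row in pairs (row 0 == row 1, ...).
    """
    pairs = len(frame) // 2
    if pairs == 0:
        return False
    identical = sum(
        1 for i in range(pairs) if frame[2 * i] == frame[2 * i + 1]
    )
    return identical / pairs >= threshold

# Toy 4-line "frames":
blended = [[10, 20], [10, 20], [30, 40], [30, 40]]      # duplicated pairs
progressive = [[10, 20], [15, 25], [30, 40], [35, 45]]  # independent rows
```

On real footage one would compare rows with a tolerance rather than exact equality, since compression noise keeps duplicated lines from matching bit‑for‑bit.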

This type of damage is unfortunately common in older transfers or poorly deinterlaced workflows. Traditional deinterlacers cannot fix it because the fields have already been blended or collapsed into each other.

Some third‑party tools (such as Hybrid with specific scripts) attempt partial reconstruction, but the results are inconsistent and require complex manual tuning.

Why this matters

Topaz Video AI already offers excellent AI models for interlaced material (such as Dione, Iris Interlaced, and Proteus variants).

However, none of them are designed for sources where the interlacing has been corrupted before encoding, especially when:

  • the original temporal fields have been blended or merged,

  • the even and odd lines are no longer independent,

  • the video appears “progressive” but with repeated line pairs,

  • motion information is partially lost due to field collapse.

These cases require a different approach than standard deinterlacing.
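The loss described in the list above can be shown with a deliberately simplified simulation of what a field‑blend re‑encode does to a single line pair (toy single‑line "fields"; real blending operates on full fields, but the principle is the same):

```python
def blend_fields(field_even, field_odd):
    """Simulate a field-blend re-encode of one line pair.

    field_even: a line captured at time t (even field)
    field_odd:  the matching line captured half a frame later (odd field)

    The blend averages the two temporal samples into one line, then the
    damaged encode writes that same line into both field slots, so the
    two capture moments can no longer be separated.
    """
    blended_line = [(a + b) / 2 for a, b in zip(field_even, field_odd)]
    return [blended_line, list(blended_line)]

# Two fields 1/50 s apart showing a moving edge:
even = [0, 0, 100, 100]    # edge position at time t
odd  = [0, 100, 100, 100]  # edge has moved by time t + 1/2 frame
frame = blend_fields(even, odd)
# Both output lines are identical averages; neither original field survives.
```

This is why a conventional deinterlacer finds nothing to work with: the combing it expects has been replaced by two identical lines whose values belong to neither moment in time.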

Proposed feature

I would like to suggest the development of a dedicated AI model or processing mode capable of:

  • detecting field‑blended or line‑duplicated patterns, even when no combing is present,

  • reconstructing missing vertical detail by inferring the lost field information,

  • restoring proper line alternation and recovering as much temporal resolution as possible,

  • optionally offering a Progressive ↔ Interlaced reconstruction toggle, allowing the model to either rebuild a plausible interlaced structure before upscaling or directly output a clean progressive frame with corrected vertical detail.

This would be similar in spirit to how Proteus, Iris, and Dione each have specialized variants, but specifically targeted at mis‑encoded interlaced sources where the field structure has been damaged.

Why AI is ideal for this

A machine‑learning model could analyze local motion cues, spatial inconsistencies between duplicated line pairs, residual field‑phase patterns, and chroma/luma discontinuities caused by blending, then reconstruct the missing detail in a way that traditional algorithms cannot. This would be extremely valuable for restoring archival material, early digital transfers, and improperly deinterlaced TV recordings.

Why the existing Interlaced Progressive mode does not work

The existing “Interlaced Progressive” option in Topaz Video AI is unfortunately not suited for this type of damaged source.

That mode is designed for progressive videos that contain occasional interlaced segments, where the original field structure is still intact and the model can detect and deinterlace those isolated sections. In my case, however, the source was originally interlaced but was later mis‑encoded using a field‑blending or field‑collapse process, which means the even and odd lines are no longer independent.

Instead of combing artifacts, the video now contains duplicated line pairs, where each pair of horizontal lines is identical. This destroys the original field alternation and temporal information, leaving no usable interlaced pattern for the current model to detect.

Because of this, the “Interlaced Progressive” mode cannot reconstruct the missing detail, as it is not designed to handle sources where the interlacing has been corrupted before encoding. A dedicated model would be required to analyze and rebuild the lost vertical and temporal information.

Conclusion

I believe such a model would fill an important gap in the restoration workflow and would greatly benefit users working with legacy or poorly encoded interlaced sources. Topaz Video AI is already a leader in this domain, and an AI‑based “Field‑Blend Repair” or “Interlaced Reconstruction” model would be a powerful addition to the suite.

Thank you for considering this suggestion, and for your continued work on the video restoration tools I love so much.

Kind regards, Vincent.