AI model to correct/remove analog head-switching noise?

I’ve always thought it should be possible to analyze the skewed interlaced lines (at the bottom of the screen) created by helical-scan head switching in analog videocassette playback and reconstruct the damaged portion of the picture. The data is still there; it’s just not horizontally aligned, and it may be out of phase with the correct field dominance. Head-switching noise is different in every playback device, yet it shares common attributes, and there are only a limited number of ways it can manifest. I would think AI could fix it if it knew what to look for and had a few different ways to adjust for it. A rough sketch of the "realign, don’t inpaint" idea is below.
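To make the idea concrete, here is a minimal, hedged sketch (not a finished tool) of the simplest classical version: estimate a per-line horizontal shift for the bottom few scanlines of a field by scoring candidate shifts against the nearest clean line above, then roll each line back into place. The number of affected lines, the shift search range, and the use of wrap-around `np.roll` are all illustrative assumptions, not measurements from any particular deck; a learned model would presumably do something smarter than this.

```python
import numpy as np

def realign_head_switch_lines(field, n_damaged=8, max_shift=64):
    """Sketch: undo the horizontal skew in the bottom few lines of one
    deinterlaced field (luma only) caused by head switching.

    field     : 2D float array (height x width), one field of luma samples
    n_damaged : assumed number of skewed lines at the bottom (varies per deck)
    max_shift : assumed maximum horizontal displacement to search, in pixels
    """
    out = field.copy()
    h, _ = field.shape
    reference = field[h - n_damaged - 1]   # last line assumed undamaged

    for y in range(h - n_damaged, h):
        line = field[y]
        best_shift, best_score = 0, -np.inf
        # Brute-force search: keep the shift whose result best correlates
        # with the reference line above it.
        for s in range(-max_shift, max_shift + 1):
            candidate = np.roll(line, s)   # wrap-around is a simplification
            score = np.corrcoef(candidate, reference)[0, 1]
            if score > best_score:
                best_shift, best_score = s, score
        out[y] = np.roll(line, best_shift)
        reference = out[y]                 # chain: next line aligns to this one
    return out
```

This only addresses the horizontal misalignment; handling the phase/field-dominance part and the per-deck variation is where a trained model (fed examples from many different players) seems like it would earn its keep.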
Any thoughts?