New Starlight beta and ability to give the ai more data

Continuing the discussion from Project Starlight - Video AI 6.1 Beta 3:

Here’s what I don’t understand. We all know that’s Cliff Richard. We all know what Cliff Richard looks like, so we can make a good guess at what the frames should look like. That’s information we already have, and information the algorithm is only guessing at.

Why can’t we give the computer more information to work with?

  • Here is what the subject actually looks like from different angles, in high-resolution pictures.
  • Here is a photo taken at the actual event, which should show you (the algorithm) how the VHS distortion is behaving.
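To make the idea concrete, here is a toy sketch of what “giving the algorithm a reference” could mean at the simplest level: transferring per-channel colour statistics from a clean high-resolution reference photo onto a washed-out frame. This is not how Starlight works (I have no insight into its internals); it’s just a minimal illustration of a reference image supplying information the model would otherwise have to guess. The function name and the synthetic images are made up for the example.

```python
import numpy as np

def reference_color_match(degraded, reference):
    """Shift the degraded frame's per-channel mean/std toward the
    reference image's statistics. A toy stand-in for reference-guided
    restoration: the reference supplies information (true colour
    balance) that the algorithm would otherwise have to guess."""
    out = degraded.astype(np.float64)
    ref = reference.astype(np.float64)
    for c in range(out.shape[-1]):
        d_mean, d_std = out[..., c].mean(), out[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        if d_std > 1e-8:  # avoid dividing by zero on flat channels
            out[..., c] = (out[..., c] - d_mean) / d_std * r_std + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Synthetic example: a "frame" that is a washed-out copy of the reference.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
degraded = (reference * 0.5 + 40).astype(np.uint8)  # faded, lifted blacks
restored = reference_color_match(degraded, reference)
```

A real system would of course condition a neural model on the reference (identity embeddings, multi-view geometry), not just match histograms, but the principle is the same: the more ground truth you hand it, the less it has to hallucinate.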

I have lots of home videos, but I don’t want the restoration to distort my family’s faces. Why can’t I upload pictures of my family, deepfake-style, so the model’s guesswork is more accurate?