Is it possible to increase the quality of a video based on image(s) via machine learning?

My personal guess would be less than 10 years…

I have my doubts. Even though AI is growing rapidly, recognizing the same person in two different photos, and being able to 'repair' one with parts taken from the other, goes quite a bit further than 'simple' statistics. But my primary concern is that it would be extremely computationally expensive, and would require several temporal passes, at the very least. Like temporal denoising, but endlessly more complex: going through the movie X many times, trying to extract the same people (not just one person), from all the different angles in which they appear. And people are already complaining that VEAI takes so long to complete. :slight_smile: And another thing, of course, is that this sort of AI would have to be done locally, and not merely pre-trained (although, to a certain degree, it can be).
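To make the "extract the same people across the movie" pass a bit more concrete, here is a minimal sketch of one way such a grouping step could work, assuming per-detection face embeddings are already available (here simulated as plain vectors; a real pipeline would get them from a face-recognition model, and the `group_identities` function and its threshold are purely hypothetical illustration):

```python
import numpy as np

def group_identities(embeddings, threshold=0.8):
    """Greedy clustering: assign each detection to the first cluster
    whose centroid it matches by cosine similarity, else start a new one."""
    clusters = []  # each cluster is [centroid_vector, list_of_detection_indices]
    for i, e in enumerate(embeddings):
        e = e / np.linalg.norm(e)  # normalize for cosine similarity
        for c in clusters:
            centroid = c[0] / np.linalg.norm(c[0])
            if float(np.dot(e, centroid)) >= threshold:
                c[1].append(i)
                # update the centroid with the mean of all member embeddings
                c[0][:] = np.mean([embeddings[j] for j in c[1]], axis=0)
                break
        else:
            clusters.append([e.copy(), [i]])
    return [c[1] for c in clusters]

# Toy data: two simulated "identities", each seen several times with noise
rng = np.random.default_rng(0)
a = rng.normal(size=128)
b = rng.normal(size=128)
dets = [a + rng.normal(scale=0.05, size=128) for _ in range(3)] + \
       [b + rng.normal(scale=0.05, size=128) for _ in range(2)]
groups = group_identities(dets)
print(groups)  # detections 0-2 group together, 3-4 group together
```

Even this toy version hints at the cost problem: a real movie would mean millions of detections and pairwise comparisons, repeated over multiple passes.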

So, I give that kind of functionality maybe 25 years, even.

We'll see… Facial recognition has already been a thing for quite some time now, and identifying people and objects in video footage is very possible…
And generating 3D counterparts of faces, and even complete scenes with shadows, etc., is also a reality now.

The difficulty is getting it to run at a fidelity that delivers cinematic-grade quality.

Yes, it is. But what the OP wants to do goes quite a bit further: he wants to recognize faces/body parts in a fuzzy (low-res) part of the video, and then use footage from other parts of the video to repair them. Like I said, not only would that take any number of temporal pre-passes, it would also require the software to repair the fuzziness with new, spliced-in parts (corrected for scale, angle, etc.) from other footage inside the video. Think of a fuzzy dress seen somewhere 10 minutes into the movie, which then re-appears 16 minutes later, seen in higher quality. I cannot fathom how time-consuming such a process would be. And, remember, the time needed to do this for just one person would already be staggering, but far more so when you basically want all characters to get the same treatment.
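The "fuzzy dress repaired from a later, sharper shot" idea boils down to reference-based restoration: find the sharp occurrence elsewhere in the video that best explains the blurry one. A minimal sketch of the matching step, assuming patches are already extracted and aligned (the function names and the simple downsample-and-compare criterion are my own illustration, not how any shipping product works):

```python
import numpy as np

def downsample(patch, factor=4):
    """Average-pool a square patch by `factor`, mimicking the blurry target's scale."""
    h, w = patch.shape
    return patch.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def best_reference(blurry, candidates, factor=4):
    """Return the index of the sharp candidate whose downsampled version
    best matches the blurry target (smallest mean squared error)."""
    errors = [np.mean((downsample(c, factor) - blurry) ** 2) for c in candidates]
    return int(np.argmin(errors))

# Toy example: the blurry patch is a downsampled, noisy view of candidate 1
rng = np.random.default_rng(1)
sharp = [rng.random((32, 32)) for _ in range(3)]
blurry = downsample(sharp[1]) + rng.normal(scale=0.01, size=(8, 8))
print(best_reference(blurry, sharp))  # prints 1
```

In practice the hard part is everything this sketch skips: finding candidates across the whole runtime, and warping them for pose, scale, and lighting before splicing, which is where the staggering cost comes from.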

But, indeed, we’ll see. :slight_smile:

Maybe if we change the approach to something more like using AI to recreate the scene in a 3D engine, then render that as the movie. To me that seems more possible, but still years out.


The simplest thing I can imagine is to replace the original material with new material.

As if one had recorded the material again, only with better equipment.

But what we have now is comparing with other footage and replacing, or editing.

This can be done for inanimate objects and has been around for many years in high-end software such as Mocha Pro and other Boris FX products used in professional film production, using masking and often horrendously slow rotoscoping techniques for partially obscured objects, extracting info from other parts of a video or from external clips. There usually has to be continuous, huge manual intervention to get anything other than the simplest 'replacements' right.

It is also very, VERY slow, even on high-end computers, and in HD or UHD it needs a highly divided workflow involving many people to produce anything other than short clips, which are then joined.

Taking the approach suggested for animate objects, e.g. faces, must be at least an order of magnitude more difficult and slower. It will be many years before we get there in the sense of how people want to use TVAI, i.e. a few clicks, leave an hour's video running, and return next morning (or even next week) to see the completed job. Having used Mocha Pro and Bfx Silhouette for years, I'd say the above is unlikely to happen in the current decade for animate objects, especially faces, which are some of the most difficult of all animate objects to get right.


Yes, vice versa as well - you can’t have it one way without the other.

Probably since you set yourself up as the English Grammar Police, which is quite frankly a joke. Your English is very good but nowhere near good enough for that.