Is it possible to increase the quality of a video based on image(s) via machine learning?

The simplest thing I can imagine is to replace the original material with new.

As if one would record the material again only with better equipment.

But what we have now is compare-and-replace with other material, or editing.

This can be done for inanimate objects and has been around for many years in high-end software such as Mocha Pro and other BorisFX products used in professional film production. It relies on masking and often horrendously slow rotoscoping techniques for partially obscured objects, extracting information from other parts of a video or from external clips. There usually has to be continuous, heavy manual intervention to get anything other than the simplest 'replacements' right.
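For what it's worth, the core compare-and-replace step described above boils down to alpha compositing with a mask. A minimal NumPy sketch with toy data (this is an illustration of the general masking idea, not any product's actual pipeline; the function name and shapes are my own):

```python
import numpy as np

def composite(frame, replacement, mask):
    """Blend a replacement patch into a frame using an alpha mask.

    frame, replacement: (H, W, 3) float arrays in [0, 1]
    mask: (H, W) float array in [0, 1]; 1.0 = take the replacement pixel
    """
    alpha = mask[..., None]  # add a channel axis so it broadcasts over RGB
    return alpha * replacement + (1.0 - alpha) * frame

# Toy 2x2 "frames": a black frame, a white replacement, and a mask
# that selects only the top-left pixel.
frame = np.zeros((2, 2, 3))
replacement = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
out = composite(frame, replacement, mask)
# out[0, 0] is the replacement pixel; every other pixel is untouched
```

The hard part in real footage is, of course, producing that mask per frame (the rotoscoping mentioned above); the blend itself is trivial by comparison.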

It is also very, VERY slow even on high-end computers, and in HD or UHD it needs a highly divided workflow involving many people to produce anything other than short clips, which are then joined.

Taking the approach suggested for animate objects, e.g. faces, must be at least an order of magnitude more difficult and slower. It will be many years before we get there in the sense of how people want to use TVAI, i.e. a few clicks, leave an hour's video running, and return next morning or even next week to see the completed job. Having used Mocha Pro and BFX Silhouette for years, I'd say the above is unlikely to happen in the current decade for animate objects, especially faces, which are some of the most difficult of all animate objects to get right.


Yes, vice versa as well - you can’t have it one way without the other.

Probably because you set yourself up as the English Grammar Police, which is, quite frankly, a joke. Your English is very good but nowhere near good enough for that.

What can you say now, guys? In today's era of AI, it's pretty much possible, right?