[FACE TRAINING] - Local training of face recovery AI

When recovering faces from images that are blurry, noisy, or low resolution, faces that are known to me often come out as a more generic face. For example, in an old picture of my kids, the recovered faces don't look like MY kids anymore, even though they are sharp and noise-free. Would it be possible to have the face AI go through MY locally stored images and build an additional locally stored training dataset that the AI could use when dealing with my images? This would mean that each user could have a locally optimized AI.
I suggest this since I don't think uploading my images to Topaz for general training would help: they would "drown" in the general pool of data (millions of images) used to train the algorithms, and so would not add enough "bias" towards recognizing the features of my subjects in particular.
Just an idea; I don't know if it's even possible…
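
To illustrate what I mean, roughly this kind of loop, run locally: take my sharp photos, degrade them synthetically, and nudge the face model towards restoring MY subjects. This is a hypothetical sketch only; the tiny network here is a placeholder, not anything Topaz actually exposes:

```python
# Hypothetical sketch: fine-tune a face restoration network on a user's
# local photos by synthetically degrading sharp images and learning to
# undo the degradation. The model below is a stand-in placeholder.
import glob
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
degrade = transforms.GaussianBlur(kernel_size=9, sigma=3.0)  # stand-in for blur/noise

model = torch.nn.Sequential(  # placeholder for whatever face model is in use
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-5)

for epoch in range(5):
    for path in glob.glob("my_photos/*.jpg"):
        sharp = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
        blurry = degrade(sharp)
        loss = F.l1_loss(model(blurry), sharp)  # learn to restore MY faces
        opt.zero_grad()
        loss.backward()
        opt.step()

torch.save(model.state_dict(), "my_local_face_model.pt")  # per-user weights
```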

Regards
HAL

Late to see this post, but Stable Diffusion should be able to do what you want (after becoming familiar with the application and the models available). Even without local model training, you can get fairly close to a good representation of the face of the person of interest by using the controls.
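
For example, an img2img pass at low strength (here via the Hugging Face diffusers library; the model name, strength, and file names are just illustrative) keeps the output close to the original instead of inventing a new face:

```python
# Sketch: Stable Diffusion img2img to sharpen a face while staying close
# to the input. Low `strength` limits how far the model may drift.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("blurry_face.jpg").convert("RGB").resize((512, 512))
result = pipe(prompt="a sharp portrait photo", image=init, strength=0.3).images[0]
result.save("restored_face.png")
```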

But you can also train your own model on your own photos, e.g. using Dreambooth, if you have a lot of them (or good-quality video). This is all new stuff: free, but time-consuming to get to grips with. It is under continuous development, though, and it's just a matter of time before it finds its way into easily manageable software. Meanwhile, unless you are a DIY type with a decent GPU, sit tight and wait…
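
If you do want to try it now, the diffusers DreamBooth example script is the usual starting point. The launch looks roughly like this (wrapped in Python here; the paths, the "sks" token, and the hyperparameters are placeholders, so check the current diffusers docs for the exact flags):

```python
# Sketch: launching the Hugging Face diffusers DreamBooth example script
# on a folder of your own face photos. All values are illustrative.
import subprocess

subprocess.run([
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5",
    "--instance_data_dir=./my_face_photos",      # ~20-30 sharp photos
    "--instance_prompt=a photo of sks person",   # 'sks' = rare identifier token
    "--output_dir=./my_face_model",
    "--resolution=512",
    "--train_batch_size=1",
    "--learning_rate=5e-6",
    "--max_train_steps=800",
], check=True)
```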

Allow the AI model to accept input images to fine-tune its training/model for enhancing photos of specific people.

I have a few decent old images of my grandparents. If the model could learn how they looked, maybe the final result would be better on the many lower-quality images that I have?

E.g.: better-refined facial features and more accurate shapes, smile, the smile's impact on the eyes, eye color, hair styles, facial hair, hair color, etc. At the moment I can process 3 different low-quality photos of the same person and the outputs can end up looking like 3 different people.
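
That "3 different people" effect can actually be measured, by comparing face embeddings of each output against a known-good reference photo. A sketch using the open-source face_recognition library (file names are placeholders):

```python
# Sketch: quantify identity drift across restored outputs by comparing
# dlib face embeddings (via the `face_recognition` library) to a reference.
import face_recognition

ref = face_recognition.load_image_file("grandpa_reference.jpg")
ref_encoding = face_recognition.face_encodings(ref)[0]

for path in ["restored_1.jpg", "restored_2.jpg", "restored_3.jpg"]:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        print(path, "-> no face found")
        continue
    dist = face_recognition.face_distance([ref_encoding], encodings[0])[0]
    # dlib convention: distances below ~0.6 usually mean "same person"
    print(f"{path}: embedding distance {dist:.3f}")
```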

When attempting to improve face quality, it would be immensely helpful if users could upload a folder of high-res photos of the people who appear in the videos.

If Topaz Video AI could correctly identify the faces in a scene and then weight the faces in the reference stills when adding detail, I suspect the accuracy would be much higher than with generic facial-feature up-rezzing (which results in sharp but inaccurate, Uncanny Valley type faces). Using actual faces of the subject, sourced from photos of the same time frame, ought to produce amazingly detailed faces in HD scaling, with the unique benefit of having the face appear most accurate.
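
The matching step, at least, is already feasible with off-the-shelf tools. A sketch of picking the closest reference still for each face detected in a frame (again via the face_recognition library; folder and file names are placeholders, and the enhancement model that would consume the match is hypothetical):

```python
# Sketch: for each face detected in a video frame, find the closest
# high-res reference still by embedding distance, so a (hypothetical)
# enhancement model could be conditioned on that still.
import glob
import face_recognition

# Precompute embeddings for the user's uploaded reference stills.
refs = []
for path in glob.glob("reference_stills/*.jpg"):
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:
        refs.append((path, encodings[0]))

frame = face_recognition.load_image_file("frame_0001.png")
locations = face_recognition.face_locations(frame)
for encoding in face_recognition.face_encodings(frame, locations):
    distances = face_recognition.face_distance([e for _, e in refs], encoding)
    best_path, best_dist = min(zip([p for p, _ in refs], distances), key=lambda t: t[1])
    print(f"closest reference for this face: {best_path} (distance {best_dist:.3f})")
```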


Wow, you're asking for a lot! I think perhaps you're a couple of decades ahead of where this technology is today.

What you are describing is basically how deep fakes work, where you supply a bunch of source faces to learn from and the model replaces the destination face. Deep fakes are a bit more general, in that you apply a source face to any destination face, but the same technique can be used to replace a low-res face of the same person too. It would be interesting to see a combination of DeepFaceLab → Topaz VEAI and see whether it helps with the temporal consistency of the face.
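
The plumbing for such an experiment would look roughly like this (the ffmpeg commands are standard; the DeepFaceLab step itself runs through its own batch scripts and is only indicated as a manual step here):

```python
# Rough sketch of a DeepFaceLab -> Topaz VEAI experiment: split the clip
# into frames, face-swap externally, then reassemble for upscaling.
import subprocess

# 1. Extract frames from the source clip.
subprocess.run(["ffmpeg", "-i", "input.mp4", "frames/%06d.png"], check=True)

# 2. (Manual step) Run DeepFaceLab on frames/, using high-res photos of
#    the same person as the source faceset; write results to swapped/.

# 3. Reassemble the swapped frames into a clip to feed into Topaz VEAI.
subprocess.run([
    "ffmpeg", "-framerate", "24", "-i", "swapped/%06d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "deepfaked.mp4",
], check=True)
```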

Yes! What better way to up-rez a face in a low-res image than to use high-res pictures of that very same face?
