Local training of face recovery AI

When recovering faces from images that are blurry, noisy, or low resolution, faces that are known to me often come out as a more generic face. For example, an old picture of my kids doesn't look like MY kids anymore after recovery, even though it is sharp and noise-free. Would it be possible to have the face AI go through MY locally stored images and build an additional, locally stored training dataset that the AI could use when dealing with my images? This would mean each user could have a locally optimized AI.
I suggest this because I don't think uploading my images to Topaz for general training would help: they would "drown" in the general pool of millions of images used to train the algorithms, and so would not add enough "bias" towards recognizing the features of my subjects in particular.
Just an idea; I don't know if it's even possible…

Regards
HAL

Late to see this post, but Stable Diffusion should be able to do what you want (after you become familiar with the application and the available models). Even without local model training, you can get fairly close to a good representation of the face of the person of interest by using the controls.

But you can also train your own model on your own photos, e.g. using Dreambooth, if you have a lot of them (or good-quality video). This is all new stuff, free but time-consuming to get to grips with. It is under continuous development, though, and it's just a matter of time before it finds its way into easily manageable software. Meanwhile, unless you are a DIY guy with a decent GPU, sit tight and wait…
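For anyone who does want to try the DIY route now: Dreambooth training of a Stable Diffusion model on your own photos is typically launched with a single command against the example script shipped with Hugging Face's diffusers library. The sketch below is illustrative only; the paths, the base model name, and the hyperparameters are assumptions on my part, and the exact flags may change between diffusers versions, so check the current Dreambooth example README before running anything.

```shell
# Rough sketch of local Dreambooth fine-tuning with the diffusers example
# script (train_dreambooth.py from the diffusers repo, plus accelerate).
# All paths and numbers below are placeholders to adapt to your setup.

# Put ~10-20 sharp, varied photos of one person in this folder.
export INSTANCE_DIR="./photos_of_my_subject"
export OUTPUT_DIR="./my_subject_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="$INSTANCE_DIR" \
  --output_dir="$OUTPUT_DIR" \
  --instance_prompt="a photo of sks person" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --max_train_steps=800
```

The "sks" token in the prompt is the commonly used rare-token placeholder that Dreambooth binds to your subject; after training, prompting the resulting model with "a photo of sks person" should reproduce that person's features. Expect this to need a GPU with a decent amount of VRAM.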