Face Recovery for VEAI?

Do you guys think face recovery will be added to VEAI? The new face recovery option in Gigapixel AI is miles better than face enhancement. Adding this to the VEAI models would really improve the quality of upscaled videos.


According to the developer: :grinning:

If you want Face Enhancement for VEAI, you can vote here. :smiling_face_with_three_hearts:


I would love Face Recovery for video, but I assume face detection is a difficult problem. With individual pictures, you only feed it pictures that contain the faces you want enhanced. With video, some frames have no faces at all, or the face isn’t the main thing on screen (it’s not large and centered).

Having played around with Deep Fake software, you get bizarre results when the face detection is a little off, because it tries to paste a face onto something that is not a face. So even if you detect faces correctly 99% of the time, the 1% of frames with “bad faces” on them makes the video clip really jarring, and you need a lot of manual work to fix those frames.
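That “one-off misfire” problem can be sketched in a few lines. This is purely illustrative, not anything VEAI actually does: it assumes you already have a per-frame face-detection confidence score from some detector, and only enhances frames where a confident detection persists for several consecutive frames, so a single false positive never gets a face pasted onto it.

```python
def frames_to_enhance(confidences, threshold=0.95, min_run=3):
    """Return the frame indices worth running face enhancement on.

    confidences: one face-detection confidence score per frame
    (hypothetical output of any detector). A frame qualifies only if
    it sits inside a run of at least `min_run` consecutive confident
    frames, so a one-off misfire never gets a face pasted onto it.
    """
    keep, run = [], []
    for i, c in enumerate(list(confidences) + [0.0]):  # sentinel flushes the last run
        if c >= threshold:
            run.append(i)
        else:
            if len(run) >= min_run:
                keep.extend(run)
            run = []
    return keep

# A lone confident frame (index 9) is dropped; stable runs survive.
print(frames_to_enhance([0.99, 0.99, 0.99, 0.2, 0.99, 0.99, 0.99, 0.99, 0.5, 0.99]))
# → [0, 1, 2, 4, 5, 6, 7]
```

The trade-off is latency: a face is only enhanced once it has been seen for `min_run` frames, so brief appearances go untouched, which may be preferable to a jarring misfire.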


I’m just thinking out loud here. It would be annoying, but as an option maybe they could let you draw a box around the faces you want enhanced. That way you wouldn’t get false positives. But you’d have to do that for the entire video, which would be a pain. Ideally, the AI will get smarter and smarter until it can detect faces properly on its own.


I’m just trying out the new face recovery feature and it’s great! But when I convert a series of images from a video, the results aren’t consistent between frames, so it would be great to see it in VEAI. It just occurred to me that, as a workaround, you could use the individually generated frames to train DeepFake and use that for fluid video?
I’m not sure, but I think DeepFake then brings its own artefact problems?

EDIT: or maybe I can try the Chronos model and blend the frames together?

Just want to take this opportunity to say VEAI is in the habit of ruining faces (and human skin altogether). Often faces/skin get the ‘goldfish skin’ treatment: a plastic-y, shiny, pink-ish surface, with the occasional blue in it (veins?).

Seems the ‘I’ in VEAI is not yet nearly as intelligent as portrayed. While it’s overall responsible for an excellent result, human skin often ruins the job.


I’d like to add my plus one for requesting face recovery in VEAI.


I would actually love a feature where faces are left alone. AI is great at reconstructing buildings, animals, and cars, but horrible at faces.
See: https://i.imgur.com/FmTR2yl.png

Especially faces of people you know because you will see any mismatch.

And we can say Horrible with a capital H. Often large portions of a face get smudged, ruining the entire clip/movie. Yes, you can denoise less, but the point is that the rest of the picture does not suffer from this horrific smudging, while the faces generally do.


I think face enhancement will be added to TVAI eventually, but it will be tricky, especially, as you say, for people you know, and also when people turn their faces away or become partly obscured. AI has a long way to go before it can handle known low-res faces well; many years, I suspect.

Even the carefully chosen examples on the Topaz Products page for Gigapixel clearly invent features or subtleties that don’t exist in real life, or obscure features that do (even on the dress shoulder strap). For well known faces e.g. family/friends I suspect that we’d have to train our own personal AI to do it!

But for video containing only strangers, even actors in many cases, it should often work OK when they do port it to TVAI - though I still wonder if they will be able to handle faces turning away, in video.

As for the AI sometimes misidentifying low-resolution objects as faces, mentioned up the thread: that could be handled by tracking software in video, so the ‘faces’ AI only has to worry about confirmed faces. It will likely be a long time before TVAI can include area tracking like that, but their basic stabilization feature hints that perhaps they are thinking about it for the future. It might run very, very, VERY slowly though (like much high-end video software) and might never be suitable for anything other than short clips.
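The “confirmed faces only” idea can be sketched with a toy greedy IoU tracker. This is purely illustrative — real trackers add motion prediction and identity management — but it shows how requiring a detection to persist across frames filters out the misidentified objects mentioned up the thread:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def confirmed_boxes(boxes_per_frame, min_frames=3, min_iou=0.5):
    """Per frame, keep only detections whose (greedily matched) track
    has persisted for at least `min_frames` consecutive frames."""
    tracks = []  # (box, age) pairs carried over from the previous frame
    out = []
    for boxes in boxes_per_frame:
        new_tracks = []
        for b in boxes:
            age = 1  # unmatched detections start a fresh track
            for tb, ta in tracks:
                if iou(b, tb) >= min_iou:
                    age = ta + 1
                    break
            new_tracks.append((b, age))
        tracks = new_tracks
        out.append([b for b, a in tracks if a >= min_frames])
    return out

# A face sits at (0, 0) for four frames; a false positive flashes up
# once in frame 2. Only the persistent face is ever confirmed.
face, glitch = (0, 0, 10, 10), (50, 50, 10, 10)
print(confirmed_boxes([[face], [face], [face, glitch], [face]]))
# → [[], [], [(0, 0, 10, 10)], [(0, 0, 10, 10)]]
```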

It is only a matter of time, especially if we can train our own model on the face we are trying to improve. But even without training, look at something like GFPGAN. It’s not perfect, but it’s still very, very good. It doesn’t work well with video because it doesn’t keep frames consistent, but it works amazingly well for stills, and this tech is still early. I think in a year or two we will get there, just like with all of this other stuff. You have to start somewhere. It won’t be perfect at first, but with consistent training of the model, the system should eventually learn how to upscale faces properly.

Face improvements are absolutely necessary, because you can upscale everything around a person, but if the faces are off, it’s going to look fake/wrong.


We need truer AI that actually recognizes a face and improves it, but doesn’t go so far that it winds up looking like the botched Ecce Homo fresco at Borja. :grin:

Agreed. A smart AI should ideally know when to stop, and it should know what looks abnormal. For example, in some low-resolution faces there may be a square on the face that isn’t the same color as everything else. This is usually due to compression. A smart AI should know to remove that block and make the face a uniform color/shade depending on the lighting.
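As a toy illustration of that last point (not anything Topaz does), repairing such a block could be as crude as filling it with the average of the pixels around it — assuming you already know where the block is, which is of course the hard part:

```python
def smooth_block(img, x, y, w, h):
    """Fill a known compression-artifact block (rectangle x, y, w, h)
    with the mean of the 1-pixel border surrounding it.
    img: 2-D list of grayscale values; returns a new image."""
    border = []
    for r in range(y - 1, y + h + 1):
        for c in range(x - 1, x + w + 1):
            inside = y <= r < y + h and x <= c < x + w
            if not inside and 0 <= r < len(img) and 0 <= c < len(img[0]):
                border.append(img[r][c])
    fill = sum(border) / len(border)
    out = [row[:] for row in img]  # leave the input untouched
    for r in range(y, y + h):
        for c in range(x, x + w):
            out[r][c] = fill
    return out

# A dark 2x2 block inside an otherwise uniform 4x4 patch gets
# replaced by the surrounding shade.
patch = [[100] * 4 for _ in range(4)]
for r, c in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    patch[r][c] = 30
print(smooth_block(patch, 1, 1, 2, 2)[1][1])  # → 100.0
```

A real deblocker would blend toward the lighting gradient rather than a flat mean, but the principle — replace the outlier region with something consistent with its surroundings — is the same.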
