The problem with Iris is that it tries to put faces on things it should leave alone. Take, for example, a video of an audience with hundreds of tiny faces. Iris puts lips and noses on them, resulting in a ridiculous, ghoulish-looking video. Iris should ONLY restore faces that are big enough for the enhancement to succeed and look natural. Tiny faces should be left alone.
Iris is Proteus with face enhancement.
Proteus seems to have the same or similar face enhancement also. It draws faces on blurry backgrounds like Iris does.
Absolutely and totally agree. I have had to throw out many Iris encodes because the faces on close-ups look wonderful, but there are assorted lips, eyes and mouths in the blurry background.
It does nasty things to faces, but it does not try to restore them or treat them specially. Human brains are naturally wired to treat faces as special, so we notice when they get messed with. Iris was created because of the nasty things that happen to faces when using Proteus on farther-away faces. In general, if you can clean the grain up enough—and for DVDs, set Anti-alias/Deblur to -80—Proteus will leave far-off faces alone and enhance the rest more accurately than Iris.
The shortcomings of this approach are that you must use Proteus Manual, and you must use a tool outside of TVAI to reduce the grain (Nyx 1 can be okay, but more often it makes the final result look like a painting, similar to denoising with Artemis MQ; Nyx 2 is garbage). And there will always be some video it just doesn’t work well on—but that’s why there are more AI models.
> Iris is Proteus with face enhancement.
Don’t agree. There’s loads of improved texture in wood, hair, clothing, all kinds of fabric and materials.
Iris gets me to 4K while Proteus is fine for 2K / HD.
Even if I’m wrong or you disagree — what we ask is, please keep the face enhancement EXCEPT for the eyes.
I agree with you. Ideally, Enhancement/Iris should come with a manual mask, so only the faces you select get edited.
Then there is the issue with Iris restoring (or rather guessing) how to enhance a face and hair, whilst leaving hands and clothing completely untouched. To my way of thinking this too is an unwanted look.
It’s unlikely that TVAI can actually recognize faces or parts of faces. Most likely, the AI-created model is applying a set of enhancements that on average enhance faces, and it’s applying those enhancements everywhere. In order for it to apply less enhancement to small, distant faces, you’d have to reduce overall enhancement and accept lower effectiveness on closer faces.
That makes sense. We’d need new approaches to model training.
I’m getting incredible results on cheeks and other facial skin features in close-up shots — what sort of ratios for upscale are you using, @Jewelboy ? Mine is great at 200% even 400%. Better than the other models and I want to avoid them.
Whereas I have to limit to 150% or the eyes turn into uncanny-valley territory on long shots; it’s risky for me to publish those, because the AI stops being “invisible”.
Currently my only strategy is to upscale twice, in Iris and Proteus, then go back to my editor and just A/B switch from shot to shot.
AI has increased my workload but the possibilities have increased too.
Depending on how small the distant faces are, adding a bit of noise at the source may turn them back into an indistinct blur.
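As a rough illustration of that trick (a minimal sketch, not anything TVAI does; the sigma value is an assumption, and in a real pipeline you’d apply noise with a tool like FFmpeg before feeding the clip in): a little Gaussian noise per pixel breaks up the faint patterns that tempt a face enhancer into “finding” features in a tiny, distant face.

```python
import random

def add_gaussian_noise(frame, sigma=6.0, seed=None):
    """Add mild Gaussian noise to an 8-bit grayscale frame.

    `frame` is a list of rows of pixel values (0-255). A small sigma
    is enough to make near-featureless regions read as an indistinct
    blur rather than something to "enhance".
    """
    rng = random.Random(seed)
    noisy = []
    for row in frame:
        noisy.append([
            min(255, max(0, round(p + rng.gauss(0.0, sigma))))
            for p in row
        ])
    return noisy

# A flat 4x4 patch standing in for a tiny, distant face.
patch = [[128] * 4 for _ in range(4)]
noisy = add_gaussian_noise(patch, sigma=6.0, seed=42)
```

The seed is only there to make the sketch reproducible; at the source you’d want fresh noise per frame.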
Yes, regardless of video format, CU footage seems to get better results from Video AI enhancement. Then again, other third-party image/face-refining plugins also work best with CU footage. One issue remains with Video AI: rendered clips in MS to MWS do not always come out as crisp as still images passed through Photo AI.
I work with SD, FHD, 2K, 2.5K and 4K footage in 4:3 or 16:9 formats. If I need to enhance and upscale footage I often do this in repeated passes in Video AI, as you do. The time this takes is bearable as it permits cleaner composites and even truer colour grades. That said, there are times Video AI does not deliver, and it is preferable to use another vendor’s upscale product that may not be quite as visually “didactic”.
Currently I cannot render video in Video AI v5 using any model or setting. The frustration of this is proportional to my daily use of the app. I have reported this to Topaz Labs Support.
I came here looking for a similar control, but instead of face enhancement strength, I’d like a slider that sets a threshold for face enhancement. I’ll explain: I’m applying the filter to old VHS tapes of dance recitals. When the shot is wide and very fuzzy, the enhancement turns every face into a character from Planet of the Apes, but as the camera zooms in, there’s a point where the face enhancement starts to look pretty good, and as it continues to zoom, the enhancement starts to look fantastic. I want to adjust this manually for each tape, so that face enhancement is off below the threshold I set, and either switches on or ramps up to 100% above it.
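The requested behavior could be sketched as a simple gate on detected face size (a hypothetical sketch of the feature being asked for, not how TVAI works internally; the threshold fraction and the linear ramp are assumptions based on the description above):

```python
def face_enhancement_strength(face_height_px, frame_height_px,
                              threshold_frac=0.15, ramp=True):
    """Return an enhancement strength in [0, 1] for one detected face.

    Below the threshold (face height as a fraction of frame height),
    strength is 0 and tiny faces are left alone. Above it, either jump
    straight to full strength or ramp up linearly as the camera zooms,
    reaching 100% at twice the threshold.
    """
    frac = face_height_px / frame_height_px
    if frac < threshold_frac:
        return 0.0
    if not ramp:
        return 1.0
    return min(1.0, (frac - threshold_frac) / threshold_frac)

# Wide shot: a 40 px face in 480-line VHS footage is left untouched.
print(face_enhancement_strength(40, 480))   # 0.0
# Mid zoom: a 108 px face gets partial enhancement (about half).
print(face_enhancement_strength(108, 480))
# Close-up: a 200 px face gets full enhancement.
print(face_enhancement_strength(200, 480))  # 1.0
```

The per-tape slider in the request would simply expose `threshold_frac`, and the on/off-versus-ramp choice is the `ramp` flag.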
There isn’t really “face enhancement,” because the model doesn’t recognize faces and treat them differently from the rest of a scene. It’s a set of enhancements that make the model work better on faces than the other models (at least in theory), but at the cost of making it less effective for scenes that don’t have faces.
False. Iris is basically Proteus 2.5 with Face detection and special enhancement upon detection.
Proteus 2 is terrible at far faces—that’s where all the “Planet of the Apes” faces come from—but once the Iris layer detects a face, it kicks in and enhances it.
Here is the original announcement for Iris. Can you point to where it says the model actually detects faces?
To me that sounds like they took the ability to enhance faces (a specialized, dedicated model) from Photo AI, and found a way to integrate it with a Proteus-like video model.
The search on these forums is not the most useful, but I remember many statements about Iris before it came out, along the lines of it being made to detect and enhance faces.
Sure enough, now that it is out, that is exactly what it does—even if erroneously sometimes.
I would like to add that ‘detect’ is a big part of the nature of most AI programs.
A classic face-detection program would use a math formula to ‘find’ or ‘detect’ a face in an image. The only difference in an AI program is that training data is used to ‘write’ the math formula instead.
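As a toy illustration of that difference (purely illustrative; real detectors such as Haar cascades or CNNs use thousands of features, not one): a hand-written rule versus the same rule with its threshold ‘written’ from labeled examples.

```python
# Toy "detector": decide whether a patch is face-like from a single
# hand-crafted feature -- the eye region being darker than the cheeks.

def classic_detector(eye_mean, cheek_mean):
    # A human picked this threshold by hand: eyes must be at least
    # 30 brightness units darker than cheeks.
    return (cheek_mean - eye_mean) > 30

def train_threshold(samples):
    """'Write' the threshold from data instead of by hand.

    samples: list of ((eye_mean, cheek_mean), is_face) pairs.
    Picks the midpoint between the weakest face contrast and the
    strongest non-face contrast -- a stand-in for real training.
    """
    face = [c - e for (e, c), is_face in samples if is_face]
    nonface = [c - e for (e, c), is_face in samples if not is_face]
    return (min(face) + max(nonface)) / 2

samples = [((90, 140), True), ((100, 160), True),
           ((120, 125), False), ((130, 128), False)]
learned = train_threshold(samples)  # midpoint of 50 and 5 -> 27.5

def learned_detector(eye_mean, cheek_mean):
    return (cheek_mean - eye_mean) > learned
```

Either way, the detector is just a formula being evaluated; the AI part happened earlier, when the formula’s numbers were chosen.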
Topaz models are trained using AI. The app and its models do not actually use AI when running on our computers. If they did, the slow speeds we often complain about would seem like lightning by comparison. So what’s happening is that the model is detecting certain elements that are common to faces and applying enhancement, without “knowing” whether they actually are faces or not. That’s why we sometimes see people posting images in which ghostly faces have been generated out of someone’s hair or the foliage of a tree.
What would probably be useful would be if the app could highlight areas that might be faces and allow the user to deselect them. I think that Photo AI does something similar with its ability to identify and edit subject selection, but I don’t have that app and it’s been a while since I tinkered with one of the trial versions.
Agreed. That’s what we really want.
Along the same lines, I would love a log file that prints out the Auto values of all parameters when they change. And that would be useless without the ability to provide a modified version of that log file as an override to the parameters.
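Something like a JSON-lines file would cover both halves of that wish (entirely hypothetical; the parameter names below are made-up examples, not real TVAI internals):

```python
import json

# Hypothetical per-scene Auto values, logged whenever they change.
auto_log = [
    {"frame": 0,   "compression": 0.42, "details": 0.20, "noise": 0.35},
    {"frame": 240, "compression": 0.55, "details": 0.10, "noise": 0.50},
]
log_text = "\n".join(json.dumps(entry) for entry in auto_log)

def load_overrides(text):
    """Read the log back as {frame: parameter dict} for overriding."""
    return {e["frame"]: e for e in map(json.loads, text.splitlines())}

overrides = load_overrides(log_text)
overrides[240]["noise"] = 0.30  # user tweak before feeding it back in
```

One line per change keeps the log diffable and hand-editable, which is the whole point of the override workflow.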