Iris without Iris enhancement

The Iris model is very good indeed; however, I don’t like the effect it has on eyes, which I guess is the point of the model and of facial restoration. Would it be possible to disable this eye-enhancing effect while retaining the other benefits of the model?

Yes. Especially as Iris does a really good job on deinterlacing and tidying up bad sources, it would be great to have an “Iris light” model without the face enhancement.

Iris was trained for all the people who wanted facial restoration and had been asking for it for years.

4 Likes

I know. But it also does a terrific job on old interlaced material, better than all other models. And some don’t want the facial recovery, as it tends to alter faces into generic-looking ones, so it would be nice to have the option to turn the face recovery off, or, even better, a slider for it as in Photo AI.

2 Likes

yes for a slider!

2 Likes

Sounds like you might be asking for ffmpeg’s bwdif filter, since that’s what is used to deinterlace. You can take the tvai_up filter out of the command and see if you like the results.
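If you want to try that outside the app, a deinterlace-only pass looks roughly like the command below; the filenames and encoder settings are placeholders, not what Video AI itself exports:

    # bwdif in send_frame mode keeps the original frame rate; deint=all forces
    # deinterlacing even when frames aren't flagged as interlaced.
    ffmpeg -i input_interlaced.mpg \
        -vf "bwdif=mode=send_frame:parity=auto:deint=all" \
        -c:v libx264 -crf 18 -preset slow -c:a copy output_deinterlaced.mkv

(mode=send_field instead gives one output frame per field, doubling the frame rate, if you want to keep the full temporal resolution.)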

1 Like

No. Iris does way more than just deinterlacing. It cleans up those old videos very well, removing most of the (sometimes really heavy) chroma noise, and does a good job of reverting compression artifacts and blockiness without being too intrusive or creating things that should not be there.
And you cannot achieve even remotely similar quality with bwdif or other denoise/deblock filters in e.g. HandBrake.

The only area where Iris does create things that should not be there (sometimes funny or scary ones) is faces, especially with the 4x upscale model. Hence the wish for an Iris with adjustable face recovery. We have sliders for many other parameters, so why not for that?

4 Likes

That’s fair.
In the videos I have tried, I get less loss of detail with a QTGMC deinterlace + denoise pass followed by a manual Proteus pass. No matter where I run Iris, either directly on the video or after the QTGMC pass, it loses so much more detail. Because of that experience, I doubt the usefulness of this topic’s idea.

1 Like

The Dione TV models used to be better: they cleaned up the source video like Iris does, retained detail better, and provided more of a 3D-like image, whereas Iris flattens the image a bit. Now the Dione TV/DV models exhibit combing artifacts in some scenes as well as staircase artifacts on some edges. Fixing the Dione TV/DV models would probably provide the best of both worlds.

Hmm, not here. I have some DVDs that I previously did with Dione in 2.6.4 and am currently re-encoding with Iris because the result is better.

As so often: your mileage may vary.

So then vote…

I wouldn’t have considered Iris better than Dione. I noticed how many jagged edges Dione produces when the scene is not changing much or is stationary. Well, here we go again: deinterlacing 51 episodes.
Didn’t want to mention you, sorry.

@jo.vo I still have to clean up my composite videos to get a good result out of Iris. For instance, the dot crawl on red title graphics is so bad that I made an 8-field, 4-frame timelapse blend on a colour matte. Then the PsF (progressive segmented frame) material was telecined fairly poorly in the early 1990s, so I use bwdif to grab a full progressive conversion first (forcing its motion detection to the maximum, too) and then recombine each field separately into a new composited frame, which looks better. And not to forget, the field dominance randomly switches over sometimes, so you really DO want to work on the separate fields to prevent double-exposure effects, which you can totally eliminate (but only if you remember to when outputting to a film timebase).
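In ffmpeg terms, the field-splitting step looks roughly like the sketch below; the filenames are placeholders, and since bwdif doesn’t expose a named motion-detection strength, deint=all is my stand-in for “treat every frame as interlaced”:

    # One deinterlaced output frame per input field (doubles the frame rate),
    # so each field can be cleaned up and recombined on its own afterwards.
    ffmpeg -i titles_composite.mpg \
        -vf "bwdif=mode=send_field:parity=auto:deint=all" \
        -c:v ffv1 fields_progressive.mkv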

Once you have a “clean” SD source and have sorted out all the quirky aspect ratios, nominal analogue blanking, and shifting crops, you can begin upscaling. But keep Iris well clear of jumping straight from SD to 4K; it’s going to try too hard, no matter the manual setting. You have to do the step change from studio-quality SD to delivery-quality HD yourself, and apply some sharpening filters to suck the ancient detail out of SDTV. Normally this is the point where you have to decide whether to bake in post-processing for online consumption, or produce an unprocessed image for consumer video tech (to do its magic and apply sharpening and HDR highlights to simulate CRT).
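The intermediate step is nothing exotic; something along the lines of the sketch below, with the scaler and unsharp numbers as starting-point guesses you would tune per source, kept lossless so the later Iris pass isn’t fighting fresh compression:

    # SD (e.g. a 720x576 master) -> 1440x1080 intermediate with mild sharpening.
    # Lanczos scaling plus a light unsharp pass pulls out the old SDTV detail.
    ffmpeg -i clean_sd_master.mkv \
        -vf "scale=1440:1080:flags=lanczos,unsharp=luma_msize_x=5:luma_msize_y=5:luma_amount=0.8" \
        -c:v ffv1 hd_intermediate.mkv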

The good news is that Iris handles the baked-in, pre-sharpened content perfectly and is well suited to handing over a “made for laptop web” delivery product that has all the post-processing encoded into the source. The fundamental principle applies: provide the best-looking (non-AI) source possible before you begin, and don’t rely on AI to do all the work for you. GIGO: garbage in, garbage out. But equally, BIBO: brilliance in, brilliance out.

The same goes for fixing 709-versus-601 colourspace quirks, and in my experience the consumer-tech bake-in principle works equally well with HDR processing: apply a custom LUT (use Ben Turley’s LUTCalc to make yourself some .cube files) to throw an ITU profile across SDR film, or try the BBC profile for video, and pack it into a simple HLG colourspace so you don’t even need to worry about nit values. If you have an HDR monitor you’ll see “what god intended” for CRT SD video, but it even looks better with the usual downmapping everyone uses. Just stay away from the built-in “Colour Conform” tools that push a linear or vulgar upmap from SDR to 75% HDR and the like.
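For the LUT-and-tag part, the ffmpeg side of it is roughly the following; the .cube name is whatever you exported from LUTCalc, and the colour flags only declare BT.2020/HLG metadata, so the actual pixel transform has to come from the LUT itself:

    # Apply a custom .cube LUT, then tag the output as BT.2020 primaries with the
    # HLG transfer (arib-std-b67) so players treat it as HDR without nit metadata.
    ffmpeg -i sd_graded.mkv \
        -vf "lut3d=file=sdr_to_hlg.cube" \
        -color_primaries bt2020 -color_trc arib-std-b67 -colorspace bt2020nc \
        -c:v libx265 -crf 16 -c:a copy hlg_master.mkv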

You’ll find that Topaz Iris does even better at extracting the detail flavour when you give it an HLG HDR file before you put it in the oven. Oh yeah, there are a ton of analogies with cooking and recipes here. :drooling_face: :cook:

After a lot of work on the project (and being very happy with Iris v1), I want to return and re-upvote the OP’s point: the eyes, man! Iris sometimes goes a bit crazy with the eyes.

None of the notes about Iris v2 mention handling eyes better. How does everyone else feel about Iris v2? Is it equally bad on eyes but improved elsewhere, and therefore worth moving over?

I have dozens of hours of footage to process, so I’m thinking of settling on one Topaz Video AI version for a few months.

A vote for the slider option for facial enhancement, like Photo AI has. That would be a great idea. And since the tech already exists in Photo AI, it may be easier to carry the implementation over.

1 Like

I did some tests yesterday with Iris v1 versus Iris v2. The eyes thing hasn’t changed, and the other differences are well documented. Some of my source material has cross-colour interference (chroma crawl), and while there’s a proper high-tech way to filter it out (search for “BBC transform decoder”), I don’t have access to that, so I’m going to continue processing my dozens of hours of footage with Iris v1.

I missed this comment the first time. It sounds like the Photo AI folks already know about the issue and built a workaround. Do it!

Yes, I agree. To date I have taken Iris-baked files into AE and applied directional blur with masks to over-sharpened areas.

For me, the biggest problem with Iris is, ironically, the facial recognition.

Iris tends to enhance faces that are intentionally out of focus or blurred (e.g. a person giving a speech, with the focus on the speaker and the crowd in the background deliberately out of focus). But it doesn’t enhance them all the way; sometimes an entire face will be out of focus except for the lips or one eyebrow.

I hope that Iris can be better trained to understand when faces are supposed to be low quality (i.e. because the focus is on something else).

2 Likes

@Jewelboy
50% :smiling_face_with_three_hearts:
50% :man_facepalming: