Two exciting new Enhancement models & interface updates (September 2023)

Yes, Iris V2 is optimized for higher-quality footage, i.e. 720p or higher sources.

They say that Iris V3 will be optimized for low-quality sources in the future :wink::+1:

But Iris V2 is absolutely unusable for lower-quality sources…

1 Like

That makes much more sense then. I wish Topaz would have just released it as such. The confusion is that they still have it labeled as LQ/MQ, so everyone's expectation was that this would build on V1's excellent handling of LQ, which made it seem like a letdown.

On my higher-quality 480p footage there is definitely some marked improvement with V2, with retained detail and refined enhancement, but as you said, it is unusable for lower-quality sources.

Good to hear there's going to be some focus again on LQ with V3.

1 Like

With a clean 720 x 576 SD video, Iris V2 works quite well if it is upscaled to 1280x720 or 1440x1152. Following robert.Thompson-4407's recommendation of thoroughly cleaning the source and then upscaling with Iris V2 works really well. The only problem is racking your brain to find the right models and settings beforehand so that no artifacts remain. Gaia is great because it makes the image a little sharper and removes a little aliasing by tidying up edges with slight artifacts, but the slightest artifact away from the edges gets magnified, so you have to be careful of that.

Is there a way to adjust the intensity of the model's effects? The 'AI look' is still quite visible on human features with big movements.

The "Recover Original Detail" slider can be used to blend back some of the original texture of the input video; I recommend trying a slider value of 50 or 60 as a starting point.

2 Likes

What is the unit of the blend slider? Percent?

The slider ranges from 0-50% blend strength for the Y-channel (luma) of the original input video. This can be used to reintroduce image detail without bringing back chroma noise.
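For anyone curious what that looks like in practice, here is a minimal sketch of a luma-only blend, assuming the original and enhanced frames are the same resolution; the function name, the OpenCV color space, and the linear slider-to-0-50% mapping are my own illustration, not Topaz's actual implementation:

```python
import numpy as np
import cv2  # OpenCV, used here for RGB <-> YCrCb conversion

def recover_original_detail(enhanced_rgb, original_rgb, slider=50):
    """Blend the original Y channel back into the enhanced frame.

    The GUI slider (0-100) is assumed to map linearly to a 0-50%
    blend of the ORIGINAL luma; chroma comes entirely from the
    enhanced frame, so chroma noise is not reintroduced.
    """
    alpha = (slider / 100.0) * 0.5  # slider 0-100 -> blend 0.0-0.5
    enh = cv2.cvtColor(enhanced_rgb, cv2.COLOR_RGB2YCrCb).astype(np.float32)
    org = cv2.cvtColor(original_rgb, cv2.COLOR_RGB2YCrCb).astype(np.float32)
    enh[..., 0] = (1.0 - alpha) * enh[..., 0] + alpha * org[..., 0]  # luma only
    out = np.clip(enh, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_YCrCb2RGB)
```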

2 Likes

Ah, I see. This is very helpful information. Thanks!

I would love to see more descriptions in the documentation of the sliders' value ranges and what they represent, including the parameters not exposed in the GUI but available from the command line, like the estimation parameter that seems to take a number of frames as its input value.

2 Likes

I used Nyx to denoise a 35-minute wedding ceremony where, from 3 meters away at ISO 800 on an R3, you can see the fiber patterns on the arm of the groom's suit. Nyx doesn't delete them and doesn't produce a plastic look if you set it right.

1 Like

You are right, both Nyx and Neat Video require work to determine the optimal settings for a good result. They are not plug-and-play affairs. However, at the moment there is no comparison: Neat Video is leaps and bounds better than Nyx at removing noise.
You can get a decent result in Nyx, but I would not say it is in the same league as Neat Video. Frankly, Neat Video seems like miraculous magic for denoising.

1 Like

Actually, like the other commenter, I was commenting on the skin in particular. There is a tendency to make it look plastic. Beyond your description I can't see the results or the situation of your video, so I can't comment on specifics, but I would be curious to see how the skin ended up. After some of my own tests I quickly abandoned Nyx, finding it slow to render and prone to plastic-looking skin. I'm comparing results to Neat Video, which runs much faster and, at least when it comes to skin, delivers a more natural look when noise is removed. Nyx is at version 1, so it will probably take a few more versions before the Topaz team addresses most of the complaints. It's also slow for me, and, typically for Topaz, it changes the color of the original footage. So for the time being it's not a professional-level tool, although it has the potential to be.

I'm not sure what there is to set right; there are only a few intensity sliders, and the model is what it is: it's trained as it's trained. It does not work like Neat Video, which operates on a very different principle. All Topaz AI models have certain characteristics baked in, based on the source material they were trained on and how they were optimized afterwards. So it's often hard to eradicate some of the behaviour if it wasn't handled correctly in the training phase. Sure, you can play with the sliders to mitigate some of it, but it is what it is. You just have to rely on the native model characteristics.


EOS R3, Clog3 = min ISO 800 + all-intra (every frame compressed individually), Resolve, Nyx.

The forum compression ate up every single detail.

Yeah, as we were saying. Maybe you can't see it, but look at the bride's skin. That is the effect we wish to avoid.

It's possible to add some film grain on top to give the appearance of skin texture, but it's not always what you want, especially when the footage is uploaded to YouTube and the like, because of the way they compress footage: it will turn film grain into micro-blocking compression artifacts.

1 Like

See quote

Here in the forum you cannot make a quality assessment; the image is compressed a second time when uploading, and perceived detail has more to do with distance anyway: the closer you are, the more detail you get…

I know that in photography the skin of people, or whole images, comes out worse because cameras do not have the color depth and sensitivity to record everything.

I would have liked to shoot the wedding in RAW, but for 38 minutes even 512 GB is not enough.
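A quick back-of-envelope check supports that; the ~2 Gbit/s figure below is an assumed, illustrative internal-RAW data rate, not an official specification:

```python
# Storage needed for a 38-minute RAW recording.
data_rate_gbit_s = 2.0   # assumed illustrative RAW data rate
minutes = 38

gigabytes = data_rate_gbit_s / 8 * minutes * 60  # GB, with 1 GB = 10^9 bytes
print(f"{gigabytes:.0f} GB")                     # -> 570 GB, already > 512 GB
```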

In some cases this can even be a saving grace, because less compression banding then occurs.



By the way:

[Image: "720p in 2010 vs 720p in 2020" meme]

3 Likes

No, no, you don't understand. We are not talking about a dither effect or film grain that masks banding; we are talking about something else, which gets worse with highly compressed delivery using variable compression strength. To save storage and processing power, YouTube uses a compression method where anything that does not move within the frame looks really good, almost uncompressed. But lots of movement between frames requires a higher bit rate, and instead of a constant bit rate or a very efficient codec they usually use H.264, which gives decent or even good quality on static shots but completely destroys anything that moves, turning it into nasty-looking compression artifacts. Film grain moves all the time and is different for each frame, so the encoder treats it as motion, leaving not film grain to be seen but these ugly compression artifacts instead. Some call them micro-blocking.
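Here is a minimal numpy sketch of that mechanism, using the mean absolute frame difference as a crude stand-in for the bits an inter-frame encoder must spend; the gradient image and the noise level are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "static shot": two identical frames, here a smooth gradient.
frame = np.tile(np.linspace(0.0, 255.0, 640), (480, 1))

def residual_energy(prev, cur):
    # Mean absolute frame difference -- a crude proxy for how much an
    # inter-frame codec must encode as change between the two frames.
    return np.abs(cur - prev).mean()

# Without grain the frames are identical: nothing to encode.
print(residual_energy(frame, frame))        # 0.0

# Independent grain per frame looks like motion everywhere, so the
# encoder must spend bits on every block -- or blur/block it away.
grainy_a = frame + rng.normal(0.0, 8.0, frame.shape)
grainy_b = frame + rng.normal(0.0, 8.0, frame.shape)
print(residual_energy(grainy_a, grainy_b))  # ~9 levels per pixel
```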

Something like this.

[Example image: JPEG compression showing blocking artifacts in the right portion of the frame]

You can't really avoid it, since it happens on YouTube's end. You can maybe upload a 1080p video as 4K to try to get YouTube to use less harsh compression, but that is another compromise. The point is that where you might normally add film grain on top of noise reduction to give back the appearance of fine texture, when uploading to social media it's a dangerous game, since film grain is treated as motion and the compression becomes very visible. So ideally you would get rid of the actual noise but keep all the fine detail, avoiding adding film grain on top, and thereby avoiding noticeable compression artifacts as much as possible.

About your 720p in 2010 vs 2020: perfect meme, hehehe. Good one. It's basically about how various social media companies, in order to deal with so much footage and data, have really squeezed the life out of it. It's not really about pixel dimensions anymore; it's about bit rate and compression. We have H.265 and AV1 and similar codecs to replace H.264, but, much like sRGB, it's still hanging on.

That may be so. But in my own tests on my PC I can see the same problem. It's not so much the distance; it's the way skin is "cleaned", so to speak. I think it's just inherent to that particular model, because I get better results with another one, Proteus. So I think it's just the way Nyx is "trained". Unless they tweak it at the source, there is little you or I can do about it.

In the Photo AI app they had similar problems with some of the upscaling models compared to the older Gigapixel: the same plastic-style result. I suspect it's related to quality control. Topaz is a creative team of people, but man, they really lack professionalism. There is still no proper color management, and some of the results are good for casual use, but at a professional level you can see that some of the product managers are not professionals, let's just say that, because they don't seem to understand the difference between pretty and professional-looking results. Judging by how the Topaz apps have been progressing, the priority seems to be the rapid release of a bunch of new models and features that are unpolished, buggy, etc., while pro features like color management are still missing after I don't know how many years. Same with this skin thing: I'm sure whoever approved the model thought it was good enough, because they can't see the difference themselves. I can.

Well, in photography, with any modern camera and post-processing, this is not an issue; it's mostly a problem with video, because of the hardware limitations. Whenever you record with 4:2:0 chroma subsampling, half of the recorded color information is thrown away, even if the sensor can capture it. The same goes for recording H.264 at a lower bit rate and bit depth: another compromise. And when you shoot at high frame rates, there is often pixel binning and other compromises like line skipping to record 120 fps on a mirrorless body, etc.
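To put numbers on that, a quick sample count: 4:2:0 keeps one chroma sample per 2x2 pixel block, so chroma drops to a quarter of its samples and the total sample count to half of 4:4:4.

```python
# Sample counts for a 1920x1080 frame at 4:4:4 vs 4:2:0.
h, w = 1080, 1920

y_samples  = h * w                    # luma, full resolution either way
chroma_444 = 2 * h * w                # Cb + Cr at full resolution
chroma_420 = 2 * (h // 2) * (w // 2)  # Cb + Cr at quarter resolution

total_444 = y_samples + chroma_444
total_420 = y_samples + chroma_420
print(chroma_420 / chroma_444)  # -> 0.25 (a quarter of the chroma samples)
print(total_420 / total_444)    # -> 0.5  (half the total samples)
```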

In the photo world we can usually get pretty good RAW files; even the compressed ones usually have enough information, especially if you use better programs to demosaic and denoise properly.

I think RAW would be overkill for such a project even if you had the budget, because of the nature of the event and the audience, and besides, the data rate is too much. Shooting log footage in H.265 10-bit 4:2:2 is the best balance of quality and manageable file size, in my opinion. If you are doing feature film and VFX work, then something stronger is helpful, but for weddings? Nah. RAW is overkill. Too much.

3 Likes

I can neither answer in the affirmative nor in the negative; we are somewhere in between.
At the moment, judging by the advertising played on Insta, they want to become more professional.

I think the problem at the moment is what AI actually is: a statistical comparison machine.

Something would have to change fundamentally in the architecture of AI; until now, AI has more or less been smoothing over the gaps, because of the statistics.

That works up to a certain degree, but then the quality decreases, because the jumps the AI must bridge become too large.

And manufacturers like Nvidia push everything along with their low-precision FP16 and FP8 models, so that the numbers look as high as possible.

I think the "skin thing" may partly be due to cultural differences.
It seems to me that in Asian culture this is a "wanted" look; just look at the photos, and especially selfies, that Chinese phones with AI tend to create: people looking like display dummies.
So they might not find it as disturbing as we do.

Could you explain the cloud backend and how it's done? I want to run some jobs on AWS using the CLI only, but could not find much info about how to set it up.

Great! Thank you