Topaz Video AI 5.2.0.2.rhea (New enhancement model: Rhea)

Why is dehalo set to -50? You’re better off testing Rhea with auto settings; making adjustments like that will just skew the results and isn’t going to help get the model refined.

Stuff you do on existing models doesn’t apply to a new model that’s still in Alpha.

Once Rhea’s working well with auto, then manual adjustments should be made to your liking.


In the first alpha, Rhea created a lot of ghosting, dehalo -50 remained the same from my last tests.

They can’t establish baseline performance when random settings are changed like that. I would stick to auto at least for now.

The Rhea model takes a gargantuan amount of VRAM - 20.37 GB on an RTX 3090 (the ‘GPU dedicated bytes’ column on the right side of the screenshot) - and if anything else is taking up video memory, processing with the model gets choked in no time. :slight_smile:

When there’s VRAM available, I get 5.6 fps going 360p -> 1080p with Aion v8 in between (testing this ancient D&D cartoon), but if anything else sits in VRAM, the speed drops to about 1.1 fps.

An example of the mentioned cartoon:


Speed with RTX 3090 while NOT having anything else (meaningful enough) in the VRAM:

5.5 fps - not bad for a first attempt, especially when it’s a 4x OX model file ONLY at the moment. :slight_smile:
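The speed collapse when other apps occupy VRAM can be sanity-checked with simple arithmetic. A minimal sketch using only the numbers reported in this post (the ~20.37 GB working set and the 3090’s 24 GB) - the `fits` helper is hypothetical, not anything TVAI exposes:

```python
# Rough fit check (numbers taken from this post, nothing measured here):
# does Rhea's working set fit next to other VRAM consumers on a 24 GB card?
CARD_VRAM_GB = 24.0          # RTX 3090 dedicated VRAM
RHEA_WORKING_SET_GB = 20.37  # observed 'GPU dedicated bytes' for the process

def fits(other_apps_gb: float) -> bool:
    """True if Rhea's working set fits in dedicated VRAM alongside other apps."""
    return other_apps_gb + RHEA_WORKING_SET_GB <= CARD_VRAM_GB

print(fits(1.0))   # True  -> full speed (~5.6 fps reported here)
print(fits(5.0))   # False -> overflow, speed collapses (~1.1 fps reported here)
```

Once the dedicated pool overflows, the driver presumably spills into shared system memory, which would explain the 5.6 fps → 1.1 fps drop.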

As other users reported with their examples, the dark areas of the scene in this SG-1 episode are being destroyed with blotches - Rhea model:

It’s happening with Proteus v4 as well:

…and this is why I’m still on VEAI 5.0.4 on the stable channel. I just started a 5 s preview a bit further ahead:


Oops. 0.06 fps. :slight_smile:


This bug still persists in the betas and in v5.1.3 stable, as of my last test. I haven’t tried VEAI v5.1.4 stable yet.

After ‘removing the group’ (with this borked preview, stopped at this point), I can still see the processed frame, blown up, even though the preview doesn’t exist anymore:

I absolutely agree… I redid the test on “auto” and… the result is the same.
It seems less pronounced (maybe), but the same “orange peel” texture is still there.


I think that’s because of the internal 4x upscaling, or because the models are ONNX-only and not optimized for the Tensor engine - or maybe both.


Thanks so much for bringing this model. I can’t wait to try it!

Really want to test it on text upscaling.

Can this model be augmented to incorporate an interlaced mode, thereby expanding its functionality and versatility?

The following issues can occur in low-quality situations:




:rofl:


I primarily use Iris because it is unparalleled in detail generation. It is the only model that is capable of making the files look like native UHD, and it can handle almost anything I throw at it.

Proteus is the ‘jack-of-all-trades, master-of-none’ model. It has a tendency to make all inputs look computer-generated. When using manual adjustment, Proteus goes way overboard on higher values.

Concerns about Rhea:

  • Auto 4x internal scaling - how will this affect processing speed? At least one other post suggests that simply going from 320p to 1280p would use 14GB of memory. Upscaling beyond 1080p would not be an option for many users.

  • Recover original details disabled - Is this only for the alpha testing? If not, what is the rationale for this?

  • Model is already showing ghosting and desynchronization issues. Is Rhea expected to be compatible with all systems, or is it geared towards users with more advanced equipment? Will it be a “Pro-only” model?
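On the memory question in the first bullet: a rough extrapolation from the single data point in this thread (320p → 1280p reportedly using ~14 GB), under the crude assumption that usage scales roughly linearly with output pixel count. The function and constants are illustrative, not measured:

```python
# Back-of-envelope VRAM extrapolation, calibrated on one reported figure:
# 320p -> 1280p (~2.9 megapixels of 16:9 output) reportedly used ~14 GB.
REF_MEGAPIXELS = (1280 * 16 // 9) * 1280 / 1e6   # ~2.91 MP at 16:9
REF_VRAM_GB = 14.0

def est_vram_gb(width: int, height: int) -> float:
    """Estimated VRAM (GB), assuming usage is linear in output pixel count."""
    return REF_VRAM_GB * (width * height / 1e6) / REF_MEGAPIXELS

print(round(est_vram_gb(1920, 1080), 1))  # -> 10.0: borderline on many cards
print(round(est_vram_gb(3840, 2160), 1))  # -> 39.9: out of reach for consumer GPUs
```

If the linear assumption is even approximately right, it backs up the point that upscaling beyond 1080p would not be an option for many users.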

The model represents a combination of Proteus and Iris. As far as pairing Iris with other models - for animation, I have had good results on occasion using Iris and Proteus together. However, for live action, I have achieved mind-blowing results using Iris in conjunction with Gaia HQ.

I hope things turn out well with the Rhea model.


Well, I get as far as importing my sequence, and that is all I can do.

No preview, no playback. Attempting to remove the source hangs and then crashes; attempting to simply select an enhancement and generate a preview crashes instantly, so I cannot test a thing.

I have not tried this yet, but in general, I have the following suggestions:

  • Noise and grain can be wanted or unwanted depending on the luminance, or brightness, of the area behind the noise. I highly suggest separate controls for the darker areas vs. the brighter areas, plus a slider that sets where the cutoff between “dark” and “bright” sits. Otherwise, removing all the unwanted noise in the shadow areas can make skin start to look like wax, for example. I would still want to reduce noise in the brighter areas, just less so than in the darker ones. The same goes for “blend with original” and “recover details”: separate brighter-vs-darker controls for how much blending happens. One option: keep each control once, but add a dropdown that switches between the “brighter” and “darker” sets (the selected values jump back and forth as you toggle), with the midpoint slider next to it. The tricky part will be blending the two where they overlap - maybe add a slider for how big that overlap is, too. This idea could be applied to any or all of the existing models, with a toggle to turn the dark-vs-bright feature off.

  • Fix the really annoying bugs:
    o-The original vs. enhanced preview frames do not always match
    o-The colors don’t match - add input and output colorspace controls, and/or try not to change colorspaces at all, if that’s even possible. Right now there are input colorspace controls, but this is not available in DaVinci Resolve. I would suggest featuring this more prominently in the input and output selection sections. Or, at least, use the same colorspace on the output that was selected as the source colorspace. If that is already the case, make it clearer by calling it input/output colorspace. Having control over the output colorspace would be preferred, though.
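The dark-vs-bright split suggested above could work roughly like this: compute a per-pixel luminance, then interpolate each control between its “dark” and “bright” values across a user-set midpoint and overlap band. A minimal sketch - all names and default values here are hypothetical, not actual TVAI controls:

```python
# Sketch of luminance-dependent denoise strength: one "dark" setting, one
# "bright" setting, blended smoothly across an overlap band around a midpoint.
def smoothstep(edge0: float, edge1: float, x: float) -> float:
    """Standard smoothstep: 0 below edge0, 1 above edge1, smooth in between."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def strength_for_luma(luma: float, dark: float = 0.8, bright: float = 0.3,
                      midpoint: float = 0.4, overlap: float = 0.2) -> float:
    """luma in [0,1]; returns the blended denoise strength for that pixel."""
    w = smoothstep(midpoint - overlap / 2, midpoint + overlap / 2, luma)
    return dark * (1.0 - w) + bright * w

print(strength_for_luma(0.1))  # deep shadow -> 0.8 (strong denoise)
print(strength_for_luma(0.9))  # highlight   -> 0.3 (light denoise)
```

The same interpolation could drive “blend with original” and “recover details” with their own dark/bright pairs; the overlap slider maps directly to the smoothstep band width.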

@eh117 I’d be very interested in hearing more about your workflow using Iris and Gaia together. Feel free to message me directly if this is not the appropriate venue for this discussion.


Details are amazing on 720x480 upscaled to 2x or 4x.

But performance is horrible, and the performance estimates are nowhere close to reality.
Example: select a 15 s preview at 2x with default Rhea settings.
The initial time estimate was 3.5 minutes.
The total time was over 8 minutes, with the time estimate continually ratcheting up.

Times seem to be getting slower as well.
Now it’s showing 41 min for a 5 s render! Obviously unusable…

Oh, and “Looping” doesn’t work.

Nvidia RTX 2060 Super (CPU is i7-10700K but is < 10% utilized)
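The ratcheting estimate is what you’d expect from a naive ETA when throughput keeps degrading: “frames left ÷ current fps” is recomputed each tick, so as fps falls, the estimate climbs even while frames complete. A small illustration with made-up numbers (this is a guess at the behavior, not TVAI’s actual estimator):

```python
# Naive remaining-time estimate: assumes the current throughput will hold.
def eta_seconds(frames_left: int, current_fps: float) -> float:
    return frames_left / current_fps

# Three snapshots of a degrading run (illustrative numbers): frames are
# getting completed, yet the ETA climbs because fps keeps dropping.
for frames_left, fps in [(450, 2.2), (400, 1.1), (350, 0.5)]:
    print(round(eta_seconds(frames_left, fps)))  # 205, then 364, then 700
```

Averaging fps over the whole run (or an exponential moving average) would make the estimate steadier, but any estimator will under-predict while throughput is still falling.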


If you read the first post:


A 6 s clip (720x540 → 1440x1080) estimates at over 5 minutes, as opposed to Proteus (v4), which takes 11 seconds.
I guess I’ll test this model and provide feedback when encoding times are more sane.

But from my initial test on my 6 s clip, Rhea destroys much more detail and is blurrier than Proteus v4.
For me personally, Proteus v4 beats Rhea at this point in time.

Test was conducted at 2x (720x540 → 1440x1080)
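For what it’s worth, the reported figures put the slowdown at roughly 27x, taking “over 5 min” as a lower bound (numbers are from this post, nothing else assumed):

```python
# Quick sanity check on the reported timings for the same 6 s clip.
rhea_seconds = 5 * 60        # lower bound: the estimate read "over 5 Min"
proteus_seconds = 11         # Proteus v4 on the same clip
slowdown = rhea_seconds / proteus_seconds
print(round(slowdown, 1))    # -> 27.3
```

That lands in the same ballpark as the “30x slower” figure quoted elsewhere in the thread.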

The info line should be like:

  • The current version of the model is too slow, even on high-end hardware, to be tested properly - but with a certain tendency towards masochism, feel free to give it a try anyway.

I am aware of it - I read it. It was just a small side note.
I wasn’t expecting it to be 30x slower… :slight_smile: Honestly


Since you have a 4090, we both know that’s not true. I get 3.5 fps with Rhea - slow, but not unusable :eyes:

I’m very happy with this model. I’ve tried it on low-resolution cartoons, and the results are amazing.
