Rhea combines Proteus and Iris. The model is intended to preserve fine detail more accurately than Iris, while also handling text less destructively.
There are a few key limitations of this alpha that testers should know about:
- The current version of the model is not fully optimized, so the speeds of this alpha are not representative of what users can expect when Rhea is released.
We’re very excited to have testers working with Rhea. Thanks as always for your feedback and testing.
Preliminary tests: getting some nice-looking results, and skin looks like it's retaining more detail (auto settings). Did the model get changed to do that, or am I just imagining it?
Ufff… hair looks really, really bad in my first test, and I have huge ghosting artifacts (kinda like 90s LCDs).
Definitely worse than Iris or Proteus in this case.
Will this always be restricted to progressive sources, or will it eventually be available for interlaced sources as well? I deal primarily with interlaced sources.
I just completed rendering 2.5 hours of footage using Proteus after many hours of running tests between Proteus and Iris. I finally settled on Proteus because, while Iris produced detail recovery that I can only describe as magical, the results from Proteus were more natural. The Iris results had what I would call an “AI shimmer.”
When you made this Rhea model available, I was hoping it might be a “best of both worlds” solution, but since Rhea is currently only available when Progressive is selected as the source type, it’s not ideal.
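In the meantime, one possible workaround (untested with Rhea, and assuming ffmpeg is in your toolchain) is to deinterlace the footage yourself before importing it, so the app sees a progressive source. A minimal sketch using ffmpeg's `bwdif` filter, with placeholder file names:

```sh
# Deinterlace an interlaced source to progressive before feeding it to Rhea.
# input.mkv / output.mkv are hypothetical names; pick codec settings to taste.
# bwdif mode=0 emits one frame per field pair; mode=1 doubles the frame rate
# by emitting one frame per field.
ffmpeg -i input.mkv -vf bwdif=mode=0 -c:v ffv1 -c:a copy output.mkv
```

Pre-deinterlacing like this is not the same as native interlaced support, but it would at least let you test Rhea on interlaced material.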
Is there any valid reason why new test versions appear unoptimized? Normally, just enabling the needed compiler options to get an optimized build shouldn't be too hard.
Also, on further testing it looks like source details are still being destroyed. Check the image comparisons below for the black artifacts, and also see how the fine details on the saucer section are being severely smoothed out/destroyed:
I think there is more work to be done on preserving detail.
Yes, I'm assuming they don't want to go through the time-consuming process of converting the models to TensorRT-compatible ones until the models are finalized.
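For context on why that step is slow: building a TensorRT engine profiles and selects kernels for the specific GPU, which can take a long time per model, and has to be redone after every model change. As a rough sketch of what such a conversion might look like, assuming the model is first exported to ONNX (file names are hypothetical; this is not necessarily Topaz's actual pipeline):

```sh
# Build an optimized TensorRT engine from an ONNX export.
# rhea.onnx / rhea.engine are placeholder names.
# trtexec benchmarks candidate kernels on the local GPU during the build,
# which is the time-consuming part referenced above.
trtexec --onnx=rhea.onnx --saveEngine=rhea.engine --fp16
```

Shipping the alpha without this step would explain why it runs unoptimized regardless of how the app itself was compiled.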