- The beta only has an effect (possibly stability issues) on what was changed in the changelog; the rest is exactly like the release version.
- The beta does not touch your release/regular installation or its presets; they are installed side by side, and you can launch them simultaneously.
- As of now it is very slow because it has not yet been optimized: it renders at 4x regardless of your settings, then scales down to your selected size (e.g. 2x).
- Answer No. 2.
Thanks
I just tested Rhea on a low-resolution, very compressed video at 640p x 2; Iris medium gives better results, in my case, for faces and hair. It follows the general shapes of people’s faces a bit less, but it’s more pleasing.
Rhea seems more faithful to the images, though.
It’s hard to decide which is best. Hair is sharper, with more detail, but noisier, because there is no detail left to retrieve; with Iris it’s purely new pixels. On the other hand, Rhea seems to keep the shape of the faces, with less detail, but the background is more pleasing, with fewer angular objects.
Topaz should train on YouTube videos: upload many videos at various resolutions, download them again, and train on the original paired with the compressed version.
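Purely to illustrate the idea (this is not anything Topaz actually does), here is a minimal Python sketch of building such (compressed, original) training pairs by re-encoding local clips with ffmpeg; the folder names, target heights, and CRF values are made up for the example.

```python
# Sketch only: generate (degraded, original) training pairs by re-encoding
# source clips at several resolutions and compression levels, roughly
# imitating what a streaming platform does. Paths and settings are assumptions.
import subprocess
from pathlib import Path

SOURCES = Path("clips_original")      # pristine source clips (hypothetical folder)
DEGRADED = Path("clips_degraded")     # where the compressed counterparts go
VARIANTS = [(360, 32), (480, 28), (720, 24)]  # (target height, x264 CRF)

DEGRADED.mkdir(exist_ok=True)

for src in SOURCES.glob("*.mp4"):
    for height, crf in VARIANTS:
        dst = DEGRADED / f"{src.stem}_{height}p_crf{crf}.mp4"
        # Downscale (keeping aspect ratio) and compress hard; the pair
        # (dst, src) then serves as a (low quality, ground truth) example.
        subprocess.run([
            "ffmpeg", "-y", "-i", str(src),
            "-vf", f"scale=-2:{height}",
            "-c:v", "libx264", "-crf", str(crf),
            "-an", str(dst),
        ], check=True)
```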
Edit:

Another thing: Rhea brings back too much detail in elements that are naturally out of focus.
This affects footage with shallow depth of field.
Agreed, a couple of us have reported the depth-of-field issues as well; hopefully the model can be refined to improve that before release.
No. I first use Rhea, then do a second pass with Theia Fine Tune Fidelity (Fix Compression 15, Reduce Noise 9) for old, noisy VHS. I will now try Theia Fine Tune Detail v3 (Fix Compression 50%, Reduce Noise 9%) after the Rhea pass; it looks promising. OK, these models do not work properly to improve Rhea, and Gaia does not improve it either. Nyx Fast is terrible, but Nyx v3 does seem to improve Rhea, on top of Iris and Proteus. I could use the slider to get more detail, but I see that it only restores noise… so I now use Rhea with Nyx v3. Nyx v3 is fast in preview but extremely slow when exporting; a bug? Back to square one: Fine Tune Fidelity v4 (Fix Compression 11, Sharpen 4, Reduce Noise 23) for my Rhea MP4. This second enhancement makes Rhea the best.
Apparently adding Theia as a second pass does not work; I have to import the Rhea output again and then use Fine Tune Fidelity v4 (Fix Compression 11, Sharpen 4, Reduce Noise 23) to get the best result. Rhea has a bit too much noise for some ugly old VHS tapes, while it does much better on high-grade VHS tapes that have not degraded too much, and it is fine on Super VHS. So there are real quality differences from one VHS tape to another; every type needs its own enhancement profile, which makes it difficult to choose one that fits all.
I don’t expect any refinements. Beta testers give tons of feedback regarding GUI and models and rarely is anything actually fixed/changed. Or so it seems to me anyway.
Just tried both the Windows and Mac versions of 5.2.0.2.b, and it still will not apply the default preset to any imported media (it has been like that for a number of previous releases now). You will see it appear as the ‘Preset’ while files import, but as soon as that process has finished the Preset setting reverts to ‘None’, so you then have to select all input files and re-select your chosen default to ensure it’s applied during export.
Still really could do with a global pause option for exports and better handling of batch encodes overall, but I’m guessing both features aren’t on the dev radar any time soon.
To be fair, very few companies make changes in UIs or feature sets based on beta feedback. Those changes are usually made at the alpha stage. Betas are generally bug hunts on potential release candidates.
Re: depth of field: why not build a “map” of sharpness for the original frame by looking at high-frequency detail, and then make sure the enhanced frame is adjusted to match that map? Objects intended to be out of focus would then be degraded/less enhanced in the new frame.
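For what it’s worth, here is a rough sketch of that idea in Python/OpenCV, assuming the original and enhanced frames are already the same resolution (e.g. the original naively upscaled to match); the block size and blending rule are arbitrary choices for illustration, not anything Video AI actually does.

```python
# Minimal sketch of a "sharpness map" constraint: estimate local high-frequency
# detail in the original frame, then blend the enhanced frame back toward the
# original where the original was soft. All constants are illustrative guesses.
import cv2
import numpy as np

def sharpness_map(frame_gray: np.ndarray, block: int = 16) -> np.ndarray:
    """Per-pixel estimate of local high-frequency detail, scaled to [0, 1]."""
    lap = cv2.Laplacian(frame_gray.astype(np.float32), cv2.CV_32F)
    # Local energy of the Laplacian approximates how "in focus" a region is.
    energy = np.sqrt(cv2.boxFilter(lap * lap, -1, (block, block)))
    return energy / (energy.max() + 1e-6)

def match_focus(original: np.ndarray, enhanced: np.ndarray) -> np.ndarray:
    """Keep enhancement in sharp regions, fall back to the original in soft ones."""
    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    w = sharpness_map(gray)[..., None]          # weight ~1 where the source was sharp
    out = w * enhanced.astype(np.float32) + (1 - w) * original.astype(np.float32)
    return out.clip(0, 255).astype(np.uint8)
```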
I PM’d you a week or two ago; I’ve been testing since this group started as a private Facebook group and have had good comms with Topaz people up until now.
The fact this even made it to “release candidate” while not being able to handle inputs from network drives is absolutely absurd. The release version freezes up when I try to use it; this beta simply won’t load the file.
At this point I’m wondering why I bothered upgrading at all. At least the old devs listened to (and often rewarded) the good beta testers… You guys aren’t even taking the massive gaping bugs out of the beta versions, much less giving people a reason to want to be test mules for this product anymore.
No wonder there are so few posts here, and none from the people like myself who have relentlessly tested this program for years… and now you’re taking features that were in previous versions and putting them behind a big paywall?
Maybe listen to the people who are doing work for you for free, and maybe give them some incentive to keep doing it.
Not to mention the “temp” file goes straight to the directory where I put the input file, not to where the settings say the temp file is supposed to go. This really knee-caps using my 1 TB NVMe drives for some jobs, as the output files exceed the drive capacity (even assuming there were zero files on it to begin with).
I think it’s not that easy. Think about noise/grain.
Also this would be a rather ugly workaround.
Ideally the AI model would learn to differentiate between focus areas itself.
The models don’t “learn” anything in the app. There would have to be some way of “telling” the app to apply the models in different strengths at different locations.
Photo AI has subject detection that allows the user to tell the app to sharpen image subjects apart from the rest of an image. At least in theory; I don’t have Photo AI, so no idea how well it works. But applying any effect frame by frame would probably make the slowest Video AI model look like a racecar by comparison.
I’m not talking about learning locally; I’m talking about the training of the models at Topaz.
I know about subject detection, which is also just a workaround.
In the end it’s just a tool, which is good to have, but shouldn’t be needed to fix flaws in the models.
The sticking point here would be how the model is supposed to detect the difference between out of focus content it should make sharp and out of focus content it should make less sharp.
If you can “detect” it, AI can do it, too.
Well, that’s the problem. It would take AI or some kind of manual input. The app doesn’t perform any actual AI on source videos and can only apply detection and changes on a blanket basis to an entire frame, so we’re back to needing some way for the user to manually tell it what parts of a frame to apply different enhancement settings to. And it would probably need to be done at every scene change.
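Just to show how little it would take to find those scene boundaries, here is an illustrative Python/OpenCV sketch of a crude scene-cut detector based on histogram correlation between consecutive frames; the threshold is a guess, and this is not how Video AI actually handles scenes. A per-region mask would then only need to be redrawn at each detected cut rather than on every frame.

```python
# Illustrative only: flag likely scene cuts by comparing hue histograms of
# consecutive frames. A low correlation between histograms suggests a cut.
import cv2

def scene_cuts(path: str, threshold: float = 0.5) -> list[int]:
    cap = cv2.VideoCapture(path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Correlation below the (guessed) threshold marks a new scene.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```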