Just curious how people like Rhea (especially compared to Proteus, or to a first pass with Artemis followed by a second enhancement pass with Proteus), and what kind of performance differences you've observed?
The oddest thing I've noticed in my admittedly limited usage so far: one section of the same 4 min video clip (1080p > 4K) processed at about 21 fps, but a different section of the clip that was effectively the same (content- and motion-wise) dropped down to 1 fps average, and as low as 0.4 fps, on a 7950X3D/4090… which made me sad lol.
Results were pretty good though, with slightly sharper, more detailed video vs Proteus on the same clip, but I definitely need to mess around some more. The update landed while I was on a 2 week upscaling vacation (mostly because of the insane heat wave).
@Imo Yes, it is very slow on my 4090 as well (~4 fps). I am trying to upscale 1080p remuxes (movies) to 4K. What are the best model and settings? Some on Reddit say Artemis, some say Gaia. Would love your thoughts and tips, thank you!
I have very little experience with 4K upscaling yet because I normally upscale to full HD. My favourite upscaling method is Iris Medium in Relative to Auto mode, where I tend to increase Fix compression, Improve detail, and Sharpen a bit. Maybe not as much as in this example.
I use Proteus like… 85% of the time. If I use anything else, it's a first pass of Artemis and then a 2nd enhancement pass using Proteus (usually Auto, or Relative to Auto if I think it needs a little nudge for quality).
Rhea works pretty well… I quite like it, but damn is it time consuming. If I'm doing something like 1080p to 4K using Rhea, I'll usually see between 2-4 fps depending on the scene in question.
Oddly, it seems to be faster the lower quality the input video is; I've seen it get as high as 8 fps before, but that's rare. I've also seen it do one section of a video at 6 fps, but then a couple of minutes later in the same scene it might only run at 2 fps.
I don't know if that's because it's still a fairly new model or if that's just the name of the game.
I'm pretty excited to see what kind of effect the 5090 has on the process though… I'm sure upgrading from the 7950X3D to the 9950X3D won't hurt, but the CPU isn't really the bottleneck in this equation, and that's true whether I let it run on CCD0, CCD1, or both. I've done quite a bit of comparison testing on all the models using different numbers of CPU cores, and playing around with whether it has access to the eight 5.45 GHz V-Cache cores or the 5.9 GHz cores without the extra cache, but results were generally the same (slight differences in certain videos notwithstanding).
In my experience it works best with access to both CCDs, but I set its affinity to cores 2-36 so cores 0 and 1 can handle general usage stuff. Even if I just let it go, the CPU has no issue doing whatever else while scaling; it's mostly a thermal thing for my OC.
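Pinning the encoder off the first few cores like this can be done on Windows with `start /affinity <hexmask>`, which takes a bitmask of logical CPUs. A minimal sketch of how that mask is built, assuming a 32-thread part where logical CPUs 0-31 exist (the function name and core range are illustrative, not anything TVAI-specific):

```python
# Sketch: build the hex affinity bitmask that Windows' `start /affinity`
# expects, reserving logical CPUs 0 and 1 for general desktop use.
# The 2..31 range is an assumption for a 32-thread CPU like a 7950X3D.
def affinity_mask(cores):
    """OR together one bit per logical CPU index."""
    mask = 0
    for c in cores:
        mask |= 1 << c
    return mask

mask = affinity_mask(range(2, 32))  # logical CPUs 2..31
print(f"{mask:X}")  # hex string to pass to `start /affinity`
```

You would then launch the app with something like `start /affinity FFFFFFFC <program>`; Task Manager's "Set affinity" dialog achieves the same thing interactively.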
Generally speaking, I'm using either Proteus or Rhea. I don't think I've ever used more than 50 for detail recovery, because past that the output starts to look artificial. I usually don't go over 30 for sharpen either, and if I can help it I limit compression fix to a max of 60, and only in truly awful source files.
I have some really fuzzy SD video from customers' VHS tapes that I've been using Proteus on with decent results, though the distortions are sometimes too much; still better than what I got with Iris. Rhea, on the other hand, looks amazing; even on really poor, fuzzy video, distortion is minimal at 2x. Unfortunately, Rhea takes 32 hours to render an hour of SD at 2x on my GPU. With results like this, I might just invest in a 3090, or a 3080 if that's enough power.
Hi,
Personally I really like Rhea and Proteus, with a small edge to Rhea: I find its results a little finer than Proteus's.
I currently use it with HD sources and convert to UHD (2x upscale) with Focus fix set to 'Normal'. (When you click on Normal, the software switches back to the original resolution, but you can then force it to 2x upscale.)
With Focus fix on Normal, the software first divides the resolution by 2 (960x540), and since I requested UHD output (3840x2160), it effectively enlarges by 4x. All at an encoding speed of about 20 fps with an RTX 4090.
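The resolution math above can be sketched in a few lines; this is just the arithmetic the post describes (the function name and the 0.5 prescale factor are illustrative assumptions, not TVAI internals):

```python
# Sketch of the Focus fix 'Normal' path: the source is first halved,
# so reaching a UHD target from 1080p is effectively a 4x upscale.
def effective_scale(src_w, src_h, out_w, out_h, prescale=0.5):
    work_w, work_h = src_w * prescale, src_h * prescale  # e.g. 960x540
    return out_w / work_w, out_h / work_h

print(effective_scale(1920, 1080, 3840, 2160))  # → (4.0, 4.0)
```

Which may explain why the result tracks the model's preferred 4x path even though the UI shows a 2x job.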
To be tested…
I lost so much time fine-tuning Proteus, Nyx, Artemis, and Gaia, which I used until now.
I tried Rhea, and it changed everything. Topaz did a great job with this model. Its accuracy and consistency are awesome. I mainly set it to “Manual” > “Estimate” with Sharpen at or near max and Improve detail doubled, and that's all!
The drawback is that if you go off the 4x scale, that perfection is clearly lost, so my 1080p turns into 8K, which consumes a lot of space and encoding time.
Where I also save time is that Rhea does not need a second pass with another model to “polish” the result; it's ready to use as is!
After upscaling to 8K, I downscale with a final AV1 encode to 4K: preset 1, crf 8, gop 239. It's slow, clearly, but the quality and file size are so worth it.
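As a sketch, that downscale-and-encode step maps onto an ffmpeg invocation roughly like the one below. The post doesn't say which AV1 encoder is used, so SVT-AV1 (`libsvtav1`) is an assumption, as are the file names and the lanczos scaler flag:

```python
# Sketch: build the ffmpeg command for the 8K -> 4K AV1 re-encode
# described above (preset 1, crf 8, gop 239). Encoder choice
# (libsvtav1), scaler flag, and file names are assumptions.
cmd = [
    "ffmpeg", "-i", "upscaled_8k.mkv",
    "-vf", "scale=3840:2160:flags=lanczos",  # downscale 8K to UHD
    "-c:v", "libsvtav1",
    "-preset", "1",   # very slow, high-quality SVT-AV1 preset
    "-crf", "8",      # near-transparent quality target
    "-g", "239",      # GOP length from the post
    "output_4k.mkv",
]
print(" ".join(cmd))
```

Run through `subprocess.run(cmd)` or pasted into a shell, this reproduces the "preset 1 crf 8 gop 239" settings quoted above.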
Such files play back very well on my RDNA2 player; I don't know about weaker decoders.
So Rhea + AV1 is, for my use, a huge step ahead! Now I feel my license is worth it.
EDIT: There are cons, however…
FFV1 is a wreck at 8K; it shouldn't be, and I wasn't able to apply my ffmpeg settings inside the .json file to tweak the parameter. The combination of a TVAI model plus the HUGE 8K uncompressed throughput drives my system mad within a few minutes.
Rhea eats so much GPU and VRAM that I feel sorry for those who don't have an RTX 3090 or 4090. My RTX 3070 suddenly feels almost worthless, relegated to modest encodes or 576p.
My Rhea runs drain 450 W of GPU, 16 to 20 GB of RAM, and 140 W of CPU, so… better buy solar panels if you want to encode with it.
The PROS
Rhea seems to put much less load on the CPU than older models, and that's a good point, because the CPU was starting to become a bottleneck.
From my observation (since late '23), Topaz's models seem to be designed around the short illustration clips Topaz attaches to each model.
E.g., Rhea is introduced with a gif/short clip whose source is very low quality (~360/480p… can't recall which at the moment); Rhea enhanced and upscaled that clip to the desired export resolution, etc.
So, in other words, it seems Rhea was not designed for 1080p+ source material; instead, Rhea was designed to enhance poor-quality sources, just like Iris.
Which kind of raises another concern: the most recent models seem to only cater to 720p-and-below material.
No recent model, since Artemis, and that's years ago, mainly addresses 4K-to-8K material (just because it was originally shot/scanned natively in 4K doesn't mean it is a good, CLEAN source). I know the HDR model is allegedly “coming soon”, but not everything needs HDR just to make a quality 4K or 8K project.
The final product looks great! But it takes way too long, and it only really works well doing 4x upscales? So I used it the one time and that was kind of it.
Rhea v2 is supposed to be a bit quicker, I guess. But tbh, when are we getting the next Proteus? The last update for that model was a year ago. I'm not asking them to update the models all the time, but they could move some resources away from the many unnecessary and unwelcome UI changes.
I don't think you can even imagine the resources needed to make a nice neural engine run properly.
You need two conditions:
A very well trained model.
→ Achieved with good back-propagation and weighting. That's Topaz's task, and it's not a small one.
A huge neural array: to keep it simple, you have input neurons, compute/hidden neurons, and output neurons.
→ In TVAI we have many input parameters, so there are far more input neurons than in usual models. This greatly increases the computation.
Other apps are way faster because they don't ask you for those parameters; you can only change the model, and with ANNs that changes everything.
I am not surprised at all by the resources that modern models now use.
Also, judging by Rhea's behaviour compared to Proteus, it runs fully on GPU+VRAM, so the CPU is relieved of that task.
A CPU does the job fine; a GPU does it 100x faster with an algorithm. It's not crazy to think Rhea does it 1000x faster, with quality added on top.
Don't forget that with an algorithmic (mathematical) method you CAN'T add detail; with an AI model, you can.
Me neither, considering a local copy of Llama 3 can easily fill the 4090's VRAM (and plenty of system memory on top), and then it can spend a decent amount of time processing tokens (text) just for context before it even begins to write an output.
These things aren't cheap resource-wise by a long shot, and I'm pretty sure video processing is significantly more intense than text.
To be efficient, a language model needs deep training, but you can now find many already-trained models that perform well on a personal computer.
I'm also setting up an AI model on my own computer for image-quality comparison; it's fun to do.
You now have all the frameworks to set such things up with minimal knowledge of computer science.
Keep in mind, a VERY well trained model isn't that huge or inefficient, but it MUST be well trained, and that's truly a pain if you're alone in the dark.
Does going off the 4x scale with Rhea really have that much impact on output quality?
I did a few test runs of Rhea at 2x (1080p → 4K) and 4x (1080p → 8K), then did a frame-by-frame comparison (with zoom) on my 4K monitor, and the results were fairly indistinguishable.
This was with animated content though… I've not tried anything else atm.