Topaz Video 1.3.1

Thanks Topaz for not leaving the Founders behind.

Hopefully, along with these models, Starlight Precise is also finally coming to Topaz Video soon. Although I’m sure it’ll probably be the most resource-intensive Starlight model yet, I’m still excited to try it out locally after being thoroughly impressed by its Cloud variant on Astra.

5 Likes

What is Starlight Precise? What are its strengths?

Adding my opinion here: in most of my cases, the result from Iris MQ as the last model to produce the final 2160p or 1080p output still depends on how clear the original 360p or 480p source is, plus the output from SLM x3, especially for human faces far from the camera. Iris MQ can remake faces close to the camera very well, but mostly not so well for the more distant faces. For those, I use one of the Theia models to slightly sharpen the soft output from SLM x3 without creating ghost faces.

Other than that, I found SL Sharp, Fast 2, and HQ are pickier as a first enhancement; overall, to me, SLM x3 or x2 is still the best first enhancement for 360p, 480p, and 720p original videos.

2 Likes

Starlight Precise is part of the Astra web tool, available via cloud processing only. It’s based on the original Starlight research model that was first released.

Here is the full list of models and modes available on Astra.

2 Likes

I’ve tested it and found Starlight HQ underwhelming:

The current best Starlight model IMO. It specializes in human subjects: skin texture, face, hair, etc. It’s really good. Kinda like the Iris model of the Starlight family.

Interesting. Can it beat Starlight Mini in output quality when upscaling low-resolution 480p footage?

1 Like

It stopped working for me as well on the same card:

1 Like

After testing the new models, I honestly don’t see much value in them. I tried various 480p and 720p sources at different quality levels.

While Starlight Fast 2 is indeed very fast compared to the other Starlight models, the results didn’t fully satisfy me in a single case. What’s the point of speed if the quality just isn’t there? I’d rather wait longer and use Starlight Mini or Sharp.

I find Starlight HQ even more questionable. Without any upscaling factor, the results on my test material were highly disappointing. Classic models like Proteus, Iris, or GAIA deliver better results, and with significantly shorter render times.

It becomes even more problematic when using the model for upscaling at 2x or 3x. Yes, the quality improves, but the render time increases so dramatically that it essentially disqualifies the model. What’s the benefit of slightly better results if the render time is completely disproportionate, especially when older models can achieve similar quality much faster?

In general, I think the idea of having specialized models for specific use cases is good and makes sense. But I’m starting to question whether the goal here is still to provide real value to users, or if it’s more about maximizing profits by artificially bloating the software and creating FOMO.

Otherwise, I can’t explain the incident with version 1.3.0, where Founder users were, supposedly by accident, not given access to the new Starlight models. That might have gone unnoticed if a user had not saved screenshots of the CEO’s earlier promises.

At this point, I’m honestly just waiting for them to release new and supposedly better models under names like Moonlight or Sunlight, so they can turn around and say that those are no longer part of Starlight.

Overall, this leaves a very bad taste, and I doubt I’m the only one who feels that way.

11 Likes

I’m assuming this is a bug: when rendering with DNxHR and Starlight, the audio is gone when it finishes. Anyone else seeing this?
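One quick way to confirm whether the exported file really lost its audio track is to inspect it with ffprobe (`-show_streams -print_format json` is a standard ffprobe interface; the small parsing helper below is my own sketch, not part of any Topaz tooling):

```python
import json
import subprocess

def has_audio_stream(streams_json: str) -> bool:
    """Return True if the ffprobe JSON output lists at least one audio stream."""
    info = json.loads(streams_json)
    return any(s.get("codec_type") == "audio" for s in info.get("streams", []))

def probe(path: str) -> str:
    """Run ffprobe (must be on PATH) and return its JSON stream listing."""
    return subprocess.run(
        ["ffprobe", "-v", "quiet", "-show_streams", "-print_format", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout

# Example against a hand-written ffprobe-style result for a video-only file:
sample = '{"streams": [{"codec_type": "video", "codec_name": "dnxhd"}]}'
print(has_audio_stream(sample))  # → False
```

Running `has_audio_stream(probe("output.mov"))` before and after the render would show whether the audio stream is actually being dropped or just not playing back.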

Agree with most of this. These new models don’t beat starlight mini/starlight sharp in terms of quality. I’m not sure what these new models should be used for. For example, if I’m upscaling an old tv show, quality is the #1 most important thing, not speed. If I’m making something for my own personal collection, the quality needs to be the highest available and the speed doesn’t matter because once it is done, it’s done. If the choice is to have the highest quality episode possible or to have a lower-quality episode delivered twice as fast, I’m going to choose to have the highest quality episode. The speed tradeoff is not worth the decrease in quality.

Making Starlight Mini is the best thing Topaz could have done. They asked what the highest-quality model they could make was, regardless of speed, and then they made it. This was the correct decision. Yes, it’s slow, but in the future they may be able to optimize it for speed, or it will speed up as new hardware is released. A speedup is inevitable, either from Topaz or from new hardware.

The videos that need upscaling the most are the ones that are 480p or below, that contain people. In my opinion, that’s where the primary focus should be. Going from 720p to 1080p/4k is nice, but that should be a secondary objective. (and should also be easier to achieve)

What I want most out of future models is:

  1. A new optimized version of starlight mini that is faster without any quality loss
  2. Starlight Mini 2.0, which would be an improved version of starlight mini with improved quality compared to the original starlight mini. This would be a model that introduces new details smartly, so that the new video retains the original subject matter and doesn’t change faces or objects in a way that is unrealistic when compared to the original video.

A speed optimized version of starlight mini seems the most possible in the short term. Topaz has said they have had difficulties speeding it up, but tech is changing so quickly now, maybe Nvidia or someone else will release something that allows them to achieve a speed increase without losing quality.

A Starlight Mini 2.0 may be further away, but I really don’t know. Eventually, there will be a model that can take a 480p source and upscale it perfectly to 1080p or higher. This is inevitable. The question is when: 1 year? 2 years? 5? 10? But it’s coming. Starlight has shown that it’s possible, and it seems like we are very close to achieving this. I’d say Starlight Mini is maybe 70-80% there. It’s not as sharp as a 1080p video, so there is still room for improvement. The details need to be accurate enough to the source material that your brain can’t tell the difference. I think I’m rambling now, so I will end the post. :grinning_face:

6 Likes

Can you share the logs with the support team for this process so we can review what happened? Just ran a few tests on various clips and was not able to replicate the situation when using DNxHR output codec.

help@topazlabs.com

I’ll say, 90% of the time, yes, it’s better than Starlight Mini. It feels like the first genuine Starlight upgrade since Mini. The only times I’ve seen Starlight Precise struggle is when the footage is particularly noisy, but overall I’m impressed.

BTW, all this only applies to the current iteration of Starlight Precise, i.e. 2.5. I’ve also tried the previous versions and they weren’t particularly impressive, and definitely inferior to Mini.

Also, I think anyone having Founder’s License and some spare video credits can try it out on Astra. It’s open for all.

You’re expressing exactly what I also fear and have already mentioned in the forum: that it might not actually be about significantly improving models, but rather about constantly creating new ones that may have their specific use cases, yet aren’t really “better” than Starlight Mini.

Why would they do that? Well, there’s the cloud—so they don’t end up competing with themselves. If that’s the case, and this is just my speculation, then I think it’s completely the wrong approach. The majority of users rely on the local Topaz product because they either can’t or don’t want to afford the cloud. And that makes sense—they’re not generating any profit from upscaling private videos; it’s purely for personal use, so intense cloud use falls flat.

This might sound a bit presumptuous, but I would strongly recommend that Topaz make the cloud product more attractive while also enabling better models for local users. A coexistence is definitely possible. There’s nothing wrong with releasing new models in the cloud first and then making them available locally later. However, creating new local models that are only “so-so” in quality just to keep customers engaged is not a business model that will work in the long term.

At the latest, when alternatives like SeedVR and similar solutions become better and easier to use, customers will leave—because they will have seen through the “game” and have a viable alternative.

9 Likes

Starlight Mini still is the best, for sure… :wink:

1 Like

When will you add the Starlight Precise 2 model locally?

5 Likes

Holy cow, Starlight Fast 2 absolutely cooks my RTX Pro 6000!! I left it to upscale a 20-minute clip and came back after an hour, and my poor GPU was at 92°C with fans at 100% and MSI Afterburner flashing a “warning!” sign!

Do the devs know why this model more than any other imposes Furmark-like torture on the GPU??

For reference, Starlight Mini is no problem, and Starlight Sharp is much heavier but doable, e.g. ~78-80°C with fans at ~85-100%. Other models like Rhea are relatively light.
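For anyone who wants numbers rather than Afterburner pop-ups, you can log thermals during a render with nvidia-smi. A minimal sketch, assuming `nvidia-smi` is on your PATH and a single GPU; the 85°C threshold is an arbitrary choice, not an NVIDIA limit:

```python
import subprocess
import time

# Real nvidia-smi query flags; with "nounits" each field comes back as a bare number.
QUERY = ["nvidia-smi",
         "--query-gpu=temperature.gpu,fan.speed,utilization.gpu",
         "--format=csv,noheader,nounits"]

def parse_gpu_line(line: str) -> dict:
    """Parse one CSV line from the query above into integer stats."""
    temp, fan, util = (int(x.strip()) for x in line.split(","))
    return {"temp_c": temp, "fan_pct": fan, "util_pct": util}

def watch(threshold_c: int = 85, interval_s: int = 10) -> None:
    """Poll the first GPU and flag readings hotter than threshold_c."""
    while True:
        out = subprocess.run(QUERY, capture_output=True, text=True, check=True).stdout
        stats = parse_gpu_line(out.splitlines()[0])
        flag = "  <-- running hot!" if stats["temp_c"] >= threshold_c else ""
        print(f'{stats["temp_c"]}C, fan {stats["fan_pct"]}%, '
              f'util {stats["util_pct"]}%{flag}')
        time.sleep(interval_s)

# Parser demo on a sample line like the one a hot Fast 2 render might produce:
print(parse_gpu_line("92, 100, 98"))
```

Leaving `watch()` running in a terminal during a Fast 2 job makes it easy to see whether the model holds the GPU at sustained 100% with no breathing room, which would explain the Furmark-like behavior.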

2 Likes

Something definitely seems off with the Starlight Fast 2 model. It is the only one that pushes my system to its absolute limits. I am running an RTX 5090, 64 GB of RAM and a 9950X3D. Temperatures are not an issue with my ASUS GeForce RTX 5090 ROG Astral, but the model causes my entire system to freeze briefly at irregular intervals.

Out of my tests with the new models, Starlight Fast is the one that has failed the most for me. I stacked up the same clip with each variation and let it get on with it (which I have done a few times), and 2 out of 3 times I came back to find that Starlight Fast had failed.

As I was writing this message, Topaz was doing another encode: the same clip in SLM and SLF. The SLM version took 30 min and the SLF 24½ min. But looking at the SLF one, it got to 99% after about 9 minutes and then did not finish for another 5. Not sure what was going on there.

Stable diffusion based models are essentially GPU torture tests.

1 Like