Topaz Video 1.0.0 (New Studio Release)

I can understand that you want to go down the path of least compression loss. I absolutely agree with this approach.

Topaz Video was clearly not designed to enable geometric corrections beyond standard aspect ratio issues. It’s also a question of weighing up the effort (development) and the benefits (number of cases like yours).

I am also disappointed that, apart from LUTs, there is no way in Topaz Video to quickly adjust contrast, brightness, or color saturation. But how many users still have this problem? And how far can this go… The next person might want to correct the gamma values, but then please also provide separate settings for RGB values.
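
In the meantime, since TVAI’s encoding pipeline is ffmpeg-based anyway (as comes up later in this thread), a quick pre- or post-pass with ffmpeg’s eq filter can cover basic brightness/contrast/saturation/gamma adjustments. A minimal sketch, with purely illustrative values and placeholder file names:

```bash
# Hedged sketch: basic color adjustments with ffmpeg's eq filter before or after Topaz.
# The numbers are illustrative, not recommendations.
ffmpeg -i input.mp4 \
  -vf "eq=brightness=0.05:contrast=1.10:saturation=1.20:gamma=0.95" \
  -c:v libx264 -crf 16 -preset slow -c:a copy output_graded.mp4
```

The extra encode obviously costs a little quality, so if minimal compression loss is the whole point, a near-lossless intermediate (very low CRF, or a lossless codec) is the safer choice.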

My contribution was not intended as a stopgap solution, but rather as a simplification. There are only two steps (I’ll discreetly ignore the 13 steps in VLC :wink:): correct the geometry with VLC, then render to the desired size and quality in Topaz Video.

Or use the command line. I haven’t worked with it yet.
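
For anyone curious what that command-line route might look like, here is a rough, untested sketch using ffmpeg (the same engine TVAI drives, as noted later in this thread); the file names and target frame size are placeholders:

```bash
# Hedged sketch: rescale a geometrically stretched clip to its intended frame size
# and reset the sample aspect ratio to square pixels, then feed the result to Topaz.
ffmpeg -i input.mp4 \
  -vf "scale=1440:1080,setsar=1" \
  -c:v libx264 -crf 14 -preset slow -c:a copy geometry_fixed.mp4
```

If only the display-aspect-ratio flag is wrong and the pixels themselves are fine, a filter like setdar=4/3 could be used instead of rescaling.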

What matters is the result—I hope you find the right one! :slight_smile:

1 Like

Hello and welcome.

You’ve come to the right place to ask questions about this topic.

If you move the pointer along the timeline on the right side back to post #1, you will find an introduction to the Studio version and information about hardware requirements.

Yes, I’m familiar with FAQs and ‘general’ hardware requirements.

I’d like to get user feedback on potential bottlenecks for Starlight local rendering. So I’ll list my system here; please tell me where I’m going to bottleneck, and by how much:

RTX 5090 - Almost pulled the trigger twice this week on MSRP ($1,999) cards. Both times they went out of stock while I was noodling on the rest of my system. (So assume I have or will get a 5090, to go with the items below):

  • Intel i5-11400 (does have iGPU so I could monitor output here)
  • 16GB DDR4 3200 RAM. I could upgrade to 32GB 3200MHz or even 64GB 2666MHz with some SO-DIMM to DIMM adapters I might buy.
  • 1000W SuperFlower Titanium PSU - unlikely any issues expected with the PSU.
  • Open-air frame/case with dedicated 120V power.
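
One practical way to answer the bottleneck question, once the card is installed, is simply to watch utilization during a short Starlight test render: if the GPU sits well below 100% while the CPU or the 16 GB of RAM is pegged, the i5/memory side is the limiter. A rough sketch (nvidia-smi ships with the NVIDIA driver; the one-second interval is just a convenient choice):

```bash
# Hedged sketch: log GPU utilization and framebuffer memory once per second
# while a short Starlight test render runs.
nvidia-smi dmon -s um -d 1
# Watch CPU and system RAM in parallel (Task Manager / Resource Monitor on Windows).
```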

Okay. Got it. Unfortunately, I’m not familiar with Windows. But this thread contains various comparisons and results, including manual adjustments to configuration files.

There’s a lot to scroll through.

I do AI generation with WAN and Hunyuan in ComfyUI. With various optimizations I generate 6-second videos in FP16 in a couple of minutes, even less with WAN and about 3.5 minutes with Hunyuan, on a 4070 Ti with 12 GB of VRAM. In Topaz Video AI I easily upscale 6 seconds of video with Rhea XL, stabilization and everything else included, in about 40 seconds. In Starlight x10 it takes me 20 minutes for six seconds of video; it runs at about 1 fps. Do you understand that this way it is unusable without a 90-series card? I don’t even try; it’s not affordable, so I keep using Rhea and the classic models. I hope for some heavy future optimizations for Starlight, because as things stand it is unthinkable locally without video cards costing thousands of euros/dollars. There’s no point in spending 20 minutes on six seconds of video.
There’s no point in using that model; I’d be much better off learning a workflow in ComfyUI, where I can apply a lot of optimizations, from LoRAs for acceleration to nodes and adjustments, and then use Starlight for the generative upscaling. I continue to use Topaz Video AI because I bought it a few years ago and have a perpetual license, and Rhea and the standard models are much faster than node-based upscaling in ComfyUI, so I definitely prefer Topaz in that case. But in its current state, Starlight is simply unusable. I don’t understand it: I generate 6 seconds of FP16 in 2 minutes and then upscale with Starlight in 20 minutes? It doesn’t make any sense. Interesting, but I’ll skip it immediately in favor of the old models.
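
As a rough sanity check on those timings (my own back-of-the-envelope arithmetic, nothing measured): processing time ≈ (clip length × source frame rate) ÷ model throughput. A 6-second clip at 30 fps is about 180 frames, and at the roughly 0.2 fps Starlight throughput mentioned a few posts further down that works out to about 900 seconds, i.e. 15 minutes, which lands in the same ballpark as the 20 minutes described here.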

2 Likes

I upscaled an image from this forum with SeedVR2 and am sharing the results; zoom in and look especially at the faces, eyes, and other details.

https://imgsli.com/NDE4ODg5

https://imgsli.com/NDE4ODg4

3 in 1:
https://imgsli.com/NDE4OTQx

1 Like

It’s doing 0.2 FPS

I inquired with three different professional outfits about getting the best quality from VHS tapes for feeding into Topaz. All of them said they used Handbrake to deinterlace.

With that example I do like the SLM one better - it’s less detailed, of course - but it also seems more natural.
What SeedVR delivers there I can sometimes also achieve with Iris MQ or Rhea, but both of those only look good in still shots. If you look at it in motion and on a really big screen, it mostly screams “AI” in your face.

(Here the ultimate test is playing back the upscaled video with the projector on the 260" screen.)

1 Like

Yes, I can see it. SeedVR2 looks promising and I want to see more. Right now I’m fighting to get ComfyUI-SeedVR2 working :smirking_face:
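
In case it helps anyone fighting the same battle, custom nodes are normally installed the same way in ComfyUI; the repository URL below is a placeholder for whatever the SeedVR2 node’s actual repo is:

```bash
# Hedged sketch of the usual ComfyUI custom-node install; the repo URL is a placeholder.
cd ComfyUI/custom_nodes
git clone https://github.com/<author>/ComfyUI-SeedVR2.git
cd ComfyUI-SeedVR2
pip install -r requirements.txt   # only if the node ships a requirements file
# Restart ComfyUI afterwards so the new nodes are picked up.
```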

1 Like

I agree, it has to make sense from the business side for TL to do anything about it. I was just hoping that if they do introduce an “Aspect Ratio Lock” checkbox, it will do the same as other programs do: change the ratio of the pixels. I see that this function is there, but it doesn’t work. There are many issues with these apps. Many are QoL improvements in the areas of cropping, trimming, and the whole UI, which keeps changing depending on which external developers they assign to it. It’s the most inconsistent UI development I have seen. And as you mentioned, the color grading is non-existent; the LUT option is just a gimmick that doesn’t really help in the workflow. Thanks for your expertise and the VLC workflow, I do appreciate it a lot!

What impresses me is how SeedVR2 can “cancel” noise, but on the other hand it seems to me that it overemphasizes faces, for example, which can look artificial. Starlight has a lower “wow” effect here, but seems to work more subtly to me. This is just my early impression.

If I have a noisy source, SeedVR2 could show its strengths, maybe? I don’t like it when there is a grey veil over the content, and working with NYX is not a solution either.

1 Like

Take a closer look at the new test.

https://imgsli.com/NDE5MDA0/2/1

The “defogging” is better and faces are more pronounced, but probably because there are so few pixels, they get distorted in SeedVR2; it does too much on faces here. And you’ll also see a lot of newly invented textures that don’t exist in the source, but it looks good.

Andy, I saw this earlier, and Handbrake does an excellent job of deinterlacing… It’s free and works beautifully on the Mac.

I think the other products mentioned will do an exceptional job as well but…

Handbrake is just easy and works.

I’ve been looking for really high-end VHS tape transcriptions… and when I talked to 3 different outfits that have the really high-end gear, with $10,000 time-base correctors and the really old Super VHS and Canopus devices…

Every one of them said they used Handbrake to deinterlace their product if the customer wanted a deinterlaced file.

Your call and good luck.

1 Like

So many subscriptions; I think I might just stop all my support everywhere soon.

2 Likes

I think you’ve misunderstood: SLS does the texture matching, while SeedVR2 preserves the original.

Hello John, thank you for your contribution. I rarely work with Handbrake, but I tried it and was not really thrilled. Could you please post your Handbrake settings?

But one “problem” remains: it is a compromise. I actually wanted to rule that out. But let’s see what the result looks like with your settings.

Handbrake is just a graphical user interface for ffmpeg—the same thing that TVAI uses.
Looks like I was wrong to some degree. It tends to use the original program for things rather than only being able to do it through ffmpeg.
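
For completeness, the deinterlacing step itself can also be reproduced directly with ffmpeg. This is a generic sketch of the commonly used bwdif/yadif deinterlacers, not John’s actual Handbrake preset (he would still need to share that); file names and CRF are placeholders:

```bash
# Hedged sketch: deinterlace a VHS capture with ffmpeg's bwdif filter (yadif also works),
# keeping quality high for a later pass through Topaz.
ffmpeg -i vhs_capture.mkv \
  -vf "bwdif=mode=send_frame" \
  -c:v libx264 -crf 14 -preset slow -c:a copy deinterlaced.mkv
```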

Well, if they have a lot of RAM, they work faster. Or let’s say: better… :wink:

But it’s true - the resources can supply cascaded systems. The choice remains a personal one: a lot of money for a lot of graphics cards and a lot of money for a lot of electricity, or a lot of patience for little money.

1 Like