Topaz Video 1.0.4 - Patch

That still doesn’t make sense. The TensorRT models are there, and they don’t suddenly vanish just because AMD and Apple Silicon models are added to the app.

So why shouldn’t they use those on supported NVIDIA platforms and the others on AMD/Mac?

Besides, on my NVIDIA PC systems the Studio versions aren’t really any faster than the older TVAI 7.0.2.

Plus, the Mac versions behave quite a bit differently from the Nvidia ones, so they seem to use different code.

If you’re only talking about the Starlight models, that’s different. I don’t know much about that.

Beats me. That’s not how they operate, though - hence the recent palaver with the below-par (for Nvidia users) Starlight version that they subsequently withdrew.

The fact is, if you delete all the .onnx files from your Studio models folder, the program won’t run until it has downloaded them again.

The Starlight model files are the same for both versions of the software.

I’m getting a surprising amount of pushback on this. If you really want to test it, remove all of your .onnx model files, then disconnect your internet and see if the software will run without them. Or ask the developers directly - they’re obviously much better informed than I am. I’m just repeating what my own research and experience have shown.

No updates since Nov 6. Is Topaz cooking something interesting, or is it just the Christmas/New Year holidays?

5 Likes

With all due respect, I don’t care about the cloud or which models are easier to maintain for a subscription model. I want my desktop version to run as fast as possible by any means necessary. And I don’t care about AMD/Intel/Apple GPUs because that’s not what I have. If TensorRT decreases the processing time, then that’s what I want.

10 Likes

Topaz Video 1.0.5 loading…

Same here - that’s exactly the point I was originally trying to make.

Topaz has announced a new Starlight model, but we don’t know yet whether it runs in the cloud or locally.

1 Like

And it’s the same procedure as every time: instead of getting Starlight working correctly, there’s a new model.

2 Likes

That’s what I’m afraid of, and surely the artifacts in the new model won’t be fixed either. Then they’ll release another model afterwards, and they can keep doing that over and over again.

2 Likes

That’s why I use SeedVR2 for the heavy lifting and Starlight as a final refinement.

Join us!

3 Likes

This gave me a little chuckle. I ran across the Trustpilot review I did for Topaz Labs last year. No need to alter anything about it.

3 Likes

Nvidia released TensorRT RTX earlier this year, and Topaz Video is using TensorRT RTX.

This new framework allows us to convert/compile the TRT engine from the ONNX file on the user’s machine. The compiled model (.trt) files are stored in trtCache inside the models directory. We are working on a blog post with Nvidia explaining this change.
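To illustrate the compile-once-then-cache flow described above, here is a minimal Python sketch. The names (`trtCache`, `compile_fn`) and the hash-keyed file layout are assumptions for illustration, not Topaz’s actual implementation:

```python
# Sketch of "compile the TRT engine from the ONNX file on the user's
# machine, then cache it" - hypothetical layout, not Topaz's real code.
import hashlib
from pathlib import Path


def engine_path_for(onnx_path: Path, cache_dir: Path) -> Path:
    # Key the cached engine by the ONNX file's content hash, so a
    # newly downloaded model version triggers a fresh compile.
    digest = hashlib.sha256(onnx_path.read_bytes()).hexdigest()[:16]
    return cache_dir / f"{onnx_path.stem}-{digest}.trt"


def load_or_compile(onnx_path: Path, cache_dir: Path, compile_fn) -> Path:
    cache_dir.mkdir(parents=True, exist_ok=True)
    engine = engine_path_for(onnx_path, cache_dir)
    if not engine.exists():
        # Cache miss: cross-compile on this machine (slow, done once).
        engine.write_bytes(compile_fn(onnx_path.read_bytes()))
    return engine  # cache hit on all later runs
```

The point of keying the cache on the ONNX content hash is that recompilation happens only when a new ONNX model is downloaded, which matches the behavior described in this thread.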

Topaz Labs has partnerships with every major hardware manufacturer. So we will always ensure optimal performance on AMD, Apple, Intel, Nvidia and Qualcomm hardware.

If you see any performance differences between different versions of the apps, let me know with logs and examples, and we will work on getting it fixed.

@efox31 @ForSerious @jo.vo @cbrillow @Moebius

6 Likes

Thank you for addressing this. Honestly, understanding exactly what this means is a bit over my head, but perhaps it will satisfy others who have voiced similar misgivings about this issue.

Recommendation: don’t put any default folder locations inside the user folder, where OneDrive often runs backups.

1 Like

Is there a plan for the Studio version to support TRT models? Those run faster than ONNX on Nvidia, no? Or would running Video AI be faster?

Based on this quote:

The models downloaded to your local machine are ONNX. If you have an NVIDIA GPU, the ONNX models are cross-compiled locally to produce native TRT models, which are then used to perform upscaling.

The TRT models are cached locally on your machine, so the cross-compilation only has to happen when a new ONNX model is downloaded.

The NVIDIA Optimized Inference AI library also allows for cross-compilation of ONNX models to other GPU architectures, so in theory, AMD and Mac could benefit. This would require cross-compilers to be developed for those architectures, though.

2 Likes

Hi Suraj, will CUDA Tile change anything performance-wise in the future?

5 Likes

Hello everyone!

I am unsure if this is the right forum to ask, so apologies if it is not.

Do you guys know if there is a way to force Topaz to downsample to a fixed fps? I downsample a lot to 16 fps to re-import into ComfyUI in vid2vid VACE Wan2.2 workflows, but most of the time downsampling with Topaz yields a variable frame rate (the average is 16 fps, but it varies). I was wondering if there is a trick to force Topaz to downsample at a fixed 16 fps.

Thank you :slight_smile:
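For what it’s worth, a common workaround outside Topaz is to normalize the frame rate with ffmpeg before importing into ComfyUI: the `fps` filter duplicates/drops frames as needed and emits constant-frame-rate output. The file names below are placeholders:

```shell
# Hypothetical post-processing step (not a Topaz feature):
# force constant 16 fps on the Topaz output before ComfyUI import.
ffmpeg -i topaz_output.mp4 -vf fps=16 -c:a copy fixed_16fps.mp4
```

This re-encodes the video stream but copies audio unchanged; whether quality loss from the extra encode is acceptable in your workflow is a judgment call.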