That still doesn’t make sense. The TensorRT models are there, and they don’t suddenly vanish just because AMD and Apple Silicon models are added to the app.
So why should they not use them on supported NVIDIA platforms and the others on AMD/Mac?
Besides, the Studio versions aren’t noticeably faster on my NVIDIA PC systems than the older TVAI 7.0.2.
Plus, the Mac versions behave quite differently from the Nvidia ones, so they seem to use different code.
Beats me. That’s not how they operate, though - hence the recent palaver with the below-par (for Nvidia users) Starlight version that they subsequently withdrew.
The fact is, if you delete all the .onnx files from your Studio models folder, the program won’t run until it has downloaded them again.
The Starlight model files are the same for both versions of the software.
I’m getting a surprising amount of pushback on this. If you really want to test it, remove all of your .onnx model files, then disconnect your internet and see if the software will run without them. Or ask the developers directly - they’re obviously much better informed than I am. I’m just repeating what my own research and experience have shown.
With all due respect, I don’t care about the cloud or which models are easier to maintain for a subscription model. I want my desktop version to run as fast as possible by any means necessary. And I don’t care about AMD/Intel/Apple GPUs because that’s not what I have. If TensorRT decreases the processing time, then that’s what I want.
That’s what I’m afraid of, and I’m sure artifacts in the new model won’t be fixed either… then they release yet another model afterwards. They can keep doing that over and over again.
Nvidia released TensorRT RTX earlier this year, and Topaz Video is using it. This new framework allows us to convert/compile the TRT engine from the ONNX file on the user’s machine. The compiled model (.trt) files are stored in trtCache inside the models directory. We are working on a blog post with Nvidia explaining this change.
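For anyone curious what "compile on the user's machine, then cache in trtCache" looks like as a pattern, here is a minimal Python sketch. Everything in it is illustrative: `compile_engine` stands in for the real TensorRT RTX compiler, and the hash-keyed file naming is my assumption, not Topaz's actual on-disk format.

```python
import hashlib
from pathlib import Path

def engine_path(onnx_file: Path, cache_dir: Path) -> Path:
    """Key the cached engine by a hash of the ONNX bytes, so a newly
    downloaded model version triggers a fresh compile (assumed scheme)."""
    digest = hashlib.sha256(onnx_file.read_bytes()).hexdigest()[:16]
    return cache_dir / f"{onnx_file.stem}-{digest}.trt"

def load_engine(onnx_file: Path, cache_dir: Path, compile_engine) -> bytes:
    """Return engine bytes, compiling once on first use.

    `compile_engine` is a hypothetical stand-in for the ONNX -> TRT
    compiler; real TensorRT compilation happens in native code."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    cached = engine_path(onnx_file, cache_dir)
    if not cached.exists():                 # first run: compile and store
        cached.write_bytes(compile_engine(onnx_file.read_bytes()))
    return cached.read_bytes()              # later runs: read from cache
```

The point of the pattern is that the (slow) compile step runs once per downloaded model, and every subsequent launch just reads the cached engine file.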
Topaz Labs has partnerships with every major hardware manufacturer. So we will always ensure optimal performance on AMD, Apple, Intel, Nvidia and Qualcomm hardware.
If you see any performance differences between versions of the apps, let me know with logs and examples, and we will work on getting it fixed.
Thank you for addressing this. Honestly, understanding exactly what this means is a bit over my head, but perhaps it will satisfy others who have voiced similar misgivings about this issue.
The models downloaded to your local machine are ONNX. If you have an NVIDIA GPU, the ONNX models are cross-compiled locally to produce native TRT models, which are then used to perform upscaling.
The TRT models are cached locally on your machine, so the cross-compilation only has to happen when a new ONNX model is downloaded.
The NVIDIA Optimized Inference AI library also allows for cross-compilation of ONNX models to other GPU architectures, so in theory, AMD and Mac could benefit. This would require cross-compilers to be developed for those architectures, though.
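Topaz hasn't published its internal pipeline, but the same "compile ONNX to a native engine, then cache it" behavior is exposed publicly by ONNX Runtime's TensorRT execution provider, which may help make the mechanism concrete. This is an illustration of the general pattern, not Topaz's actual code; the session line is commented out because it needs onnxruntime-gpu, TensorRT, and an NVIDIA GPU.

```python
# Illustrative only: ONNX Runtime's TensorRT execution provider can
# compile an ONNX model to a TRT engine and cache it on disk via
# provider options. Not Topaz's code path, just the same pattern.
providers = [
    ("TensorrtExecutionProvider", {
        "trt_engine_cache_enable": True,      # reuse compiled engines
        "trt_engine_cache_path": "trtCache",  # where engines are stored
    }),
    "CUDAExecutionProvider",  # fallback if TensorRT can't handle a node
    "CPUExecutionProvider",   # final fallback
]

# Creating a session compiles/caches the engine on first run:
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```

With caching enabled, the first session creation is slow (engine build) and later ones are fast, which matches the behavior described above.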
I am unsure if this is the right forum to ask, so apologies if it is not.
Do you guys know if there is a way to force Topaz to downsample to a fixed fps? I downsample a lot to 16 fps to re-import into ComfyUI in vid2vid VACE Wan2.2 workflows, but most of the time downsampling with Topaz will yield variable fps (the average is 16 fps, but it varies). Is there a trick to force Topaz into downsampling at a fixed 16 fps rate?
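I don't know of a Topaz switch for this, but for context, a constant-frame-rate (CFR) conversion pins each output frame to a uniform 1/16 s grid and duplicates or drops source frames to fill the slots; this is roughly what ffmpeg's `fps=16` filter does as a post-processing step. Here is a minimal Python sketch of that retiming; the timestamps and the "latest frame at or before the slot" picking rule are illustrative assumptions, not Topaz's algorithm.

```python
def to_constant_fps(src_timestamps, fps):
    """Map variable-rate source timestamps onto a fixed-fps grid.

    For each output slot t = k / fps, pick the latest source frame at
    or before t (duplicating/dropping frames as needed). Returns one
    source-frame index per output frame.
    """
    duration = src_timestamps[-1]
    n_out = int(duration * fps) + 1
    out = []
    src_i = 0
    for k in range(n_out):
        t = k / fps
        # advance while the next source frame is not after this slot
        while src_i + 1 < len(src_timestamps) and src_timestamps[src_i + 1] <= t:
            src_i += 1
        out.append(src_i)
    return out
```

For example, variable timestamps `[0.0, 0.07, 0.12, 0.20, 0.25]` resampled at 16 fps give the frame indices `[0, 0, 2, 2, 4]`: five output frames exactly 1/16 s apart, with frames 1 and 3 dropped and frames 0 and 2 held.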