Why do you think it’s good, and have you compared it to the Video AI models? Generally speaking, a real-time model is not as effective as an offline model, so comparing the two doesn’t make much sense.
Someone here on the forums said they can run it in MPC-BE (Media Player Classic - Black Edition) on PC. If you have an Nvidia RTX card in your PC, you should be able to install MPC-BE and use it to upscale videos in real time with this method.
I think what Topaz offers is a different kind of service, and from what I can see the results are noticeably better with Topaz, but of course it’s not real time.
…
MPC-BE (aka Media Player Classic - Black Edition) is a free and open-source video and audio player for Windows, based on the original Media Player Classic (MPC) project, but with additional features and bug fixes.
…
MPC VideoRenderer with code to enable NVIDIA RTX VSR & Intel Xe VPE scalers.
Tested with the Feb 2023 MPC-BE 1.6.6 release, also known to work with clsid2’s updated MPC-HC fork, and most likely other DirectShow-based players too (i.e., anything that worked with madVR).
As you can see from the comparison, the already existing methods are very similar to Nvidia’s so-called Super Resolution. That said, Nvidia also helped Adobe implement their “Super Resolution” in Lightroom and Adobe Camera Raw for stills, but compared to Topaz Gigapixel AI it’s not even close.
So I’m not sure Nvidia is really that advanced; in my view there’s more hype behind it than is warranted.
I’m not sure he said that precisely; more like inquiring what VSR could mean for TVAI. Not much, at the moment, I’d say, as it’s only meant for enhancing videos in browsers. Having the ability to enable this, at driver level, for all video processed, might be more interesting. But since the driver wouldn’t know by how much, or even if, you’re going to upscale, its practical use, outside a browser, is likely close to zero.
Well, like I said, if nVidia added VSR to their drivers, as something you can enable for all 4K output, I could see some benefit for general preprocessors. But if you’re using TVAI, indeed, use the best tool, and not something which may be good, but is still only meant for real-time processing (so necessarily weaker). If it really could be done real-time, at the level of TVAI, Topaz would already have done so.
Well, I run the Studio driver so I can’t test it for myself right now, but the sample they showed in that YouTube video when upscaling 1080p to 4K looked decent enough as a first implementation; the 360p to 4K didn’t look good, so that didn’t interest me.
Given that the technology is built in at the driver level, could it mean that Topaz models could benefit from a speed improvement at all? Have they looked at or considered the implications of this at the driver level?
A real-time model to remove artifacts from 1080p or even 4K video might be a welcome addition within Topaz, if a driver-level optimization can now be leveraged.
Your comments are fair enough, and well thought out. My post was in part to start a dialog, so thank you for contributing.
For me, I’m happy with the Topaz product, but I am wondering if there are optimizations for speed at the current quality, or better quality at the current speed. My 3090 doesn’t seem to break a sweat running Topaz on many of the models I use, to the point where I can load up two, and sometimes three, instances of Topaz before the GPU is taxed.
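For what it’s worth, one way to use that spare headroom without juggling GUI windows is to script parallel exports against the ffmpeg build that ships with TVAI 3. This is only a minimal sketch under assumptions: the install path and the filter string below are placeholders you’d copy from your own setup (the GUI can show the command it would run for a given preset), not the real values.

```python
import subprocess
from pathlib import Path

# Placeholder: path to the ffmpeg build that ships with Topaz Video AI.
# The real location and the exact tvai filter arguments come from your own
# install / preset; copy them from the command the GUI shows for your export.
TVAI_FFMPEG = r"C:\Program Files\Topaz Labs LLC\Topaz Video AI\ffmpeg.exe"
FILTER = "tvai_up=model=XXX:scale=2"  # placeholder filter string, not a real preset

def start_job(src: Path, dst: Path) -> subprocess.Popen:
    """Launch one export as its own process so several can run concurrently."""
    cmd = [TVAI_FFMPEG, "-y", "-i", str(src), "-vf", FILTER, str(dst)]
    return subprocess.Popen(cmd)

# Run two exports at once (roughly what loading up two TVAI instances does),
# then wait for both to finish.
jobs = [start_job(Path(f"clip{i}.mp4"), Path(f"clip{i}_upscaled.mp4")) for i in (1, 2)]
for job in jobs:
    job.wait()
```

Whether two or three concurrent jobs is the sweet spot probably depends more on VRAM and the model than on raw GPU utilization, so it’s worth watching memory rather than just the utilization graph.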
The products/solutions you provided are not ‘better enough’ for me to dive into them, but thank you for pointing them out - if in your judgement they’re not better than Topaz, and about the same quality as Nvidia’s, then they’re probably not that interesting to me. My angle was more about how to take what Topaz does well now and make it better with potential driver updates to support the scaling.
Thank you for reframing my post in a better way. I was in fact trying to leave an open question about how Topaz could benefit from the work Nvidia did at the driver level. The sample in the video I linked, going from 1080p to 4K to remove artifacts and rainbows at a pretty fast pace, was interesting. The 360p to 4K was not usable in the sample video they demonstrated.
The browser benefits are not interesting to me as I have more than enough bandwidth, and most of what I watch is at least 1080p which looks fine to me for YouTube stuff. I probably wasn’t clear enough in my question, but you captured it well, thank you.
Fair enough. Of course we all want to have it both ways: speed and quality. But someone raised similar issues to yours a couple of weeks back, or maybe a month ago, when Topaz Video AI was at 3.0.12 or a version around there. It was buggy and slow. The question raised in the forum was about speed vs. quality. If I recall correctly, most suggested stability first, quality second, speed third. I think I’m in that same camp; how about yourself?
Also, a few things to note. To their credit, the Topaz team did increase speed over the last few iterations of version 3 of their software, and I am sure they will continue to work on it. They (the Topaz team) also released their roadmap, mentioning the things they are working on at the moment, so maybe that provides some info as well.
Video Roadmap Update (Feb 2023)
I think the Topaz team mentioned speed increases for some cards and drivers in the last few versions, but of course with so many different hardware configurations there will be trade-offs. Ideally hardware would be the only limitation, but I think software can do a lot to optimize the process.
Speaking of speed: I personally feel that no amount of speed is as important as stability. If you are a frequent reader of the forums here, you will notice the many bugs people are still experiencing. I hope Topaz can fix those first, because what is the point of speed if the render skips frames, the app crashes, the colors of the original video get changed, the UI has bugs, etc.? In my opinion the current version of the software is still at a beta stage and is only slowly coming out of it. So I hope we first see a stable, truly stable, release and then an optimized release. I have a feeling many would agree with that.
I agree, stability first… It’s rare that I have crashes, but you are correct that stability is necessary, especially when you’re doing a project that can take days or even a week. I do see stability as part of performance; often when you optimize software you can achieve both stability and performance. I am hopeful that as the developers code the software, they organically give us both.
I’ll check out the roadmap update in the morning, thanks for that link.
I’m kinda racking my brain on how this could potentially work. Say I feed TVAI a native 1080p input. Unless the Nvidia driver has already done its AI work, the driver doesn’t know yet, at that point, what you will do with that source. So you can imagine a scenario where the driver starts applying VSR when TVAI actually starts to output/process in 4K. I’m not sure that would be beneficial per se; TVAI would be doing its own thang, and then the driver would barge in, saying “Oh, I see you’re upscaling to 4K now, let me enhance this picture for you.” But at a preprocessing stage, where the driver has already dealt with your 1080p input, VSR might have some added value. Especially if TVAI had a check box, like ‘Apply VSR for input’ or something.
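Just to make that ordering question concrete, here is a toy sketch of the two pipelines being compared: VSR barging in after TVAI has already produced 4K, versus VSR as an opt-in cleanup pass on the 1080p input before TVAI does the real work. Every function and the check-box flag here is hypothetical; none of this corresponds to real TVAI or Nvidia driver APIs.

```python
# Purely illustrative model of the two orderings discussed above.

def driver_vsr(frame: str, target_height: int) -> str:
    """Hypothetical driver-level VSR pass (real-time, comparatively weak)."""
    return f"vsr({frame} -> {target_height}p)"

def tvai_upscale(frame: str, target_height: int) -> str:
    """Hypothetical TVAI model pass (offline, stronger)."""
    return f"tvai({frame} -> {target_height}p)"

def pipeline(frame_1080p: str, apply_vsr_to_input: bool = False) -> str:
    # Option A: the hypothetical 'Apply VSR for input' checkbox -- the driver
    # enhances the 1080p source first, at its native resolution, as a preprocessor.
    if apply_vsr_to_input:
        frame_1080p = driver_vsr(frame_1080p, 1080)
    # TVAI always produces the final 4K output itself either way.
    return tvai_upscale(frame_1080p, 2160)

print(pipeline("frame_1080p"))                           # TVAI alone
print(pipeline("frame_1080p", apply_vsr_to_input=True))  # VSR preprocess, then TVAI
```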