Linking my reply from the other Roadmap thread here:
You’re not listening, Tony. I do not wish to, nor will I, put any of my footage on a cloud server.
It doesn’t matter what security protocols are in place - my footage stays local.
I am well aware of Topaz’s policy of allowing indefinite use of any version released during a customer’s subscription year.
But, as technology moves on, do you mean to tell me that legacy upscaling models will be indefinitely supported?
If not, that promise of indefinite use rings a little hollow.
I just have to look at all the comments about the lackluster releases that make up version 6. And, honestly, the company’s response feels tone-deaf.
You and other satisfied users will look at my comments and perhaps resign yourselves to the notion that I just want to gripe. No. That is inaccurate. I became a customer of Topaz and ALL its products because of the company it was. A company it no longer is.
I’m infinitely sad about the realignment of the company. I’ve been here since the first beta, but what’s happening now is indescribable. It’s all about how much money you can get out of people’s pockets: maximum profit with minimal effort, and you’ve led people little by little to where you wanted them. That’s also why almost all the feedback, wishes, thoughts, and suggestions from beta testers get ignored: the feedback is worthless because it doesn’t generate any money. And now we’ve reached the end of the road. I definitely won’t take part anymore or support a company like this. Ciao.
I think one way to think of it is as separate components:
- “Topaz Video AI version X”, the software you purchased an indefinite license to
- The 12-month upgrade license included with purchase
- Ongoing early access and free experiments, including free Starlight cloud renders
Precisely.
I can see why you may feel that way at first, but we are offering a considerable amount of free compute to allow all users to see what is currently possible with maximum GPU resources. I definitely recommend trying the Starlight free access while we work on more ways to access it.
First off, my sincere thanks for taking on Project Starlight. I hope we don’t sound ungrateful, as this has the potential to be a big step forward.
That being said, while increased speed is great, I’d much rather be able to process locally at all than wait for a faster version.
If need be, pop an “early access / feature preview” label on it and make it opt-in. Cloud processing is a non-starter for my use case, unfortunately, so it’s better to have the feature at all than to sit and wait for a faster version to roll along.
You didn’t answer my question. You’re deflecting. Yes or no, will legacy models be indefinitely supported?
We hear you on that. If the situation were “Starlight technically could run on some desktop cards but it’s slow enough that we don’t think people will want to use it”, we’d probably make it available anyway to avoid a lot of the assumptions we’re seeing around cloud exclusivity. Right now, we really do need server scale for this experience.
It depends on how support is defined – we already have one or two models that do not run on older OS versions. This will happen over time, and may or may not be resolvable.
The overall offering of local processing is what I’m referring to. We plan to advance our local model quality in the future.
Yet many of the latest Macs can indeed support this much unified memory out of the box. It’s less common, yes, but it means there is an audience of users who have the necessary system memory to handle this locally. In theory, the product could be released for these users first and gradually expanded to users with lower system specs.
Sure, you offer free computing so users can see what is possible – and when enough folks hop on board, can you honestly tell us that the product’s trajectory won’t gravitate more and more toward a cloud-based solution?
Again, for the many users who are protective of their footage, cloud-based rendering is not a solution that fits.
I’m done trying to get you to understand where I and others are coming from.
There’ll be another company that figures out a better, more supportive, customer-oriented solution, and Topaz will lose market share through its bull-headed determination to ignore the very customers it hopes to retain.
So, the promise of indefinite use is inaccurate. Got it. Thanks for clarifying. Not something to hang your hat on.
I completely understand where you’re coming from. You want the latest models and best quality in the app you bought a license for. That’s totally reasonable, and also our goal.
At the same time, our team is a research unit that needs to explore and iterate on the latest methods that could advance video enhancement quality. That is the core of what I and everyone else who works on this app wants to do.
In this case, we are starting with a jump in quality that is much more notable than many of the previous updates to other models. Along with the jump in quality, we have a jump in compute requirements that we are confident can be decreased.
I don’t want to change your mind about wanting to use cloud rendering, but I’d like you to know that Starlight isn’t meant to replace everything you like about Video AI already.
No. I want a company that listens to its customers and is respectful of where THEY are… not where you WANT them to go. You have people bitching about your product, and yet it’s all duckies and bunnies in your world.
[Topaz Compare embed: ST DS9 example]
With the popularity of AI, it’s not unheard of at all for people to have bought used GPUs on eBay - 2x A6000, 2x L40, 2x RTX 8000, etc. MacBook Pro and Mac Studio come with 96 to 192 GB unified memory options, etc.
My next MacBook is guaranteed to have at least 96 GB, and I could technically run this model even if it took days per run.
To be honest, I’m still more offended by multi-GPU being locked behind the Pro tier than I am by cloud rendering. It’s the combination of disallowing local runs entirely and offering this only as a model therefore too large to run that irks me.
If you removed Pro / opened up multi-GPU to everyone again, I wouldn’t be angry at all that Starlight required 80 GB of VRAM for now. I would just be amazed at the possibilities, dream of or plan on getting new hardware, look forward to seeing results from those who could run it, wait for smaller variants, and potentially pay for cloud rendering in the meantime.
Nah, the cloud compute aspect and powerful server hardware don’t magically produce better results. Technically, you could make a model whose system requirements are difficult to match, but as long as your system is capable of running the model, it should produce the same results even if it takes an eternity to complete the job.
And I guess that’s fine. Some people would be happy to leave their computer rendering for a week, as long as they can do it locally and only pay for the electricity.
And with machines like the M4 Mac mini for example, those electricity costs won’t be too noticeable.
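To put rough numbers on that (my own ballpark assumptions, not measured figures): an M4 Mac mini under sustained load draws somewhere in the neighborhood of 65 W, so even a full week of continuous rendering adds up to only a few dollars at typical residential rates. A quick sketch:

```python
# Ballpark electricity cost for a week-long local render.
# Assumptions (mine, not measurements): ~65 W sustained draw for an
# M4 Mac mini under load, and residential rates of $0.15-0.30/kWh.

POWER_KW = 0.065      # assumed sustained power draw in kilowatts
HOURS = 7 * 24        # one week of continuous rendering

energy_kwh = POWER_KW * HOURS  # ~10.9 kWh

for rate_usd_per_kwh in (0.15, 0.30):
    cost = energy_kwh * rate_usd_per_kwh
    print(f"${rate_usd_per_kwh:.2f}/kWh -> ~${cost:.2f} for {energy_kwh:.1f} kWh")
```

Under those assumptions it comes out to roughly $2–3 for the entire week, which is why “slow but local” is a perfectly workable trade for some of us.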
If so:
“VRAM requirement: Upscaling the provided toy example by 4x, with 72 frames, a width of 426, and a height of 240, requires around 39GB of VRAM using the default settings.”
I found the last sentence more interesting:
“VRAM requirement: Upscaling the provided toy example by 4x, with 72 frames, a width of 426, and a height of 240, requires around 39GB of VRAM using the default settings. If you encounter an OOM problem, you can set a smaller frame_length in inference_sr.sh. We recommend using a GPU with at least 24GB of VRAM to run this project.”
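Out of curiosity, here’s a back-of-envelope reading of those numbers (the constants below are my own guesses, not anything from that project’s docs): if you assume the model weights occupy a fixed chunk of VRAM and activation memory scales roughly linearly with frame_length, the quoted 39 GB at 72 frames implies that shrinking the chunk size is what brings you down toward the 24 GB recommendation:

```python
# Back-of-envelope VRAM estimate vs. frame_length for chunked upscaling.
# Assumptions (mine, not from the project): weights occupy a fixed
# ~10 GB, and activation memory scales linearly with frames per chunk.
# The 39 GB / 72 frames data point is from the quoted toy example.

WEIGHTS_GB = 10.0       # assumed fixed cost of resident model weights
MEASURED_GB = 39.0      # quoted total VRAM for the toy example
MEASURED_FRAMES = 72    # frames in the quoted toy example

# Per-frame activation cost implied by the quoted data point.
per_frame_gb = (MEASURED_GB - WEIGHTS_GB) / MEASURED_FRAMES

def estimated_vram_gb(frame_length: int) -> float:
    """Rough total VRAM needed for a given frame_length chunk size."""
    return WEIGHTS_GB + per_frame_gb * frame_length

for frames in (72, 36, 24, 12):
    print(f"frame_length={frames:3d} -> ~{estimated_vram_gb(frames):.1f} GB")
```

Under those made-up constants, a frame_length around 24–36 lands in the 20–25 GB range, which at least squares with the quoted “at least 24GB” recommendation.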