Starlight: Stop Hiding Behind "Cloud-Only" Excuses — We Know Why

It’s time to cut the crap.

Topaz keeps claiming that Project Starlight must be “cloud-first” because it’s “too large and complex for local machines.”
That’s nonsense, and we all know it.

Your cloud hardware — A100s, H100s, RTX 6000 Ada, even 4090s in some cases — is fundamentally the same architecture as consumer GPUs now widely available (RTX 4090, 5080, 5090).
There is no such thing as magical cloud hardware.

The reality is simple:
If you released the Starlight model for local execution, people would copy it, reverse-engineer it, optimize it, and competitors would begin to emerge.

That’s what you’re actually protecting — not the user experience, and not technical feasibility.

So let’s be clear:

  • If Starlight really can’t run on a machine with an RTX 5080, 128GB RAM, and modern NVMe storage, tell us specifically why.
  • If it’s simply a business decision to control access, then just own it… and stop insulting the intelligence of your professional users.

We are your customers — the ones who invested in serious hardware specifically to avoid cloud bottlenecks and paywalls.

The professional community deserves a straight answer:

  • What resources would actually be required to run Starlight locally?
  • If none, admit that cloud-only access is about intellectual property protection, not technical necessity.

Enough marketing spin.
Either respect your customers, give us the hardware requirements and fork over the model, or simply admit that cloud-only access is about business control, not technological impossibility.

Waiting for a real answer.
Thank you in advance.

They have already said they’re going to release a local version.

I expect it to change facial expressions just like the cloud version, so it’s not something I’m ever going to use.

2 Likes

The team is working on a version of Project Starlight that will be able to run locally on some machines with certain hardware requirements.

The current model runs on H100 GPU clusters with 80GB of VRAM per GPU. While some of the architecture is similar to consumer cards, the specs are not. Comparing the H100 even to an RTX 6000 is a stretch, given what they are designed to do and the workloads they can handle.
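For anyone curious how the VRAM side of that comparison works out, here is a rough back-of-envelope sketch in Python. The parameter counts, fp16 precision, and the 1.5x activation overhead are purely illustrative assumptions for the sake of the arithmetic, not anything Topaz has published about Starlight:

```python
# Rough back-of-envelope VRAM estimate for running a model at inference time.
# All model sizes below are illustrative assumptions, not actual Starlight specs.

GIB = 1024 ** 3

def weights_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return n_params * bytes_per_param / GIB

def fits(n_params: float, bytes_per_param: int, vram_gib: float,
         activation_overhead: float = 1.5) -> bool:
    """Crude check: weights plus an assumed activation/workspace multiplier."""
    return weights_gib(n_params, bytes_per_param) * activation_overhead <= vram_gib

# Hypothetical model sizes at fp16 (2 bytes per parameter).
for n_params in (3e9, 7e9, 20e9):
    print(
        f"{n_params / 1e9:>4.0f}B params @ fp16: "
        f"{weights_gib(n_params, 2):5.1f} GiB weights | "
        f"fits in 16 GiB (RTX 5080-class)? {fits(n_params, 2, 16)} | "
        f"fits in 80 GiB (H100)? {fits(n_params, 2, 80)}"
    )
```

Raw capacity is only part of the picture, of course; memory bandwidth, interconnect, and sustained throughput are where data-center and consumer cards diverge most, which is presumably part of what has to be worked around for any local build.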

Reverse engineering an AI model, while technically possible, is quite difficult to achieve in practice. Creating a similar model is more likely, but that still takes time, large data sets, and a lot of work. The existing models in the Video AI app have not been duplicated; there are many other competitors on the market, and yet we still have a lot of users who include Video AI in their workflows.

That said, there are times when other models are better suited to a particular workflow or use case. Even Project Starlight is not meant to be used in every situation.

Back to the local Starlight model option: it is in the works, as was stated shortly after the launch of the cloud-only version, and it is getting closer to being ready to roll out. However, there are still many details to work out, and the team is doing that now.

Thank you for sharing your thoughts. Neither the company nor its support team is providing marketing spin when we interact here on the forums. We have the utmost respect for our users, and we ask for the same in return, as we are all people here trying to push the models and our apps as far and as fast as we can.

4 Likes

Thank you for confirming that a local version of Project Starlight is in active development. It hasn’t been front and center, so I hadn’t seen any announcement stating that this was truly the case, only a ‘maybe.’

To help users set realistic expectations and plan accordingly, can anyone clarify:

  • Will the local version of Starlight deliver the same output quality and architecture as the current cloud model?

or

  • Will the local model necessarily involve compromises (lighter architecture, different results) to accommodate consumer hardware?

I recognize that individual forum team members may not be responsible for technical decisions, but accurate guidance is critical for those of us making serious hardware investments.

While performance differences between cloud and local environments are understandable, it would be helpful to clearly distinguish between throughput limitations (e.g., slower speeds) and compromises in model design or output quality. That is to say: we recognize that faster throughput on H100 clusters is expected; what matters most to users planning workflows is knowing whether the results themselves will be equivalent.

Sure, Starlight isn’t a silver bullet for everything, but for those of us doing restoration work it practically is, since no other model comes close in terms of results. That’s the heart of the point I am making.

Thank you again for your time and for your commitment to supporting professional workflows.

1 Like

The team has been working on the options and did not want to over-promise a local version of the model until they were actually able to get something working, as this is a huge task to work out.

From what I have heard, the local model will be a lighter version of the existing cloud-only option but will deliver similar results. I have not tested it myself, and info is limited, as the team is still working hard on it and changes are still being made based on results and testing.

As the team has more data and results, more info will be released for users.

4 Likes

Thank you. The technology is incredible and, like impatient children, we all want it today! Always looking forward to what is coming!