The process of improving video quality with Project Starlight

The H100 tops out at 94GB (the NVL variant); the standard SXM and PCIe cards have 80GB

The AMD Instinct MI325X has 288GB of VRAM, so a full terabyte on a single card seems unlikely. Even so, those with access to server hardware can use system RAM as a substitute in some cases.
Edit: Additionally, some motherboards natively support up to 2TB, like Gigabyte’s MZ30-AR0.

You can read more about it here: GIGABYTE's new AMD EPYC motherboard supports 1TB of RAM

Pretty sure H100s can be pooled and share their VRAM… but I wasn’t paying too much attention to the details since I will never be buying such a system.
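For a rough sense of what “pooling” would mean here, the following back-of-the-envelope sketch counts how many H100s would have to be combined to expose roughly 1TB of aggregate VRAM. The capacities are published figures (80GB standard, 94GB for the NVL variant); note that pooled VRAM is not one flat address space — frameworks shard the model across cards via tensor or pipeline parallelism.

```python
import math

# Back-of-the-envelope: GPUs needed to pool ~1 TB of combined VRAM.
TARGET_GB = 1024     # ~1 TB target
H100_GB = 80         # standard H100 SXM/PCIe capacity
H100_NVL_GB = 94     # H100 NVL variant capacity

gpus_needed = math.ceil(TARGET_GB / H100_GB)
nvl_needed = math.ceil(TARGET_GB / H100_NVL_GB)
print(gpus_needed)   # 13
print(nvl_needed)    # 11
```

So “a terabyte of VRAM” implies a multi-node cluster, not a single workstation — consistent with the point that this hardware class is out of reach for home users.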

1 Like

Depends on the system.

I don’t think big customers want to show you 720p on pay TV, or to buy a 5090 for processing.

Having now read every single post in this interesting thread, my take on this topic is as follows:

  1. Topaz has clearly pivoted over the last couple of years, and I would not be surprised if the studio/enterprise business now exceeds the consumer business, nor if the growth curve points up more sharply on the enterprise side. As such, it may very well be that Topaz is shifting focus entirely, but it all depends on the numbers, and only Tony/Eric have those. The point is that “If you go cloud, we’ll not be your customers” may be irrelevant to TL depending on those numbers. If they favor enterprise, then consumers as customers are not the future, and I’d expect TL to be fully aware of this (an informed play, not something we need to tell them).

  2. The arguments made by Eric, Tony, and the research lead that Starlight is a research project and not a customer product (a tech demo, if you like) make perfect sense. I trust them completely on that one, having worked in R&D before. But… as others pointed out, and #1 alludes to, it also seems plausible, even likely, that model quality will be tiered going forward, with SOTA performance offered only under the “OpenAI” premise: give us your data and we’ll process it for you (amongst other things). I’m sure they’ll optimize both the H100/H200 version of the model for production use (studios etc.) and some distilled, less capable models to run locally.

I’m still rooting for Topaz, because they are one of the last “local first” players still out there. The new angle of “let’s see what we can do without constraints” sets a dangerous precedent (for local-first advocates). Constraints are the best source of innovation: if you remove them (e.g. memory limits or GPU processing requirements), the product will bloat to fill whatever capacity is available. That’s why Nokia, Apple, SpaceX etc. were forced to innovate: to overcome firm, practical resource constraints that forced them to solve problems differently. Also called R&D (research influenced by engineering and vice versa). If you give researchers infinite resources (a fleet of H100s), they will be satisfied once a tech demo shows results worth writing a paper about. Translating that to something that can run on 24GB of VRAM is a monster of a challenge, and will likely require completely new research. And I’ve not met many researchers who are happy when engineering comes knocking: “Look, it needs to fit in [box].”

So the prior customer concern raised here, along the lines of “Why don’t you develop models that fit the resource-constraint envelope?”, does have plenty of merit. Distilling a model that was designed with practically no constraints down to an envelope an order of magnitude smaller will not produce the same quality as designing within that constraint from the start. That is a common R&D pattern. So there is cause for concern if “infinite resources” is how TL will develop the product’s components going forward. From what I’ve read, Eric’s father (TL’s founder) was acutely aware of the importance of resource constraints. He was (is?) an engineer, and those constraints led to a very special way of tackling video enhancement, through a “local first” mindset. My main concern is that there are signs Topaz is moving away from that fundamental perspective. Only Eric and Tony can speak to the company’s actual vision and future mission, but this is what I’ve gleaned from their responses here.
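For readers unfamiliar with distillation: the standard recipe (Hinton-style soft targets) trains a small student to match the temperature-softened output distribution of a large teacher. The toy sketch below, in pure Python with made-up logits, shows the core KL objective being minimized — it illustrates the general technique only, not anything about Topaz’s actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions -- the core
    term a student minimizes to mimic a larger teacher model."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss;
# a mismatched student is penalized.
teacher = [3.0, 1.0, 0.2]
identical = distillation_kl(teacher, [3.0, 1.0, 0.2])
mismatched = distillation_kl(teacher, [0.2, 1.0, 3.0])
print(identical)   # 0.0
print(mismatched)  # > 0
```

The catch the post describes is exactly this loss being driven toward, but never reaching, zero: the student can only approximate the teacher, so quality lost in distillation is hard to recover compared with designing for the small envelope from the start.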

2 Likes

Interesting.
Folks here on this forum keep saying how many customers Adobe lost when they switched to subscription-based pricing.

That’s not quite true.

Actually, they nearly doubled their number of subscribers, and in less time.

What happened is they lost non-pro users but gained many more pro users, which was a planned shift in Adobe’s product, marketing, and vision.

I have a suggestion: you could shard the input data and distribute it to many shared GPUs that your users individually own, each processing a small portion of the video, with everything reassembled at the initiating machine. Basically a GPU cloud. Users could opt in, and even earn credits for allowing use, like some mining software does currently.
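The split/process/reassemble pattern in that suggestion can be sketched in a few lines. This is a minimal illustration with hypothetical names: `process_chunk` is a stand-in for the remote GPU work, and a real system would additionally need scheduling, encryption, fault tolerance for unreliable peers, and the credit accounting mentioned above.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_chunks(frames, n_workers):
    """Shard a frame sequence into roughly equal contiguous chunks."""
    size = -(-len(frames) // n_workers)  # ceiling division
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def process_chunk(chunk):
    """Stand-in for remote GPU work (e.g. upscaling each frame)."""
    return [frame.upper() for frame in chunk]

def distributed_render(frames, n_workers=4):
    chunks = split_into_chunks(frames, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(process_chunk, chunks)  # map preserves order
    # Reassemble at the initiating machine, in original frame order.
    return [frame for chunk in results for frame in chunk]

frames = [f"frame{i}" for i in range(10)]
print(distributed_render(frames))
```

Because `map` returns results in submission order, reassembly stays in the original frame order even when workers finish out of order — the key property a distributed renderer would need.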

EXACTLY! THANK YOU!

Topaz spent WAY too long rearranging the user interface a bazillion times, totally lost their way in the process, and now they’re struggling to catch up. See my post from a year ago.

5 Likes

They can do cloud rendering all they want as long as they don’t let the base product die, but that’s exactly what seems to be happening, slowly, right now.

7 Likes

Grabbed this from the “X” Starlight page. If this is an improvement, I’m afraid to see what isn’t. Massive colour shifts, highlights where there shouldn’t be any, skin texture obliterated (I hate flat, waxy skin in the name of “beauty”), heavy-handed denoising, and overly sharp edges that look fake compared to the rest of the picture, as with previous models. Many of these problems exist in older models and have never been addressed, regardless of how often they’ve been pointed out. As others have already stated, this cloud-based processing is just an excuse for pay-per-minute pricing, and it gets a big thumbs down from me.
You should be focusing on getting current versions bug-free, not changing the interface for no reason. People have paid for a working program, and honestly, performance improvements should be given out freely, not just to people on newer versions. Optimization should be a base function, not an upgrade!

6 Likes

Ran a little test using some public domain clips (from the Reagan Library) from Liberty Weekend in 1986

Since there were two major fireworks displays, I chose to use the clips to test the lowlight performance of this model

Start clip
End clip

Since it is a lowlight clip, color is fairly negligible, so this might not be representative of the current state of the model

This might be interesting for those who want to try out a Topaz model before paying $300 for a license, or those who cannot run Topaz models locally due to memory constraints (me included; I’m currently on a MacBook Pro with 8GB of unified memory and plan to upgrade in the near future).

It’s fantastic. Keep up the great work.

Let your customers decide if they can run something. I think I’m done, I see where this is going. I’m canceling my sub.

3 Likes

I’m a little late to reply, but I just wanted to drop my two cents on Starlight.

First off, any online server rendering is useless to me. My internet isn’t very fast, so having to upload raw video to a server for processing is out of the question.

You guys were doing great until you started the user interface updates, which IMO are still confusing to work with and make it difficult to find the manual settings (which are tucked away).

My license expired a few months ago and I haven’t renewed, because there have hardly been any improvements to the models (the part that makes Topaz what it is). Then I saw the announcement for Starlight, which further discouraged me from renewing, as I have no interest in server rendering and see it as a pointless feature. I have a powerful computer for a reason.

IMO you should focus on refining the Proteus model, which is almost perfect. It has one flaw: an object moving across a static background causes detail to be lost where the object moved. That wasn’t an issue in Proteus 3.0, though version 3 had a flickering issue of its own.

Anyway, I get that this is experimental and will eventually come to home computers, but I won’t be renewing my license if this is where things are going.

Please focus on the things that matter.

4 Likes

I hope starlight can be adjusted to run locally from our desktops soon, even if processing times are very long. I’d prefer the option of leaving my desktop running for a few days to process a 5 minute clip with starlight quality, over the current cloud and credit system.

A five minute clip might take a week or two. Would you be okay with that?

Personally, yes. I’d just leave the desktop running in the background while I go about my daily life.

Yes, for sure. Why not?

The computer runs 24/7 here anyway, and on the Mac it stays fully usable/responsive while TVAI runs, and it doesn’t use that much power.
So: no, absolutely no problem with a task needing a week or more to finish.
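For what it’s worth, the “week or two” estimate is easy to sanity-check with back-of-the-envelope arithmetic. The numbers below are assumptions for illustration: a 5-minute clip at 30 fps and a hypothetical 60 seconds of local processing per frame for a heavy diffusion-style model.

```python
# Sanity check of a "days to weeks" local-processing estimate.
clip_minutes = 5
fps = 30
seconds_per_frame = 60  # assumed cost for a very heavy model

total_frames = clip_minutes * 60 * fps           # 5 min * 60 s * 30 fps
total_seconds = total_frames * seconds_per_frame
days = total_seconds / 86400                     # seconds per day
print(total_frames)    # 9000
print(round(days, 2))  # 6.25
```

So under those assumptions, a 5-minute clip lands around a week of wall-clock time; double the per-frame cost and you’re at the two-week end of the estimate.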

There was a March 6 update to output 4K from Starlight. I updated to the latest version and sent two clips to Starlight, but they still came out as FHD (same as the input). How do I get Starlight to upscale to 4K?
Also, it output the results as H.264 despite me selecting ProRes HQ as the output. How do I get the highest-quality output?

I agree with everyone that the model should work offline. I don’t care how long it takes to complete; according to ChatGPT, all I have to do is replace the thermal paste on the chip from time to time.

1 Like