Project Starlight - Video AI 6.1 Beta


Today’s release marks a milestone in the history of Topaz Video AI: we are releasing Project Starlight, the first-ever diffusion model for video enhancement.

This is a significant evolution over our existing models, and it’s the single largest increase in model capability since Video AI launched.

Even the most challenging footage can be enhanced with Project Starlight. So please, try out the “bad” footage—footage that may have previously produced artifacts with current models.

Because of its size, this model is significantly slower and more expensive to run than previous models, and requires cloud processing on server-grade graphics hardware. But this is just the beginning. Our mission as a product and research organization is to advance visual quality to the furthest extent possible—it just takes a little more processing power to get there right now.

We see a clear path toward bringing Starlight and other models of this generation to very high-end desktop GPUs, and Project Starlight is just the start of a new series of models.


Project Starlight Research Preview

Active Video AI users can currently process three free 10-second previews of Starlight per week. Results will be viewable through unlisted, shareable links via the new “Compare” button in Video AI.

These previews take about 20 minutes to render if servers are available immediately, and all exports are set to 1080p.


Starlight Early Access

After testing initial server capacity, we’ll enable processing of up to 9,000 frames (about five minutes at 30 fps) in a single export using cloud credits. We are currently pricing this service below cost and working to offer more usage at lower prices.


Our research team is eager to hear feedback and discuss this step-change in AI video upscaling. We appreciate your involvement in this journey to Video AI’s next frontier.




Project Starlight announcement on X

Project Starlight is now also available in the Video AI web app.

6.1.0.2.b

11 Likes

I don’t test new models on cloud services.


I had an idea: an AI model that manages its own memory needs, reducing the need for fast (and/or large) memory by managing the size of the data held in the GPU cache (L2/L3), depending on the GPU architecture.

But it’s only an idea. :face_with_peeking_eye:


Or, as a next step, let an AI manage the memory pool by itself.

System memory, GPU memory, drives, and cache (if possible).

It would put data that is needed soon in the right place in the system.

That would reduce the need for fast, large GPU memory. A rough sketch of what I mean is below.
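
A minimal sketch of the concept in Python (everything here is hypothetical: the `TieredPool` class, the tier names, and the toy byte capacities are invented for illustration, and real GPU L2/L3 cache placement isn’t controllable from user code like this):

```python
import os
import tempfile

TIERS = ["gpu", "ram", "disk"]  # fastest to slowest

class TieredPool:
    """Keeps the blobs predicted to be needed soonest in the fastest tier."""

    def __init__(self, capacities):
        self.cap = capacities                 # bytes per tier, e.g. {"gpu": 8, ...}
        self.used = {t: 0 for t in TIERS}
        self.loc = {}                         # key -> (tier, blob or disk path)
        self.next_use = {}                    # key -> predicted step of next access

    def _evict(self, key):
        """Remove an entry from its tier and return its raw bytes."""
        tier, payload = self.loc.pop(key)
        if tier == "disk":
            with open(payload, "rb") as f:
                blob = f.read()
            os.remove(payload)
        else:
            blob = payload
        self.used[tier] -= len(blob)
        return blob

    def _place(self, key, blob, start=0):
        """Try tiers from `start` downward, demoting colder blobs as needed."""
        for i in range(start, len(TIERS)):
            tier = TIERS[i]
            while self.used[tier] + len(blob) > self.cap[tier]:
                # Victim: the blob in this tier whose next use is furthest away.
                same = [k for k, (t, _) in self.loc.items() if t == tier]
                victim = max(same, key=lambda k: self.next_use[k], default=None)
                if victim is None or self.next_use[victim] <= self.next_use[key]:
                    break  # nothing colder than the incoming blob to push down
                self._place(victim, self._evict(victim), start=i + 1)
            if self.used[tier] + len(blob) <= self.cap[tier]:
                if tier == "disk":
                    fd, path = tempfile.mkstemp()
                    with os.fdopen(fd, "wb") as f:
                        f.write(blob)
                    self.loc[key] = (tier, path)
                else:
                    self.loc[key] = (tier, blob)
                self.used[tier] += len(blob)
                return tier
        raise MemoryError("blob fits in no tier")

    def put(self, key, blob, predicted_next_use):
        self.next_use[key] = predicted_next_use
        return self._place(key, blob)

# Toy capacities: a "hot" frame lands on the GPU, pushing a colder one to RAM.
pool = TieredPool({"gpu": 8, "ram": 16, "disk": 1 << 20})
print(pool.put("frame0", b"x" * 8, predicted_next_use=5))  # -> gpu
print(pool.put("frame1", b"y" * 8, predicted_next_use=1))  # -> gpu (frame0 -> ram)
```

The interesting part would of course be the predictor that supplies `predicted_next_use`; here it’s just a number passed in by hand.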

8 Likes

I really HOPE that this model (or a lighter model with similar results) will also be made available for local rendering, and not ONLY in the cloud:
given how TVAI was born, ALL upscaling models should primarily render on the local PC, with cloud rendering only as an OPTION for when I am “away from home” and need to upscale on a less capable PC!

Charging $299 for a product, then $99 for each annual upgrade, and on top of that REQUIRING payment for OBLIGATORY online rendering credits to use the upscaling model really seems too much to me!

:roll_eyes:

30 Likes

The one-year license extension currently costs $149.00, almost as much as my initial purchase of $169.99 back then. That, combined with the immense time and money I have spent over hours of testing, including the expensive hardware needed, makes this an absolute no-go for me. :frowning:

20 Likes

Same.

10 Likes

NVIDIA 5000 support?

1 Like

Are there any samples available?

3 Likes

See below.

Could be anything from 4x (or 8x) Nvidia A100 to H100.

Or anything above 24 GB.

1 Like

How much will my Voyager project cost if I have to upload 150 videos and render in the cloud? One Million?

I was hoping that what now seems to be happening would not happen: new models available only in the cloud, “due to inadequate customer hardware.” Yes, of course :thinking: Bye-bye Topaz if you go this way; this does not work.

16 Likes

I want to know when RTX 5000 support for existing models is coming (at the moment the 5090 is up to 40% slower than the 4090). The next beta was supposed to have it. This is the next beta. It doesn’t seem to have RTX 5000 support.

I am also not sure choosing a very similar name to the politically very divisive topic of ‘Project Stargate’ is such a great PR idea.

2 Likes

Not thrilled with the fact that cloud processing is required for this model. Effectively, those of us who choose to exclusively process locally will be locked out of these new models.

This is a problem since we are all paying customers and purchased a product under the premise of being able to use it locally 100% of the time.

19 Likes

Holy mackerel. A diffusion model for video processing? Testing time!
I have a VERY compressed copy of Andrzej Fidyk’s 1989 “Defilada” (about the North Korean regime in the ’80s) to test on. :slight_smile:

That’s how it looks on Rhea v1 (XL isn’t viable because the model increases the artifacts exponentially on less ‘active’ frames):

In fact, the name of the AI project is very close to that of Stargate.

Although I live in Germany, I immediately thought of the Stargate project.

But I don’t know if Phantasos or Morpheus would have gone down better.

Something is wrong; either it has to do with AMD’s shift from RDNA 4, with a driver that doesn’t fit (the memory bandwidth of the 5090 alone should be visible), or simply with the support in the various programs.

For me, there is more behind the poor availability.

Oh shit. Time to wait for subsequent model optimizations! :slight_smile:

The bug with the TEMP wipe still applies from the previous v6 betas… :slight_smile:

Isn’t this basically this? GitHub - NJU-PCALab/STAR: STAR: Spatial-Temporal Augmentation with Text-to-Video Models for Real-World Video Super-Resolution

6 Likes

Since my workflow does not involve cloud rendering, I will not be participating in testing this release, as to my understanding there aren’t any new features/models to test on a local machine. Is that correct?

2 Likes

This is going to be soooo cool!

Oh. Never mind.
Huh, my reaction was the same as that of everyone else who responded.

I’ll give it a test because I don’t really care what happens to my test videos, but this is not a product I would spend my own money on.

5 Likes

We knew this would be a common question around Starlight, and I’m happy to discuss in more detail:

Topaz Labs, as a team of researchers and engineers, must pursue the very best visual quality possible today.

In some cases, this will mean a very large, complex model that requires the best hardware available. We are committed to creating a product line that actively supports desktop local models along with cloud models. Starlight, and the technological shift it represents, will evolve and be available in more formats over time.

This is just not accurate. We are offering free access to Starlight at our own cost. The price we are offering for paid renders of up to 5 minutes is truly very close to our cost to run the model.

We’re close to releasing RTX 5000 optimization for all current local models (pre-Starlight).

5 Likes

@tony.topazlabs What are valid inputs? I just tried an interlaced MPEG-2 in an MKV container and got an invalid-input error.

1 Like
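
Not an official answer, but while the valid input formats are undocumented, a possible workaround (assuming the error comes from the interlaced MPEG-2 stream, which is just a guess) is to deinterlace and re-encode to H.264/MP4 locally before uploading, e.g. by driving ffmpeg from Python; the filenames are placeholders:

```python
import subprocess

# Hypothetical pre-processing step before upload: deinterlace with yadif and
# re-encode to H.264 in an MP4 container. "input.mkv"/"output.mp4" are
# placeholders; whether the cloud pipeline accepts the result is unconfirmed.
subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mkv",       # interlaced MPEG-2 source in an MKV container
        "-vf", "yadif=mode=1",   # deinterlace, one output frame per field
        "-c:v", "libx264",
        "-crf", "16",            # near-lossless, to preserve detail for upscaling
        "-c:a", "aac",
        "output.mp4",
    ],
    check=True,
)
```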