Project Starlight - Video AI 6.1 Beta

I manually cut the clip down to 10s before adding it to TVAI, and that worked.
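
If it helps anyone else prepping test clips: a stream-copy trim leaves the original streams untouched, so bit depth and HDR tags survive the cut. A minimal sketch, assuming ffmpeg is on PATH; the file names and the 10-second window are placeholders.

```python
# Minimal trim sketch, assuming ffmpeg is on PATH; file names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-ss", "0",          # start of the excerpt
        "-i", "input.mkv",   # source clip (placeholder)
        "-t", "10",          # keep 10 seconds
        "-c", "copy",        # stream copy: no re-encode, bit depth and HDR tags untouched
        "clip10s.mkv",
    ],
    check=True,
)
```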

However, the input was HDR and 10-bit. The resulting video had its HDR metadata stripped and came out 8-bit instead of 10-bit. It looks like HDR played back on a media player that can’t handle the color space: washed-out, greyish colors.
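
To verify exactly what got stripped, probing both files shows the pixel format and color tags. A minimal sketch, assuming ffprobe (bundled with ffmpeg) is on PATH; the file names are placeholders:

```python
# Probe the pixel format and color tags of a video's first stream.
# Assumes ffprobe is on PATH; file names are placeholders.
import subprocess

def probe_color(path: str) -> str:
    result = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=pix_fmt,color_space,color_transfer,color_primaries",
            "-of", "default=noprint_wrappers=1",
            path,
        ],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Expect pix_fmt=yuv420p10le and color_transfer=smpte2084 (PQ) on the 10-bit HDR source;
# pix_fmt=yuv420p with bt709 tags on the output would confirm the 8-bit SDR downgrade.
print(probe_color("input.mkv"))
print(probe_color("output.mp4"))
```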

Input vs Output

No idea if that works, but here’s the comparison.

Edit: [screenshot 06-000091]

I think the model’s handling of eyes might need a second look, no pun intended.

This is a major part of our product, but our overall mission is to increase video quality using the latest technology available.

Let’s say for now it’s cloud-only in “Season 1”.

As with other cloud-hosted solutions and services, laws like Section 230 of the Communications Decency Act would place the responsibility on the individual who provided the content.

Totally serious question.

So you’re thinking that your servers qualify as “platforms,” and are therefore not liable for any content being uploaded by users even though your company is the party making links to it shareable?

How long is cloud-processed content retained on the servers?

That would then mean people outside the US could have a hard time deciding whether content is too explicit, rendering your cloud service unusable in many cases where nudity is involved.

And what if a client’s sensitive video material gets leaked due to a hack or privacy breach on your side, with the videos ending up all over the internet? Who would be liable?

Oh, I do know why I like offline processing…

This is why I was an original (v2.4/2.5) Topaz AI customer. I could upscale videos and clean up compression artifacts on a gaming laptop overnight, with help from Handbrake for pre-processing. No re-activation nags, no Topaz-server-dependent tokens that make my software dead if the company goes down, etc.

Now we’re seeing more features of our software taken out, again with an ever-changing UI that adds and removes bugs and features repeatedly, and potentially an open-source model being rented out to us because there’s no consumer hardware to run it due to VRAM requirements (painfully so).

I don’t know whether STAR on GitHub scales across multiple GPUs, but it’s a bit convenient, right, that multi-GPU is now a “Pro”/enterprise Topaz Video AI feature (which wasn’t a thing until recently!) :confused: In theory that could let future GPU setups (i.e., 4x 3090s, or 2-3 5090s if they ever exist in volume) run these models locally, allowing us to purchase the hardware and software once and, of course, do our own upscaling if we don’t care about electricity costs.

Also not yet discussed in detail: cross-border data-privacy issues, especially for somebody usually outside the USA like myself. What happens if someone doesn’t know, or doesn’t remember, that this software now requires foreign data storage to process their videos? How does an EU customer know their data and video, of any sort, is safe? What happens if someone uploads illegal content, or content that is adult but legal yet geographically restricted or ID-gated? Does Topaz report you to the authorities, and is that in the licensing agreement at the moment?

I left a negative review of 6.0 in December, and honestly I’m just going to hope people are aware of this shift in business model. Nvidia has updated RTX Video Super Resolution to use fewer resources with better-quality upscaling, and it comes with all recent RTX GPUs. Someone could easily make a quick, free-and-dirty upscaler from that; I already do with MPC-BE. Topaz was good for large offline videos and larger or more specific processing, but with the focus moving to the cloud, eh…

We’ll look into this and make the range clearer for different frame rates.

This is expected for now, but the model is capable of HDR output. We’ve chosen a single export type for this early-access rollout: 1080p, H.264, Rec. 709.
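
Until HDR export lands, one possible pre-upload workaround (a sketch, not an official step) is to tone-map the HDR source to Rec. 709 yourself, so the fixed SDR export starts from correct colors. This assumes an ffmpeg build with the zscale (zimg) filter; file names are placeholders:

```python
# Pre-upload tone-mapping sketch: HDR (PQ) -> Rec. 709 SDR.
# Assumes an ffmpeg build with the zscale (zimg) filter; names are placeholders.
import subprocess

tonemap_chain = (
    "zscale=t=linear:npl=100,format=gbrpf32le,"   # PQ -> linear light
    "zscale=p=bt709,tonemap=hable,"               # convert primaries, compress highlights
    "zscale=t=bt709:m=bt709:r=tv,format=yuv420p"  # back to 8-bit Rec. 709
)

subprocess.run(
    ["ffmpeg", "-i", "hdr_source.mkv",
     "-vf", tonemap_chain,
     "-c:v", "libx264", "-crf", "18",
     "sdr_for_upload.mp4"],
    check=True,
)
```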

Not only the eyes, but (not surprisingly) also the text. There should definitely be a preserve-text option.
Oh, and look what it’s done to the cigarette :oops:

OK, maybe it’s not fair; the source is really bad. The real proof would be comparing the cloud result with Iris/Proteus and everything else we can do locally.

Hahaha that’s hilarious! Completely missed that.

I should try with the 3s of the clip that comes before this, when he walks in from way back. I tried a few different settings with local models, which generated good results once the dude got within a couple of meters of the other guy. But when he’s far away, his face looks like a mashed potato.
[screenshot 06-000093]

Again, we take steps to secure our cloud systems, but we also understand why many users and companies would want to run models offline. We offer and support both options.

That’s fair. I’ll try with the first part of the clip, using the source, which is 8-bit, non-HDR.

That should have little/no color shift. We will have more to say on HDR later on, but good to know there’s interest in it.

What about a preserve-text feature?
This is, together with the monster faces and the sometimes ugly repetitive patterns, one of the main annoyances with most of the current offline models.
So everyone can instantly see: this was done with AI…

@tony.topazlabs
Can I close the software while it’s cloud rendering?

@jo.vo For sure. We feel really strongly that Starlight is much better at the “monster face” issue, and adding a way to exclude text will help even more toward a usable result across the entire frame.

Once your cloud job has started, you should be able to close the app and come back.

This is the first beta, so I’d recommend trying it on a video right when it starts, just in case.

Yup, worked just fine.

Here’s the result:
Test run #2 Project Starlight comparison

It’s handling out-of-focus areas as it should, which is impressive, while enhancing what’s in focus. It’s not perfect, but a good result nonetheless.

Perhaps it’s meant for lower-quality clips.

I’d say this is a pretty good example, but as you said, it really excels with even lower-quality inputs.