OK, is it controlled by “Allow anonymous data collection”? If so, it is turned off, yet you are still trying to connect to amplitude.com… I can see it with Little Snitch on the Mac…
Thanks @dakota.wixom for this information.
Then the problem I described would be the same, because I tried the original 7 version several times and thought that the beta version would work better and faster.
Is there any news about decreasing render times for Starlight Mini?
This isn’t something they can magically pull off. This is where you have to make a decision whether the financial cost is worth it to you or not to make it faster.
Sorry, but that’s just reality. If you want this to run faster, you need to pony up the money, and it’s still going to run slow, so you’re going to need to spread the load across a couple of machines that you can dedicate to it (the new Intel stuff coming out in Q3 might bolster the ability to run large models like this at faster speeds locally). But the faster they make Starlight, the less accurate it will be. Sure, there is likely room for plenty more optimization here, but the fact is you have to decide whether you’re willing to pay to play or whether you need to get a new hobby.
I’m not saying this to be mean; this is literally just physics and hardware limitations. If you want to swim in the big boy pool, you’re going to have to spend big boy money. That’s the way the world works.
I don’t like it either, but it’s reality, and we all have to find a way to adapt to reality or move on to something else.
I’m still on this beta, works better for me too.
There’s a thing called reality, and we all have to live in it.
You want performance, spend money.
It’s perfectly useful for someone who is willing to dedicate a machine to it. Old Topaz versions used to be substantially slower back when everything was CUDA… but we dealt with it to get what we wanted.
If you’re not willing to dedicate a machine to the process, it doesn’t mean it’s not useful; it means it’s not something that you can use.
It’s plenty useful to people who have multiple dedicated machines for video work with top-end hardware. It’s not ideal, but like I said, it’s the reality of the situation. If you want the results, you have to spend the money and put in the time.
Technology will catch up. I’m sure there’s room for optimizations here, and hardware is going to keep catching up as we move ahead. Yes, it’s still going to be something people probably have to spend $5–10k on when looking ahead a few months… but it’s also something that, not so far in the distant past, would have cost six figures.
If Starlight can keep getting more accurate and utilize a lot of VRAM, and the GPU companies focus more on locally run AI clusters, they’re going to sell just fine. People will absolutely spend $5–10k for something that runs this with a high degree of accuracy and at a speed that’s more “normal”.
I mean, a few years ago stabilization could take a week or more on a video that’s just a couple of hours long, even with the best hardware…
We’re still years away from this being super affordable, just because VRAM is in such demand for this kind of stuff… overflow into system RAM is slow.
Either way, take it or leave it. It’s not a problem with the program (and I criticize Topaz plenty); they could have chosen to not even give people the option.
It’s not their fault that even the best current consumer hardware isn’t very quick.
I’m fine with running processes for multiple days. Did it plenty when scaling to 4K back in the early Topaz days.
That was my initial observation as well with mini. But instead of Rhea, I just went with Iris MQ, for the second pass, and it sharpened the image with no observable artifacts introduced [1].
Overall I think this is a fantastic model that gets to the heart of why I bought TVAI in the first place; to salvage ancient footage that was too low quality to be used for anything. So big Kudos to Tony and team on this one!
A lot of perf opportunities for us to look forward to (fusing kernels and eventually TRT) in upcoming releases, as it currently seems to be plain PyTorch with a lot of “redundant” I/O.
Will be tracking this one closely due to the improvement potential for a model that makes a similar quality leap akin to the “Artemis → Proteus” jump, which Mini already exhibits over Proteus.
[1] Iris-2 often fails to retain image detail if the source isn’t clean, adds artifacts and over-sharpens a bit. But with the SL-M output as a source, it seems a perfect match; clean image and little for the model to do but to apply its “natural” over-sharpening, leading to a crisper and more natural output.
Yes, this should be easily fixed, since the “runner.exe” the GUI uses actually provides this verbatim for it to display in the GUI.
Here’s an example of what the runner outputs for the GUI:
...
[INFO] total number of frames written: 8, 16, 46040, 517328 bitrate=5313.2kbits/s speed=0.0464x
{"timestamp": "2025-05-30 15:45:15,081", "level": "INFO", "message": {"status": "RUNNING", "frame": 46040, "progress": 88}}
[INFO] total number of frames written: 8, 16, 46048, 517322 bitrate=5316.4kbits/s speed=0.0464x
{"timestamp": "2025-05-30 15:45:26,724", "level": "INFO", "message": {"status": "RUNNING", "frame": 46048, "progress": 89}}
...
Re: Value, economy & speed
I think that for what this model delivers, there’s a lot of cases where the cost of running it makes sense.
A back-of-the-napkin calculation: at 0.7 fps on an RTX 4090, and the “cheapest” cloud cost of $0.61/h for renting such a GPU, the cost per frame currently is about $0.00024 ($0.61 divided by the 0.7 × 3600 frames processed per hour).
So processing an hour of US (NTSC) footage @ 30 fps (108,000 frames) will cost about $26; an hour of “European” (PAL) footage, about $22.
And chunking it up with parallel rendering will make the process almost arbitrarily fast. E.g., for realtime rendering of 30 fps footage, you’d just chunk up the source and fan it out to 43 single-GPU machines (multi-GPU would be wasteful, since the CPU↔GPU memory transfer is already the bottleneck on a single-GPU setup), which of course would cost roughly the same as waiting nearly two days on a single GPU machine. Well, plus ~10%, since you’d use some overlap in the chunking due to the nature of the (video) problem (both spatial and temporal signal carrying across source frames).
So technically there is no reason why using this model can’t be as fast as you’d like. The only variable of import is cost: whether “salvaging” 1h of poor footage is worth about $26 or not. For some it is not, for others it definitely is.
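For anyone who wants to plug in their own numbers, here is the napkin math as a small script. The 0.7 fps throughput and $0.61/h rental price are the figures assumed in this thread, not measured benchmarks:

```python
import math

# Back-of-the-napkin cost model for cloud-rendering with Starlight Mini.
# Assumed inputs (from the discussion, not measured here): ~0.7 fps on a
# rented RTX 4090 at $0.61/h. Swap in your own numbers.

FPS_THROUGHPUT = 0.7      # frames processed per second on one GPU
GPU_COST_PER_HOUR = 0.61  # $ per hour to rent the machine

def cost_per_frame() -> float:
    """Dollar cost to process a single frame."""
    return GPU_COST_PER_HOUR / (FPS_THROUGHPUT * 3600)

def cost_per_footage_hour(source_fps: float) -> float:
    """Dollar cost to process one hour of footage shot at source_fps."""
    return cost_per_frame() * source_fps * 3600

def machines_for_realtime(source_fps: float) -> int:
    """Single-GPU machines needed to keep up with realtime playback
    (chunk overlap adds ~10% cost on top, but no extra machines)."""
    return math.ceil(source_fps / FPS_THROUGHPUT)

print(f"per frame:       ${cost_per_frame():.5f}")
print(f"1h NTSC @30fps:  ${cost_per_footage_hour(30):.2f}")
print(f"1h PAL @25fps:   ${cost_per_footage_hour(25):.2f}")
print(f"realtime @30fps: {machines_for_realtime(30)} machines")
```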
Can’t wait to try Starlight Mini on my Mac M4 Pro… hoping it comes soon!
Are you figuring in the cost of the licenses? You only get two machines for each one you buy, and probably won’t be able to run more than three or four instances for each on the best available hardware.
I was not; only the hardware compute cost was included.
Thanks for pointing this one out. My statement only made sense in cases where the software license is permissive to customers, and for TVAI it is not, so it was a false statement.
I’ve been playing so much with the watermarked “trial” mode to see what workflows are possible that I forgot for an instant I actually need the license file to get rid of the watermark for real production use.
So in that sense, the Topaz markup of about 4x (for the “credit system”) compared to the compute cost seems more reasonable than at first glance. Not dissimilar to Microsoft adding a Windows tax to Windows VMs on AWS to get a “slice of the pie”. The only material difference is that TVAI is more of a niche product than Windows, so the markup needs to be higher to pay for the R&D and operational cost.
And this model also allows “<big studio/network>” to negotiate a massive discount on the credit system, which makes both the procurement peeps and Topaz sales happy, leaving only us non-enterprise customers to draw the short straw in that model (as usual).
EDIT: I got curious about how the software license affects the estimate, so I did a cost deep dive. TL;DR: it doesn’t affect the estimate much. It’s about 1/18th of the processing cost.
Here are the calculations and my assessment of the options for Starlight.
- Hours required to process 1h of 30fps footage on a single 4090 GPU system: 30 (@ 1 fps)
- Hourly single 4090 system cost: $0.61/h
- TVAI yearly license cost: $300
- Hourly License cost: $0.03425 ($300 / 365 / 24, amortized)
Cost of processing 1h of 30fps footage:
- Processing cost: $18.30 ($0.61 × 30h)
- License cost: $1.03 ($0.03425 × 30h)
- Total: $19.33 ($18.30 + $1.03)
This is the lowest possible cost, assuming one transcodes 24/7.
For typical “hobby” use, where average utilization is perhaps more like 1h of processing per day, the per-footage-hour cost rises to about $43 ($1.03 × 24 + $18.30).
In comparison, the hardware cost is about 18X the license cost when used all the time. So if you use it on average about 1h 20 minutes per day, the license cost will equal the processing cost (break even).
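The amortization above can be checked with a few lines, using the same assumptions (1 fps throughput, $0.61/h GPU rental, $300/year license):

```python
# License vs. compute cost for self-hosted rendering, using the
# assumptions from the post above: 1 fps throughput, $0.61/h GPU
# rental, and a $300/year TVAI license.

GPU_COST_PER_HOUR = 0.61
LICENSE_PER_YEAR = 300.0
LICENSE_PER_HOUR = LICENSE_PER_YEAR / (365 * 24)  # ~$0.03425/h amortized
HOURS_PER_FOOTAGE_HOUR = 30                       # 1h of 30fps footage @ 1 fps

processing_cost = GPU_COST_PER_HOUR * HOURS_PER_FOOTAGE_HOUR  # $18.30
license_cost_247 = LICENSE_PER_HOUR * HOURS_PER_FOOTAGE_HOUR  # ~$1.03 at 24/7 use
total_247 = processing_cost + license_cost_247                # ~$19.33

# Break even: hours/day of GPU use at which the daily compute cost
# equals the daily license cost.
break_even_hours = (LICENSE_PER_YEAR / 365) / GPU_COST_PER_HOUR  # ~1.35h (~1h 21m)

print(f"processing:     ${processing_cost:.2f}")
print(f"license (24/7): ${license_cost_247:.2f}")
print(f"total (24/7):   ${total_247:.2f}")
print(f"break-even use: {break_even_hours:.2f} h/day")
```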
Now let’s also look at the Topaz cloud credits and see when those prices make sense. Let’s take the cheapest option in terms of $/credit (9000 credits/month).
For 1h @ 30fps (1080p output) one needs 5400 credits. That is to say, it costs $300 (the same cost as a full 1-year TVAI license).
So the cost comparison between self-hosted 24/7 and the TVAI built-in cloud rendering shows about a 16X markup for the cloud option.
The justifications I can see for using the built-in cloud rendering are that you have no confidentiality concerns about your footage, and you:
- A) Are swimming in cash / someone else is footing the bill, no questions asked.
- B) Don’t know how to roll your own processing pipeline / don’t want to bother / have “more important things to spend time on” / are lazy.
- C) Use Starlight very rarely, on average upscaling < 3 minutes of 30fps footage per day.
I would assume camp B is what the cloud rendering primarily targets, and A secondarily. I don’t see why they’d want someone not to use the model (C).
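Under the same assumed figures (5400 credits ≈ $300 per hour of 30 fps footage, ~$19.33/footage-hour self-hosted, 9000 credits on the cheapest monthly plan), the markup and the plan’s daily coverage work out like this:

```python
# Topaz cloud credits vs. the self-hosted 24/7 cost computed above.
# Assumed figures from this thread: 5400 credits ($300) per hour of
# 30fps footage at 1080p output, 9000 credits on the cheapest monthly
# plan, and ~$19.33/footage-hour self-hosted.

CLOUD_COST_PER_FOOTAGE_HOUR = 300.0
SELF_HOSTED_PER_FOOTAGE_HOUR = 19.33
CREDITS_PER_FOOTAGE_HOUR = 5400
MONTHLY_CREDITS = 9000

markup = CLOUD_COST_PER_FOOTAGE_HOUR / SELF_HOSTED_PER_FOOTAGE_HOUR  # ~15.5x

# Footage covered by the 9000-credit plan, in minutes of 30fps footage
# per day (30-day month) -- roughly the "< 3 minutes/day" casual-use case.
minutes_per_day = MONTHLY_CREDITS / CREDITS_PER_FOOTAGE_HOUR * 60 / 30

print(f"cloud markup:  {markup:.1f}x")
print(f"plan coverage: {minutes_per_day:.1f} min of footage per day")
```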
So in short, if you’re not using the software professionally, then just buy a couple of licenses and rent a few GPU machines in some cloud you trust, or have your basement farm chug away. I mean, @oliver.martin bought a friggin’ A6000 Pro ($8,500 MSRP). That’s 440h of footage worth. And even with an A6000 you’d have to wait forever, versus just buying 28 licenses and rendering all that work in 20 days. I doubt an A6000 can do 28 fps with Starlight Mini, or even a fraction of that.
The alternative to buying A6000 GPUs or renting cloud machines is to dump the “440h catalog” on Topaz cloud rendering and pay them $132,000 for that privilege.
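The “440h catalog” comparison reduces to a few lines of arithmetic. The GPU price, per-hour costs, and machine count are the assumptions stated in the posts above:

```python
# The "440h catalog" comparison: one owned GPU vs. a rented fleet vs.
# Topaz cloud credits. Assumed figures from the thread: $8,500 A6000 Pro
# MSRP, ~$19.33 per footage-hour self-hosted, 30 GPU-hours per
# footage-hour (1 fps on 30fps footage), $300/footage-hour via credits.

GPU_MSRP = 8500.0
SELF_HOSTED_PER_FOOTAGE_HOUR = 19.33
GPU_HOURS_PER_FOOTAGE_HOUR = 30
CLOUD_PER_FOOTAGE_HOUR = 300.0
MACHINES = 28

catalog_hours = GPU_MSRP / SELF_HOSTED_PER_FOOTAGE_HOUR        # ~440 h of footage
total_gpu_hours = catalog_hours * GPU_HOURS_PER_FOOTAGE_HOUR   # ~13,200 GPU-hours
days_on_fleet = total_gpu_hours / MACHINES / 24                # ~20 days on 28 machines
cloud_price = catalog_hours * CLOUD_PER_FOOTAGE_HOUR           # ~$132,000 in credits

print(f"catalog:    {catalog_hours:.0f} h of footage")
print(f"fleet time: {days_on_fleet:.0f} days on {MACHINES} machines")
print(f"cloud cost: ${cloud_price:,.0f}")
```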
Any upcoming new BETA???
What’s your speed on the 4090?
My speed on this beta version is lower compared to 7.0.0.2b. On that version I get 0.6 fps, and on 7.0.0.4b I get 0.3 fps.
BUG: If I add a video with a resolution of 640x480 and select Starlight Mini, it automatically selects 3x upscale (1920x1440). I then change this to 1280x960 (the minimum), hit “Export As…”, and yet the processing starts with the 3x upscale instead of the “minimum” I selected. How do I keep the upscale resolution persistent?
EDIT: This is for production version Topaz Video AI 7.0.1.
When I click the “Help/Give Feedback…” button within the app on Windows, I get a 404 Page Not Found error, so I will post it here instead.
There is a workaround for this bug in 7.0.1: change the model to Proteus, change the output resolution there, then change back to Starlight Mini, and it will recognise your new output resolution when you export next time.