Hello Eric Yang,
A short time ago I downloaded Topaz Labs’ Video AI Beta V6.1.06b. Encouraged by your very thorough advertising, I tried two scanned analogue film clips to see first-hand what Project Starlight’s results would be.
For the uninitiated: subscription customers can receive free Project Starlight processing in the cloud, provided we’re willing to have videos of other sizes reformatted into 1080p files. I found the result impressive. However, Project Starlight seems to strip aged analogue film of too much of the very thing that characterises its aesthetic. That is to say, without customers having access to parameter controls for this tool, it appears to overcompensate for blurriness, softness and grain.
Looking ahead, I expect I would make use of this tool in the following way:
Import a scanned analogue film clip into Video AI and make a light, initial enhancement pass using a model such as Proteus. Render.
Import the VAI clip into After Effects and use third-party plugins to reduce the “salt and pepper” grain that VAI left behind. Apply a gentle edge blur and render.
Import the clip into a dedicated motion-film restoration app. Run a dust-busting and stabilisation pass. Render.
Re-import the clip into VAI and apply Project Starlight. Adjust the parameters so as not to “strip” the image of its analogue character. Render.
If necessary, re-import the clip into the motion-film restoration app and paint out further anomalies. Render.
Finally, bring the clip back into After Effects or Premiere Pro, apply a grade and add grain. Render.
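Spelled out, the hand-off order above is itself the point: denoise lightly before Starlight, restore, then grade and re-grain last. A minimal sketch of that order as data, with tool names taken from the steps as written; nothing here invokes the actual applications:

```python
# The multi-pass restoration order described above, as a simple checklist.
# Tool names come from the workflow steps; this does not drive the apps.
PIPELINE = [
    ("Video AI (Proteus)", "light initial enhancement pass"),
    ("After Effects", "third-party plugins to reduce salt-and-pepper grain; gentle edge blur"),
    ("motion-film restoration app", "dust-busting and stabilisation pass"),
    ("Video AI (Starlight)", "parameters tuned so the analogue character isn't stripped"),
    ("motion-film restoration app", "paint out further anomalies (optional)"),
    ("After Effects / Premiere Pro", "grade and re-add grain"),
]

for step, (tool, purpose) in enumerate(PIPELINE, start=1):
    print(f"Pass {step}: {tool} -> {purpose}")  # each pass ends with a render
```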
Another week into February. Three weeks since the 50-series cards launched, and we still don’t have support for Blackwell.
At this point I’m considering asking support for a one-month subscription extension.
Three weeks is too long when open-source generative AI projects released Blackwell support on 25 January, as soon as the SDK came out.
What is taking so long? It’s been over a week of “We need to make sure it’s compatible with older-generation GPUs,” which does not seem to be an issue for other developers.
Btw Kevin, is it possible to add dehaloing to Starlight? If not, all good. I usually have to do a second pass with Proteus (Dehalo at 60) to soften the image a bit so it’s not too oversharpened/crunchy-looking. I also see it adds a bit too much contrast and saturation on the shots I’ve tested. Just something to think about for the future.
Topaz’s lead will soon be gone. They simply do not have the financial backing. Video enhancement is going mainstream now. Enhancing videos with “AI” is not that complicated: you just need a freaking huge data lake and computing resources, both of which Topaz lacks. Now that the whales have jumped in, Topaz is wetting its pants. This whole cloud jump is just a panic reaction to stay ahead, for as long as it lasts. They would do better to stay in their niche and focus on licensing local AI models for end-user devices.
Yes, it’s ridiculous that we still get terrible performance on the RTX 5090 several weeks later, when all they need to do is update TensorRT and compile a new engine. Nvidia already supports it. What is taking so long?!
The inability of the Video AI web app to handle AVI files is a dealbreaker. I, like a lot of other people, shot plenty of low-quality AVI video 20 years ago. If I have to degrade the video by converting it to MP4 as a first step, I am not going to be able to evaluate the Starlight diffusion model. I also resent the pig-in-a-poke approach of describing processing costs as credits/minute. I don’t want to have to sign up to find out what that means. Convert it to dollars/minute and we’ll talk.
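For what it’s worth, the credits/minute complaint is easy to make concrete: once any credit-bundle price is public, dollars/minute falls out of one multiplication. A minimal sketch; the bundle price and credit figures below are invented placeholders, not published Topaz rates:

```python
# Hypothetical converter from credits/minute to dollars/minute.
# All numbers below are made up for illustration, not Topaz's actual pricing.
def dollars_per_minute(credits_per_minute: float,
                       bundle_price_usd: float,
                       bundle_credits: int) -> float:
    """Cost in dollars to process one minute of output video."""
    return credits_per_minute * (bundle_price_usd / bundle_credits)

# Assumed 500-credit bundle at $75, and a model costing 60 credits/minute:
cost = dollars_per_minute(credits_per_minute=60, bundle_price_usd=75.0,
                          bundle_credits=500)
print(f"${cost:.2f}/minute")  # $0.15 per credit x 60 credits -> $9.00/minute
```

At rates anywhere near these placeholders, an hour of footage lands in the hundreds of dollars, which is the range other commenters cite.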
So what does this mean for improving the existing models? If part of the messaging here is that this project is the future, then it is going to consume the majority of resources to push the boundaries of next-level AI video processing, and I have the sinking feeling that existing models are going to get very little development from this point forward.
The appeal of this platform at one time was ongoing model development and “free to subscriber” updates every year with renewal. However, to use the latest model we are being forced to pay for cloud credits on top of our already hefty and continuously growing renewal fees?
This is going to quickly become unaffordable for the majority of your user base. We’re not all big commercial production studios that can absorb these costs.
I can respect that new technology requires more capital and server load to develop, especially in its infancy, so I get the need to charge for early access. But if this is going to become the new norm, and on-site processing and existing-model development are going to be abandoned, I will have no interest in continuing with this product. Processing one hour-long video with the latest and greatest model would cost me an extra $400-700, depending on how the credits are purchased. No thank you!!
I’ve been sitting on the sidelines reading the comments until now. Given that I’m principally a home user/hobbyist and only process videos occasionally, I don’t foresee using the TVAI Cloud in the future; for me at least, it’s too expensive and I can’t justify the cost versus the benefit. Beyond cost, my views are jaded by a corporate IT background with a client that had very deep pockets and for which security was paramount. Were I to reconsider, I’d expect an SLA that addresses security, privacy, non-reuse and availability, among other things.
As far as the desktop version of TVAI goes, it appears to me that it may have reached a dead end. At least that’s the impression I get. There have been no model upgrades in months, bug fix requests from others seemingly go unanswered, and only UI tweaks of questionable value are forthcoming. My annual subscription payment has been used to develop a product, Starlight, that I likely will never use. Disappointing. I will carefully consider whether to continue my subscription into next year.
To be fair, I think I understand Topaz Labs’ position: to survive and grow, the company requires much more than the desktop product can provide; no company survives by resting on its laurels. Fair enough.
So for a lot of these old films being restored, I find that it smooths the noise too much. I think it would be great if Topaz could also dedicate a team to studying popular film stocks used in movies (Kodak Portra 400, Kodak Ektar 100, Ilford HP5 Plus, Kodak Gold 200, CineStill 800T, Kodak Tri-X 400, Kodak Portra 160, Kodak Portra 800, and Ilford Delta 3200) and find a way to incorporate them as an option in Starlight, to retain that gritty film feel. I don’t find the “add grain” slider a substitute for emulating film grain or film stock. I’ve done comparisons and it isn’t the same.
I share the opinion most users have expressed here. I will not renew my subscription to Topaz Video AI next year if there isn’t a clear commitment from the staff to make Starlight, or its variants, available in the desktop app.
As most fellow users already said, I have no interest in uploading my videos into the cloud and I certainly won’t pay expensive extra fees for pay-per-unit enhancement in addition to my subscription.
You guys at Topaz need to seriously rethink your strategy if you want to keep your customers. I hope you will hear us.
And you should be aware that people who use Topaz are interested in the whole video world, as a hobby or as a profession. We already have high-end hardware because we do video editing and open-source AI image (and lately AI video) inference - I just bought myself an A6000 Ada for my birthday…
Please, do not make the mistake of thinking a ‘closed cloud’ strategy will be successful. You cannot offer a paid (already quite expensive) subscription with a variety of minor models and simultaneously offer a pay-per-unit cloud system with one shiny model: should your customers conclude your desktop app is mediocre and still pay for it?
Is there a place where we can give feedback on Project Starlight? I uploaded a short clip of the world-famous cartoon Mr. Rossi’s Vacation. The results were not bad: it took away all the dust and scratches and generally improved the quality. It did thin some of the lines on the characters, but the one thing I didn’t like is that it changed the colors of the cartoon, making everything more faded, as if the contrast had been turned down. I hope that improves in the future.
Sounds like a good step forward. I realize that, for practicality, you have to pay more attention to features and benefits for the customers with the most money, typically the larger business or enterprise users, but for the product and company to survive it also has to benefit and excite the smaller hobbyist and home users. This is where grass-roots, word-of-mouth excitement for a company and product takes root and spreads. All in all, I’ve been sold on the possibilities of the Topaz products, and am pleased with their gradual improvement over the couple or so years I’ve been a customer.

I am a macOS-based user who depends on the Apple silicon versions of software for my graphics processing. I have a local installation of Stable Diffusion via Automatic1111, and have had to depend on PyTorch with ML processing support to abstract the typically Nvidia-specific calls for graphics rendering.

Even though these models are huge, your company could gather valuable real-world data on performance, and on any tweaks the software and models need, by letting users with higher-end hardware at least have the option of trying out the newer software. I’m not sure what the server requirements are for your models, but even if it runs slower on my M3 Pro Max with 64GB of memory, I, and I’m sure others, would like to see the progress locally, without having to depend on cloud services. You could require an agreement to feed local data back to your servers for analyzing any issues or tweaks needed on user-level hardware.
Also, it is great to have lots of options for processing videos and images, but the beauty of the AI in the software is not just the processing and results, which are often hit-or-miss; the beauty is in being able to scan the video to be processed and then use AI to figure out all of the steps and settings needed to produce the cleanest possible version for the output requirements the user has chosen. This has been a point of frustration for me: with it taking so long to simply process and even preview quality, it can still take hours to get things right before the processing even starts. Improvements in this part of the workflow would do wonders for excitement over the product. Just saying. I do appreciate that Topaz seems committed to advancing the product for everyone, though, and hope you keep this focus on the rest of us.
What if we invest in and own server-grade hardware? Some people have 48GB, 80GB, soon to be 96GB of local VRAM from a single card. You should let it be up to the end user whether they have the hardware to run the models, or whether they buy into the cloud services instead. We’re already paying for the product annually. This is how people get turned away, toward the open-source community, which has been making massive advancements in AI over the past two years, including video generation.
Please consider releasing the models for local usage. This is a really bad look as it stands right now, and I am backing what others are saying here: I will drop my subscription when it next comes up for renewal if there isn’t a clearly laid-out plan to bring Starlight to local processing.
I would not be surprised to learn that Starlight needs more than a terabyte of VRAM per video. I would be more surprised to learn that it uses less than the 96GB you mention for a five minute clip.
There’s really no good reason for it to take that much VRAM. There are plenty of locally running V2V models that can already produce high-detail 720p+ output on as little as 48GB of RAM. This is effectively V2V with detail fill-in. But I’m not here to argue what the requirements are or should be. I’ve said what I needed to say.
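A back-of-envelope check supports this: in a latent video diffusion pass the latent tensors themselves are small, so VRAM pressure comes mostly from model weights and attention buffers. Every figure below (8x spatial downsampling, 16 latent channels, fp16) is an assumption for illustration, not a measured Starlight number:

```python
# Rough latent-tensor footprint for a hypothetical latent video diffusion model.
# Constants are illustrative assumptions, not measured Starlight figures.
def latent_bytes(width: int, height: int, frames: int,
                 channels: int = 16, downsample: int = 8,
                 bytes_per_value: int = 2) -> int:
    """Bytes to hold one clip's latent tensor at fp16."""
    return ((width // downsample) * (height // downsample)
            * frames * channels * bytes_per_value)

# 120 frames (~5 s at 24 fps) of 720p:
size = latent_bytes(1280, 720, frames=120)
print(f"{size / 2**30:.3f} GiB")  # well under 0.1 GiB: the latents are tiny,
                                  # so weights and attention dominate VRAM use
```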
Awesome. Let me know by email, along with the usual stuff you send out, when it’s ready for my desktop. I will never use the cloud for this: the cost is massively prohibitive and it takes too long, and I have many hours of footage to work on over the year. Until then I’ll continue with the last version I have that works beautifully, which is 5.3.6.