Topaz Video AI 7.1

Which is why Nvidia recently overtook Microsoft and Apple as the world’s most valuable company….

For now….

Competition is a wonderful thing that spurs innovation and lowers consumer prices.

I personally hope AMD, Intel, Microsoft and Apple catch up to and even surpass Nvidia quickly…and things get a lot more “balanced”, as I just want to throw up thinking about paying $3000 for a damn video card for a PC.

Starlight 10x faster …in the cloud - I’m afraid we’ll get new gimmicks (like the scene detection) but hardly better/faster models

1 Like

I remember thinking $2,500 for the RTX Titan was too much, especially since it was basically an RTX 2080 with more VRAM.

It would be nice if someone could put Nvidia in their place, but once “number go up”, none of the competitors ever want to see it go down again.

Has anyone experienced that, when running SLM locally, longer videos cause the program to close automatically, and after restarting it you have to start over? Is anyone else having this issue?

So true…. I am convinced that Covid and tariffs and virtually ANYTHING give companies an “excuse” to raise prices. And it’s funny how those prices never go down once raised…..even when the “crisis” is over.

But even a small price decrease because of competition is better than the insanity going on ATM.

I won’t call it greed.

It’s simply Nvidia taking advantage of the window they have created for themselves with their foresight on AI.

And windows close and tech and competition marches on…..so maybe we will see slightly better prices in the future as other competitors close the gap…..let’s hope!

3 Likes

Well, as @jo.vo hinted at and dakota confirmed, that speedup didn’t have anything to do with the speed of the model itself; they just split the uploaded clips, parcel them out to their various servers, then concatenate the result at the end.

This is the SOP for any cloud-based sequence (video or other) processing, since it costs the provider basically nothing but yields massive value for the customers. That value is saved time, the scarcest resource in the world.

I’m certain you already know this, but most people tend to forget the basic economics of computation.

$ = <number of calculations> x <cost per calculation>. For sequences like video, breaking the first term in two, as <sequence length> x <# calculations needed per element> x <cost per calculation>, fits better with the notion of <frames per video clip> x <pixels per frame> x <calculations per pixel>, but it’s still just the same <amount of work> x <cost of work> expressed differently.

Most of us typically run only one job at a time, on one saturated machine, with the hardware already paid for, so we don’t tend to think of this cost equation and instead just focus on “how long the job takes”. But the formula still holds true even for local use. We can calculate that $ amount by amortizing our hardware purchase cost plus electricity across the time frame we expect to keep that hardware around.

Now in the “cloud”, where you have practically infinite machines and rent them rather than buy them, that formula above is all that matters, since you only pay for the machines while you’re actually using them. No amortization needed (for typical SMB operations), as opposed to local hardware, which we pay for even when we’re not using it.
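That cost equation can be sketched in a few lines of Python. Every number below (prices, durations, lifetimes) is invented purely for illustration, not an actual Topaz or cloud-provider figure:

```python
# Cost of a processing job is <amount of work> x <cost of work>,
# whether the work runs on 1 machine or on 100 in parallel.
# All prices and durations here are made-up illustration values.

def cloud_cost(machines: int, hours_per_machine: float,
               price_per_machine_hour: float) -> float:
    """Total rental cost: you pay only for machine-hours actually used."""
    return machines * hours_per_machine * price_per_machine_hour

price = 1.50        # hypothetical $/machine-hour
job_hours = 2.0     # the job saturates one machine for 2 hours

serial = cloud_cost(1, job_hours, price)            # 2 hours wall-clock
parallel = cloud_cost(100, job_hours / 100, price)  # ~72 seconds wall-clock

# Local hardware, by contrast, costs money even while idle, so we amortize:
hardware_price = 3000.0
lifetime_hours = 3 * 365 * 24    # keep the machine ~3 years, always on
electricity_per_hour = 0.12      # rough electricity cost at typical draw
local_per_hour = hardware_price / lifetime_hours + electricity_per_hour

print(serial, parallel)  # same dollar cost, very different wall-clock time
```

The point of the sketch: `serial` and `parallel` come out to the same dollar amount, which is exactly why the provider can hand out the 100x speedup essentially for free.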

As such, since the cost is a function of the number of calculations and the cost per calculation, it doesn’t matter how fast or slow the processing is; the cost remains the same.

So if you have to pay $X for a given processing job, why not choose an option that does it fast rather than slow given the cost is the same?
E.g. split the source clip into 100 parts, send them out to 100 machines, have each process its little chunk for 10 seconds, and then stitch the result back together at the end. 10 seconds of processing time instead of 1000 seconds on a single machine, at the same price.

So it seems to me the TL people made a rational business decision here: “Why not make cloud processing a fast experience, given it costs us basically nothing?”

* Now of course, I’m simplifying a bit. There’s usually ~20% additional cost to segmenting work, since you have to do some extra/redundant processing so you can blend the chunks back together without visible artifacts in the stitched clip, but that’s a rounding error overall.
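The split-with-overlap idea is easy to sketch. The chunk size and overlap below are invented numbers, not Topaz’s actual parameters; the point is just that padding each chunk on both sides produces redundant work in roughly the ballpark mentioned above:

```python
# Split a clip into overlapping chunks for parallel processing, then
# measure how much redundant work the overlap padding adds.
# Chunk size and overlap are invented, not Topaz's real parameters.

def make_chunks(total_frames: int, chunk: int, overlap: int):
    """Return (start, end) frame ranges; each chunk is padded by `overlap`
    frames on both sides so results can be blended back seamlessly."""
    ranges = []
    start = 0
    while start < total_frames:
        end = min(start + chunk, total_frames)
        ranges.append((max(0, start - overlap), min(total_frames, end + overlap)))
        start = end
    return ranges

total = 10_000
chunks = make_chunks(total, chunk=100, overlap=10)

processed = sum(end - start for start, end in chunks)
overhead = processed / total - 1.0
print(f"{len(chunks)} chunks, {overhead:.0%} redundant work")  # → 100 chunks, 20% redundant work
```

With 100-frame chunks and 10 frames of padding on each side, each interior chunk processes 120 frames instead of 100, which is where that ~20% figure comes from.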

Well, I think you could have picked a better example, like introducing carousels, moving fields, menus and buttons around in the GUI etc, but I get your point.

Scene detection was actually a very relevant feature for me, since it was a problem I’d solved in my decade-old prior pipeline using avisynth filters to preserve hard cuts. I always found it a shame that TVAI lacked this rudimentary feature that any video processing requires.

Without it, someone has to go in and manually replace those ugly blended hard cuts, which are now warped transitions, with frame duplications. Time-consuming and error-prone work. Or they’d have to write a custom tool to detect scene cuts and automate that same correction themselves. Not many users are going to do that, leading to bad output results unless the person requesting the framerate upscale was a professional who only processed one scene at a time. Again, I doubt even you do that with your DVD rips :wink:

1 Like

Nevermind, it’s Starlight Mini.

[s]
Hello,

Can someone please tell me what “SLM” means?

I keep seeing “SLM” being referred to, what is “SLM”?
[/s]

Thank you.

Understood about the price/speed of processing in the cloud, which is great until they start charging “credits” that you then need to buy over and over. $ $ $ … I think that’s why, at least for me, people want to use the program they paid hundreds of dollars for locally.

1 Like

Weeeeell, I don’t know about that. There was a bird flu scare last year that “drove egg prices up”, but some investigators found that the biggest egg-selling company in the USA wasn’t really affected by the flu and just ran the numbers up anyway, because others did, and made a ton of money.
I feel like that’s similar to how AMD has been pricing things: “We know our cards aren’t as good as Nvidia’s, but we’ll set our 400 dollar card to be 50 less than Nvidia’s anyway.” So they end up putting the price at 700 dollars because Nvidia put theirs at 750.

Anyway, I think greed is part of all of it.

7 Likes

Sure, I’m not arguing against using the software locally. I only use the software locally and would never pay for any “cloud use”, since I need control over my data. And I think dakota alluded to this as well when touching on “enterprise use cases”. You wouldn’t believe the paranoia at media companies wrt controlling their “assets”. And not to mention external regulatory requirements like the MPAA stuff etc.

So “hobbyists” aren’t the only ones for which the cloud option is mostly a no-go. Also for many of TL’s big customers, it is. So there our (consumer) interests align with the cash cows’.

I was simply clarifying the business driver that underpins a “feature” like speedup in a cloud setup.

3 Likes

That doesn’t work because the script STILL tries to download from the wrong URL. I even tried renaming the models to match the one the script tries to download, but it doesn’t matter: if it encounters a 404, it errors out regardless, even if you already have the model downloaded. It’s stooopid.

That is kinda correct and wrong at the same time.
While I understand your calculation of local rendering costs, it’s not really valid here:

I didn’t buy the PC specifically for Topaz applications (nor did I configure it differently). Same goes for the Mac.
Both rigs are there anyway for work (and, of course, also personal things); TVAI just runs on them alongside the applications those computers were primarily bought for.

And power consumption doesn’t really come into play, as our PV produces more than we can use except for three months a year. We’re 95% self-sufficient over the whole year (including powering an electric car).

TVAI running in the background doesn’t really affect other/work tasks (especially not on the Mac), and those computers run 24/7 anyway (they have to be accessible by RDP at any time).

So, HERE it really mostly comes down to the cost of Topaz software (and, of course, their nicely disguised annual „subscription“ costs).

2 Likes

SLM = Starlight Mini, which was originally what they called the version that ran locally on computers equipped with compatible Nvidia RTX GPUs. We used to think of Project Starlight as the cloud-based version, and SLM as the slimmed-down, locally running version.

But now the lines have blurred and we’re no longer seeing the term Project Starlight as the offering matures; it’s now called Starlight Mini even in the cloud version. I think ‘Project’ was an umbrella term that Topaz used to refer to the emerging technology, which now appears to be refined and spreading into other products, such as Astra.

Just my take, not a definitive answer…

2 Likes

Exactly…..

Gas and oil prices, I am convinced, are constantly leveraged based on some war, ship sinking, disaster or government change that actually only affects a very small percentage of the worldwide market/use.

But it is what it is…..

Bottom line…it only changes (if ever) when folks vote with their feet….and walk away from the sale.

Which is what I will do at $3000 for nothing more than a video card for a PC.

Talk about driving folks to the cloud. :grinning_face:

Except….sigh….cloud prices are even worse right now… :slightly_frowning_face:.

1 Like

Every boom is followed by a glut. Give it some time and graphics cards will be cheaper than eggs.

Indeed, competition is always good and useful for us, if only it were a realistic prospect. Make a wish, as they say… The truth is AMD is at least several years behind Nvidia if we’re talking AI workloads. And that’s at best; some folks say AMD’s GPU division is in the same place AMD’s CPUs were in 2011 with their infamous Bulldozer architecture.

Yes, at the same price, why not split uploads across servers so customers get results faster? But when the model itself can’t be made faster, the costs stay the same, which means fewer customers than there could be, and that’s bad for both Topaz and customers.

Yes, that’s true, there are better examples. The new feature seems half implemented. OK, I haven’t tried it out yet, but I know from my video editing software that automatic scene splitting never works 100%. Sometimes scene transitions aren’t recognized, and if it’s set too sensitively, it cuts at unwanted points. The correct approach would be to allow users to manually correct the detected scene changes before “cutting”.
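That sensitivity trade-off is easy to demonstrate with a toy detector. This is not how TVAI’s scene detection works internally; it is a deliberately naive sketch that flags a cut whenever the brightness jump between consecutive frames exceeds a threshold, with all the frame values and thresholds invented:

```python
# Toy scene-cut detector: flag a cut wherever the brightness difference
# between consecutive frames exceeds a threshold. Real detectors (and
# whatever TVAI uses) are far more sophisticated; this only illustrates
# the sensitivity trade-off: too loose misses cuts, too tight over-cuts.

def detect_cuts(frame_brightness: list[float], threshold: float) -> list[int]:
    """Return frame indices where a new scene is assumed to start."""
    return [i + 1 for i in range(len(frame_brightness) - 1)
            if abs(frame_brightness[i + 1] - frame_brightness[i]) > threshold]

# 10 dark frames, one hard cut, 9 bright frames, a camera flash mid-scene,
# then 5 more bright frames. Only frame 10 is a real scene change.
clip = [20] * 10 + [200] * 9 + [255] + [200] * 5

print(detect_cuts(clip, threshold=100))  # → [10]  (finds the real cut only)
print(detect_cuts(clip, threshold=40))   # too sensitive: also flags the flash
```

No single threshold handles both the missed-cut and the false-cut case at once, which is exactly why a manual-correction step before cutting would be the right design.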

I recently noticed something interesting with SLM and wanted to share it here. Early on, I used SLM (back in version 7.0) to upscale a 320x240 video to 960p. At the time, it only used about 15 GB of VRAM on my RTX 4090. The quality was decent and clearly better than with any other model, but there were serious issues. Most notably, a pair of glasses kept changing shape throughout the video, which distorted the face. On top of that, SLM sometimes placed the eyes on top of the lenses instead of behind them.

Since I wasn’t happy with the result, I recently re-rendered the same scene again—from 320x240 to 960p, still using SLM. This time, the quality was noticeably better. The glasses and face appear much more stable and defined. While there are still occasional issues with the eyes being placed incorrectly, the overall distortions are greatly reduced.

That got me wondering: has SLM been improved silently through minor updates, even without a new Topaz version? Or is this improvement simply due to better VRAM usage—now using the full 24 GB instead of just 15 GB?

If VRAM usage is the reason, that could explain why people are getting mixed results with SLM, depending on their hardware.

The reason the same filter gives different results is entirely hardware-related. Always run the GPU at 100%, and don’t do heavy work on the computer while processing; let Topaz use all of the hardware. That’s when the quality increases.

1 Like

SLMini seems to have gotten a silent update, as the model files were re-downloaded some days ago even on machines where the model was already stored locally, and that without the TVAI version changing.

1 Like