On the latest 1.0.4, I am getting good quality output at 4x upscale, but performance is 0.2 fps, which I consider disappointing on my 5090.
I tried various versions: 7.0.0.1.b gives me 0.2 fps, while 1.0.2 (or 1.0.1) gave me 1.0 fps! That was great, but with poor quality of course.
I’d settle for 0.4fps. Is there a version (or settings) that will give me good quality but also good performance?
Here’s what I’ve tried:
Updated to the newest NVIDIA Studio driver, clean install
GPU memory usage, tried 100% and 85%
Number of processes, tried 1,2 and 3 (no difference)
4x upscale 480p to 1920p, outputting ProRes 422 HQ (tried TIFF, same 0.2 fps)
AI Processor: NVIDIA GeForce RTX 5090
(I bobbed my original interlaced footage to 59.94fps.)
Hi Ian, I'm sure the resolution was not the same for the comparison. The 1.0 fps speed is when you output 960p; higher resolution is slower. The chosen output resolution is what matters (not really the scale factor). When going to 1920p/2160p, I also get about 0.2 fps with my 5090.
Unfortunately, the output resolution displayed in the output window is not always correct. It can show 1x or 2x but actually do 4x, or vice versa.
I guess I’ll just have to accept it.
I did try frame blending with optical flow back down to 29.97; I'll run a test to see how that affects the motion.
Otherwise I'm stuck with a 15-minute clip at 59.94 fps and a 4x upscale, which might take a day and a half or so.
I'm trying to work out how that might affect the electricity bill. Maybe $15 for that day-and-a-half job?! (Rough sketch below.)
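As a back-of-the-envelope check (the frame math comes from my clip length; nothing here is measured):

```python
# Rough runtime estimate for a 15 min clip at 59.94 fps.
frames = 15 * 60 * 59.94                  # ~53,946 source frames

for fps in (0.2, 0.4):
    hours = frames / fps / 3600
    print(f"at {fps} fps: {hours:.0f} h (~{hours / 24:.1f} days)")
```

Under those numbers, a day and a half lines up with 0.4 fps; at 0.2 fps it is closer to three days.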
Hi Ian, to boost the speed I use MSI Afterburner to overclock the GPU: +260 on the core clock and +200 on the memory clock. I usually restart my PC and start only Topaz and Afterburner, then let Topaz run overnight with no other programs; my feeling is that it works fastest that way.
I upgraded my system from dual Xeon to an AMD Ryzen 9 7950X3D. I did that with the older Topaz Video AI, before the Starlight Sharp version was out, and it gave me a performance boost of around 10%.
I did the opposite and limited my 5090 GPU to 850 mV (the RAM clock can still be maxed out). I wrote here earlier that the TDP drop is about 150 W, but it's more like 200 W less, or 30% lower TDP. Sure, there is a speed loss, about 8% compared to the factory card, but for me it was worth it because of the much lower power consumption, less heat, and a hand-warm 12V power connector.
Yeah, and here in Switzerland we have similar energy costs. But I'm not just concerned about prices; I'm also concerned about waste heat and noise levels, and primarily about protecting my expensive card. Are you familiar with the melting 12V connector issue?
Ian, I asked ChatGPT and got this formula: kWh = volts × amps × hours / 1000.
Measuring with CPUID HWMonitor while I run SLS with the OC (+260),
I get this power consumption:
1.050 V × 46.82 A × 16 h = 737.4 Wh, divided by 1000 = 0.737 kWh. Please check for yourself.
My electricity consumption for my house in Sweden (heating the house in October, cooking, hot water ...) was 1709 kWh. My guess is that 36 h cost you $0.258, plus the other parts of your PC, so $0.5 tops for your 36 h job. If someone finds my calculation totally wrong, please correct it.
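Here is that formula as a quick sketch, using the HWMonitor sample quoted above (the sensor readings jump around, so a single sample won't land exactly on the 16 h total):

```python
# The posted formula: kWh = volts * amps * hours / 1000.
volts, amps, hours = 1.050, 46.82, 16     # HWMonitor core-rail sample
watts = volts * amps                      # ~49 W according to these sensors
kwh = watts * hours / 1000
print(f"{watts:.1f} W -> {kwh:.2f} kWh over {hours} h")
```

Whether those core voltage/current sensors reflect the whole card's draw is another question.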
I ran that through my ChatGPT thread, and it gave me this feedback.
"
Compare that to his guess:
“36 h cost you $0.258 plus the other parts of your PC, so $0.5 tops”
He’s under by around a factor of 6–8 because his measurement only “sees” ~49 W instead of the 500–700 W your whole system is actually pulling.
He measured the hamster running inside the GPU and forgot the entire treadmill, room lights, and air conditioner. His formula is fine, but he’s multiplying core V × core A, which dramatically underestimates real GPU/PC power.
"
Yes, you can calculate it much more easily: take the average total PC power draw in watts during rendering, multiply it by the runtime in hours, and divide by 1000 to get the energy in kWh. There are energy meters that do this for you. They are not very accurate, but they are perfectly adequate for this purpose.
A fast PC with a 5090 card will very likely pull more than 700 W for the whole system. The graphics card alone likely draws up to 600 W under heavy load (with some spikes even higher).
Then there's the CPU, RAM, mainboard, drives, and fans; if you leave the monitor on, that'll use maybe 100 W, too.
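That wall-meter method in code form (the 700 W and $0.15/kWh here are placeholder assumptions, not measurements; substitute your own meter reading and local price):

```python
# Energy from average wall power: kWh = avg system watts * hours / 1000.
avg_system_watts = 700                   # assumed whole-system average
hours = 36                               # the day-and-a-half job
price_per_kwh = 0.15                     # placeholder price in $/kWh

kwh = avg_system_watts * hours / 1000    # 25.2 kWh
print(f"{kwh:.1f} kWh -> ~${kwh * price_per_kwh:.2f}")
```

That comes out around $3.80, roughly the factor-of-6-8 above the $0.5 guess that the quote describes.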
Monitor will be off.
I will see what happens. I don’t know how hard Starlight (Mini) is working my GPU.
I did try Starlight Sharp, and that model is working my GPU very hard, like having a jet engine in my room. SLM seems not to work it that hard.
I might get an energy monitor to check it out, I see them on sale for about $12 now.
SLS gives me 1.8 fps, but not the quality: ground textures suffer, and it distorts some “distant” faces.
SLM gives me good quality, but 0.2 fps.
I have put a power meter on my PC: less than 600 W while running SLM; at the moment the highest peak it shows is 565 W. My 850 mV undervolted 5090 stays below 400 W (read out with TechPowerUp GPU-Z), and my CPU is below 50% load most of the time. There are three SSDs in my PC and eight fans.
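If anyone wants an average over a whole render instead of watching GPU-Z, a rough polling script like this works on NVIDIA cards (assumes nvidia-smi is on your PATH; it only sees the GPU, so whole-system draw still needs the wall meter):

```python
# Poll nvidia-smi once a second and report average GPU board power.
import subprocess
import time

samples = []
try:
    while True:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"], text=True)
        samples.append(float(out.strip().splitlines()[0]))  # first GPU only
        time.sleep(1)
except KeyboardInterrupt:
    if samples:
        print(f"avg {sum(samples) / len(samples):.0f} W "
              f"over {len(samples)} samples")
```

Start it before the render and stop it with Ctrl-C when the job finishes.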
I think a better way to do speed comparisons for the Starlight models is to set the processing speed display to seconds/frame instead of frames/second.
Yes it is, but we are not used to higher values meaning slower. They could just add an extra decimal place to the fps display; that was already requested, but with no reaction from Topaz.
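For reference, the two displays are just reciprocals of each other, and seconds/frame also shows why a single fps decimal is too coarse at these speeds (quick sketch):

```python
# frames/second and seconds/frame are reciprocals.
for fps in (1.8, 1.0, 0.4, 0.2):
    print(f"{fps:>3} fps = {1 / fps:.1f} s/frame")

# With one decimal, anything from ~0.15 to ~0.25 fps displays as "0.2",
# which spans roughly 4.0 to 6.7 s/frame.
print(f"{1 / 0.25:.1f} to {1 / 0.15:.1f} s/frame")
```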