The bug: I’ve downloaded Starlight Mini and tried it on multiple different videos. Every single one of them gets permanently stuck at the “loading model” stage.
I’ve restarted my PC, updated to the latest NVIDIA drivers, and reinstalled Topaz Video AI - nothing worked.
I just updated to the latest driver, NVIDIA Game Ready Driver 576.40, and retried the upscale. It has moved past “loading model” and has started upscaling at 1.0 fps.
For me it only works with lower-resolution video. 480p works every time, but at 720p or 1080p it just stays stuck at “loading model”. I’ve tried waiting for more than 2 hours, and the only thing that helped was lowering the input video’s resolution. I have a 4090.
If no video source works at all, try deleting the model folder in ProgramData and downloading Starlight Mini again. Some users have reported that the download wasn’t complete (the model download can also happen in the background). If nothing works, try an older version; you can also try one of the betas, which can be installed in parallel with the live release. So Topaz Labs has some bug-fix work to do…
I just let mine run and eventually it did start. It took about an hour total for a 3-second clip (480p source, 4x output, RTX 4090 FE). The progress bar wasn’t very informative once it got started: it went from “loading model” to a time estimate that said “11:15 remaining” every time I looked at it. It did eventually finish - 57m32s. I’m not sure whether that included the model loading time or just the encode, but the loading alone took at least 30 minutes.
It looks to me like the software is lumping shared GPU memory in with dedicated GPU memory when it decides how much to use. Watching usage with the GPU memory setting at 100%, it loads a significant amount into shared memory. I have a 24GB 4090 and 32GB of shared GPU memory. When I set the allowed memory usage to 40% or less - roughly 24/(24+32) - the software stopped trying to use as much shared memory and successfully loaded the model and processed. It still used some shared memory, so there may be room for improvement in how Topaz accounts for shared GPU memory; ideally it probably shouldn’t touch shared GPU memory at all. But in the meantime this workaround setting did the job for me.
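The ratio above can be sketched as a quick back-of-the-envelope calculation. This is just the poster’s reasoning expressed as a hypothetical helper (the function name is made up; in TVAI you only have the memory slider in settings):

```python
def max_memory_percent(dedicated_gb: float, shared_gb: float) -> int:
    """Slider percentage that should keep the model inside real VRAM,
    assuming the app budgets against dedicated + shared GPU memory."""
    return int(100 * dedicated_gb / (dedicated_gb + shared_gb))

# 24 GB RTX 4090 with 32 GB shared GPU memory (half of 64 GB system RAM)
print(max_memory_percent(24, 32))  # -> 42, i.e. set the slider to ~40%
```

On a card with a different VRAM size or a different amount of system RAM, plug in your own numbers; the idea is simply to cap the slider at the dedicated share of the total the app thinks it has.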
You must restart TVAI after lowering the memory setting for the change to take effect. Also, from what I’ve read, shared GPU memory is an NVIDIA driver feature, but spilling a diffusion model into shared system RAM seems especially bad, drastically lowering performance.
I can confirm that lowering the memory usage in settings from max to 50% (I have 64GB of RAM) did the trick for me - the model is now loading and working.