I didn’t need to do anything special. I believe the options I had were 1280x720 (Minimum), 2560x1440 (2x Upscale) or 3840x2160 (3x Upscale). Everything else I’ve tried has been a lot smaller (typically 640x480) so I was forced to upscale, but this one seems to have been large enough already.
Yes, the face issue is a real shame. It doesn’t always happen, but when it does it can get pretty bad. There’s a similar problem with text, which tends to be turned into gibberish unless the original is extremely distinct. I look forward to the day when we can render previews to check for this sort of thing before committing to multiple-hour conversions.
Yes, I was disappointed by that - I’d assumed that was how it worked. Clearly, it doesn’t refer either forwards or backwards as the face becomes unrecognisable every time the camera pulls back from close-up to a long shot. A future development, perhaps (but goodness knows what that would do to the processing time!).
If you’re getting a P6 or P7 error during export, it’s because TVAI is finding an older ffmpeg in your PATH and using that to do the export.
Open a cmd prompt and type
where ffmpeg.exe
to list the copies of ffmpeg.exe on your computer. Either modify the PATH so TVAI finds its own ffmpeg.exe first, or upgrade the older (non-TVAI) ffmpeg to a newer version.
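If you'd rather check programmatically, here's a small cross-platform Python sketch of the same diagnosis: it asks the OS which ffmpeg resolves first on PATH (the copy TVAI would pick up) and prints its version banner. The function name `first_on_path` is just for illustration, not anything from TVAI.

```python
import shutil
import subprocess

def first_on_path(name: str):
    """Return the full path of the first executable named `name` on PATH,
    i.e. the copy another program (like TVAI's exporter) would pick up."""
    return shutil.which(name)

if __name__ == "__main__":
    path = first_on_path("ffmpeg")
    if path is None:
        print("ffmpeg not found on PATH")
    else:
        print("First ffmpeg on PATH:", path)
        # Print the version banner so you can tell an old system copy
        # from the newer one that ships with TVAI.
        banner = subprocess.run([path, "-version"], capture_output=True, text=True)
        print(banner.stdout.splitlines()[0])
```

If the path printed isn't inside the TVAI install folder, that's the stale copy causing the P6/P7 export error.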
Ok, thanks a lot for this advice! I will try it. Unfortunately I’m away until Monday, but I’ll report back then. I’m pretty sure I have an old ffmpeg…
I’m also encountering the same issue, and I’m using a 5090. I did manage to get it to work once, but since restarting the program I haven’t been able to get it to work again. It just gets endlessly stuck on loading the model, pulling 130W and using a ton of VRAM while apparently doing nothing at all.
It’s a shame because Starlight Mini looks really promising. I’m even tempted to resubscribe to actually use it for real without the trial watermark, but only if the thing actually works in the first place.
I just want to shout out how amazing Starlight Mini is. Sure, it’s slow – 0.7 fps on my RTX 4090 – but the results on old VHS and 8mm are mind-blowing. Starlight Mini is so good that I feel like my M2 Mac can’t even run Topaz Video AI any more – despite it being fine for other models. Starlight Mini can clarify stuff I thought was lost forever. Sure, the output doesn’t look like I time-travelled to the past with a 4K camera, but it takes things from essentially unwatchable to looking like good old footage. And sometimes even like decent new footage.
Congrats to the Topaz team. Looking forward to how much you can speed up and/or improve the model, and whether it’ll ever run reasonably on Apple Silicon. The people are never satisfied, it seems, but take a quick breather and my thanks for releasing something truly amazing.
Theoretically it should be possible to do. Software that can pull a face out of a video has existed for years. This Topaz model would just need to do a quick scan of the video to see if a larger version of the face appears at any point, and if so, commit that face data to memory so it can be used later.
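The "remember the best face" idea above can be sketched in a few lines, under a big assumption: that some face detector has already produced per-frame bounding boxes, and that they all belong to the same person. (As the reply below points out, those are exactly the hard parts in practice.) All names here are hypothetical, not anything from Topaz.

```python
from dataclasses import dataclass

@dataclass
class FaceBox:
    """A detected face in one frame (hypothetical detector output)."""
    frame: int
    x: int
    y: int
    w: int
    h: int

    @property
    def area(self) -> int:
        return self.w * self.h

def best_reference_face(detections):
    """First pass over the whole video: keep the largest (presumably
    most detailed) face seen anywhere, to reuse as a reference when
    the same face is small in a long shot later."""
    return max(detections, key=lambda f: f.area, default=None)

# Toy data: a distant face, a close-up at frame 120, another distant face.
dets = [
    FaceBox(frame=10,  x=0, y=0, w=40,  h=40),
    FaceBox(frame=120, x=0, y=0, w=200, h=220),
    FaceBox(frame=300, x=0, y=0, w=32,  h=30),
]
print(best_reference_face(dets))  # the 200x220 close-up from frame 120
```

Picking the largest box is trivial; deciding that two boxes are the *same* face across lighting, angle, and scene changes is the genuinely hard problem.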
In theory almost anything is possible. In practice, no.
This is a complete oversimplification of an extremely complex task.
What about cases where there are multiple faces in the frame? Or similar faces? Or the same face at different angles, or distances? Or the lighting, or anything really, changes and the face is not recognized? And when the scene changes? Or the person puts on glasses? Or their face is partially obscured by an object?
There are so many variables here, what you want to do is light years away, like I said.
Is the CLI not working with Starlight Mini? I always get this error:
Unrecognized option ‘start-frame-idx’.
Error splitting the argument list: Option not found
What does “light years” mean to you? I’m figuring a couple of years before that capability is available. The growth of AI capabilities seems to be Moore’s Law on steroids.
I don’t expect to ever see that capability in software sold to the general public. The first time someone splices a clip of Taylor Swift into a porn video and instructs the AI to map her face onto one of the performers, there’d be a dozen bills sponsored in Congress and in multiple states across the US to make software manufacturers liable for the misuse of their products. No company is going to want to commit its resources to defending against that potential backlash.
I got caught up in all the AI upscaling hype with the initial launch of version 1, believing this technology could take some of my old VHS tapes and upscale them to 4K resolution as if they were originally captured that way. That’s the marketing hype, but it wasn’t until version 2 launched that I realized the technology was many years, if not decades, away from fulfilling those promises. Sure, it can make old videos look better, but nothing like content originally captured in glorious 4K resolution.