This update to Video AI 7 beta includes some changes to the Starlight Mini UI as well as some updates to in-app messaging. We also included plugin fixes for both DaVinci Resolve and Adobe After Effects in this release.
Starlight is now easier to access for rendering locally or in the cloud with a dedicated section in the UI above the standard models:
Previously, when a live export was terminated or encountered an error, you would be unable to view the error properly. Now, the app will show a visual notification of the error encountered:
I really hope Starlight Mini is not going to eat up the whole 24GB of VRAM during testing, especially since I'm (unfortunately) preparing to downgrade to 16GB with a 5080… The GPU chip is practically an advanced 3090, but NV decided to play coy with VRAM…
Edit: I left the Starlight files from the previous beta. There's still this stupid NVENC p7 error on the RTX 3090, this beta still chokes on VRAM, eating up the whole thing, and, additionally, the runner.exe process takes a whopping 40GB of RAM (and 23.8GB of VRAM) before the processing even takes place (it never starts). Ugh. Could you add an x264 or x265 software encoding option, too?
This is during the lengthy “loading model” phase:
The Starlight Mini (to JPG images) export doesn't seem to start processing a 6s, 450kB video and “loads” the 3.09 GB model for over 20 minutes… This beta needs a bit more fixing, I believe. After deleting the Starlight Mini files, this beta doesn't seem to download them anymore.
I had to fully uninstall the beta, with config removal, to then be able to re-download SL Mini from within the reinstalled beta. It still takes the whole VRAM, but it doesn't seem to take as much RAM as before. 0.1 fps though, probably because of the overfilled video memory.
This version is now working for me “as-is”, with no tweaks to Nvidia driver settings or workarounds, on an RTX 5090. It hit 0.9 fps on a 50-second 720p test clip with stellar results. Impressed (even with the snail's-pace render)!
Nice. I'm excited to try it, but I just started a render in 7.0.0.2b and it's got about 24 hours left. Curious, were you able to get 7.0.0.2b working on your 5090, and what was the speed in that case?
7.0.0.2b was not working for me (I kept getting the same sm_120 errors as v1b). I know others were able to resolve it via manual intervention, but I wasn't interested in messing around with such tweaks, so I waited for the next release to test, and it appears to be working without issue now.
Good morning, ladies and gentlemen upscalers! Pleasantly surprised by the quality of Starlight Mini and the frequency of updates (and by how fast and informatively the community reacts). Thank you, Topaz team, and thank you, community.
Now I have a question for the Topaz team. Are there ways to improve the rendering speed by tinkering with pytorch+cu (as long as our GPU supports it), as you would do in a virtual environment for open-source AI scripts?
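To illustrate what I mean, here's a minimal sketch of the kind of knobs I would set in a local virtual environment for an open-source PyTorch upscaler; the tiny stand-in model and the tensor sizes are just placeholders, nothing from TVAI itself:

```python
import torch
import torch.nn as nn

# Typical inference speed knobs for an open-source PyTorch script on NVIDIA GPUs.
torch.backends.cuda.matmul.allow_tf32 = True  # allow TF32 matmuls on Ampere and newer
torch.backends.cudnn.allow_tf32 = True
torch.backends.cudnn.benchmark = True         # autotune conv kernels for a fixed input size

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dtype = torch.float16 if device.type == "cuda" else torch.float32  # FP16 roughly halves VRAM

# Stand-in "model": a single conv layer, only so the sketch actually runs.
model = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1)).to(device, dtype).eval()

# One fake 720p frame batch in place of real decoded video frames.
frames = torch.rand(1, 3, 720, 1280, device=device, dtype=dtype)

with torch.inference_mode():  # skip autograd bookkeeping during inference
    out = model(frames)
print(out.shape)
```

Whether any of this is applicable inside TVAI's bundled runtime is exactly what I'm asking the team.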
I have tested all three available betas: 7.0.0.1b runs at 0.3 fps with an RTX A6000 Ada GPU; 7.0.0.2b and 7.0.0.3b run at 0.2 fps for the same input video (old NTSC VHS, 29.97 fps) and the same output settings (1440x1080, 4:3, 29.97 fps).
The new beta version runs almost identically to the previous version on my system. I couldn't notice any difference in rendering speed on my 4090: still around 0.2 to 0.3 fps. Maybe it's a bit more stable than before, as I no longer drop to 0.1 fps.
Edit: The issue went away after closing and reopening Topaz Video Enhance AI. I don't know what specific thing I did that caused it, so I can't help with tracking it down again.
I'm getting an error where only a small portion of the video is visible in the exported file.
Input video is at a resolution of 480x270
Output resolution is 4x (1920x1080)
Output model is Starlight Mini running locally on an NVIDIA RTX 4090 on Windows with driver 576.28
I left everything else at default and just exported.
The exported video is 1920x1080, but only the top-left quadrant of the frame is actually visible.
Here's a side-by-side screenshot inside Topaz Video Enhance AI that shows it. Left is input, right is output.
Aha. With the default configuration, I get why the TVAI betas wipe the insides of the TEMP folder… Still, I'd like it to be less destructive when I set the TEMP folder manually.
2025-05-07 16-18-19.598 Thread: 20556 Info Cleanup directories C:/Users/[user]/Documents/Topaz VideoAI Projects/Default/temp
2025-05-07 16-18-19.598 Thread: 20556 Info deletePath:: "C:/Users/[user]/Documents/Topaz VideoAI Projects/Default/temp"
Concerning the SL Mini routines, they don't like source video filenames with spaces or with special characters (French accents), I believe; the export throws an error immediately.
Yeah, unfortunately this seems to be a structural problem with diffusion models from what I can tell. The Recover models, for example, have a similar problem with some styles of vegetation. To me it almost looks “dreamy”. The creative upscalers like Redefine seem to do better with this.
@dakota.wixom, did you observe on your test bench(es) with RTX 3xxx card(s) whether this beta also takes the whole VRAM while trying to export the video file? And whether NVENC works?
I have to fall back to JPG image export to see any results from this model (which are great, when it works), and the speed is abysmal, probably because of overfilled VRAM.
…the program actually is already enhancing the video; it's a VERY degraded old Ordy anime opening, and it's already great to see perfect results. Please fix the video compression output for RTX 3xxx…
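For anyone else on the JPG fallback in the meantime, this is roughly how I stitch the exported frames back into an x265 file afterwards; the frame rate, filename pattern, and CRF here are just examples, so adjust them to your own export:

```python
import subprocess

# Re-encode an exported JPG sequence with software x265 (no NVENC involved).
# The paths and the %06d pattern are examples; match them to the actual export.
cmd = [
    "ffmpeg",
    "-framerate", "29.97",        # match the source frame rate
    "-i", "frames/%06d.jpg",      # the exported JPG sequence
    "-c:v", "libx265",            # CPU HEVC encoder
    "-crf", "18",                 # quality target; lower = better/larger
    "-preset", "slow",
    "-pix_fmt", "yuv420p",        # broadly compatible pixel format
    "out_x265.mp4",
]
subprocess.run(cmd, check=True)
```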
Wow, when this is fleshed out, it's going to be a blast.
Thanks, Dakota! Glad to see you are working on Apple M1 Max compatibility. I really couldn't care less if it's slow as molasses, since most of my videos are at most one to three minutes long (montages), so I don't mind waiting a couple of days or weeks just to finish a two-to-three-minute video.