We have a new beta available for testing. Starting with this release we’ll be posting releases of Video AI for Linux in a separate thread to keep discussion active and organized.
Thanks for the heads-up! The last Linux build I tried (4.0.5.0?) didn’t work with 2 GPUs, and since I just picked up a second 4070 I abandoned it for Windows, where it does work. Even on Windows, 2 GPUs don’t work nearly as well in v4 as they did in v3: when I was using two 1080 Tis, it would max both of them out. I have no idea whether this has anything to do with the fact that they were using SLI, which is no longer a thing.
Since 2 GPUs aren’t very beneficial in Windows, I found a different workflow where I use the second 4070 for a different process. If I can get that process running in Linux I can hopefully stay switched over, assuming TVAI’s Linux bugs aren’t too bad.
This is working fairly nicely so far. I had issues with the earlier build running the GPU out of memory (it seemed to try to allocate 2+ TB on the card). The newer build is behaving itself so far, and I’ll see what the output looks like in a few hours.
BTW, was any time spent in Topaz on reviewing performance discrepancies between Windows and Linux? It seemed that Linux was running 2x more slowly despite apparently fully loading the GPU, per nvtop. It would be good to close that gap if possible.
We have limited data on Linux speeds compared to Windows, but we will be looking into this. It’s possible that the models are loading without TensorRT on the Linux build, which is something we plan to have fixed in a future beta release.
So this seems to fail reliably with ‘out of disk space’, even though there’s copious free space on the system drive. It appears to export to /tmp/617599564/previews/ even though the output directory is set to ‘.’.
I’ll make an explicit change to the output directory and see if this helps things go through.
Just following up on this: I changed the ‘tmp’ folder to ~/tmp (freshly created) and also explicitly set the export directory. It seems that the export is generated in /tmp and the file then gets moved to its final location. In my case, /tmp is a separate mountpoint whose size is limited compared to the size of Topaz’s output. With the ‘tmp’ folder under home, the issue is not present.
Not sure why /tmp is being used for export; I didn’t see this behavior with 3.x.
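For anyone hitting the same thing, a quick way to check whether /tmp on your system is a separate, size-limited mount (standard util-linux/coreutils commands, nothing TVAI-specific):

```shell
# Is /tmp backed by its own (possibly small) filesystem, e.g. tmpfs?
findmnt /tmp || echo "/tmp is part of the root filesystem"
# How much space does it actually have?
df -h /tmp
```

If `df` reports a total size much smaller than the system drive, exports larger than that will fail exactly as described above.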
You may have caught a bug there. I’m running Windows 4.0.5 ATM, Linux just had way too many issues…
Not sure if it’s still the case, but /tmp used to have a size limit of half the installed RAM (it’s commonly mounted as tmpfs, which defaults to that cap). I have plenty of RAM, so I wouldn’t have noticed. I posted about a bunch of odd things the Linux version does with exports, including seemingly just deleting or overwriting them as they finish. There’s never any word on whether the Linux-specific issues are fixed, so who knows; I just try it once in a while hoping it’s better.
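The half-of-RAM default is still how tmpfs behaves unless a `size=` mount option overrides it. A quick sketch to compare the two on a given machine (assumes a Linux /proc; `findmnt` prints nothing useful if /tmp isn’t its own mount):

```shell
# Total RAM, for comparison against the tmpfs default cap of 50%:
awk '/MemTotal/ {printf "RAM: %d MiB\n", $2/1024}' /proc/meminfo
# Size of the /tmp mount, if it is one:
findmnt -no SIZE /tmp || echo "/tmp is not a separate mount"
```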
The reason it’s using /tmp might be a bug you just picked up on. After reading your post I checked my running Windows version, and they removed the “use temp directory” option; I swear it was there just a version or two ago. You can still choose the temp directory in File > Preferences > Directories, but not whether to use it. When the option was there, selecting it would do exactly as you describe with exports; unchecked, it would work like 3.x and just put the partial file in the export directory. The Windows version now seems to put only the previews in the temp directory and build exports in the export directory. So it sounds like they removed the option but left the Linux version defaulting to the temp directory.
I also can’t seem to persuade Gaia to work 99% of the time. It will generate previews without complaint, but on export it reports ‘success’ after 5 minutes or so while leaving an unusable video file in place.
We’d mainly appreciate just seeing both if and how well the Linux version works across a variety of GPUs.
I’d be particularly interested in how the A2000 performs myself, as that’s not one we’ve tested Linux with internally. It should be using the same models as those two 40X0 cards, but it’d be nice to see how it performs compared to them.
New to TVAI; got this build running in a podman [docker] container on my NixOS workstation. My container has some niggles (I have to close and reopen TVAI after browser activation, stuff like that)…
I’m passing my RX 6900 XT through to the container, and TVAI sees it in the processing preferences. However, when I kick off a job it runs entirely on the CPU, and rocm-smi shows the card basically idle. Am I missing something, or does this only work with Nvidia cards?
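In case it helps with debugging: with podman, AMD cards generally need both the compute node (/dev/kfd) and the render nodes (/dev/dri) passed into the container, and the container user needs the corresponding video/render group access. Whether TVAI’s Linux build supports ROCm at all is a separate question, but a minimal sanity check that the host exposes the device nodes is:

```shell
# Check for the device nodes ROCm needs; if either is missing or not passed
# through, GPU compute inside the container silently falls back to CPU.
for dev in /dev/kfd /dev/dri; do
  if [ -e "$dev" ]; then
    echo "found $dev"
  else
    echo "missing $dev"
  fi
done

# Hypothetical podman invocation (image name is a placeholder):
#   podman run --device /dev/kfd --device /dev/dri \
#     --group-add keep-groups my-tvai-image
```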