Topaz Video AI Linux Beta v5.0.3.0.b

On Debian Bookworm, the GUI loads, but as soon as you click to browse it core dumps:

2024-05-20 10-56-16.275 Thread: 140673579629952 Warning qrc:/videomanager/TVideoManagerPane.qml:729:5: QML FileDialog: Failed to load non-native FileDialog implementation:
qrc:/qt-project.org/imports/QtQuick/Dialogs/quickimpl/qml/FileDialog.qml:4 Cannot load library /opt/TopazVideoAIBETA/bin/Qt/labs/folderlistmodel/libqmlfolderlistmodelplugin.so: (libQt6LabsFolderListModel.so.6: cannot open shared object file: No such file or directory)

Environment is Debian Bookworm - KDE.

Just to add, I have also tried the Alpha listed in here:

To launch:
LD_LIBRARY_PATH=/opt/TopazVideoAIALPHA/lib:/opt/TopazVideoAIALPHA/bin:/opt/TopazVideoAIALPHA/bin/Qt/labs/folderlistmodel/:${LD_LIBRARY_PATH} ./Topaz\ Video\ AI\ ALPHA

Crash:
2024-05-20 11-03-03.625 Thread: 140564429819264 Warning qrc:/videomanager/TVideoManagerPane.qml:729:5: QML FileDialog: Failed to load non-native FileDialog implementation:
qrc:/qt-project.org/imports/QtQuick/Dialogs/quickimpl/qml/FileDialog.qml:4 Cannot load library /opt/TopazVideoAIALPHA/bin/Qt/labs/folderlistmodel/libqmlfolderlistmodelplugin.so: (libQt6LabsFolderListModel.so.6: cannot open shared object file: No such file or directory)
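A quick way to enumerate everything the dynamic linker cannot resolve, not just the first failure (a diagnostic sketch; the plugin path is taken from the crash log above, and any "not found" entries are the candidates for extra LD_LIBRARY_PATH directories or symlinks):

```shell
# List shared-library dependencies the dynamic linker cannot resolve
# for a given binary or plugin. Prints one library name per line.
missing_libs() {
  ldd "$1" 2>/dev/null | awk '/not found/ {print $1}'
}

# e.g., against the plugin named in the crash log:
# missing_libs /opt/TopazVideoAIALPHA/bin/Qt/labs/folderlistmodel/libqmlfolderlistmodelplugin.so
```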

@gregory.maddra Is the application designed to not retain downloaded models locally on Linux? It seems odd that a fully licensed tool cannot operate offline because it seemingly has to go and download the models each time. That’s a waste of bandwidth and also quite irritating if your internet is down.
Ideally, given an activated installation, it should be usable offline for most purposes, to be as useful as possible.

The models should be retained. If you’re seeing it attempt to redownload each time you run, it may be checking whether TensorRT models that were previously unavailable for a given model have since become available. This check runs at the start of each run and can take a bit of time, during which it may appear to be redownloading.

Is there a way to tell from the GUI or other inspection whether a model is running using CUDA or the tensor cores? I’m seeing 30 series running Nyx at ~ 8 fps with heavy GPU and moderate CPU usage. (4 cores of a 5800X and the 3070 Ti at 93% usage). I’m not sure if it should be doing better than that, all things considered. Footage is 960x720.

Unfortunately we’re not able to distribute builds with GPL encoders enabled. You can, however, compile a custom build of our fork of FFmpeg with your own choice of encoders; the header files required to use --enable-tvai are included in the deb package.
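As a sketch, a custom build along those lines might be configured as follows. Only --enable-tvai is confirmed above; the header/library locations and the encoder flags are assumptions to adjust for your system and encoder choice:

```shell
# Hypothetical configure invocation for a custom build of the Topaz
# FFmpeg fork. --enable-gpl/--enable-libx264 are one possible encoder
# choice; the include/lib paths are assumptions, not documented paths.
./configure \
    --enable-tvai \
    --enable-gpl --enable-libx264 \
    --extra-cflags="-I/opt/TopazVideoAIBETA/include" \
    --extra-ldflags="-L/opt/TopazVideoAIBETA/lib"
make -j"$(nproc)"
```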

I suspect this is related to the version of Qt that KDE is using. If you’re able to check, does the file chooser work for you if the app is run under GNOME?

Hi Gregory and Topaz team,

I recently switched from Windows 11 to Ubuntu 24.04 and have been using the Topaz VEAI 2.6 setup through Wine, which has been working surprisingly well; I didn’t know a Topaz Video AI Linux Beta had existed since ~4.0. I’ve encountered an issue with Topaz Video AI Linux Beta v5.0.3.0.b.

When I enable Frame Interpolation using any of the Apollo v8 through Aion AI models, the resulting mp4 output is blank: a black video at the given fps. If I disable Frame Interpolation, the video output is good. I’m trying to use Frame Interpolation to fix duplicate frames caused by NTSC/PAL standard conversions and the like.

Here are my system details:
Hardware Information:
Hardware Model: ASUSTeK COMPUTER INC. ROG Zephyrus M16 GU603ZW_GU603ZW
Memory: 32.0 GiB
Processor: 12th Gen Intel® Core™ i9-12900H × 20
Graphics: NVIDIA GeForce RTX™ 3070 Ti Laptop GPU
Disk Capacity: 2.0 TB
Software Information:
Firmware Version: GU603ZW.311
OS Name: Ubuntu 24.04 LTS
OS Type: 64-bit
GNOME Version: 46
Windowing System: X11
Kernel Version: Linux 6.8.0-31-generic

I’ve previously posted a bug report via the VEAI Help report, including a screenshot and logs, but I can’t find the message in my profile or this forum. I’m posting some of the details here so you are aware.

Topaz VEAI on Linux is an awesome development. I was unaware of it; I must have missed an email from the beta testers group somewhere. I switched to Ubuntu for performance, ML, LLMs, Stable Diffusion and so on. Cheers.

  1. The bug / behavior you have encountered
    Topaz Video AI Linux Beta v5.0.3.0.b - Enabling Frame Interpolation produces blank video output. I am not sure if it is related, but I see a model downloading at the beginning of every run. I have included 3 screenshots (2 of the editing view, which displays large because the fonts render big and the editing view window is not resizable in width, and 1 with Frame Interpolation disabled) and logs.

Hi Gregory and Topaz team, thank you for this beta, and thanks to all the customers who posted requests for a Linux version. I recently switched from Windows 11 to Ubuntu 24.04, and I was happy to read a post somewhere mentioning that a Topaz Linux beta is available; happy days.

The reason for my bug report: in the “Topaz Video AI Linux Beta v5.0.3.0.b” version, the Frame Interpolation section produces a blank video output when using any of the Apollo v8 through Aion AI models. When enabled, the output is a black video at the given fps. If I disable Frame Interpolation, the output is good (awesome, even) thanks to the new Iris, Proteus… models. I want to use Frame Interpolation to fix duplicate frames caused by NTSC/PAL standard conversions and the like.

  2. Your system profile

System Details Report

Report details

  • Date generated: 2024-06-02 22:32:43

Hardware Information:

  • Hardware Model: ASUSTeK COMPUTER INC. ROG Zephyrus M16 GU603ZW_GU603ZW
  • Memory: 32.0 GiB
  • Processor: 12th Gen Intel® Core™ i9-12900H × 20
  • Graphics: NVIDIA GeForce RTX™ 3070 Ti Laptop GPU
  • Graphics 1: NVIDIA GeForce RTX™ 3070 Ti Laptop GPU
  • Disk Capacity: 2.0 TB

Software Information:

  • Firmware Version: GU603ZW.311
  • OS Name: Ubuntu 24.04 LTS
  • OS Build: (null)
  • OS Type: 64-bit
  • GNOME Version: 46
  • Windowing System: X11
  • Kernel Version: Linux 6.8.0-31-generic
  3. Your log files (Help > Logging > Get Logs for Support)
    logsForSupport.zip (16.3 KB)

  4. Any screenshots as necessary
    Frame Interpolation enabled



    Frame Interpolation disabled

I have checked the Topaz Video AI Linux Beta v5.0.3.0.b group, and there are no posts about this issue.

Thanks.


Do you have forecasts for when more models will be TensorRT enabled? Which models are currently expected to run on CUDA vs TensorRT at the moment?

You’re running Linux?

Thanks, we’re aware of that issue and I’ve passed on the info to some people on the model side to take a look.


We’re hoping to be able to do a larger scale conversion for Linux after the next time we update TensorRT in the Windows release. At the moment 20 series cards are likely the consumer option with the best support.

To answer your previous question as well: you can check the file names of the model files that are actually downloaded. TensorRT files will usually end with something along the lines of rt###-8517.tz, while you’ll see -ox.tz or -ov.tz if you’re running with onnxruntime+CUDA or OpenVINO respectively.
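The suffix convention above can be turned into a small classifier. A sketch, assuming only the naming patterns described in this thread (this is not an official naming spec, and `backend_from_model_name` is a hypothetical helper):

```python
import re

def backend_from_model_name(filename: str) -> str:
    """Guess the inference backend from a Topaz model file name,
    using the suffix conventions described in the thread."""
    stem = filename.removesuffix(".tz")
    if re.search(r"-rt\d+-\d+$", stem):   # e.g. ...-rt806-8517.tz
        return "TensorRT"
    if stem.endswith("-ox"):              # e.g. ...-2x-ox.tz
        return "onnxruntime+CUDA"
    if stem.endswith("-ov"):              # e.g. ...-2x-ov.tz
        return "OpenVINO"
    return "unknown"

# File names taken from the benchmark results later in this thread:
print(backend_from_model_name("prob-v3-fgnet-fp16-480x384-2x-rt806-8517.tz"))  # TensorRT
print(backend_from_model_name("prob-v4-fgnet-fp16-480x384-2x-ox.tz"))          # onnxruntime+CUDA
```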


Hi Gregory, thanks for all of your hard work on the linux version.

On Arch Linux (Gnome), I am able to launch the GUI and it correctly launches my web browser and allows me to log in.

I installed TVAI the “Arch way”, by creating a simple Arch PKGBUILD. I used dpkg to extract the .deb file and then just copied the /opt and /usr files (hopefully I didn’t miss a directory!).

However, after logging in, it appears the auth token is not saved. I am unable to download any models, and when I quit the GUI, the next launch makes me log in via the web browser once again, and I still cannot download any models due to the missing token.

I tried with the beta and alpha releases
5.0.3.0b
5.0.3.2a

Any ideas on how I can log the problem with more helpful details to troubleshoot?

[EDIT] - leaving this here in case it helps someone else who is dense like me:
On my Arch system, the PKGBUILD installed TVAI into /opt with owner root group root and permissions 755.

To fix the authentication I changed the group to something else and made sure my user was a member of that group, with write permissions.
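The fix above can be sketched as follows, demonstrated on a scratch directory so it runs without root; on the real system you would additionally create a shared group (the demo uses no specific group name; "topaz" or similar would be your choice), add your user to it, and `chgrp -R` the /opt install tree first:

```shell
# Grant the owning group write access plus directory traversal across a
# tree (g+rwX sets execute only on directories and already-executable
# files). On a real install this would target /opt/TopazVideoAIBETA
# (an assumed path) and require sudo.
make_tree_group_writable() {
  chmod -R g+rwX "$1"
}

demo=$(mktemp -d)
mkdir -p "$demo/models"
chmod -R g-w "$demo"            # simulate the root:root 755 state
make_tree_group_writable "$demo"
stat -c %A "$demo/models"       # mode now includes the group write bit
rm -rf "$demo"
```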


You replied to the Linux Beta thread again, lol

Just curious if you might have a forecast for this, and an updated build. The current one has some flaky UI still under Wayland (elements go missing under the mouse, etc.)

I did some new tests with the latest v4 and v5 versions of VAI to assess what progress has been made since the initial beta versions. Below are the tests performed, the results, and an assessment of where priority should be focused regarding Linux.

Versions tested

  • v4: 4.2.2.1.b
  • v5: 5.0.3.1.b

Commands

ffmpeg -f lavfi -i testsrc=duration=10:size=640x480:rate=30  -pix_fmt yuv420p -filter_complex tvai_up=model=prob-3:scale=2:preblur=-0.6:noise=0:details=1:halo=0.03:blur=1:compression=0:blend=0.8:device=0:vram=1:instances=1 -f null -
ffmpeg -f lavfi -i testsrc=duration=10:size=1280x720:rate=30 -pix_fmt yuv420p -filter_complex tvai_up=model=prob-3:scale=2:preblur=-0.6:noise=0:details=1:halo=0.03:blur=1:compression=0:blend=0.8:device=0:vram=1:instances=1 -f null -

Linux (RTX 3090, Ryzen 5950X)

  • v4 prob-3 @ 640x480: fps=25 speed=0.825x (GPU: 36%, CPU: 1405%, GPU-Memory-peak: 508 MiB, model: prob-v3-fgnet-fp16-480x384-2x-rt806-8517.tz)

  • v5 prob-3 @ 640x480: fps=26 speed=0.843x (GPU: 36%, CPU: 1437%, GPU-Memory-peak: 580 MiB, model: prob-v3-fgnet-fp16-480x384-2x-rt806-8517.tz)

  • v4 prob-4 @ 640x480: fps=12 speed=0.402x (GPU: 54%, CPU: 1170%, GPU-Memory-peak: 24576 MiB, model: prob-v4-fgnet-fp16-480x384-2x-ox.tz)

  • v5 prob-4 @ 640x480: fps=17 speed=0.563x (GPU: 45%, CPU: 1211%, GPU-Memory-peak: 1220 MiB, model: prob-v4-fgnet-fp16-480x384-2x-ox.tz)

  • v4 prob-3 @ 1280x720: fps=9.2 speed=0.307x (GPU: 35%, CPU: 1775%, GPU-Memory-peak: 708 MiB, model: prob-v3-fgnet-fp16-384x672-2x-rt806-8517.tz)

  • v5 prob-3 @ 1280x720: fps=9.2 speed=0.304x (GPU: 36%, CPU: 1612%, GPU-Memory-peak: 708 MiB, model: prob-v3-fgnet-fp16-384x672-2x-rt806-8517.tz)

  • v4 prob-4 @ 1280x720: fps=5.1 speed=0.171x (GPU: 48%, CPU: 1203%, GPU-Memory-peak: 24576 MiB, model: prob-v4-fgnet-fp16-384x672-2x-ox.tz)

  • v5 prob-4 @ 1280x720: fps=6.7 speed=0.223x (GPU: 55%, CPU: 1476%, GPU-Memory-peak: 1502 MiB, model: prob-v4-fgnet-fp16-384x672-2x-ox.tz)

Windows (RTX 4090, Ryzen 7950X)

  • v4 @ 640x480 prob-3: fps=32 speed=1.050x (GPU: 24%, CPU: 30%, GPU-Memory-peak: 1213 MiB, model: prob-v3-fgnet-fp16-480x384-2x-rt809-8500.tz)
  • v4 @ 640x480 prob-4: fps=38 speed=1.250x (GPU: 34%, CPU: 31%, GPU-Memory-peak: 1216 MiB, model: prob-v4-fgnet-fp16-480x384-2x-rt809-8500.tz)
  • v4 @ 1280x720 prob-3: fps=16 speed=0.518x (GPU: 29%, CPU: 43%, GPU-Memory-peak: 1333 MiB, model: prob-v3-fgnet-fp16-384x672-2x-rt809-8500.tz)
  • v4 @ 1280x720 prob-4: fps=19 speed=0.631x (GPU: 34%, CPU: 45%, GPU-Memory-peak: 1312 MiB, model: prob-v4-fgnet-fp16-384x672-2x-rt809-8500.tz)

Observations

  • The issue with the GPU memory ballooning at the start of onnx16 processing is finally gone in v5 :tada: :champagne:
  • Performance is identical between v4 and v5 for TRT version on Linux, but noticeably better in v5 for onnx16 versions (likely due to the above memory fix). :+1:
  • Linux and Windows TRT performance seems to be about on-par now (precondition to seriously consider Linux as a target platform for real workloads). :+1:

Conclusion

  • TensorRT version of prob4 is sorely needed for Linux.
  • Massive improvement on the Linux side has been done to the VAI engine. With a 24GB graphics card prob-3 and 4 could reliably be used on the latest v4. v5 makes prob-4 a realistic option even on lesser graphics cards.

Remaining high priority issues

  1. The Iris models still do not work on Linux (neither iris-2 nor iris-3). Please look into that since prob4 and iris3 are the go-to models for VAI, and the reason most people buy the software. :cry:
  2. Lack of Tensor-RT version of prob-4 for Linux massively limits its usefulness. A TRT version would yield about 3x speed improvement! :eyes:

Note: the CPU utilization numbers mean different things on Linux vs Windows. Windows reports usage as a fraction of total logical cores (which is incorrect, :shrug:); Linux reports the number of cores saturated, at 100% per core. So on a 16-core machine like the Ryzen 5950X/7950X, 1600% means the machine is saturated, and anything above means work is waiting in queue to be executed, as in the “v4/v5 prob-3 @ 1280x720” benchmark runs.
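To compare the two conventions directly, the conversion is a single division. A sketch; the logical-core count below assumes SMT is enabled on a 16-core part, giving 32 logical cores:

```python
def windows_style_cpu_pct(linux_pct: float, logical_cores: int) -> float:
    """Convert Linux-style 'cores saturated' percentage (100% per core)
    to the Windows convention of a fraction of all logical cores."""
    return linux_pct / logical_cores

# A saturated 16-core/32-thread machine: 1600% in Linux terms
# reads as 50% in Windows terms.
print(windows_style_cpu_pct(1600, 32))  # 50.0
```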

(EDIT) PS.
Provide a working TRT model of iris-3 and of prob-4, plus a hefty renewal discount, and I may consider renewing my VAI license, which expired 3 months ago. For now I’m content alternating between v4 and other tools that do similar things on Linux and actually work; until I can use Linux as an alternative to Windows, there’s no point in renewing the license.


Gregory, perhaps you should have added the workaround for users who don’t want that annoying delay every time a preview is requested or a render starts.

The workaround is to edit the metadata JSON file for the model(s) you are using that VAI keeps trying to “re-download”. Just remove the ID for your graphics card (or all of the IDs) from the tensorrt section of the model’s metadata JSON file, and VAI won’t try to download those non-existent files over and over again.

E.g. I’ve removed ALL graphics card identifiers from the Proteus 4 configuration file, since there’s no TensorRT version available for it. Speeds up start of processing significantly.

diff --git a/prob-4.json b/prob-4.json
index 92eaa8c..d77ff02 100644
--- a/prob-4.json
+++ b/prob-4.json
@@ -467,9 +467,6 @@
         },
         "tensorrt": {
             "capabilities": [
-                809,
-                705,
-                806
             ],
             "model": "ir",
             "parallel": 1,

PS: you’ll find those files in the models directory.


Hi Gregory, is TopazVideoAIAlpha_5.0.3.2.a_amd64.deb alpha the latest to try, before next beta release?

There should be a new build in the next couple weeks, but as to when TensorRT will be updated, I’m afraid I don’t have a forecast for that yet.

You should use the 5.0.3.1.b linked at the top of the thread. It contains the same changes as 5.0.3.2.a

We’re looking into this, along with a related issue with frame interpolation on Linux.


Are you able to give a 3 sentence superficial summary of the challenge of adding a TensorRT model in Linux vs Windows? Are NVIDIA APIs a lot different between the 2?

For those models, processing is two to three times faster on Windows than on Linux thanks to the TensorRT models; IMHO, this should be the #1 priority for development on the Linux side.
Linux

Windows