Linux support

Suraj - I hugely appreciate your effort here. It would make my experience, and probably that of other users in this thread, so much better.

Personally, I’d be fine without GUI support. I’m comfortable with the command line and with ffmpeg.

For my system specs, I’m on Ubuntu 20 with an Nvidia 1050 Ti. With luck, I should be upgrading to a 1660 Ti soon.

Sorry, I haven’t been checking this much in a while.

Yeah, I stopped using the QEMU passthrough setup because I was literally sacrificing half my system, which isn’t optimal. If I’m away from the PC I’d rather use the whole thing for VEAI or whatever, and if I’m using it… well, it’s a powerful enough system that I didn’t notice much unless I wanted to use the second GPU for something.

This should help you if you are on an Arch-based distro; there are versions of this for Debian-based distros as well. https://github.com/pavolelsig/passthrough_helper_manjaro/blob/master/manjaro_help.sh
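If you want to sanity-check passthrough readiness by hand before running a helper script, listing the IOMMU groups covers the first step. A minimal sketch, assuming the standard sysfs layout and that pciutils provides lspci:

```shell
# List IOMMU groups and the devices in each; if none exist, the kernel
# was booted without IOMMU support enabled.
if compgen -G "/sys/kernel/iommu_groups/*" > /dev/null; then
  iommu_state="enabled"
  for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
      lspci -nns "${d##*/}"   # needs pciutils
    done
  done
else
  iommu_state="disabled"
  echo "No IOMMU groups found: enable VT-d/AMD-Vi in firmware and boot with intel_iommu=on or amd_iommu=on"
fi
```

Keep in mind that a device can only be passed through together with everything else in its IOMMU group, which is the part the helper scripts automate checking for you.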

I would have thought AMD would be much easier on Linux. AMD’s drivers are open source; Nvidia’s likely never will be. AMD also tends to work better for gaming through the Wine/Proton translation layers, presumably for the same reason.

I built a Windows box pretty much just to run VEAI. I will say that while I’d prefer a native Linux build, VEAI running in Wine would probably work very well if you supported some GPU workload in Windows that Linux could properly translate in Wine. I tested it: the GUI and everything worked fine in Wine, and it would run on the CPU and detect the GPUs. IIRC the Wine debug output was pointing at ONNX, and DirectML was never implemented in VKD3D.

This is great news. I hope it all comes to fruition.
I imagine Linux support would be an issue for the team. Perhaps it could be suggested that there is no official support, but that we Linux users form a community support channel and help each other, as in many other Linux-based projects?

I run Arch (BTW) with a 3700X CPU, 32GB RAM, and an RTX 3090 GPU.

A few additional points. It was mentioned that GPU detection on Linux might be difficult, but that is not something I have experienced. Topaz has always correctly identified my (Nvidia) GPU (granted, this is through WINE). The problem, as far as I can tell, is the Windows-specific DirectML, which is not implemented in WINE. Worth noting that CPU (Intel) works fine, as I think Intel’s framework is cross-platform.

I think it behooves Topaz to move toward more open, cross-platform frameworks. In my opinion this kills several birds with one stone, as it removes the need to juggle multiple versions for each operating system. Also, offering Linux versions as AppImages or Flatpaks removes the need to juggle releases on a per-distribution basis (Debian, Arch, etc.). At best a dedicated Linux version would be fantastic, but just getting GPU support working when running the Windows version in WINE would be the next best thing.

Thank you for giving it a go!

I am also fine without GUI support; even if it were there, I would not use it.

Ubuntu 20.04 LTS, Ryzen 5800X, 32GB RAM, NVIDIA 3080 + 3060.

I know AMD GPUs are popular within the Linux community, but NVIDIA GPUs I would argue are more popular within the ML and video nerd communities.

It is more complicated than anticipated.
At first we will only be able to support Nvidia GPUs using cuDNN/TensorRT. Then we will look into adding support for AMD GPUs with ROCm. Intel CPU and iGPU support will also be there.
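For anyone curious whether their box already carries the runtimes such a build would likely link against, the dynamic linker cache is a quick place to look. The sonames below are my guesses at typical cuDNN/TensorRT/ROCm library names, not anything Topaz has confirmed:

```shell
# Check the dynamic linker cache for typical cuDNN, TensorRT and ROCm
# sonames; prints "found" or "not found" for each.
for lib in libcudnn.so libnvinfer.so librocblas.so; do
  if ldconfig -p 2>/dev/null | grep -q "$lib"; then
    echo "$lib: found"
  else
    echo "$lib: not found"
  fi
done
```

On Nvidia, cuDNN and TensorRT ship separately from the driver, so a working display driver alone would likely not be enough for this path.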

Linux support would be great; thanks for working on it. I can’t be alone in wanting to offload this work to a Linux machine rather than a Windows machine. Rendering, sharpening, and machine learning get sent to the Linux boxes because that’s all they do.

Bonus to be able to use the desktop client as well, as everything in my workflow is Linux based.

I’ll certainly throw my hat into the ring as a potential Linux tester. I’m currently running VEAI in a KVM VM, because I’m not willing to sacrifice a high-end box to Windows.

(As an aside, if Linux isn’t supported, it would be really nice to at least support the product in virtual environments with GPU passthrough or vGPU. Playing the “we don’t support you because you’re using a VM” card for every minor problem really is pretty darn unfriendly.)
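For what it’s worth, checking whether a GPU is reserved for passthrough (bound to vfio-pci) or held by the host driver takes only a sysfs walk; a minimal sketch:

```shell
# Report which kernel driver is bound to each display-class (0x03xxxx)
# PCI device; "vfio-pci" means the card is reserved for passthrough.
found=0
for d in /sys/bus/pci/devices/*/; do
  [ -e "${d}class" ] || continue
  case "$(cat "${d}class")" in
    0x03*)
      found=1
      if [ -L "${d}driver" ]; then
        drv=$(basename "$(readlink "${d}driver")")
      else
        drv="none"
      fi
      echo "GPU $(basename "$d"): bound to $drv"
      ;;
  esac
done
[ "$found" -eq 1 ] || echo "no PCI display devices visible"
```

Inside the guest, of course, the card just shows up as an ordinary PCI device, which is exactly why blanket "no VMs" support policies feel arbitrary.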

As I read through this post, I see a lot of discussion about desktop market share, but I don’t think of VEAI as a desktop product (or at least not purely a desktop product). Upscaling a video can take multiple days; that’s definitely a server-type workload, which I would like to run on a proper server.

In the very early days, VEAI (Gigapixel for Video) only ran on CPU and Nvidia GPU. This time, for Linux, is it possible to release the VEAI ffmpeg filter for Nvidia GPU first?

Heck, I’d be happy with just Gigapixel working on Linux. VEAI can take so long that I have a dedicated PC for it, but with Gigapixel I often just want to upscale a few pics I found online, and having to switch to a different PC just to do that is a pain.

Some people claimed it works in WINE using the CPU, but I never managed to get it to. I’d be fine with it not using the GPU, if it worked at all.

try this

Actually, strike that: touch wood, Gigapixel is working in WINE. I had to set concrt140.dll to native in winecfg.

I would guess it is working without the GPU being involved, though, correct? I saw no way to make the GPU work with any of the products until now. VEAI reports using the GPU, but judging by the processing times being reported, it’s clearly running only on the CPU.
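One way to settle that question is to query nvidia-smi while a job runs; the query flags below are standard nvidia-smi options, and the command falls back cleanly on machines without the NVIDIA driver:

```shell
# Snapshot GPU utilisation and memory use; pass "-l 1" to nvidia-smi to
# refresh every second while a render is running.
out=$(nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv 2>/dev/null) \
  || out="nvidia-smi not available on this machine"
echo "$out"
```

Sustained high utilisation during a render is the tell; a few percent of background usage just means the desktop is drawing through the card.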

Actually, it appears to be using the GPU, but being only a GTX 1650 it’s not exactly fast. It won’t work at all if Face Recovery is turned on, so there must be another library involved I need to deal with or something.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.68.02    Driver Version: 510.68.02    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
| 23%   45C    P0    74W /  75W |   2540MiB /  4096MiB |     94%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2499      G   /usr/libexec/Xorg                1213MiB |
|    0   N/A  N/A      2873      G   /usr/bin/kwin_x11                   1MiB |
|    0   N/A  N/A      2897      G   /usr/bin/plasmashell               81MiB |
|    0   N/A  N/A      4020      G   ...alexatkin/firefox/firefox        9MiB |
|    0   N/A  N/A      9408      G   ...AI\Topaz Gigapixel AI.exe      676MiB |
+-----------------------------------------------------------------------------+

Curious. At least for VEAI, this is what my overrides look like, and the GPU is not used:

[Software\\Wine\\DllOverrides] 1652891865
#time=1d86ad599fb2846
"api-ms-win-crt-conio-l1-1-0"="native,builtin"
"api-ms-win-crt-heap-l1-1-0"="native,builtin"
"api-ms-win-crt-locale-l1-1-0"="native,builtin"
"api-ms-win-crt-math-l1-1-0"="native,builtin"
"api-ms-win-crt-runtime-l1-1-0"="native,builtin"
"api-ms-win-crt-stdio-l1-1-0"="native,builtin"
"api-ms-win-crt-time-l1-1-0"="native,builtin"
"atiadlxx"="disabled"
"atl100"="native,builtin"
"atl110"="native,builtin"
"atl120"="native,builtin"
"atl140"="native,builtin"
"concrt140"="native,builtin"
"msvcp100"="native,builtin"
"msvcp110"="native,builtin"
"msvcp120"="native,builtin"
"msvcp140"="native,builtin"
"msvcr100"="native,builtin"
"msvcr110"="native,builtin"
"msvcr120"="native,builtin"
"msvcr140"="native,builtin"
"nvcuda"="disabled"
"ucrtbase"="native,builtin"
"vcomp100"="native,builtin"
"vcomp110"="native,builtin"
"vcomp120"="native,builtin"
"vcomp140"="native,builtin"
"vcruntime140"="native,builtin"
"winemenubuilder"=""

I’m wondering about that nvcuda thing, though. What does your set-up look like in comparison?

I didn’t do anything special so it ONLY has:

[Software\\Wine\\DllOverrides] 1652108307
#time=1d863b53d245b82
"concrt140"="native,builtin"

I probably need to look into it further to try to get face enhancement working though.
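For comparing prefixes, the overrides can be pulled straight out of user.reg rather than opening winecfg. A small sketch that parses the section; it runs against a sample here-doc here, and pointing it at "$HOME/.wine/user.reg" (the default prefix path) would inspect a real prefix:

```shell
# Print the DLL overrides recorded in a Wine user.reg file, one per line,
# with the surrounding quotes stripped.
list_overrides() {
  awk '/DllOverrides/ {insec = 1; next}
       /^\[/          {insec = 0}
       insec && /^"/  {gsub(/"/, ""); print}' "$1"
}

# Sample fragment standing in for a real user.reg:
cat > /tmp/user.reg.sample <<'EOF'
[Software\\Wine\\DllOverrides] 1652108307
#time=1d863b53d245b82
"concrt140"="native,builtin"
EOF

list_overrides /tmp/user.reg.sample   # prints: concrt140=native,builtin
```

Diffing the output of this against another prefix makes it obvious which overrides differ, e.g. the "nvcuda"="disabled" entry above.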

I guess VEAI just works differently to Gigapixel. No shenanigans seem to put the load onto the GPU. Will have to try the image tools later this week.

Did you try just removing the "nvcuda"="disabled" line?

Yes; it doesn’t seem to really help. Output during a "clean-up" run takes 8 seconds per frame, significantly slower than native Windows:

$ nvidia-smi 
Fri Jun 24 14:17:59 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.05    Driver Version: 510.73.05    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
| N/A   52C    P5    20W /  N/A |    193MiB /  8192MiB |     34%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2928      G   /usr/lib/xorg/Xorg                 95MiB |
|    0   N/A  N/A     40320      G   ...opaz Video Enhance AI.exe       96MiB |
+-----------------------------------------------------------------------------+