GPU Selection Bug during SLS

Due to physical size constraints, I have an RTX 2070 8GB in my main PCIe x16 slot and an RTX 4080 16GB in the second. I’ve been using Video AI 7.x.x to run Starlight Mini on the 4080 and Proteus/Iris on the 2070 without any issue.

Today, I tried to experiment with Starlight Sharp in Video 1.1.0. After a clean reboot, I started Video 1.1.0, selected the RTX 4080 manually (not auto), and attempted to export a sample clip.

I noticed that both the 2070 and the 4080 ramped up to very high memory and core usage. Very short exports (~1 min) from the beginning of the clip work, even though the GPU activity looks strange. Longer exports always fail.

Since the 2070 is heavily loaded even when I run only Starlight Sharp and nothing else, I suspect a GPU delegation bug.

If you could look into it, it would be great. If more info is needed, let me know.

Can you reach out to support and send the app’s logs and the link to this forum post? help@topazlabs.com

An update: I have since upgraded from 1x 2070 + 1x 4080 to 2x RTX 4080 on one computer.

When a Starlight Sharp file is being exported, both GPUs are running very high (95%+), just like before.

The crash from before is likely because when I told Topaz to use my 4080, both my 4080 (16GB) and my 2070 (8GB) ran the same job. Since I disabled the driver’s fallback to system RAM once VRAM is full, the job crashed on my 2070 with only 8GB of VRAM.

Now that I have 2x 4080 (16GB each), the SLS export does not crash, and the output file is slowly built frame by frame. However, both GPUs are occupied, which means wasted power, and I cannot use the second card for anything else.

With the export not crashing, I do not have logs to show. If you want me to try something, let me know. I will also send an email to the address you gave.

Thank you.

Hi,

In your settings, how did you set GPU processing: Auto, Single, or Multi?

In my dual RTX 3080 Ti setup, rendering only loads the GPU selected as the main display, regardless of how I set it. I wish both GPUs could share the load, render the same file in parallel, and stitch the parts back into a complete file when done.

Confirmed: on Windows, when using Video AI pipelines involving DXGI/DWM and CUDA–graphics interop, the GPU driving the display is effectively required to be the same GPU used for rendering.

While CUDA compute alone can run headless, any pipeline involving presentation, shared surfaces, or DWM composition implicitly forces renderer and display onto the same GPU.
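To illustrate the headless-compute side of this: a pure CUDA process can be pinned to one physical card with the standard `CUDA_VISIBLE_DEVICES` environment variable, so only that device exists as far as the CUDA runtime is concerned. A minimal sketch, where the child process just echoes the variable instead of running a real render (the actual render command depends on your tool, and, as noted above, this does nothing for a pipeline that goes through DXGI/DWM):

```python
import os
import subprocess
import sys

def run_on_gpu(cmd, gpu_index):
    """Launch `cmd` with only one CUDA device visible.

    CUDA_VISIBLE_DEVICES hides every other card from the CUDA runtime,
    so a pure compute process cannot spill work onto a second GPU.
    It does NOT help when rendering is forced through the display GPU
    by DXGI/DWM, which is exactly the situation discussed here.
    """
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# The child just prints which device index it was allowed to see.
result = run_on_gpu(
    [sys.executable, "-c", "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
    gpu_index=1,
)
print(result.stdout.strip())  # 1
```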

And yes, it’s a mess: an outdated architecture breaking modern headless compute pipelines.
The key would be Linux, but it’s well known around here that Linux support has been forsaken.


Thanks for confirming that.

Which architecture is the outdated one that breaks headless compute pipelines? Whose responsibility is it: Windows 11 or Topaz?

My mitigation right now: connect GPU1 to the monitor and GPU2 to a dummy DP plug. Then start Topaz, load the first video, set GPU1 as the main display in the Windows display settings, and start rendering. Then load the second video, switch the main display to GPU2, and render on GPU2 while GPU1 continues. That’s the only way I can do parallel rendering in Topaz. Definitely not intuitive, but also not the end of the world.
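For tools that do honor CUDA device selection, this display-switching dance wouldn’t be needed: you could launch one render process per card, each pinned with `CUDA_VISIBLE_DEVICES`, and let them run in parallel. A sketch of that idea, with placeholder child processes standing in for the real render commands (which I won’t guess at here):

```python
import os
import subprocess
import sys

def launch_pinned(cmd, gpu_index):
    """Start a process that can only see one CUDA device."""
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    return subprocess.Popen(cmd, env=env, stdout=subprocess.PIPE, text=True)

# Placeholder "renders": each child reports which device it was given.
jobs = [
    launch_pinned(
        [sys.executable, "-c",
         "import os; print('render on GPU', os.environ['CUDA_VISIBLE_DEVICES'])"],
        gpu_index=i,
    )
    for i in (0, 1)
]
outputs = sorted(p.communicate()[0].strip() for p in jobs)
print(outputs)  # ['render on GPU 0', 'render on GPU 1']
```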


I selected Single GPU and specified one, but in SLS all GPUs work in parallel to produce the output (much faster, though not twice as fast). For SLM, I guess your workaround will have to do.

CUDA is not the problem: TVAI relies on an obsolete Windows pipeline (DWM/DXGI), which forces the display GPU and ruins multi-GPU compute.

At least this time, Topaz is not to blame. They are clearly suffering from this situation themselves.