I “discovered” a behaviour which I cannot really understand.
I would expect that processing two files of the same length in parallel would take roughly twice as long as processing one file on its own, but as far as I can tell, processing them in parallel saves me up to 1/3 of the time.
Processing one file takes roughly 6 hours.
Processing two in paralel takes roughly 10 hours.
It seems like the processing is unable to fully utilize the GPU for a single process.
Isn't there a way to improve resource usage for a single process?
Hi, having also discovered that a while ago, I split a film into 4 files using the free LosslessCut app, process those 4 files in parallel and then re-assemble the upscaled segments. I have a base spec Mac Studio with 32 GB RAM and 4 is the most I can do at the same time. Those fortunate enough to have a Mac Studio with 64 or 128 GB RAM could presumably process more in parallel - any feedback on that would be most interesting…
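The split / parallel / re-assemble workflow above can also be sketched on the command line with ffmpeg instead of LosslessCut (both cut losslessly on keyframes). This is only a rough sketch: the filenames are hypothetical, and `tvai_upscale` is a placeholder for however you invoke your upscaler on a single file, not a real command.

```shell
#!/bin/sh
IN=film.mp4
SEG=1800   # segment length in seconds (30 min); 4 segments for a 2-hour film

# 1. Split losslessly on keyframes into numbered segments
ffmpeg -i "$IN" -c copy -map 0 -f segment -segment_time "$SEG" \
       -reset_timestamps 1 part_%02d.mp4

# 2. Upscale the segments in parallel (placeholder command),
#    then wait for all background jobs to finish
for f in part_*.mp4; do
  tvai_upscale "$f" "up_${f}" &
done
wait

# 3. Re-assemble the upscaled segments with the concat demuxer
for f in up_part_*.mp4; do echo "file '$f'"; done > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy film_upscaled.mp4
```

Note that `-segment_time` cuts at the nearest keyframe, so segment lengths are approximate; since the concat step copies streams without re-encoding, the reassembled file should match the quality of the upscaled parts.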
I eagerly await the time when TVAI is much more efficient on a single file (as we know is possible from manually contrived parallel processing). I can’t understand why it’s so inefficient at the moment.
You are correct. Some models use more of the GPU than others, so you do not gain quite as much running in parallel. On my machine, for Artemis, a 25 minute video takes about 30 minutes to upscale to FHD. Doing two at a time takes about 40 minutes. The Apollo model on FHD slomo 2.5x takes about 5 hours but about 7 hours with two.
I have to add: I only use GAIA (HQ), to upscale either from 720p to 1080p/4K or from 1080p to 4K, just upscaling all the streaming content for my local media library. For this type of content, GAIA yields the most solid results for me, so I cannot say anything about other models.
But hey, thanks for confirming that, so it’s not my “fault”
"Be careful with more than two queues if hardware-accelerated GPU scheduling is disabled. Software-scheduled workloads from more than two queues (copy queue aside) may result in workload serialization."