Mac version works mostly well here: Mac Studio 2023, M2 Ultra 24/60, macOS 13.4.
One issue is the decreased performance in SD-to-FHD upscales when RAM is set to 100%, plus the strange behavior at the beginning of each conversion with that setting: Video AI first fills ALL RAM (64 GB) plus another 64 GB of swap space on disk (which takes about 2 minutes), then releases the swap/RAM, and only then starts the actual encode.
But that erratic behavior is not specific to the beta; it also occurs in the 3.3.1 release version.
This is the first thing I noticed when TVAI replaced TVEAI. Thankfully you can simply grab the bottom of the window and drag it up to clear space for the taskbar, which makes me believe it was done by design.
I see similar behavior with Artemis, at least. It seems the older model versions (Artemis High Quality v10, for example) have much smaller initial memory bursts than the recent v12 for the same video and settings.
The Mac M2 seems to handle it with swapping, but on an RTX 3090 Ti (Ubuntu) it simply errors out with CUDA Out of Memory for v12 (and v11), while running fine with the v10 model.
I was wondering about these initial memory bursts as well, as they seem to happen before actual processing begins.
One thing I find frustrating (and I would love to hear I'm doing something wrong) is setting different output sizes for videos in a batch. For example, I want one to be 2X, another to be 1920x1080, and the third to keep its original size. Selecting the three and starting the batch resizes all of them to whatever the last choice was. Is there any way to do what I want?
Regrettably, I'm still seeing the ghosting/blending issues I've reported for Iris v1 ever since the model was released. I really hope the devs are paying attention to this and the other Iris v1 issues and are actively working on them.
Original frame:
After Iris V1:
As you can see, the black areas between the blue lines are now bluish, and the black area on the white portion in the upper left has a greenish tint. Iris v1 (or v2) must not introduce frame blending/ghosting and MUST respect the input colors.
Yes, it doesn’t fix it or make it any better. I believe it’s an inherent problem with the Iris and Proteus models.
Here’s an example of the ghosting issue:
Original frame:
Iris v1:
Proteus v3:
The pink section moves across the screen from right to left, and the Iris v1 video leaves a “trail” where the original does not. The ghosting is reduced, but not 100% gone, with Proteus v3 (note the light grey object, which has a pink tint to it). Unlike Iris v1, the Proteus result isn’t as faded: the black bits stay black, rather than taking on the softened, almost bloom-like look of Iris v1.
@xuan.liu I hope issues like these, which have been brought up by more than just me, will be addressed.
On low-quality input, the Iris model produces creepy faces. It would be good if we could adjust the intensity of the face reconstruction, or if the model automatically detected when there isn't enough detail and reconstructed the face less precisely and sharply.
" Software developers will now be able to take advantage of work graphs, a function that will enable asynchronous shader utilization. This will provide developers with an easy API to dispatch work with GPUs instead of forcing the CPU for this task."
“Machine learning algorithms will also get a GPU boost through Wave Matrix Multiply Accumulate instruction support. Modern GPUs are capable of accelerating such instructions, which are mainly used by AI algorithms these days. These matrix-based calculations will speed up the most common operations, such as storing, rearranging, and duplicating the data across all threads in a wave.”
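For context, the operation a matrix multiply-accumulate instruction performs is D = A×B + C over a small tile of a larger matrix. Here is a minimal NumPy sketch of those semantics (the 16x16x16 tile shape and the random inputs are my own illustrative choices, not anything specified by the quoted article):

```python
import numpy as np

# Tile (fragment) dimensions; 16x16x16 is a common shape for
# hardware matrix units, chosen here purely for illustration.
M, N, K = 16, 16, 16

rng = np.random.default_rng(0)
A = rng.random((M, K), dtype=np.float32)  # input fragment A
B = rng.random((K, N), dtype=np.float32)  # input fragment B
C = np.zeros((M, N), dtype=np.float32)    # accumulator fragment

# One multiply-accumulate step: D = A @ B + C.
# A hardware WMMA instruction performs this whole tile operation
# at once, with the fragments spread across the threads of a wave.
D = A @ B + C

assert D.shape == (M, N)
```

On real GPUs the same pattern is repeated over K-slices of a large matrix, accumulating into C each step; the per-tile instruction is what the quoted passage says these GPUs accelerate.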