Well, you’re not on macOS, so you likely don’t have direct experience with this?
It was exactly as I described above, and the most annoying part is that they would not even need a new model; they could simply revert to the old, fast one (which now works without artifacts again).
I did go through the hassle of copying the old model files from a Time Machine backup into new TVAI installations and got higher speeds that way for a while. But unfortunately I don’t have a complete set of the old fast models for all resolutions / upscale factors, so in the end I mostly gave up on this.
See e.g. here:
…or the many other posts in which I (and some others) have raised this “issue”. And, as so often with TL, after an initial “we will look into it” there was just deafening silence and nothing was ever actually done :-/
The same goes for quite a few other macOS bugs that almost never seem to be addressed here (the misbehaving cursor keys in the save dialog, the extremely buggy integration of TPAI into Apple Photos, …)
Indeed, I have both an Nvidia 4090 (24 GB) and an M4 Pro (max config), and the GPU on the Mac is essentially ‘not used’… compared to that, my Nvidia PC (drawing around 700 watts) runs at ‘turbo’ speed.
I’ve been a paying customer for many years, and to this day I don’t understand why Topaz is NOT making ANY effort to squeeze more Mac GPU power out of their app… instead, they have for years been busy ADDING features (Adobe After Effects plugins… DaVinci plugins… why?) without addressing the speed issue. Why not do a complete rebuild of the CORE (using Metal) to address this in the Mac version?
TVAI often uses the Neural Engine instead of the GPU (which is why you get the impression that the chip isn’t being used: most normal monitoring tools don’t report Neural Engine usage).
So it’s not as bad as it seems at first sight, and, of course, those NVIDIA 4090 cards have MUCH higher processing power than the Apple Silicon chips.
Still, using the Neural Engine instead of the GPU is not a good choice performance-wise in quite a few scenarios, and the Neural Engine often isn’t used to the max there (especially with those ultra models I have the impression that sometimes only one of its clusters is used). Plus: why not use both the GPU and the Neural Engine (which some models in fact do)? See the rough sketch below for how an app can pick between those compute units.
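For anyone curious about the mechanism: on macOS the CPU / GPU / Neural Engine choice is typically made per model via Core ML’s compute-unit setting. Here is a minimal Swift sketch of that knob; the model name is just a placeholder, and I obviously don’t know how TVAI’s own inference pipeline is wired up internally:

```swift
import CoreML

// Hypothetical example: load a compiled Core ML model and choose which
// compute units it is allowed to run on. "MyUpscaler.mlmodelc" is a
// placeholder name, not an actual TVAI model.
let config = MLModelConfiguration()

// .cpuAndNeuralEngine -> skip the GPU entirely (CPU + ANE only)
// .cpuAndGPU          -> skip the Neural Engine
// .all                -> let Core ML split the work across CPU, GPU and ANE
config.computeUnits = .all

do {
    let url = URL(fileURLWithPath: "MyUpscaler.mlmodelc")
    let model = try MLModel(contentsOf: url, configuration: config)
    print("Loaded \(model.modelDescription) with compute units \(config.computeUnits.rawValue)")
} catch {
    print("Failed to load model: \(error)")
}
```

With `.all`, Core ML itself decides how to partition the network across the CPU, GPU and ANE, which is one reason a given model can end up running almost entirely on a single engine.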
They did the switch-over to Nvidia’s version of the neural engine before they attempted the same thing on Apple chips. It went really well and they got a big speed increase, so I think they were hoping for the same kind of gain by taking the same approach on Apple chips.