Why is Apple Silicon so much slower than NVIDIA? (Starlight Mini on Apple Silicon)

Question:
With Apple Silicon, I’d expect NVIDIA to do somewhat better, but not this significantly better. Why is this? Is there more focus on NVIDIA? Are there any plans to improve the Apple Silicon version?
Also, any updates on when Starlight Mini will be available on Apple Silicon? An estimated timeline, etc.?

  1. If you don’t have an Ultra variant of Apple Silicon, even most budget NVIDIA chips really are quite a bit faster.

An M2 Ultra should be about RTX 4060–4070 speed, so you do get more bang for the buck with an NVIDIA-based PC (if you’re only looking at AI performance, at least).

That the Topaz software is sometimes even slower than that / than expected comes down to largely missing optimisations for that architecture.
To be fair, though, most companies in the AI world are mainly focused on NVIDIA, as they’re clearly the “leader of the pack”, and by quite some margin.

  2. Starlight Mini actually IS available for macOS in the recent beta and, albeit very slow, runs roughly at the expected speeds (even a bit better here, with the M2 Ultra at about RTX 4070 speed), but - and that’s a really big BUT - at the moment with absolutely abysmal quality.

Thank you for that. I’m curious whether it’s a software issue - i.e., something better optimizations could fix - or a hardware issue - the chips lacking certain things that the NVIDIA chips have. If so, what?
If it’s a software issue, it would be nice for Topaz to optimize their Mac versions further so there can be better performance.

The AI chips in the Macs are only as fast as the ones in the iPhones.

As I tried to explain in the post above, it’s actually a bit of both:

  • You need a top-notch “Ultra” variant of the Apple Silicon chips to be able to compete with midrange NVIDIA GPUs from a sheer processing-power standpoint.
    No Apple chip can even remotely come close to those top-notch NVIDIA GPUs.

  • Sometimes, due to missing optimisation in the software, you won’t even get this expected performance (e.g. the first “Recover” implementation on Apple Silicon was MUCH slower than what the chips are actually capable of).

P.S. The same goes for AMD’s GPUs, btw…

Well, totally ignoring clock speed, number of cores, and RAM speed/amount, this is only remotely true.

I actively ignored that because the mere fact that Apple installs phone hardware in $5,000 machines is bad enough.

Well, when that “phone hardware” wiped the floor with the Intel CPUs of that era, especially in terms of performance per watt, it was totally logical to do so.
Note that, due to the machines’ compact size, Apple previously used notebook variants of Intel’s CPUs for everything but the Mac Pro.

Besides, this is how your holy NVIDIA and Intel chips are built too, and just about any new chip:
massive numbers of individually not-extremely-fast cores to scale up performance…

I did mean the AI part, not the CPU, GPU, and memory.

Interesting. I know Apple just announced Neural Accelerators directly on the GPU for the new iPhone; hopefully (I imagine they will) they’ll add them to the next M-series chips. I think they’ll be the equivalent of the CUDA cores on NVIDIA GPUs.

Just confirming what others have said: a 90-second clip took 3 hours to render, and the quality was terrible. Nothing like the cloud-based Starlight Mini quality.

For reference:

Mac Studio
Apple M2 Max Chip
96 GB RAM
macOS Sequoia 15.6.1
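For scale, the render figure above works out to a large real-time slowdown; a quick back-of-the-envelope calculation, using only the numbers quoted in this thread:

```python
# Back-of-the-envelope slowdown for the figures quoted above:
# a 90-second clip taking about 3 hours to render locally.
clip_seconds = 90
render_seconds = 3 * 60 * 60  # 3 hours

slowdown = render_seconds / clip_seconds
print(f"Renders at {slowdown:.0f}x slower than real time")
# → Renders at 120x slower than real time
```

In other words, every second of footage needs about two minutes of processing on this M2 Max setup.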