I just purchased Topaz Video AI. I am about to purchase a new Mac Studio, and was wondering about the difference in Topaz Video AI performance on the M1 Max vs. the M1 Ultra. For most of my video work, the Ultra is definitely overkill. But I am planning on running a lot (26) of older 90-120 minute documentaries through Topaz to increase resolution from the original SD video. How much of a difference would it make going from the Max’s 10/24/16 cores to the Ultra’s 20/48/32 cores? Thanks for any thoughts.
Hi - as an owner of a Mac Studio M1 Max (10-core CPU, 24-core GPU), my immediate thought was: don’t bother spending at least twice the money on an M1 Ultra for upscaling SD to HD. I’m extremely disappointed with the combination of M1 Macs and SD upscaling. Other users or Topaz might suggest processing several videos in parallel for better performance. However, since version 3.1.x there is very little, if anything, to be gained on an M1 Mac by running SD upscales in parallel, whereas in versions 3.0.x (which are still available) parallel SD upscaling gave a significant gain in overall performance. Anyway, keep a lookout for M1/M2 SD upscaling in the User Benchmark Results section. You’ll see my Studio’s measly performance for SD input compared to HD input (about 25% slower per pixel). This performance penalty for SD appears to be unique to M1/M2 Macs.
As an M1 Pro owner, I am disappointed to hear that v3.1.x is a step in the wrong direction. Is there any informed reason why that is the case, and are any improvements expected for M1 and M2 owners soon?
Just out of curiosity, does anyone know if VEAI’s processing is somehow aided by the ML cores on the M-Chips?
Just to clarify, there was a significant performance gain for single processes from 3.0.x to 3.1.x (at least 60%). However, SD upscaling is far from being optimised on M1/M2, and I’ve no idea if 3.2.x will address this. One very encouraging sign comes from the results I’ve found when first “stacking” SD videos using ffmpeg’s vstack (or hstack) filter to create the higher input resolution favoured by TVAI on Apple silicon. For example, if I first stack 6 x 720x576 clips and then upscale, I get 4.2 fps, which works out to 6 x 4.2 = 25.2 fps of effective throughput per SD video. The best I can get with a single 720x576 clip is 16.5 fps, so stacking gives at least a 50% performance gain. Perhaps Topaz could do that within TVAI…
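For anyone wanting to try the stacking trick, here’s a rough sketch of the stacking step. The filenames, the six-pane count, and the lossless FFV1 intermediate are my own assumptions, not anything Topaz prescribes; the script only prints the ffmpeg command so you can check it before running it:

```shell
#!/bin/sh
# Sketch of the stacking step: build an ffmpeg command that vertically
# stacks six 720x576 SD clips into one 720x3456 frame before upscaling.
# clip1.mp4 ... clip6.mp4 are placeholder filenames.
N=6
inputs=""
pads=""
i=1
while [ "$i" -le "$N" ]; do
  inputs="$inputs -i clip$i.mp4"
  pads="$pads[$((i - 1)):v]"
  i=$((i + 1))
done
# FFV1 keeps the stacked intermediate lossless ahead of the TVAI pass.
cmd="ffmpeg$inputs -filter_complex '${pads}vstack=inputs=$N[v]' -map '[v]' -c:v ffv1 stacked.mkv"
echo "$cmd"
```

Note that vstack requires all inputs to share the same width and pixel format, so this only works for clips from the same source resolution.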
I’m so curious about the Mac Studio Ultra and Video Enhance AI, since it has double the neural cores; would that make it faster? Someone said that with the Ultra, Video Enhance AI was faster than a PC with a 3090.
Here are the benchmark results from my M1 Max:
Topaz Video AI v3.2.2
System Information
OS: Mac v13.0301
CPU: Apple M1 Max 32 GB
GPU: Apple M1 Max 21.333 GB
Processing Settings: device: 0 vram: 1 instances: 1
Input Resolution: 1920x1080
Benchmark Results
Artemis 1X: 8.46 fps 2X: 5.47 fps 4X: 2.20 fps
Proteus 1X: 8.31 fps 2X: 5.33 fps 4X: 2.06 fps
Gaia 1X: 2.56 fps 2X: 1.92 fps 4X: 1.45 fps
4X Slowmo Apollo: 7.76 fps Chronos: 3.03 fps Chronos Fast: 5.20 fps
It would be great if someone with a Mac Studio could run the benchmark within the app and post the results.
Someone on a different forum posted their results from the M1 Ultra and it’s not a huge difference from the M1 Max.
Topaz Video AI v3.2.2
OS: Mac v13.0301
CPU: Apple M1 Ultra 64 GB
GPU: Apple M1 Ultra 48 GB
Processing Settings: device: 0 vram: 1 instances: 1
Input Resolution: 1920x1080
Artemis 1X: 13.02 fps 2X: 7.74 fps 4X: 2.87 fps
Proteus 1X: 12.07 fps 2X: 6.86 fps 4X: 2.37 fps
Gaia 1X: 4.87 fps 2X: 3.16 fps 4X: 2.34 fps
4X Slowmo Apollo: 8.48 fps Chronos: 4.12 fps Chronos Fast: 6.11 fps
If I read this correctly, there is not much sense in buying a Studio Ultra…
So this also means (my projection) that the software does not (yet) use the full CPU/GPU performance these machines have.
Is this correct?
Judging by the power usage, Topaz does use most of the resources on the M Ultra chips. It’s drawing up to 180 W here on the M2 Ultra, which is near the maximum, and monitoring tools show the GPU at 98-100% usage the whole time. This is with the Iris model; with other models it could be different.
There is still a slight gain in overall fps if you do two encodes at the same time: from somewhat over 12 fps to 7.x + 7.x = 14.x fps when upscaling SD to FHD with Iris.
Some things still seem a bit rough/unoptimized, though: performance is not fully even across different models and resolutions, and there is the astonishing performance hit on SD upscales if you use the recommended standard setting of 100% RAM usage. So yes, there still seems to be some room for improvement.
Hi. I think it depends on what you are upscaling. From my experience, Topaz is getting far below the full potential out of Apple Silicon when upscaling SD sources (e.g. a 2x upscale of 768x576). It would be great if Topaz would tell us why. For example: is it Apple’s fault for not optimising their libraries/APIs/SDKs for SD sources? If not, is it wrong to assume it’s somehow the fault of Topaz, through lack of will, knowledge or resources?
Either way, the owners of Apple Silicon Macs and TVAI are disadvantaged when they want to upscale SD. After all, we’ve paid the same price for the software but are getting far below optimal performance through the fault of Apple and / or Topaz.
So to summarise, any Apple Silicon Mac, and especially an Ultra, is wasted if you’re mostly upscaling SD. If you’re mostly upscaling HD sources (or higher), that’s a different matter. To be fair, I think TVAI does appear to make far better use of the Apple Silicon resources in that case.
To that end, my own work-around is to “stack” SD clips together before upscaling (then “unstack” them afterwards), resulting in about 60% better overall performance. I do this using CLI scripts. There is no reason why the Topaz devs couldn’t do this in their source code if they chose to, but it appears they have other priorities.
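The unstacking half of that workflow is just crop arithmetic. Here is a sketch, assuming a six-pane vertical stack upscaled 2x (so each 720x576 pane becomes 1440x1152 and pane i starts at y = i x 1152); upscaled.mkv and the output names are placeholders, and the script prints the ffmpeg commands rather than running them:

```shell
#!/bin/sh
# Sketch of the "unstack" step after a 2x upscale of a six-pane vertical
# stack: each 720x576 pane is now 1440x1152, so pane i is cropped out
# at offset y = i * 1152. Filenames are placeholders.
PANES=6
W=1440
H=1152
i=0
while [ "$i" -lt "$PANES" ]; do
  y=$((i * H))
  cmd="ffmpeg -i upscaled.mkv -vf 'crop=$W:$H:0:$y' -c:v ffv1 clip$((i + 1))_hd.mkv"
  echo "$cmd"
  i=$((i + 1))
done
```

The crop filter takes width:height:x:y, so only the y offset changes from pane to pane in a vertical stack.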
Hi. I’ve found that on Apple Silicon, the easiest way to get better performance when upscaling two SD sources concurrently is to run one at 10% memory and the other at 90% or so (the exact values don’t actually matter). In effect, you’re getting better performance by running one upscale on the CPU and the other on the GPU. If you run both on the CPU (memory setting at 90%) or both on the GPU (memory setting at 10%), it won’t be as fast.
This is only a quick and easy trick - it’s not the solution to get the highest possible performance for SD upscaling. That’s for Topaz / Apple to sort out. For now, as I’ve mentioned elsewhere, the more complex work-around I use is clip stacking.
Indeed, and for that reason alone I decided NOT to purchase the M2 Mac Studio Ultra and instead went, for a lot less money, with a maxed-out PC config (128 GB RAM) with a 24 GB 4090 card and the latest Intel CPU.
I will have it in around two weeks’ time and will of course report back here with my findings.
Depending on how much you use the rig and energy costs, you might have the same costs in 2-3 years, plus a really loud environment.
A PC with that config costs about 2/3 of a Mac Studio M2 ultra here.
The M2 Ultra is at roughly 10 W for desktop tasks / web browsing, and maxes out at 200 W (3 TVAI encodes and 2 HandBrake tasks at once).
If you use that machine almost exclusively for TVAI, then that would make sense. But then maybe you don’t need to heat the room in winter.
No loud environment; liquid cooling will handle this. And yes, you have a point: the M2 Mac Ultra draws very little power, but… it needs to run much longer to finish the tasks. Even doing parallel VEAI tasks, you have to wait hours (depending on the workflow…) for each VEAI job.
And 10 W is really not correct: when Topaz VEAI is fully engaged on the Ultra, the power also goes up to 250-300 W…
Anyway, I studied several cases (and tested in shops) with different Mac and PC models and the PC (with a 4090) is the clear winner IF and only IF Topaz is the only app running on this machine.
Anyway this is my view and this is for feedback to this forum, I’m not stating that my case is the correct one, just that this will work for my workflow…
Of course. I explicitly stated that the 10 W figure is for desktop use (web browsing, Word, Excel, even RDP), and there, from my experience, a PC is still well over 100 W (I don’t have first-hand experience with the newest Intel generation, though).
In my usage scenario (rig on 24/7 for remote access, mostly used for RDP and office tasks, with heavy tasks like VAI to HandBrake only once in a while), I save hundreds of euros per year compared to a PC.
This is why I said the PC is a great option mainly if you nearly only use it for heavy tasks as TVAI.
But still, the Ultra maxes out at 200 W at full load (all CPU and all GPU), while the PC will draw at least 600 W in that scenario. So even if the PC is twice the speed, the Mac is still more power efficient.
On the other hand, if time is an issue, the PC will always win. And if you’re even remotely into gaming, the Mac isn’t a viable choice anyway…