The AI processor is still being set to -2 (Auto) on my M1 Mac Studio in versions 5.2.2 and 5.2.3 despite this supposedly being fixed - by implication of the line “Fixed AI Processor reverting to Auto” in the 5.2.2 release header.
Andy
Could you send me the app’s logs in v5.2.3 so I can take a look please?
To gather logs, please select Help > Logging > Get Logs for Support and attach the zip file to your reply.
If you prefer to share them privately, feel free to email them to help@topazlabs.com; please include a link to this forum post!
Andy
Hi Andy, this bug was resolved in 5.2.3 but was present in a few previous versions. Can you double-check that you are on the latest version?
With the logs, we would be able to confirm this, and if it is still happening on certain devices they will help our team resolve it in a future update.
Hi Margaux, I have fixed this myself by renaming the Preferences file so a fresh one is generated. It is now using Device 0.
Andy
There is plenty more evidence this hasn’t been fixed in the latest benchmark results. There are several instances of Mac (Intel and Apple Silicon) and Windows users on version 5.3.1 showing device: -2 (Auto). It is possible some Windows users haven’t even realised the device is set to Auto, or there is the remote chance they set it to Auto on purpose, but I very much doubt it. However, there are two Apple Silicon Mac users, “SALVideoAI” and “Tomcat2048”, running v5.3.1 with Device: -2, which is impossible to achieve through the UI on these machines.
Thanks.
Andy
Hi Andy, thanks for noticing and pointing that out. The Intel and Windows machines likely have the setting set to Auto; however, you are correct that it should not happen on M-chip computers. I have forwarded this over to the engine team to look into.
Edit: just heard back from the engine team: -2 and 0 are the same for M-series chips. They will streamline it in a future update so they are all the same, but this should have no effect on processing.
Hi Margaux
Firstly, regarding your edit: I can assure your engine team and everyone else that 0 and -2 are absolutely not the same for M-series Macs. When set to -2 (Auto), performance is reduced in every case I have tried when working with standard-definition sources.
For example, on my M2 MacBook Air, an Artemis 2x upscale of a 720x528 source with max memory at 10%:
device: -2 = 12.5 fps
device: 0 = 17.2 fps
One of the problems with device -2 on M-series Macs is that you can’t force the GPU to be used instead of the Neural Engine via the max memory setting, like you can with device 0.
Regarding how we got the -2 in the first place, I’ve managed to reproduce the sequence that leads to this situation. The initial culprit was v5.2.0. I just installed that version again and it does indeed set the GPU device to -2. I see that version is no longer listed but is still available to download by constructing the corresponding web address. I then did the in-app update to the latest version, 5.3.1, and the GPU device is still -2. I even installed version 5.2.2 from the DMG in case that might have fixed it (as it was supposed to), but no, it’s still at -2.
So it appears that those M-series Mac users with GPU device -2 were unlucky enough to have installed v5.2.0 at some point, and no subsequent update I tried reverts it back to 0. There are two options currently: 1) delete the preferences file, or 2) use an app capable of editing binary plist files (e.g. Xcode) to edit the file yourself (a scripted sketch of option 2 follows below). The file is:
~/Library/Preferences/com.topazlabs.Topaz Video AI.plist
and look for gpuDeviceID, which will be -2. Change it to 0 and save.
Or otherwise wait until a proper fix by Topaz is implemented!
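For anyone who would rather script option 2 than install a plist editor, here is a minimal sketch (not from Andy's post) using Python's standard-library plistlib, assuming the path and gpuDeviceID key quoted above. Quit Topaz Video AI before running it, since macOS caches preferences and a running app may overwrite the change.

```python
# Hypothetical helper (not part of the thread): reset Topaz Video AI's
# gpuDeviceID preference from -2 back to 0 by rewriting the binary plist.
# Quit Topaz Video AI first, since macOS caches preferences via cfprefsd.
import plistlib
from pathlib import Path

PLIST = Path.home() / "Library/Preferences/com.topazlabs.Topaz Video AI.plist"

with open(PLIST, "rb") as fp:
    prefs = plistlib.load(fp)  # reads both binary and XML plists

print("current gpuDeviceID:", prefs.get("gpuDeviceID"))

if prefs.get("gpuDeviceID") == -2:
    prefs["gpuDeviceID"] = 0
    with open(PLIST, "wb") as fp:
        plistlib.dump(prefs, fp, fmt=plistlib.FMT_BINARY)
    print("gpuDeviceID set to 0 - restart Topaz Video AI to pick it up")
```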
Thanks.
Andy
Hi Andy,
Are you able to share some benchmark results? Apple’s M chips are treated as 1 unified chip.
Here are some screenshots from my system in 5.2.1 and 5.3.0 with minor differences. It’s possible I had more or fewer apps running in the background while these ran.
You can definitely go ahead and delete the plist file to see if it reverts it back to 0, a new one will be created.
Hi Margaux, here are some benchmarks comparing the GPU device ID set to 0 and then to -2, first for standard definition and then for HD. You will notice the biggest difference is for Artemis with a standard-definition source, which is the combination I mostly use.
Within the M-series chips are the GPU cores and the Neural Engine, either of which can be used for machine learning via Core ML. When the GPU device is set to -2 (Auto), the Neural Engine is the preferred compute device regardless of the memory setting. However, when the GPU device is set to 0, the max memory setting affects which compute device is used. Because I find the GPU cores are almost always significantly faster than the Neural Engine, I purposely set the memory to the minimum 10% to force the GPU cores into use. This trick doesn’t work when the GPU device ID is set to -2.
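To make that mechanism concrete: the sketch below is not Topaz’s actual implementation, just an illustration of how Core ML exposes the compute-unit choice, using Apple’s coremltools Python package and a placeholder model path.

```python
# Illustration only - not Topaz's code. It shows the Core ML compute-unit
# selection being described above, via Apple's coremltools package.
# "model.mlpackage" is a placeholder path for any Core ML model.
import coremltools as ct

# Let Core ML choose the device itself (it may prefer the Neural Engine),
# roughly what device -2 (Auto) behaves like in practice:
auto_model = ct.models.MLModel("model.mlpackage",
                               compute_units=ct.ComputeUnit.ALL)

# Restrict inference to CPU + GPU so the Neural Engine is never used -
# the effect achieved with device 0 and max memory at 10%:
gpu_model = ct.models.MLModel("model.mlpackage",
                              compute_units=ct.ComputeUnit.CPU_AND_GPU)
```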
I hope this helps. Thanks.
Andy
Topaz Video AI v5.3.1
System Information
OS: Mac v14.0601
CPU: Apple M1 Max 32 GB
GPU: Apple M1 Max 21.333 GB
Processing Settings
device: 0 vram: 0.1 instances: 1
Input Resolution: 768x576
Benchmark Results
Artemis 1X: 24.64 fps 2X: 17.48 fps 4X: 07.04 fps
Iris 1X: 18.78 fps 2X: 10.42 fps 4X: 03.54 fps
Proteus 1X: 36.25 fps 2X: 14.73 fps 4X: 06.20 fps
Topaz Video AI v5.3.1
System Information
OS: Mac v14.0601
CPU: Apple M1 Max 32 GB
GPU: Apple M1 Max 21.333 GB
Processing Settings
device: -2 vram: 0.1 instances: 1
Input Resolution: 768x576
Benchmark Results
Artemis 1X: 18.35 fps 2X: 11.50 fps 4X: 03.27 fps
Iris 1X: 17.02 fps 2X: 11.22 fps 4X: 01.84 fps
Proteus 1X: 17.02 fps 2X: 13.24 fps 4X: 03.04 fps
Topaz Video AI v5.3.1
System Information
OS: Mac v14.0601
CPU: Apple M1 Max 32 GB
GPU: Apple M1 Max 21.333 GB
Processing Settings
device: 0 vram: 1 instances: 1
Input Resolution: 1920x1080
Benchmark Results
Artemis 1X: 08.38 fps 2X: 05.01 fps 4X: 01.85 fps
Iris 1X: 05.61 fps 2X: 03.32 fps 4X: ERR fps
Proteus 1X: 07.13 fps 2X: 05.17 fps 4X: 01.76 fps
Topaz Video AI v5.3.1
System Information
OS: Mac v14.0601
CPU: Apple M1 Max 32 GB
GPU: Apple M1 Max 21.333 GB
Processing Settings
device: -2 vram: 1 instances: 1
Input Resolution: 1920x1080
Benchmark Results
Artemis 1X: 05.82 fps 2X: 03.54 fps 4X: 01.16 fps
Iris 1X: 05.75 fps 2X: 03.43 fps 4X: ERR fps
Proteus 1X: 08.27 fps 2X: 05.06 fps 4X: 00.80 fps
Typical “tech debt” issue that illustrates what “many” (yes, many considering the posts, so me + n) of us think:
Bad fork syncing and versioning not properly handled.
The device 0/-2 issue is cross-platform, nothing to do with Mac.
Thanks for that. Agreed this is a cross-platform issue, but I wanted to emphasise two points:
1) Apple Silicon users are even more inconvenienced by this bug, as there is no way within the UI to work around it. On Windows and Intel Macs, at least the user can change (back) from Auto to GPU n; Apple Silicon users can’t. The only viable workaround is to edit the preferences “plist” file, which is binary, so it needs a plist editor (which doesn’t come as standard on a Mac). I happen to have Xcode installed so that wasn’t a major issue for me, but that’s beside the point.
2) Devs at Topaz say it doesn’t make a difference on Apple Silicon Macs, which is concerning. Hopefully my benchmark examples speak for themselves and will convince the Topaz devs otherwise. I’ll explain in a separate reply the reason for the mostly similar figures, within expected variation, in Margaux’s benchmark example.
Thanks.
Andy
Hi Margaux
Regarding your benchmark comparison - would you mind doing another with max memory set to 10% please? I think you might then see the significant performance benefit with device 0, as in my example. Perhaps you’d like to add the 10% memory setting results (with device 0) to the general Benchmarks section.
I suspect what was happening in your case is that most of the processing was done by the Neural Engine in both tests, due to the max memory being set at 100% and having so much memory available. You can always verify this by using the Activity Monitor with ⌘-4 to open the GPU History window. If that is showing minimal activity during the benchmarks, particularly for Artemis and Proteus, then it’s the Neural Engine being used. (Some models may make use of the GPU cores anyway, regardless of the memory setting.)
Thanks.
Andy