Topaz Photo AI uses less than 20% of Mac M1 Max GPU; open source alternatives utilise 100% of GPU

Topaz Photo AI uses less than 20% of Mac M1 Max GPU.

When upscaling individual images, this isn’t much of an issue.

However, some of my tasks involve batch upscaling Stable Diffusion frames for my Deforum videos. As an example, I needed to process 14,000 images, and Topaz took far too long to process even a fraction of those frames.

I ended up having to resort to an open source alternative that utilised 100% of the GPU and completed the task 5 to 10 times faster than Topaz.
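For reference, the batch step itself is nothing fancy; a rough sketch of the kind of loop I run over the rendered frames is below (the `upscale-cli` command and its flags are placeholders, not the actual tool I used):

```python
import subprocess
from pathlib import Path

# "upscale-cli" and its flags are hypothetical placeholders -- substitute
# whichever command-line upscaler you actually use.
UPSCALE_CMD = ["upscale-cli", "--scale", "2"]

src = Path("deforum_frames")        # rendered Stable Diffusion frames
dst = Path("deforum_frames_2x")
dst.mkdir(exist_ok=True)

for frame in sorted(src.glob("*.png")):
    out = dst / frame.name
    if out.exists():                # resume-friendly: skip frames already done
        continue
    subprocess.run([*UPSCALE_CMD, "-i", str(frame), "-o", str(out)], check=True)
```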

This was not ideal, as I prefer the quality of Topaz upscales; however, the open source app’s results were good enough for what I needed.

(I also own Topaz Video, but it was not well suited to Deforum-style videos compared with upscaling the frames individually and compositing the final result in DaVinci Resolve.)

Are there any changes I can make to force Topaz Photo AI to use 100% of the Mac M1 Max GPU?

If not, can this please be considered as a feature request?

What was the open source utility that you benchmarked performance against?

Last year Topaz told me they use the M1 Neural Engine in their products, but from what I can tell, they do not. DXO on Apple Silicon gives the choice of NE, GPU, or CPU: with DXO DeepPrime, using the NE is much faster than the GPU, and the GPU is much faster than the CPU. With Topaz on M1, all of the AI processor choices give the same results (Denoise AI, Sharpen AI).

@InfiniteRecursion and @henry.richardson here is a test that I just ran moments ago. Since I don’t know the exact parameters (image dimensions, TPAI settings), I can’t replicate either of your experiences exactly, but I can at least 100% assure you that TPAI and TVAI make full use of the CPU, GPU and Apple Neural Engines. How much each is used is largely determined by the settings in the app and the models being used, but generally speaking, on all jobs I do in both apps I see exceptional use of all three processing units.

That said, TPAI does more and higher quality work than most other tools out there. It doesn’t really have any plain upscaling options, like a straight nearest neighbour upscale, which is wickedly fast…but does nothing to improve image quality. However, for what it does I haven’t found anything faster yet. In my experiment I took 33 very high res photos and upscaled them 2x in TPAI 1.2.1. It took 182 seconds to upscale them all, which works out to about 5.5 seconds per photo. Pretty amazing for applying ML models and generating phenomenal quality images that are generally better than the originals.
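If anyone wants to sanity-check the numbers, the arithmetic is just this; the extrapolation to the 14,000-frame job mentioned above is purely back-of-envelope, since those SD frames are much smaller than my high res photos and would not process at the same rate:

```python
# Timing from the test above; the 14,000-frame count comes from the earlier
# post. SD frames are far smaller than these high res photos, so treat the
# last line as a rough upper bound, not a prediction for that workload.
photos, total_seconds = 33, 182
per_photo = total_seconds / photos                  # ~5.5 s per photo
frames = 14_000
print(f"{per_photo:.1f} s/photo")
print(f"~{frames * per_photo / 3600:.1f} hours for {frames} frames at that rate")
```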

…ok…that sounds like I’m trying to preach…NOT INTENDED! :laughing: I truly feel your pain, as I often do massive processing jobs with hundreds or thousands of 50 megapixel photos, and every microsecond counts. My goal was really just to show that the CPU, GPU and Neural Engines are indeed used to their max on pretty much any workflow in the app. :blush::+1:

TPAI Upscale on M1 Max (animated GIF):


I should also add that often, when using system performance monitoring tools, the default sampling rate is quite slow. I forgot to double check, but I believe I’m running asitop with updates every one second. I can’t recall what Apple’s Activity Monitor default sampling rate is, but I think it might be a sample every two seconds. The reason I mention this is that the Neural Engines often complete their work so fast when working with individual images like this that you might not even see them spike as they carry out the work. The GPU can be the very same.
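To make that concrete, here is a tiny toy simulation (made-up numbers, not measurements from any real tool) of a unit that spikes to 100% for a 0.3 second burst once per photo, and a monitor that polls it at different intervals; the slower the polling, the more bursts never show up on the meter at all:

```python
import random

# Toy model: one short 100% burst per ~5 s "photo", at a random offset each
# time, sampled by a monitor at a fixed polling interval. Counts how many
# bursts ever coincide with a sample. All numbers here are illustrative only.
def run(interval, duration=120.0, period=5.0, burst=0.3, seed=1):
    rng = random.Random(seed)
    bursts = [i * period + rng.uniform(0, period - burst)
              for i in range(int(duration / period))]
    samples = [i * interval for i in range(int(duration / interval))]
    seen = sum(any(b <= t < b + burst for t in samples) for b in bursts)
    return seen, len(bursts)

for interval in (0.1, 1.0, 2.0):
    seen, total = run(interval)
    print(f"polling every {interval}s: {seen}/{total} bursts visible on the meter")
```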

In the video (animated GIF) I posted you can see how the CPU, GPU and ANEs drop down to almost nothing for a second or two on each photo processed, while the system is likely reading in the file and writing it out to disk. If the samples are spaced far enough apart, that can make it look like the system is just idling along, because the monitor picks up on those quiet times rather than the peak system utilization times.

I called into question how much the CPU, GPU and ANEs were being used by pretty much ALL the apps I used on any of my M1 (Standard, Max and Ultra) machines, until I dug into how to use powermetrics, a command line tool that ships with macOS and gives you access to data like you see in my video (GIF). I use the command line tool asitop, which actually uses powermetrics under the hood, to get access to the most accurate and detailed data you’ll ever see. For example, I don’t know of many tools that show users the Neural Engine activity loads, as that data generally isn’t available.
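If you want to pull the raw numbers yourself without installing anything, a rough sketch of wrapping powermetrics from Python is below. It has to run as root, and the sampler names and the exact “… Power: N mW” output labels are assumptions that can vary between macOS versions, so adjust the filter to whatever your machine prints:

```python
import subprocess

# Run Apple's powermetrics for a handful of fast samples and print the
# power-related lines (CPU/GPU/ANE). Run as root, e.g.: sudo python3 powerpeek.py
# Assumption: the cpu_power/gpu_power samplers cover CPU, GPU and ANE power on
# Apple Silicon and report lines ending in "mW"; tweak for your macOS version.
cmd = [
    "powermetrics",
    "--samplers", "cpu_power,gpu_power",
    "-i", "500",    # sample every 500 ms
    "-n", "10",     # take 10 samples, then exit
]

proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
for line in proc.stdout.splitlines():
    if "Power" in line and "mW" in line:
        print(line.strip())
```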

Anyhow, as you can tell, I love really digging in and measuring things like this because it is VERY DIFFICULT for normal users to actually see the detail necessary to get answers to questions like the ones asked here. :blush:


That is excellent information! Thank you very much. For more than a year I have been trying to find this sort of info about the Neural Engine with regard to Topaz. I like your animated GIF of asitop.
