Gigapixel v8.2.0

Errors on M1 Max:

[2025-02-06 13:53:29.870, 51.00 μs] [16c7b3000] Info | [AIE] resizeLoadTime: 312 ms
[2025-02-06 13:53:29.870, 740.00 μs] [16c7b3000] Info | [AIE] gamma: 1.000000
[2025-02-06 13:53:29.870, 11.00 μs] [16c7b3000] Info | [AIE] param1: 0.109668
[2025-02-06 13:53:29.870, 7.00 μs] [16c7b3000] Info | [AIE] param2: 0.736902
[2025-02-06 13:53:29.870, 7.00 μs] [16c7b3000] Info | [AIE] param3: 0.000488
[2025-02-06 13:53:29.870, 136.00 μs] [16c7b3000] Info | [AIE] Updated model params
[2025-02-06 13:53:29.871, 764.00 μs] [16c7b3000] Info | [AIE] Skipped 0 block(s) out of 54 block(s)
[2025-02-06 13:53:29.897, 25.97 ms] [16c7b3000] Info | [AIE] Creating cache
[2025-02-06 13:53:30.265, 367.68 ms] [16c7b3000] Info | [AIE] resizeProcessTime: 395 ms
[2025-02-06 13:53:30.270, 5.06 ms] [16c7b3000] Info | [AIE] [XLResizeEnhancement] Encode skipped
[2025-02-06 13:53:30.270, 118.00 μs] [16c7b3000] Info | [AIE] [XLResizeEnhancement] Refine skipped
[2025-02-06 13:53:30.270, 33.00 μs] [16c7b3000] Info | [AIE] [XLResizeEnhancement] Begin decode
[2025-02-06 13:53:30.289, 19.04 ms] [16c7b3000] Info | [AIE] Model: rxl_decoder1 Device: -2
[2025-02-06 13:53:30.289, 75.00 μs] [16c7b3000] Info | [AIE] Selecting backend for device -2 from: coreml,
[2025-02-06 13:53:30.289, 16.00 μs] [16c7b3000] Info | [AIE] —TBlockProc 120x120 C: 2/2 R: 3/3 X: 60 Y: 60 inSize: 180 240 Pad: 0 0
[2025-02-06 13:53:30.289, 164.00 μs] [16c7b3000] Info | [AIE] Selecting backend for device -2 from: coreml,
[2025-02-06 13:53:30.289, 9.00 μs] [16c7b3000] Info | [AIE] Loading coreml backend 0002
[2025-02-06 13:53:30.289, 17.00 μs] [16c7b3000] Info | [AIE] TargetDevices: Apple
[2025-02-06 13:53:30.289, 8.00 μs] [16c7b3000] Info | [AIE] [ -3:1 ]
[2025-02-06 13:53:30.289, 7.00 μs] [16c7b3000] Info | [AIE] 1 instances for device -2
[2025-02-06 13:53:30.289, 25.00 μs] [16c7b3000] Info | [AIE] Target Device: -3 Count: 1
[2025-02-06 13:53:30.289, 9.00 μs] [16c7b3000] Info | [AIE] Loading default model file /Applications/Topaz Gigapixel AI.app/Contents/MacOS/…/Resources/models/rxl_decoder1-v1-fp16-120x120-mlc.tz
[2025-02-06 13:53:30.290, 106.00 μs] [16c7b3000] Info | [AIE] Using CoreML All Compute Units mode
[2025-02-06 13:53:30.408, 118.78 ms] [16c7b3000] Info | [AIE] Loading time for model file /Applications/Topaz Gigapixel AI.app/Contents/MacOS/…/Resources/models/rxl_decoder1-v1-fp16-120x120-mlc.tz is 118
[2025-02-06 13:53:30.408, 62.00 μs] [16c7b3000] Info | [AIE] decLoadTime: 120 ms
[2025-02-06 13:53:30.409, 681.00 μs] [16c7b3000] Info | [AIE] x_T size 240x180
[2025-02-06 13:53:30.409, 92.00 μs] [16c7b3000] Info | [AIE] Updated model params
[2025-02-06 13:53:30.409, 40.00 μs] [16c7b3000] Info | [AIE] Skipped 0 block(s) out of 6 block(s)
[2025-02-06 13:53:30.504, 0.00 μs] [16e707000] Info | [AIE] CoreML inference failed: Unable to compute the prediction using ML Program. It can be an invalid input data or broken/unsupported model.
[2025-02-06 13:53:30.504, 63.00 μs] [16e707000] Info | [AIE] Not matching output sample
[2025-02-06 13:53:30.504, 14.00 μs] [16e707000] Info | [AIE] Error debug information
[2025-02-06 13:53:30.504, 10.00 μs] [16e707000] Info | [AIE] Image latent_sample Size: 120x120x4
[2025-02-06 13:53:30.504, 8.00 μs] [16e707000] Info | [AIE] Unable to run model with index 0 it had error:
[2025-02-06 13:53:30.504, 19.00 μs] [16e707000] Info | [AIE] Image Model inference failed: unable to run model with index 0
[2025-02-06 13:53:30.504, 95.03 ms] [16c7b3000] Info | [AIE] Error handling block: unable to run model with index 0
[2025-02-06 13:53:30.504, 35.00 μs] [16e707000] Info | [AIE] Model Backend state is invalidated due to previous errors
[2025-02-06 13:53:30.504, 35.00 μs] [16e707000] Info | [AIE] Model Backend state is invalidated due to previous errors
[2025-02-06 13:53:30.504, 17.00 μs] [16e707000] Info | [AIE] Image Model inference failed: unable to run model with index 0
[2025-02-06 13:53:30.504, 23.00 μs] [16e707000] Info | [AIE] Model Backend state is invalidated due to previous errors
[2025-02-06 13:53:30.504, 8.00 μs] [16e707000] Info | [AIE] Unable to run model with index 0 it had error:
[2025-02-06 13:53:30.504, 13.00 μs] [16e707000] Info | [AIE] Image Model inference failed: unable to run model with index 0
[2025-02-06 13:53:30.504, 101.00 μs] [16c7b3000] Info | [AIE] Error handling block: unable to run model with index 0
[2025-02-06 13:53:30.504, 101.00 μs] [16c7b3000] Info | [AIE] Error handling block: unable to run model with index 0
[2025-02-06 13:53:30.504, 14.00 μs] [16e707000] Info | [AIE] Model Backend state is invalidated due to previous errors
[2025-02-06 13:53:30.504, 14.00 μs] [16e707000] Info | [AIE] Model Backend state is invalidated due to previous errors
[2025-02-06 13:53:30.504, 12.00 μs] [16e707000] Info | [AIE] Image Model inference failed: unable to run model with index 0
[2025-02-06 13:53:30.504, 19.00 μs] [16e707000] Info | [AIE] Model Backend state is invalidated due to previous errors
[2025-02-06 13:53:30.504, 30.00 μs] [16c7b3000] Info | [AIE] Error handling block: unable to run model with index 0
[2025-02-06 13:53:30.504, 7.00 μs] [16e707000] Info | [AIE] Unable to run model with index 0 it had error:
[2025-02-06 13:53:30.504, 30.00 μs] [16c7b3000] Info | [AIE] Error handling block: unable to run model with index 0
[2025-02-06 13:53:30.504, 7.00 μs] [16e707000] Info | [AIE] Unable to run model with index 0 it had error:
[2025-02-06 13:53:30.504, 22.00 μs] [16e707000] Info | [AIE] Model Backend state is invalidated due to previous errors
[2025-02-06 13:53:30.504, 10.00 μs] [16e707000] Info | [AIE] Unable to run model with index 0 it had error:
[2025-02-06 13:53:30.505, 15.00 μs] [16e707000] Info | [AIE] Image Model inference failed: unable to run model with index 0
[2025-02-06 13:53:30.505, 59.00 μs] [16c7b3000] Info | [AIE] Error handling block: unable to run model with index 0
[2025-02-06 13:53:30.509, 4.89 ms] [16c7b3000] Error | Exception: “Exception: AIProcessor failed: unable to run model with index 0”

Exactly! Yup, your example is perfect

Interesting idea. I haven’t seen this requested often before, but I can tell you that a ton of print shops need scaling by longest side.

Is there a reason you scale based on MP/area instead of to a certain pixel dimension?


I deleted the CoreML folder and it seems to have helped so far
‘/Users//Library/Application Support/coreMLCache’
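
For anyone who'd rather script that cleanup than dig through Finder, here's a minimal sketch (assuming the cache really lives under ~/Library/Application Support/coreMLCache as above; Gigapixel rebuilds it on the next run):

```python
# Minimal sketch: clear the CoreML cache folder mentioned above.
# Assumes the folder name "coreMLCache" from the post; verify the path on your machine first.
import shutil
from pathlib import Path

cache_dir = Path.home() / "Library" / "Application Support" / "coreMLCache"

if cache_dir.exists():
    shutil.rmtree(cache_dir)  # Gigapixel recreates the cache on the next launch
    print(f"Removed {cache_dir}")
else:
    print(f"No cache found at {cache_dir}")
```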


@michaelezra
Could you please do a “benchmark” of this V8.2.0 vs. the old 8.0.2 on your M1 Max, e.g. with my low-res test picture from here:

I’d be interested to see whether you get a speed gain with that new version. Here, I have mixed results: on the M1 Pro 10c/16c, 8.2.0 is faster, but unfortunately it is quite a bit slower on my Mac Studio M2 Ultra 24c/60c (see above).

I used Redefine with Creativity 3 and Texture 3, everything else off. If you do a “preview entire image”, it’ll show the time it took once rendering is finished.

I noticed a small bug when opening this new version. In the “New features” window, there’s a glitch that I’ve framed in the screenshot: a line of text sits 3/4 of the way down the slide and is therefore unreadable. The same happens on the 2nd slide, but the 3rd was fine.


For me, going by megapixels is the easiest way to get upscales that match my sweet spot for viewing without going below 2× or above 4× as the upscale factor. I did lots of work with GPAI 6 and found upscale factors above 4 to lose quality compared to lower ones (which may be different on 8.x, I haven’t compared yet). Most of the stuff I upscale is about 0.8 to 1.2 MP with a long-to-short-side ratio not exceeding 2, such as 1200x800, 1024x1024, 1024x768 and the like. If I make 9 MP from this, the upscale factor is always in the range I want it to be. I currently use a simple batch file to calculate the matching upscale factor, but I would certainly appreciate having this at hand in GPAI.
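
In case it helps anyone, this is roughly what my calculation boils down to (a minimal sketch, not my actual batch file; the 9 MP target and the 2x-4x clamp are just the numbers from my own workflow):

```python
# Upscale factor needed to reach a target size in megapixels,
# clamped to a preferred range (here 2x-4x, per my own sweet spot).
import math

def upscale_factor(width: int, height: int, target_mp: float = 9.0,
                   min_factor: float = 2.0, max_factor: float = 4.0) -> float:
    factor = math.sqrt(target_mp * 1_000_000 / (width * height))
    return max(min_factor, min(max_factor, factor))

print(round(upscale_factor(1200, 800), 2))   # ~3.06 for a 0.96 MP source
print(round(upscale_factor(1024, 1024), 2))  # ~2.93
```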

BTW, is there a way to switch the exiftool thing off? I do not want metadata in my upscales, and apart from that, the addition of the EXIF stuff seems to add significantly to processing time. I’d rather have my upscales “plain” and add metadata myself if I need them.
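
One possible workaround until there is such a switch (a minimal sketch, not anything Gigapixel does itself): re-save the upscale without carrying the metadata over, assuming Pillow is installed and using hypothetical file names; note that re-encoding a JPEG costs a little quality.

```python
# Re-save an upscale without carrying over EXIF/metadata.
# Pillow only writes EXIF if you pass it explicitly, so a plain save() drops it.
from PIL import Image

src = "upscaled.jpg"        # hypothetical input
dst = "upscaled_plain.jpg"  # hypothetical output

with Image.open(src) as img:
    img.save(dst, quality=95)  # re-encode without the original EXIF block
```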


Sitting at a Mac Studio M1 Ultra, 64 GB RAM, Sonoma.

Running my WOMBOs at 6X is still not realistically doable like it was on the PC with a 4090.

I am doing a test now; whenever it finishes (…) I will post result comparisons with the PC version. I do not have a timer going but each PC render took a few minutes maybe; on the Mac this is MUCH longer.

(8 minutes after taking the screenshot, the progress bar is up to the “4” in 9408…).

You definitely should repeat this test with Gigapixel 8.0.2 - it will likely be MUCH faster than 8.2.0

I now did an additional comparison with my high-res image, and there the situation was even worse: this new version with its alleged speed optimizations is not two times faster but two times slower.


(on the left is GP 8.0.2, on the right GP 8.2.0)

So, @dakota.wixom: whatever you did to the Redefine model for macOS, please revert it ASAP to the one used in Gigapixel 8.0.2. Or at least give users the choice (maybe like in TVAI, where you can enable old models in the prefs).

IMO this would be beneficial not only for speed on the faster, higher-specced Apple Silicon chips, but also for those having problems with blurry renderings, since most users don’t see those in 8.0.2.


Thanks, I learned early on that Macs were not suitable for this type of task. Maybe it’s not just GPAI; 3D rendering in general is normally done on PCs.

BTW, here is a new screenshot right now… Over 20 minutes in.


Thanks, we will look into it. It may depend on the machine, but it is definitely faster for the majority of Macs. Can you confirm this slowdown still happens on the second render? The first render with the updated model may be slow.

If mine ever finishes I will try a second one to compare speed :wink:

I already finished this particular WOMBO project on the PCs, but I’m always interested in whether the Mac version has sped up.

At home I have an M2 Mini but I’m not there right now…


Thanks! Do you have a super wide monitor? Seems like that may be the culprit. We’ll see if we can track this down, that’s very helpful.

Yes, I have tested this many times now. What I also found is that with the groundhog pic, Redefine is faster when Face Recovery is turned on (?!?) - this on both versions, the fast 8.0.2 as well as the new one.

As stated before (already in the beta), the new version is in fact faster on my M1 Pro - but even with the newer version, the M1 Pro is still just barely usable.

On the multi-core M2 Ultra, Redefine was quite fine to use, at least on lower-res images, taking about 2-5 mins, which is OK. But this has slowed down (reproducibly) with the new version, both in the previous beta and in the now released version.

And it IS annoying that the speed gain is on the machine where you won’t really use Redefine anyway, at the cost of a speed hit on the rig where it was fully usable before.

And I do strongly believe that this behavior will carry on through the Apple Silicon line. Could it be that you only tested this on the lower-end Apple Silicon chips and not the high-core-count ones, especially the Ultra variants?

No, that’s correct - most of our models are tuned for 1x, 2x and 4x and anything in-between. The Redefine model can do up to 6x.


Be prepared for a looong wait on the M2 mini…
You should only do a 2x upscale test there as otherwise it will take forever.

(Oh, and let me make a guess: On your M1 Ultra the old 8.0.2 version will very likely be faster, but on the M2 mini maybe the new one)

I actually know better than to try that on the Mini, ha!

I am officially pulling this render at 45 minutes:

I did many hundreds of renders at 6X with high Redefine settings on the PCs; there’s no way I could do this on the Mac.

2X and 4X renders are generally not as good (details are less defined, though in certain cases better; subjects also come out completely different at the various render sizes, so maybe you want them all, depending on what you’re doing).

Trying again with a teeny tiny (48kb) image…

I’m actually going to run an errand while this cooks! I’ll report back later…


I think you should try this again with the 8.0.2 version. I did a similar task at a slightly lower resolution (785x1000 → 6x upscale: 4710x6000) and it finished in about 25 mins on my M2 Ultra with GP 8.0.2. With GP 8.2.0, based on my previous tests, I’d estimate it would have taken about double the time :grimacing:

And I’d like to have someone confirm my finding that this new Gigapixel version is considerably slower on high-end Macs than the old one.

And the results can sometimes be nearly unbelievably good, going from this:


to this:

(oh, the forum downsizes the images so you can’t really assess the full power of the dark si… err, the quality of the Redefine image)

P.S.: If you press the “preview entire image” button, you don’t have to sit at your computer with a stopwatch, as it’ll give you the time it took once it has finished (as seen in my screenshots above). If you like the result, you can then save it without further waiting.

OK, somewhere along the way this finished.

Here is the original:


The render from the new Mac version, GPAI 8.2.0:

The Windows 4090 render from a month ago (same settings):

@jo.vo:

As long as I have access to PCs, I won’t rely on any Mac to do this Redefining. 25 minutes per image is just unusable; I’m used to maybe several minutes each in batch.


That is only true for the extreme high-end cards, the 4080 and 4090, which themselves cost a fortune and draw LOTS of power - but not for your “average” PC.

Against everything up to a 4070, or AMD cards, the high-end Macs are not far behind or can compete. Just not with these generative AI models, which apparently aren’t well optimized for Apple Silicon (because there’s no way an Nvidia 4060 is multiple times faster than an M2 Ultra, not from the sheer specs, nor from any other test, including Topaz apps).