Topaz Gigapixel v1.0.2

It’s really exhausting to have everything in one place at the moment; I completely missed the point about Adobe and the partner models.

It makes sense that other manufacturers are looking for access to Adobe’s ecosystem in order to attract customers and money.

I need to take another look at ComfyUI for image restoration.

____

With an online platform (one of the many free ones where people upload their pictures) you get lost in the crowd, so you won’t make any money in the long run.

1 Like

Mac OS Sequoia 15.7.1, Photoshop 2025. The Gigapixel plugin won’t work in a managed user account.

  1. Open Photoshop

  2. Open any photo

  3. File menu > Automate > Topaz Gigapixel

    ^ functions normally

  4. Close Photoshop

  5. Sign in to any user account other than the primary one on the Mac. Fast user switching should be enabled.

  6. Open Photoshop

  7. Open any photo

  8. File menu > Automate > Topaz Gigapixel

    ^ Layers palette flashes briefly, no layer is created. Gigapixel is not launched.

I tried installing the previous version of Gigapixel AI (8.4.4). It creates the layer, but Gigapixel AI is not launched.

I need to be able to work in a managed user account, not the primary user account. Because we have a Photoshop workflow, Gigapixel is nonfunctional for me.

Later on I’ll creep up the setting further to see what it does.

1 Like

Yes. It’s Flux Kontext that was added to Ps beta. I guess there are other Flux models too. I will be trying more experiments using Giga to see how it deals with those models.

1 Like

Do you do any selective masking and sequential processing for areas of images that may need different levels of sharpening/denoising?

Or, do that by mixing models from one step to the next?

(I suppose those might be less intuitive approaches if the whole point is an automated, do-it-all-for-you processing model vs. DIY.)

Does Giga support that type of masked sequential processing as a Standalone (as you know, my workflow is fully Ps plugin with Ps layers & masking)?

@TPX You’ll have to test for yourself, but my initial perceptions are that the Flux Kontext AI model (in Ps beta) produces more photographic looking content, the Nano :banana: model is very detailed & flexible but smooths a bit to an almost more illustration-y look, and the Firefly models seem left in the dust.

Those output characteristics are important to me (especially as a photographer, not graphic designer) & also have implications for the types/features of models I’d have to be able to use in Giga. If I’m purposefully choosing a generative AI creation model to achieve a certain art style look (including photorealism) then I will want to pair the output from that generation with a supportive (for lack of a better word) scaling model.

Do those models work locally?

No, I’m keeping it simple; there are too many of these groups to dink around with.

I just tried 1X Wonder at 80% Face recovery (image is too big for 2X):

Screenshot 2025-10-07 at 10.45.44 AM

Hmm. It has so much potential!

I also don’t see a way to remove previous successful renders from the queue. I know we have 7 days to get them but if we already got them, we don’t need them in the queue.

PS: Failed again, I’m trying once more with Face recovery % lowered to 70, since GP is not telling me what the issue is.

Well, I had better luck yesterday! Today something is not right. I literally did this same image before.

Going to push my luck with a set of 3 different scans with Face recovery at 80% as a batch…

Update:

The queue of 3 succeeded! Here is 80% Face recovery:

First file name was 2023-03-27-0019_Radiant-cloud-wonder-1x-faceai v2, the others were less precise (2023-03-27-0020_Radiant-cloud-[ai-filters])

Next up! Cloud render of 32 images!!

Update:
I let the queue run while I went to a meeting. Came back, my Mac (Studio, M1 Ultra) had logged out somewhere along the way…

It got about 1/3 of the images done so I need to make a new queue…

New queue of 21, I seem to have to keep hitting Refresh to keep it moving… Otherwise it seems nothing is happening.

Started over a couple of times, the queue is mocking me:

Refreshing the queue makes all of my new uploads disappear, and I need to run them again – and get stuck uploading, again.

Older failed renders which I deleted at least twice keep coming back into the list.

We need a way to completely clear the queue out and start fresh.

Got past the upload with only 2 images, but stuck in queued mode. Notice also the queue says “0” when there are 2:

Since the Cloud seems broken I am going to try Wonder locally, hahahahahahahaha!

2 Likes

I’m using them on my laptop (NVIDIA 4090).

But that’s not necessarily the answer to your question…

I do not actively send anything to the Web. I work in the Ps beta UI.

That said, I don’t know if it’s like the Remove Tool (in which we can opt for faster onboard processing (maybe less accurate) or slower Web processing (supposedly more accurate)). That gets designated as a Preference.

There is no comparable Pref (that I actively set..) for the Gen AI models. They just do whatever they do after one selects the desired model from the Contextual Taskbar.

A long-winded way of saying I don’t know whether Adobe is handling it via Web servers, because it behaves as if it’s local.

I was wondering the same thing. Glad you asked! Maybe Esther can say…

I like your 80% setting better. It’s clearly sharp but not over-sharp.

1 Like

Local (non-cloud) Wonder:

I’m doing battle with the queue again, one more cooking.

Never mind…

Screenshot 2025-10-07 at 6.11.20 PM

It’s not good that you can’t clear the queue because it re-renders ones you’ve already done.

Tough night…

Ouch. Even if not paying credits for the “Cloud” renders that still doesn’t seem efficient.

Does it work in the background if you want to do other things on your computer while it’s processing? If yes, does working on something else locally (or using browsers for whatever) slow the image processing that’s running?

Yes, this is painful…

For as long as each render takes, queuing is a must, unless you want to babysit the process all day.

I’m on a 2022 Apple M1 Ultra Studio with 64 GB RAM at the moment so it’s a decent machine. I was switching back and forth among apps but quit PS to save RAM. I am at 61% usage.

Last evening I had fewer issues. Maybe server traffic is heavy today?

I may have to try those 4090 PCs next time!

Now at home on the M2 Mac Mini, trying again. Seems I didn’t prep one of the images properly (too big), and got this dialog:

The answer is obviously No, but how do I say that here? To me, “Cancel” might mean either “cancel cloud rendering of this large image” or “cancel the queue”.

I chose “Cancel”, then tried to delete the large image from the queue. Then GP crashed…:

Crashed Thread: 37 QThread

Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000

Termination Reason: Namespace SIGNAL, Code 6 Abort trap: 6
Terminating Process: Topaz Gigapixel [71719]

Application Specific Information:
abort() called

One more time for today: I fixed the large image and dragged it and several others into GP’s interface. So why does the queue have 3/5 unchecked? What are the criteria for auto-check?

Let’s get them all checked and run this thing in Wonder (cloud)… Estimated time: 4:39.

It begins! Uploaded, queued and processing beginning. No way will it make anywhere near that estimated time.

You can see the last of my earlier failed attempts at the bottom; many other finished ones are still there as well, and I really want to clear them out:

Now we are at the estimated time and 2 images are still at 10% as shown above (just went to 20%, then suddenly to the looooong 100% – that needs to be fixed; it is totally inaccurate, unless save time is included in there somewhere). We’re 6 minutes past the time estimate with only 2 “nearly finished”.

But wait! More failure. And I was wondering if the rest of the queue would then activate, and it seems to have:

I’ll give it some time and come back in a bit.

OK, so in this batch I got 3/5. What was wrong with the 2? The sizes are similar.

I would like to see a way to re-try failed renders within the queue dialog.

Tonight’s final score:

Another day of failure (I re-sent the last 2). I feel like I’m taking a math test back in grammar school! :wink:

Back at it on the Mac Studio, brand new day…

Got 1 finished in this queue of 4! Waiting to see if the rest will start on their own:

This is interesting: I installed GP on one of the 4090 PCs and got a new queue going. I guess I should have expected this but it is showing the previous render currently running on the Mac, as well as the new queue. So I guess there is no point being on the PC if the Mac is in use. I was used to using GPAI and having separate local queues.

PS: I am getting more failures in this latest queue, and the estimated time is WAY off. It’s at least 10X as much. Even if the clock only starts once a render is in progress, it still seems way off. No way is the entire queue going to get done in a few minutes – though you’d think it would, being in the cloud.

Does Refreshing break the queue? I feel this is needed to keep things moving or make something happen; otherwise it seems unusually slow.

And I’ve been noticing, sometimes the finished image is downloading twice:

Screenshot 2025-10-08 at 1.30.08 PM

Everything was displaying as queued until I refreshed and suddenly all images are ready for download. The cloud process requires too much babysitting…

I’m noticing something else when adding images in batches to a queue – the model settings do not apply to the entire queue (even though I thought all images were checked)! This may be why I’m getting failures (some trying to enlarge to 2X when it’s not supported). UPDATE: This latest batch is behaving as expected, all selected images changed to “Wonder” at once.

Images that already downloaded themselves are now “Ready for download”…

Images queued up and available, but button is inactive. I was able to trick it earlier but can’t activate it now by unchecking and rechecking list.

I’m having overall much better luck today but still seeing the greyed out Cloud render button as well as getting some crashes.

2 Likes

I noticed that the Wonder model works better when the images have a resolution around 1K. I haven’t tested it at 2K, but I know that a 12-megapixel smartphone image will look disgusting; the scaling is not really better than the Photoshop tools, which are really, really outdated. But at small resolutions it is as effective as a mix between High Fidelity and Recover V1 with moderate sharpness strength. I tested it yesterday on a photo improved via Gemini, doing a 6x upscale. The rendering was really clean and realistic.

Speaking of restoration, I know a really cool model for restoring details, enhancing them, and doing upscaling. It’s Hypir AI for ComfyUI. Very simple and easy to use interface.

The only parameters to touch are “Model_T” and “Coeff_T”. That’s it.
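For anyone curious what driving a node like this from a saved workflow looks like, here is a minimal sketch in ComfyUI’s API-format prompt JSON. This is an illustration only: the node class name `HYPIR_Restore`, its input names, and the parameter values of 200 are assumptions, not taken from the actual node. Check the real node and its defaults after installing it from the ComfyUI manager; only the `Model_T`/`Coeff_T` parameter names come from the post above.

```json
{
  "1": {
    "class_type": "LoadImage",
    "inputs": { "image": "input.png" }
  },
  "2": {
    "class_type": "HYPIR_Restore",
    "inputs": {
      "image": ["1", 0],
      "model_t": 200,
      "coeff_t": 200
    }
  },
  "3": {
    "class_type": "SaveImage",
    "inputs": { "images": ["2", 0], "filename_prefix": "hypir_restored" }
  }
}
```

Each key is a node ID; the `["1", 0]` references wire one node’s output into the next node’s input, which is how ComfyUI chains load → restore → save.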

1 Like

Is there a Workflow to download?

I think at the moment Wonder is not where it should be, since some part of it does make use of the CPU.

It’s downloaded along with the node in the ComfyUI manager. But I can give you mine because I customized it a bit. Especially with the “Before/After” window and the sound notification window indicating that the rendering is complete.

Hypir Restore and Upscale.json (4.8 KB)

You will of course need to download the models

2 Likes

I thought it was 100% GPU

At the moment it’s not.