Gigapixel v8.4.2

Hello!

We’ve just rolled out a fresh update with improvements focused on performance, reliability, and behind-the-scenes cleanup. It’s a short and sweet update today =).


v8.4.2
Mac: [Download]
Windows: [Download]
Windows ARM: [Download]
Released June 18, 2025


Changelog:

  • Removed sRGB fallback option in preferences
    • All display color profiles are supported so it’s no longer needed
  • Improved installer stability when files are open during install
  • Fixed some files creating invalid color profile transforms
  • Automatic Lensfun Update

Hi all,

I just upgraded to an RTX 5090. I’ve updated the VBIOS and did a clean driver install using DDU.

I then reinstalled Gigapixel AI to force it to download the new TensorRT models for the Blackwell GPU, but it keeps grabbing the onnxruntime (ox.tz) models instead (see folder list).
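A quick way to confirm which backend's models actually landed in the folder is to count files by extension. This is just a diagnostic sketch, and the extension mapping is my assumption from what appears in this thread ('.tz' for the onnxruntime fallback models, '.tz2' for the TensorRT builds) – not anything Topaz documents:

```python
import os
import tempfile
from collections import Counter

def classify_models(folder):
    """Count model files in a folder by extension, to see which
    backend's models were downloaded (assumed: .tz = onnxruntime,
    .tz2 = TensorRT)."""
    counts = Counter()
    for name in os.listdir(folder):
        ext = os.path.splitext(name)[1]
        counts[ext] += 1
    return dict(counts)

# Self-contained demo with a fake model folder (filenames are made up):
with tempfile.TemporaryDirectory() as d:
    for fake in ["modelA.tz", "modelB.tz", "modelA-rt.tz2"]:
        open(os.path.join(d, fake), "w").close()
    print(classify_models(d))  # e.g. {'.tz': 2, '.tz2': 1}
```

If the count is dominated by `.tz` files, the app fell back to onnxruntime rather than fetching the TensorRT builds.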

My GPU is in a B650 motherboard (PCIe 4.0), and I’ve made sure the iGPU on my Ryzen 7950X is disabled. I’m guessing Photo AI and Video AI will do the same thing.

I’m attaching logs, DxDiag, and NVIDIA driver info.





Topaz Gigapixel AI crashpad+logs.zip (8.8 KB)

Can anyone explain why this is happening?

Fixed the issue: I manually added 1200 to the config files. Now it downloads the correct TensorRT models and runs blazing fast.
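The "1200" lines up with the `rt1200` tag in the TensorRT model filenames posted later in this thread, which I'd guess encodes the TensorRT build target for Blackwell – that mapping is my assumption, not anything documented. A tiny sketch to pull that tag out of a model filename:

```python
import re

def tensorrt_target(filename):
    """Extract the numeric TensorRT target tag from names like
    'ghqv2-v1-fp32-128x128-2x-rt1200-10800-rt.tz2'.
    Returns None for non-TensorRT (onnxruntime) model files."""
    m = re.search(r"-rt(\d+)-", filename)
    return int(m.group(1)) if m else None

print(tensorrt_target("ghqv2-v1-fp32-128x128-2x-rt1200-10800-rt.tz2"))  # 1200
print(tensorrt_target("ghqv2-v1.tz"))  # None
```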


Nice debugging! And thanks for letting us know!


Yes, but unfortunately not all models are available for Blackwell. I was looking at the model scheme and the downloads, and noticed that while models like HQ and Standard are on the server, gmpv2-13 and others are not. As end users, we’re left hoping that you will introduce full support for Blackwell soon. 🙂

https://models.topazlabs.com/v1/ggnv2-v3-fp32-128x128-2x-rt1200-10800-rt.tz2
https://models.topazlabs.com/v1/ggnv2-v3-fp32-128x128-4x-rt1200-10800-rt.tz2
https://models.topazlabs.com/v1/ghqv2-v1-fp32-128x128-2x-rt1200-10800-rt.tz2
https://models.topazlabs.com/v1/ghqv2-v1-fp32-128x128-4x-rt1200-10800-rt.tz2
https://models.topazlabs.com/v1/ghqv2_ldn-v1-fp32-128x128-2x-rt1200-10800-rt.tz2
https://models.topazlabs.com/v1/ghqv2_ldn-v1-fp32-128x128-4x-rt1200-10800-rt.tz2
https://models.topazlabs.com/v1/gclc-v1-fp32-128x128-2x-rt1200-10800-rt.tz2
https://models.topazlabs.com/v1/gclc-v1-fp32-128x128-4x-rt1200-10800-rt.tz2
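To see at a glance which model families and scale factors are on the server, the filenames in the list above can be split apart. The pattern (family-version-precision-tilesize-scale-…) is inferred from these URLs only, so treat it as a guess:

```python
import re

urls = [
    "https://models.topazlabs.com/v1/ggnv2-v3-fp32-128x128-2x-rt1200-10800-rt.tz2",
    "https://models.topazlabs.com/v1/ghqv2_ldn-v1-fp32-128x128-4x-rt1200-10800-rt.tz2",
    "https://models.topazlabs.com/v1/gclc-v1-fp32-128x128-2x-rt1200-10800-rt.tz2",
]

def parse_model(url):
    """Split a model URL into (family, version, scale), assuming the
    naming pattern family-vN-fpNN-WxH-Sx-... seen in this thread."""
    name = url.rsplit("/", 1)[-1]
    m = re.match(r"([a-z0-9_]+)-(v\d+)-fp\d+-\d+x\d+-(\d+x)-", name)
    return m.groups() if m else None

for u in urls:
    print(parse_model(u))  # ('ggnv2', 'v3', '2x') etc.
```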

In the next version, it would be appropriate to correct the persistent contradiction about which method is used – see the text in the red-bordered ellipses in the attached image.


Version 8.4.2:

Windows 10 Pro, up-to-date.

For TPX and others, related to Personalization Data training:

Thanks for that very interesting post about AI training … artifacts … etc

Several users have previously suggested that there should be an option to USE or NOT USE image renders in Personalisation Data learning – the point being that one doesn’t want zany model settings included when experimenting.
Although I think Topaz agreed, I have not seen it mentioned recently?

Link for Personalization search: Personalization

Still broken on Apple M3 (MacBook Air). Tried different Creativity and Texture levels.

Has anyone tested this on M4 (Max)? I’m thinking of getting Mac Studio. If it’s also broken on M4 then probably going to cancel Gigapixel and just keep Photo and Video.




Apparently this was a download error, and a second attempt corrected the problem. Never mind!


As far as I know from many previous forum posts, it only happens on M3-based systems. M1, M2, and M4 don’t have this issue – just M3 users.


Thx for the update, Esther!

2 mins. to install. Launched okay standalone (didn’t test Ps plugin). Tried Recover.

Ps Plugin launched too (File > Automate). Also Recover (2x). Saved to Ps layer stack in just over 1 min.



Don’t know.

The only thing I see is that PhysX is missing.

Uninstall via DDU again and erase PhysX with it, do the uninstall twice, then use the Studio driver.

Maybe this helps.

You mean I give Gigapixel (or whatever from TL) a picture and it uses it to swap the face, etc., on another picture?

This is currently available in ChatGPT; it’s on the rise, and I find the whole thing very interesting, also given how the photo world is currently changing. I would like to go along with it – but all locally.

I don’t want pictures of my customers’ children to end up in any databases.

I would, however, benefit from swapping out faces in different images here and there so that I simply have a usable version and can sell more.

The question is how well that works in the end.


Gigapixel or PhotoAI are no longer useful to me right now.

I have found other ways to remove the last of the noise and improve sharpness with better results than enlarging the images using Gigapixel / PhotoAI and then downsampling.

All without generative artifacts or color shifts, and much faster.

Loading PhotoAI alone takes as long as running the other two programs one after the other.

I am curious to see where the journey will take us.

For social media, the output generated by all the AI programs seems to be useful at a viewing size of 7 × 5 cm.
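For context on why output holds up at that size: a 7 × 5 cm view needs only a few hundred pixels per side. A back-of-the-envelope calculation (300 ppi is my assumed viewing/print resolution, not a figure from this thread):

```python
def pixels_needed(width_cm, height_cm, ppi=300):
    """Convert a physical viewing size to the pixel dimensions needed
    at a given resolution in pixels per inch."""
    cm_per_inch = 2.54
    w = round(width_cm / cm_per_inch * ppi)
    h = round(height_cm / cm_per_inch * ppi)
    return w, h

print(pixels_needed(7, 5))  # (827, 591)
```

So roughly 827 × 591 px suffices at 300 ppi – small enough that most artifacts disappear at viewing size.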


Personalization Data and Reference Faces - rather different things

I think maybe “Reference Faces” (50 posts) is a better tag than “Personalization …” for what you are describing – improving faces or swapping them?
Yes, I agree with everything you say about that and there is an idea/suggestion to Vote for it somewhere - I think. I understand the privacy issue as well.

Both ChatGPT and Google’s Gemini interest me a lot now, and their dust/spots/scratches removal can be very good, but at the expense of changing facial characteristics.

My first post was actually referring to the Auto mode setting for “Personalization” which says it will “learn how you edit images” and adjust parameters based on what you have done previously.

For my latest GPAI v8.4.2 release it says it’s using data from 113 previous images, and for my current beta GPAI v8.4.0.2b it’s using previous data from 253 images.

What is interesting is that it accumulates previous data when you upgrade to a new version, but sometimes it drops quite a lot of data – I have done way more images than the current numbers. Possibly the model options change and older learnt data isn’t relevant?
You can also reset previous data - which I have not tried.

For Personalization data: quite some time ago I asked Topaz to add options to include model settings OR to exclude them from “learnt data”, because I don’t want experimental (extreme) settings to distort my “average” preferred setting – no apparent action from Topaz?

I have also noticed that if you click the Thunderbolt icon and activate Auto mode, the render result seems to be subtly different from rendering without it clicked but using exactly the same individual settings. Of course it’s possible that I’m just seeing a small random difference between consecutive renders, but I think it’s a systematic difference – and if I just try exactly repeated renders they look the same to me.
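One way to tell a systematic shift from run-to-run noise is to diff the two renders numerically rather than by eye. A minimal pure-Python sketch, assuming both renders are exported at identical dimensions and read in as flat sequences of 0–255 channel values (how you extract those values is up to your image tool of choice):

```python
def mean_abs_diff(pixels_a, pixels_b):
    """Mean absolute per-channel difference between two equally sized
    flat pixel sequences. ~0 means identical renders; a consistent
    non-zero value across repeated comparisons suggests a systematic
    difference rather than random variation."""
    assert len(pixels_a) == len(pixels_b), "renders must match in size"
    total = sum(abs(a - b) for a, b in zip(pixels_a, pixels_b))
    return total / len(pixels_a)

# Identical renders diff to zero; a uniform +2 shift shows up clearly.
base = [10, 20, 30, 40]
print(mean_abs_diff(base, base))              # 0.0
print(mean_abs_diff(base, [12, 22, 32, 42]))  # 2.0
```

If Auto-mode vs. manual renders diff consistently more than two repeated manual renders do, that would support the systematic-difference theory.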

All very intriguing !!!


Oh, very interested to know what you are using - if it’s not a commercial secret ?

This week the latest Lightroom Classic tool for removing people and objects seems to have improved a lot - have not yet had time to double check the LrC/ACR neural filters to see whether they have been updated.

Things are changing so fast now it’s hard to keep up !!!

Dragonfly - Showoff Spot / Keeping it Natural - Topaz Community

That’s because the companies all train with garbage data.

The contours are fine, but the content of these contours changes.

I’ll have to get back to you on these, but thank you for letting us know!