In this Gigapixel release we have fixed several important issues: Photoshop plugin workflows, double-processing after loading an image, monochrome conversion accuracy, output resolutions sometimes being ignored, and more.
In future updates we will continue to improve color accuracy, the performance and results of some of our new and popular upscaling models, and more. So please stay tuned for those exciting updates!
Gigapixel is getting better every single day behind the scenes.
As always, a full change log is below.
Comment below to share any feedback or report any issues you encounter with this release. We’ll be updating Gigapixel regularly to address your feedback and any issue reports. We would also love to see some of your great upscaling results!
I just took a look at one of my images and noticed that in the dark areas Gamma Correction introduced square grid artifacts! I don’t usually see that issue, but then I hardly ever have Gamma Correction turned on. I wonder what it is actually intended to do?
Gamma Correction processes the image in a higher gamma color space. For images with highly saturated colors, this can enhance AI outputs with more details and less color bleed.
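Gigapixel’s exact pipeline isn’t documented, but the general idea of working in a higher gamma color space can be sketched in a few lines. In the sketch below, the function names and the 2.2 gamma value are illustrative assumptions, not Topaz’s actual implementation: dark, saturated tones get stretched over a larger numeric range before processing, then compressed back afterward.

```python
import numpy as np

def to_higher_gamma(img, gamma=2.2):
    # Encode linear [0, 1] pixel values into a higher-gamma space.
    # Dark tones occupy more of the numeric range here, so a model
    # sees finer gradations in shadow and saturated regions.
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

def from_higher_gamma(img, gamma=2.2):
    # Invert the encoding after the image has been processed.
    return np.clip(img, 0.0, 1.0) ** gamma

# Round-trip example on a tiny synthetic image
linear = np.array([[0.0, 0.25], [0.5, 1.0]])
encoded = to_higher_gamma(linear)
restored = from_higher_gamma(encoded)
assert np.allclose(restored, linear)
```

Note that a dark value like 0.25 maps to roughly 0.53 in the encoded space, which is why shadow detail gets more numeric headroom during processing.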
I tried Gigapixel 7.1.4 for my modest needs and thankfully it still works, with no UI changes either. An old shot (from 2016) of a tiny common tit in the distance on a larch tree — that’s what Gigapixel is made for. I took a small cutout with the tit, smoothed and focused it with “Very compressed,” then did 2x magnification with Recovery (still Beta only). It took a very long time on my Acer Nitro 5 laptop: almost 10 hours (9.89 h). Then “Export image” ran mysteriously at a snail’s pace throughout the night. It finally finished, and I was satisfied with the result. The initial, very limited information (pixels) must be inflated like a balloon to make the result large enough; additional information is simply not available. For detailed feathers, generative AI would have to be used, which may or may not be suitable depending on the processing goal.
I was very surprised that GAI showed “Rendering on CPU” in yellow, yet my Nvidia GPU was running at 90–100%, with temperatures sometimes up to 90°C (194°F), while the CPU sat at only a few percent. So does GAI use the GPU or not?
And why such a long export time? I understand Recovery being slow; hopefully it will improve significantly, since it’s still only in Beta — otherwise it wouldn’t be very useful.
In principle, though, GAI does what I need. But improvement and acceleration must come. A couple of screenshots for illustration.
The Recovery model will work on the GPU when it can, but because you only have 6GB of VRAM compared to the recommended 8GB for Recovery, you’re getting a warning message. The wording of the message is confusing, though. As long as you have no other programs consuming VRAM, it should use the GPU. I have a GTX 1060 6GB, and I limit temps using MSI Afterburner for long processing tasks.
You’d probably also be better off doing x4 recovery first on the smaller cutout image to reduce processing times (x2 isn’t any faster than x4), then do any other upscaling.
As to “* Fixed PS plugin not saving changes back to PS on some devices”: that is not correct when I use Gigapixel 7.1.4 on my Apple M3 2TB iMac running Sonoma 14.5 and Photoshop 25.9.0. Gigapixel goes through the 2x “processing” stage, but the resulting image size stays the same when it returns to Photoshop!
The “Recovery” model generally works very well on close-up, sharp shots, but in areas of blur due to depth of field, it leaves unwanted stains.
Here scaled to 8064x6048
Note in passing that the image was generated with Bing Image Creator; I then used Fooocus generative fill, finalized with a subtle variation to smooth everything between the fill and the base image, refined the face, and finally used the Recovery model from GAI 7.1.4.
Examples of stains on the image
Otherwise, in terms of facial quality on the character, the result is just impressive. After all, the base image was already very good. It was scaled by 3.78x; the base size was 2133x1600.
Another example of an image where the background, which should remain blurred, was rendered sharper with a strange grain or crackle effect, like cracked paint.
Seen in full screen, it seems to work well.
But if you zoom in, you can see this kind of grain at the bottom of the image, when it should be smooth because there’s normally a depth-of-field blur.
It should also be noted that, apart from a few minor problems with the model still in Beta, the rest of the model is really good. In fact, it even brings out the flaws of the base image generation.