Gigapixel 7.0.5

Thanks for the info, guys.
I did read the system requirements, and it seems my rig can handle it.
I have a Ryzen 3600 CPU, 16GB of RAM, and a Radeon RX 5700 XT 8GB video card. Any known issues with my hardware?

At 300 ppi the biggest you will get, without making 2 passes, is 36 inches.
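For reference, the pixel arithmetic behind that figure (a quick sketch; the 1800 px source width and the 6x single-pass scale below are illustrative assumptions, not numbers from this thread):

```python
# Print size in inches = pixel dimension / ppi
ppi = 300
target_inches = 36
print(target_inches * ppi)      # 10800 px needed on the long edge for 36" at 300 ppi

# Hypothetical example: an 1800 px wide source at a 6x single-pass scale
source_px = 1800
print(source_px * 6 / ppi)      # 36.0 inches
```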

The Low Resolution Model seems to be the best option for preserving most of the crack patterns in the sky and light areas, but the darker colors just don’t have enough contrast for the pattern to be detected.



I often compare the G7 Sv1 and Sv2 models with G6.1 Standard. G6.1 Standard is relatively faithful: if it doesn’t know what something is, it won’t add anything there, it only upscales.


I’ve tried GPAI 7.0.5 on paintings, and you really need to try all the models and see what matters to you, especially how faces are handled, even with Face refinement off.
I tried your image and thought Standard v2 with a 4x upscale was the best overall, but Very Compressed was interesting, and perhaps doing both and mixing them (maybe selectively over different parts of the image) might be worth a go.

I wouldn’t class the result as “good”, but it depends what you want removed/left. Maybe try the trial?

In Photo AI enabling Remove Noise removed the cracks.


Not accurate. High Fidelity v1 is different from HQ in v6.3.3.
The v7.x HF v1 does less sharpening and cleans up a bit more noise, which looks a bit more natural.

From the user guide:

Standard v2 AI Model is the overall recommended model to use across a variety of images. This model has been trained to see a variety of files but works best for photos, graphics, and generated images. Standard v2 reduces the blurry patches that occurred in the Standard v1 model.

Low resolution files are commonly identified as having fewer pixels. This model works best with screenshots, images pulled from the web or images with 72 pixels per inch (ppi).

High Fidelity AI Model is trained to study high quality images with high resolution. Images from high end cameras are best. This filter can maintain the original look of images, without creating distortions.

Art/CG AI Model is best for artwork, drawings, illustrations, and computer generated graphics like Midjourney.

Lines AI Model is good for architecture, typography, cityscapes, and any image with thick lines.

The Very Compressed AI Model is best used for images with a lot of compression artifacts. Use this for images saved at a small size, scanned images or old digital images.

Additional Settings: To customize your settings, select the dropdown arrow.

Minor Denoise - slightly suppresses noise

Minor Deblur - minimizes motion or lens blur

Fix Compression - reduces compression artifacts to improve details. This setting is enabled only when “Standard” and “High Fidelity” models are used. (Note: Fix Compression values are not visible in the list)

So I went to 100% and I see a small scattering of white specks throughout the tree trunk and a very slight change in some areas with more contrast using High Fidelity v1. I switched to High Fidelity v2 and see more improvement.

I will play with this a bit more, but so far it is nowhere near the dramatic improvement shown in the promo video on the website.


Thank you for your suggestions. Another Topaz member reached out and suggested that I do the transition at 100%. I tried that with High Fidelity v1 and v2. At 100%, v1 added some white specks to the photo. v2 gave better results, not the dramatic change shown in the promo video on the website, but a bit better.

I have been playing with switching the “AI Model” setting, but when I do that it selects “Low Resolution” by default and I am unable to select a different AI model.

Again I will continue to play with this.

Jim


So I tried the trial with that same old oil painting that I posted above, and I’d say that the result was actually impressive!
Art/CG mode with very low Denoise preserves about 95% of the cracks/imperfections. Increasing Denoise to 100, though, removes almost all the cracks from the sky.
Think I’ll be going for it after all.

I remember a few years ago, before the introduction of AI, people would say about enhancing old pictures that “you can’t add the quality that isn’t there.”
Guess that’s not the case anymore.


As for “the addition of an alert upon export completion”… nope, not on Win10.

Please fix the tiling artifacts; this renders the software useless:


Enjoy!

Glad it can serve your purposes.

Sometimes it just takes trial & error with settings to find the proper balance for an image.


It depends what “Standard V2” is. The new Standard model that appeared in 6.3.3 could be V2, or that could be V1 and this is a later one.
In either case, I really dislike the tendency of the 6.3.3 and later Standard models to make pixelated, noise-like areas in images; you really have to go through each image to look for it.
I’ve seen it a lot with Standard V2 since going to 7.0.5…
If V1 isn’t from 6.2.0, could we have a V0 option too, please?
I usually find it’s a good idea to try all the models with a source image that is in any way challenging… I miss the 2x2 grid view…


Please look carefully at the brick wall: HDv1, Sv1, Sv2, Sv0.

I installed 6.1 and 7.0.4 on my Win10 system and both work OK, but in Photoshop I can use the Automate plugin from only one version (6.1). If I delete the 6.1 .8bi and .8bf files, PS will load the 7.0.4 .8bi. I hope Topaz brings the Standard V0 model back to G7, or changes the new .8bi file’s name so both versions can load together in Photoshop.
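A rough, hypothetical way to check which plug-in files Photoshop will actually see (the Plug-ins path below is an assumption; point it at your own installation):

```python
# Hypothetical helper: list the .8bi / .8bf plug-in files under the Photoshop
# Plug-ins folder, so you can tell whether the 6.1 or the 7.0.4 copy is present.
# The folder path is an assumption; adjust it for your install.
from pathlib import Path

plugins_dir = Path(r"C:\Program Files\Adobe\Adobe Photoshop 2024\Plug-ins")
for f in sorted(plugins_dir.rglob("*")):
    if f.suffix.lower() in {".8bi", ".8bf"}:
        print(f)
```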

Exact same here on Windows 10. Installation of v7.0.4 was fine. I exited, installed the Photo AI update successfully, then tried the Gigapixel 7.0.5 update again; same error.

You have to be really impressed with the skills of the brickmakers involved in the Standard V2 wall!!!


There is a bug in this release:

Batch processing: when exporting with the option to add applied filter names to file names, the file name does not always match the filter that was actually applied:

For example, after processing a batch of images:

Actual exported filenames:


Please investigate and resolve, thanks.

I will also post this in the Bugs section.


New to the group. I made a first try with the beta version but honestly did not find it much different from the standard one. What I like is that there is a sharpen feature now, which seems to work well. Regarding the recovery feature, I tried it on a smaller image and it took more than one hour to render, so in the end I had to quit it. I have 32 GB VRAM. I tried it again with an even smaller image, but after running smoothly at the beginning, it froze after a while and did not continue. So I cannot say much about that feature. But I’d like to share some general thoughts.
For me, and that is maybe not what the average user needs, illustrative images are my main subject. Whether they are my own original designs or images I have generated with AI, I almost only enlarge images of this type, rarely photos. While it works very well with original acrylic paintings or watercolors, both in the beta version and in the standard one, I wish we had a special mode for AI images. These often come with weird artifacts in the details or in the background, but you can’t simply iron those out with a high amount of denoise, because then the main subject loses too much detail. And I would love to have subcategories in the “Art/CG” model with art styles such as watercolor, acrylic, oil painting, and graphic, so that it would apply a small effect to the image, like a filter, which would give AI-generated images the finishing touch and conceal the unwanted artifacts. At the same time, it could also be used to give photos a painterly look if required.


Left: a detail from the original client-provided blurry source file (shot with a mobile phone camera).

Middle: a downscaled version (1,000 pixels along the longest dimension), then upscaled with recovery applied and noise and deblur cranked up to 100.

Third: a 4x upscaled version using the automated settings with the Standard v1 model applied.

As with many of my Gigapixel upscaled and sharpened images from poor-quality client-sourced images, I end up running the image through multiple passes, combining them into a layered Photoshop file and then masking the layers to retain the best parts of each pass. In some instances, for example, I get much better noise reduction and AI fills for things like clothing and natural backgrounds if I run it on a version of the image that I’ve deliberately scaled down and that I’ve lowered the resolution on, but that may introduce artifacts that look terrible with other elements in the image, so I’ll mask out those badly rendered bits to show the better-looking source file elements.
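If it helps, here is a minimal sketch of the prep and compositing steps around Gigapixel, using Pillow. The file names, the 1,000 px target, and the mask image are placeholders for illustration; the two "pass" images would come from separate Gigapixel exports at the same output size, and the actual upscaling still happens in the app.

```python
# Sketch only: deliberately downscale a source before a Gigapixel pass,
# then blend two exported passes with a hand-painted mask, similar to
# masking layers in Photoshop. All file names are assumptions.
from PIL import Image

def downscale_longest_side(path, target=1000):
    """Resize so the longest dimension is `target` px, preserving aspect ratio."""
    img = Image.open(path)
    scale = target / max(img.size)
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.Resampling.LANCZOS)

# Prepare the deliberately reduced version before feeding it to Gigapixel
downscale_longest_side("client_source.jpg").save("client_source_1000px.jpg")

# After exporting two Gigapixel passes at the same pixel dimensions, blend them
# with a grayscale mask (white = take pass A, black = take pass B).
pass_a = Image.open("pass_downscaled_recovery.png")
pass_b = Image.open("pass_standard_4x.png")
mask = Image.open("blend_mask.png").convert("L")  # painted in any image editor
Image.composite(pass_a, pass_b, mask).save("blended_result.png")
```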

In this particular example, none of the AI models were able to salvage the poor-quality source file.