Just my note on face recovery in Gigapixel: I’m not enthusiastic, although in most cases the result is better with Face Recovery than without. In my case these are crops from larger photos that I want to enlarge. No face recovery model is perfect, and some are terrible. The worst is when the face continues into the neck – the neck is often sharply demarcated from the improved face and is also quite bad (differently so for different models). Sometimes you get something like bumpy skin. The Low Resolution 2 model probably works best, although it’s no miracle; the Redefine model is not very good. Below I show a few of my examples (clipboard captures) from Gigapixel 8.1.1. Even attempts with demo versions of competing tools didn’t give me better results – some were downright terrible. Well, what can be done, maybe over time…
https://videocardz.com/newz/nvidia-launches-geforce-rtx-50-blackwell-series-rtx-5090-costs-1999
From what I got from the CES presentation, the 5090 should be much faster than the 4090 (no, I don’t know how much faster in reality).
The teraflops for AI in the presentation are measured in FP4.
FP32 teraflops: 125 (5090) vs. 82 (4090).
They did change a lot on the AI side.
But at the moment I can’t believe that the 5070 should have the AI performance of the 4090.
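A quick way to sanity-check those headline numbers, assuming (and this is only an assumption) that throughput roughly doubles each time the precision is halved, is to normalise everything to one precision before comparing. The TOPS value below is a placeholder, not an official spec figure:

```python
# Rough precision normalisation -- assumes throughput ~doubles per precision
# halving, which is an approximation, not a guarantee. The numbers are
# placeholders, not official spec figures.
def to_fp8_equivalent(tops: float, quoted_precision: str) -> float:
    scale = {"fp4": 0.5, "fp8": 1.0, "fp16": 2.0}
    return tops * scale[quoted_precision]

# Example: a card advertised with 1000 "AI TOPS" measured in FP4 is only
# ~500 TOPS when expressed in FP8 terms.
print(to_fp8_equivalent(1000, "fp4"))  # 500.0
```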
From what I’ve read, AI is mostly limited by memory bandwidth.
As Topaz products ship pre-trained models, I think no AI training capability is needed, only inference.
I did not mean the gaming-related stuff.
They said even the shaders are now able to do AI workloads, not just the Tensor cores.
But who knows if TL is using the Tensor cores anyway.
Interesting point that AI performance mostly comes down to memory bandwidth.
My 5 year old RTX 2070 Super has a bandwidth of 448 GB/s. Not too shabby for an old geezer!
So from an “only really looking for better performance in PAI & GAI” perspective (I don’t game), I should be looking at the RAM and its bandwidth when I compare the 40- and the 50-series?
Haven’t gone looking for data yet, but would it perhaps be fair to say that if I’m currently looking at the 4080 Super, I would probably get equal performance out of the 5070 Ti?
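For what it’s worth, here is a quick back-of-the-envelope comparison under that “bandwidth-bound” assumption. The 4080 Super and 5070 Ti figures below are published specs as I recall them (worth double-checking); the 2070 Super figure is from the post above:

```python
# Memory bandwidth comparison under the assumption that PAI/GAI inference
# is mostly bandwidth-bound. Figures are approximate published specs.
bandwidth_gb_s = {
    "RTX 2070 Super": 448,   # from the post above
    "RTX 4080 Super": 736,   # 256-bit GDDR6X, as I recall
    "RTX 5070 Ti":    896,   # 256-bit GDDR7, as I recall
}

baseline = bandwidth_gb_s["RTX 4080 Super"]
for gpu, bw in bandwidth_gb_s.items():
    print(f"{gpu}: {bw} GB/s ({bw / baseline:.0%} of the 4080 Super)")
```

If the bandwidth-bound assumption holds, the 5070 Ti would be at least on par with the 4080 Super for this kind of workload, but real performance also depends on the model and on how Topaz actually uses the hardware.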
Hahahaha xD. You can imagine a lot of things with a third arm. No pun intended.
I’ve already seen that the GDDR7 RAM should offer about 30% more bandwidth than in the 4000 series.
Gigapixel 8.1.1, testing the Redefine BETA on a 70-year-old photograph:
Wow! I got an absolutely awesome result in the preview frame. A dream come true. However, the exported image is very different and disappointing. See the attached screenshot.
I’m looking forward to a corresponding fix. It would be a brilliant reason to extend my license.
My experience: the preview is practically never identical to the same part of the photo after a full (and maybe not only full?) Redefine pass. Take the duck, for example – differences can be found not only on its beak but elsewhere as well. Sometimes this can be quite harmful (as in your example).
I don’t know how it is implemented in Gigapixel, but it may be because the preview and the full image work with different information (the preview has less of it), which can ultimately have a significant effect (in the full image, the area around the middle can affect the middle of the photo). The preview can probably only be taken as a rough approximation of how the result might look. It is probably worth trying several partially overlapping previews, especially on details, or trying the largest possible preview area and comparing it with a smaller one.
I believe this is expected behavior for generative AI models like Recover Model, Redefine Model, and Super Focus in TPAI.
Unless the software processes the entire image during the preview stage, the preview will never match the exported image exactly. This happens because of how generative AI works. Even with the same seed number, upscaling a small portion of an image will always yield slightly different results compared to processing the full image.
Here is a more detailed explanation:
- Context Dependency: Diffusion models rely on the entire image context to generate details. When processing a small part, the model lacks the full surrounding information, leading to subtle differences in how it predicts and refines details.
- Noise Application: Noise is applied and removed iteratively during the diffusion process. When upscaling a small region, the noise patterns and their interactions with the surrounding pixels may vary slightly compared to processing the whole image, causing minor discrepancies.
- Local vs. Global Optimization: The model optimizes for the entire image globally when processing it as a whole. For a small part, the optimization is more localized, which can result in slightly different outputs.
These factors combined lead to small variations, even with the same seed, because the model’s behavior is influenced by the scope of the input it processes.
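Here is a toy sketch of the “scope changes the result” point – not Gigapixel’s actual pipeline. The stand-in “model” below just adds seeded noise and averages neighbourhoods, yet running it on a crop already gives a different result than running it on the full image and cropping afterwards:

```python
import numpy as np

# Hypothetical stand-in for a generative upscaler: it adds seeded noise and
# then "denoises" with a neighbourhood average, so each output pixel depends
# on (a) how the noise is laid out over the whole input and (b) the
# surrounding context -- the two factors listed above.
def fake_generative_pass(img, seed=42):
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0, 0.1, img.shape)       # noise laid out over *this* input
    padded = np.pad(noisy, 1, mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + 3, x:x + 3].mean()  # uses neighbouring context
    return out

rng = np.random.default_rng(0)
full = rng.random((64, 64))

# "Preview": run the model on a 16x16 crop only.
preview = fake_generative_pass(full[24:40, 24:40], seed=42)

# "Export": run the model on the whole image, then look at the same crop.
export = fake_generative_pass(full, seed=42)[24:40, 24:40]

print("max difference, preview vs. export:", np.abs(preview - export).max())
# Non-zero: same seed, same pixels, but a different scope gives a different result.
```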
I completely agree – it’s just as you write. Some time ago I tested how the Topaz AI generative functions behave on a simple monochrome square (blue, but the color doesn’t matter). It was in connection with the unpleasant generation of artifacts. Well, as expected, the artifacts in the preview area differed from those that appeared in the same area after full processing (different artifacts, in different places).
Thank you for your replies, I think I understand all your points (and agree). The preview result depends on the selected image section – slightly shifting the preview frame will change the result.
Anyhow: I’ve been testing Redefine for the first time now, and I see a significant difference between the preview quality and the final image – on my system. Yes, there is no such difference with unrelent’s duck image.
BTW: I’m pretty hopeful about Redefine; that’s why I made a complete “Redefine” version of my test image by taking screenshots of the previews and stitching them together (although the previews’ quality doesn’t exactly match, see unrelent’s comment).
So I just got an invite to pre-order (expected 22nd Jan) the ?free? iOS version of GPAI, but I see no page here to post about it… As I have an M4 13" iPad and an iPhone 16 Pro Max, I’m very excited about this, but is it actually free, linked to a GP update subscription (mine expires sometime in Feb), or something different? Basically, where do I post about this?
But if they stop the process after preview generation and continue it afterwards, the image should not change? (Same process, just with a pause.)
Or do they have to finish the whole process to show the preview?
And I had images processed one after another (the same image) where only some small parts changed.
So maybe it’s just a bug.
Do you hope that we could ever get a black and white to color option in this app?
These differences make me question the value of the preview function. If the export doesn’t match the preview, why bother with the preview?
This needs fixing if Gigapixel is to live up to its full potential.
Although the preview may not perfectly match the exported image, it still serves as a helpful guide to gauge the degree or nature of the changes that will occur.
For example, when using the Redefine model, if the preview shows a single person with two heads, four arms, and six legs, it’s immediately clear that the creativity level is set too high. While the preview may not be an exact match to the final export, it still provides a general sense of how the image might turn out. This is especially useful for users with lower-spec PCs, as generating the full image can take a significant amount of time. For those who need it, the preview is a valuable tool to save time and resources.
Personally, I often skip the preview and export the image directly, as I typically work with the lowest creativity setting.
Randomly jumping in to agree. The lowest settings on Redefine still overcook images and add detail where it shouldn’t be / too much for the selected resolution. This is particularly noticeable if you upscale a person’s face and then zoom in to find the skin has been given a rough or rocky texture because that’s ‘more detailed’. I keep seeing hands randomly become elderly, because that’s ‘more detailed’, and hair become impossibly sharp, etc.
It gives results similar to diffusion upscaling techniques when the CFG is set too high or the model is overfit (i.e. trained too many times). Probably that is the issue behind the scenes.
Basically we need a ‘weaker’ version of Redefine that tries to add less detail when upscaling, as the minimum settings are still trying too hard to impress.
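For anyone curious what the “CFG set too high” comparison refers to, here is a minimal sketch of classifier-free guidance with hypothetical denoiser functions. It is not Topaz’s implementation, just the general mechanism that over-sharpens when the scale is cranked up:

```python
import numpy as np

# Minimal classifier-free guidance (CFG) step. "denoise_cond" and
# "denoise_uncond" are hypothetical stand-ins for a diffusion model's
# conditional and unconditional noise predictions.
def cfg_prediction(x, denoise_cond, denoise_uncond, guidance_scale):
    eps_uncond = denoise_uncond(x)
    eps_cond = denoise_cond(x)
    # The guided prediction extrapolates away from the unconditional one.
    # Large guidance_scale values exaggerate whatever "detail" the model
    # expects -- analogous to the rocky skin / too-sharp hair described above.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy usage with dummy denoisers:
x = np.zeros((8, 8))
mild = cfg_prediction(x, lambda a: a + 0.1, lambda a: a, guidance_scale=1.5)
harsh = cfg_prediction(x, lambda a: a + 0.1, lambda a: a, guidance_scale=12.0)
print(mild.max(), harsh.max())  # 0.15 vs 1.2 -- higher scale, stronger push
```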