Gigapixel preview has more details than the output

I recently (3-4 weeks ago) purchased Gigapixel and have used it for around 120 pictures.
I have encountered a problem where the preview very often shows more detail than the actual output, which is strange because it should be the other way around, or at least identical.
In the example I will provide, you can see the preview showing single strands of hair that don’t appear in the original image (that’s great! It’s exactly what I needed!), but in the output you won’t be able to see any of those.


And here’s the result

These are my settings for the picture above:

But I can assure you that I tried every single option available, at one point having 64 different pictures with different sizes and settings; all of them are basically the same except for the speed at which they were made and maybe some minor details.
Not sure what else could be the problem; at this point I could just take screenshots of the preview and stitch them together to make a full picture.

Also, to be clear: the result is amazing regardless of the problems I’m having, but it does look like an old Polaroid photo put through a scanner; the one in the preview looks more genuine.


There might be an issue related to how different programs render the image during editing and subsequent viewing. What program do you use to view the final output? I have had similar problems on my notebook in TS1, just the opposite: in TS1 the final output at 100% was a bit blurred (rather lost some details), while in PS the same output was crisper. Using the plugin with SH AI gave me what I asked for; however, returning to TS1, the result was far from it. Fair to say that this happens only with TS1 (not TS2) and only on my notebook (NVidia 8600 or so), not on my PC (Nvidia 1080 GTI).

Well, for now I’ve only used the default image viewer that comes with Windows 10 to check the pictures, but I do use Photoshop to edit.
I export them directly as PNG, and my PC has a GTX 1060 3GB and an i5 6500, so not the best of lineups.
I don’t really care about the speed that much, so I wish there was an option to push the picture quality so high that you can even see the atoms by letting the image process for hours. Obviously that’s an exaggeration, but I see so much potential for this.

Did you try exporting as a JPG file? In your example I would expect the output to look like the preview; it’s what we all want.

Here’s what it looks like without converting it to PNG (the original is JPG):
https://cdn.knightlab.com/libs/juxtapose/latest/embed/index.html?uid=517bdf9a-d403-11e9-b9b8-0edaf8f81e27

While I agree that it would be great if the output looked as good as the preview, Gigapixel did a very good job of cleaning up the picture from the original. I did a screen capture of the original shot from your link and tried using Sharpen AI on it. I tried all three algorithms (sharpen, stabilize and focus) and none even came close to the results you got from Gigapixel. The eyes and mouth were a bit sharper, but the rest of the picture was far worse.

I see that your Settings (as opposed to the Preferences snip you attached) are on “Auto”.

Did you experiment turning the Face Detection setting to “Off” and try the “Manual” settings at all?

Not that that should affect what you see in a preview (which, you’re correct, should be representative to be useful) vs. an output. But it’s just something else to try.

Also, you retained the file format type and didn’t get tricky with that (for ex., you didn’t change from a JPG to a TIFF) so it should be a pretty straightforward preview and output to generate.

I experienced this too. I solved this issue by switching the processing mode from GPU to CPU. I used the MSI GeForce GTX 1070 Ti Gaming 8 GB. Maybe it is not the best choice for doing this type of work. My new RTX 2070 delivers better results but with a strange texturing pattern over the whole image.


This issue has been around for some time. However, applying a bit of Unsharp Mask sharpening in your photo program (Photoshop, Affinity Photo, etc.) will easily bring back the additional sharpness.
I would use my GPU and add the Unsharp Mask if needed. Unfortunately, this doesn’t solve your texture problem.
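For anyone who wants to batch that sharpening step instead of doing it by hand, here’s a minimal sketch using Pillow’s built-in UnsharpMask filter, which behaves much like Photoshop’s Unsharp Mask. The file names and parameter values are just placeholder assumptions; tune them to taste:

```python
from PIL import Image, ImageFilter

def sharpen(path_in: str, path_out: str) -> None:
    """Apply an Unsharp Mask pass to a Gigapixel output file.

    radius    - blur radius in pixels
    percent   - sharpening strength
    threshold - minimum brightness change that gets sharpened
    """
    img = Image.open(path_in)
    sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=3))
    sharpened.save(path_out)

# Example (hypothetical file names):
# sharpen("gigapixel_output.png", "gigapixel_output_sharpened.png")
```

Starting with a small radius and moderate percent tends to avoid re-introducing halos around edges.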

I also noticed this pretty soon after I started using it. For some reason, some images show really good details in the preview and then come out blurry when exported. I would like to see this fixed.

Take a look at some of the other threads on this topic. In short, some other users have reported that the machine learning algorithm in GigaPixel was trained on small images and when applied to the small preview, you get one set of results and when applied to the entire image, you get another set of results. I believe Topaz is working on this issue for a future release. As you stated, the product already gets some very good results but if it can achieve what we see in the preview window, it would be outstanding.


I have cut a large image into pieces and let Gigapixel work on them, but I don’t like what Gigapixel does with the small pieces. It looks oversharpened, with a strong halo-like shine around the edges. So at the moment, a large image processed as-is doesn’t look bad to me.

Did you use any effect on the photo? It looks like some effect has been applied to the final output, making the picture look a bit soft.

Nope, that’s the raw output.

I just installed the free trial about 2 days ago, but immediately noticed this same thing. The way I see it, the preview uses the area seen in the “before” section and only processes that area of the photo, so the AI is working in a very limited context. I might be previewing the eye or the lips, for example. In the preview, I get fantastic skin detail on the lips, but when I run the full photo, it’s barely there. That makes the manual mode a lot less useful, because you can’t really use the preview as a reliable preview of the real thing. The full photo gets less specific handling, and you lose some of the detail.

I bet that the developers are fully aware of this and that there’s very little they can do, due to the neural networks used to do the processing.

Even the preview zoom seems to behave in a way that suggests that they scale the source image to the preview panel size and then process those pixels and show the results, which isn’t a 200% zoomed version of what you get when you process the file.

The preview you get is 50%-400% of the original, not of the output. It is indicative of the processing that will happen on the image. It would be impossible to show a preview of 100% of the output as the processing involved would be the same as the output phase.


As far as I know, the AI has been trained on images of a limited pixel size; I read something like 500 × 500 pixels at most. And yes, the developers are aware of this. That’s all we “know” so far, I’m afraid.

I did tests before, cutting the image into 16 pieces, processing each one and putting them back together; it gives the same result.
I also find it hard to believe that the AI knew the image in the preview was supposed to be a bunch of hair; if you showed that picture to anyone, they’d say it’s a swamp or a closeup of a tick. The AI also knew the “Cow Lick” haircut style without having the context.
I considered taking screenshots of the preview and piecing those together in the past, but it would be extremely time-consuming. Could we get an option to save the image from the preview, or to automatically split it?
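For reference, the split-and-reassemble test can be scripted rather than done by hand. A minimal Pillow sketch (the grid size is an assumption, and it assumes the image dimensions divide evenly by the grid):

```python
from PIL import Image

def split_tiles(img: Image.Image, rows: int, cols: int) -> list[Image.Image]:
    """Cut an image into rows x cols tiles, row-major order.

    Assumes the image dimensions divide evenly by the grid.
    """
    w, h = img.width // cols, img.height // rows
    return [img.crop((c * w, r * h, (c + 1) * w, (r + 1) * h))
            for r in range(rows) for c in range(cols)]

def stitch_tiles(tiles: list[Image.Image], rows: int, cols: int) -> Image.Image:
    """Paste tiles (all the same size) back together in row-major order."""
    w, h = tiles[0].size
    out = Image.new(tiles[0].mode, (w * cols, h * rows))
    for i, tile in enumerate(tiles):
        r, c = divmod(i, cols)
        out.paste(tile, (c * w, r * h))
    return out
```

Each tile could then be run through Gigapixel at the same scale factor and stitched back together; as the test above found, though, the tiled output ends up basically the same as processing the whole image.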

I have been using Gigapixel since it came out. I have a 1070 GTI GPU with an AMD CPU. Gigapixel will put a disturbing noise pattern on the output, which is especially annoying on portraits. You can use AI DeNoise on it, but then you lose sharpness. I think they need a much-improved release.

That is pure conjecture on your part; you have absolutely no idea unless you are a Topaz Labs employee.