Gigapixel AI: The final image is far worse than the preview in any GPU or CPU processing mode

The program is currently essentially useless for processing large images. You spend time tuning settings in the preview, and then the result has nothing to do with it. The only way currently to get a decent result when enlarging an already large image is to do it in pieces. The last one I did, I had to do as screenshots, piece by piece, combined in Photoshop into a final image.

I should note that the 2x preview is now almost identical to the 2x output file, so it is already usable. 4x and higher still do not work well. For my part, I have decided to stop running into a wall of misunderstanding when communicating with managers and, occasionally, with developers. What they have achieved with 2x is quite enough for my work.
Of course, I'm glad they even listened to my feedback about the sliders: they now respond to the mouse scroll wheel, which makes the program more convenient to use. But even adding such a trifle took an incredible amount of time.

My update period for several of the programs has, unexpectedly for me, come to an end, so I will not even try to push further updates and improvements to the program. Perhaps I will renew when there is a significant leap in processing quality, but in my humble opinion, that is not feasible within the next three years.

For now, Topaz Labs has only data-depleting AI. Their tools cannot add detail indefinitely: with each pass, my uncompressed TIFF files only shrink in size, when they should grow.

If I manage to find a program capable of restoring data in an image, rebuilding gradients and textures, that would allow me to keep enlarging the picture, or to perform other operations on it in other Topaz Labs programs.


In my case, there is almost no difference between the original image and the AI-enhanced one. I'm trying to improve the resolution of black-and-white screen grabs from public-domain horror movies. The preview is not much different from the output. Is there a limitation on what the AI can do with black-and-white photos? Or maybe there's something I'm doing wrong. I've played with the settings and it doesn't make any difference at all.

Edit: I found the solution: I have to make my images much smaller first to make it work. Then I can increase the image size by 4 or 6 times and get a significant improvement in image quality.

Thatā€™s right. The first tip I give in this case is indeed to make the pixel size of the video smaller. :slight_smile:


I wrote about this in the summer here:
https://community.topazlabs.com/t/gigapixel-v5-0-0/15342/64

But nobody agreed with me that this is a bug the developers should fix. I even mocked up a separate button in an interface design to address it.

I will copy part of the post to this topic:

Only the developers know what resolution the incoming image must have for the algorithm to recognize it as satisfactory and upscale it adequately.

If you dig into the history of Bayer filters and the like, it turns out that the actual resolution of the incoming image is much lower than you think. The camera has already stretched a blurred image, and the algorithm cannot swallow such a soft, soapy picture.

If you reduce the image to its real resolution, where each pixel carries real information, the algorithm will work much better.
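To make the idea concrete, here is a toy sketch (pure Python, function name is my own, not anything from Gigapixel) of the "reduce to real resolution" step: averaging each 2x2 block of a camera-stretched image collapses the duplicated, information-free pixels back into one real pixel.

```python
def box_downsample_2x(pixels):
    """Average each 2x2 block of grayscale values into one pixel.

    A stand-in for shrinking a soft, camera-interpolated image back
    toward its real resolution before upscaling. (A real workflow would
    use an image library; this only illustrates the idea.)
    """
    # Trim to even dimensions so every pixel belongs to a full 2x2 block.
    h, w = len(pixels) // 2 * 2, len(pixels[0]) // 2 * 2
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            total = (pixels[y][x] + pixels[y][x + 1]
                     + pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(total // 4)
        out.append(row)
    return out

# A 2x nearest-neighbor-stretched image, where every pixel was simply
# duplicated, collapses back with no information loss:
stretched = [[10, 10, 40, 40],
             [10, 10, 40, 40],
             [90, 90, 20, 20],
             [90, 90, 20, 20]]
print(box_downsample_2x(stretched))  # → [[10, 40], [90, 20]]
```

After a reduction like this, each remaining pixel carries real information, which is exactly the input the upscaling algorithm handles best.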


Well explained. :slightly_smiling_face: