Gigapixel v5.0.1

I have a newer Ryzen 7 3700X and see a similar result:

CPU: 88 sec, 75-85% CPU
OpenVINO: 27 sec, 65-75% CPU
GPU (RTX 2070 Super): 10 sec, 15-30% CPU

Gigapixel versions 4.4.5 and 4.4.6 still have better image quality than the latest version, and they are very stable, although perhaps a bit slower. You can request an installer from support on the main website to compare the difference with the new version. Both applications can co-exist, as they install into different directories.

In my experience, quality declined after version 4.4.6.

Interesting. With my i7/GTX 1050, this is the best release yet for both quality and speed.

I've done a lot of quality testing, and those are my findings. Of course, there is nothing wrong with you being happy with it. Enjoy!


Hello, this is my first time posting; I'm a long-time user of Gigapixel. I hope I don't break a forum rule or get attacked for this, but I want to point out that I just tried the web-based AI upscaler 'letsenhance', and on a test image of roughly 600x600 at a 4x enlargement it hands-down beat Gigapixel, no matter what mode or setting I tried. It retained more of the original shapes within the image, lost less detail, and created more believable, higher-resolution details.

I checked their pricing plans and I don't like the look of them: they limit you to 100 images per month for $108 per year, and more images costs much more. Topaz, with its unlimited images and the ability to batch-load a lot of images and crunch through them in no time, is far better... BUT ultimately I'm looking for the best output image quality...

Can anyone here from Topaz Labs give any insight into whether the quality of Gigapixel's output will in fact improve in the future?

I haven't used 'letsenhance', and the cost you quoted seems too steep. Also, I wouldn't upload my personal images to some website. But if, as you say, the image quality is indeed better, I also hope Topaz will not only try to match it, but aim to outdo it.

I agree with you, johnnystar; after the tests I made, v4.4.5 produces better results in most cases, so I think I will stick with that version.
But for the first time I ran real tests: original images downsized 4x, then upscaled 4x in Gigapixel AI, and after comparing the results with the originals, Gigapixel AI still has a long way to go to recover the fine details.
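
For anyone who wants to repeat this kind of round-trip comparison, here is a minimal Python sketch of the measuring step; it assumes Pillow and scikit-image are installed, the file names are placeholders, and the actual 4x upscale in between is done in Gigapixel (or whatever tool you are testing).

```python
# Round-trip test sketch: shrink the original 4x, upscale it back with
# the tool under test, then measure how close the result is to the
# original. File names are placeholders.
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = Image.open("original.png").convert("RGB")

# Crop to a multiple of 4 so sizes line up exactly after the round trip.
w, h = (original.width // 4) * 4, (original.height // 4) * 4
original = original.crop((0, 0, w, h))

# Step 1: make the small test input (bicubic 4x reduction).
small = original.resize((w // 4, h // 4), Image.Resampling.BICUBIC)
small.save("small_4x_down.png")

# Step 2: upscale "small_4x_down.png" 4x in Gigapixel, save the result
# as "upscaled_4x.png", then run the comparison below.
upscaled = Image.open("upscaled_4x.png").convert("RGB")

a = np.asarray(original, dtype=np.float64)
b = np.asarray(upscaled, dtype=np.float64)
print("PSNR:", peak_signal_noise_ratio(a, b, data_range=255))
print("SSIM:", structural_similarity(a, b, channel_axis=2, data_range=255))
```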

@sono2000 I looked at the letsenhance Facebook page and it is interesting. It is certainly less expensive than AI Enlarge for most users if you process 100 images a month or fewer, and if you sign up you can get five images processed for free. Their Facebook page has more details for those who want to read more. I didn't test any pictures, so I can't compare results.

As for Gigapixel, I believe that Topaz will continue to improve it as they have done so far. I think concentrating on quality (especially small faces) would be a good idea.

I agree with you, but I don't think AI upscales can add real details that match or come close to the original. At best, the AI model creates convincing new details, but they are essentially fake, hallucinated from the thousands of other images it was trained on. Very likely, your original image was not one of those.


For sure, it's scientifically impossible to regenerate the original data with AI. But generating convincing details that pass a kind of visual human Turing test is ideally what we want to achieve. In the future I can imagine some kind of AGI with common sense and access to many visual AI algorithms, able to figure out which ones to use, and in what order, to achieve whatever goal you set it. For example, it might start upscaling, then realize it needs to run object recognition on the pixels, then use another AI to work out the angle of those objects to the camera, their depth, age, and colour, the lighting in the scene, and which colours are bouncing around (ray tracing), before it upscales the next level of detail... it's never-ending.


Years ago I read an article on the subject where the researchers thought they had discovered a breakthrough in AI image reconstruction. They had been feeding the software images that had been reduced in size, and then comparing the enlargements to the original to train the AI until the results were essentially identical.

Unfortunately, when the AI got an independent review, they discovered that what the software had actually learned was how to reverse the reduction process they had been using to create the training images. Images that used a different reduction process, or were low-res to start with, fared rather poorly at producing appropriate improvised detail. Instead, it created artifact patterns that were a reflection of the reduction process it had been trained with (that's how they figured out where the AI training had gone wrong).

If you think about it, visual AI algorithms are doing exactly that: they are learning to remove commonly found noise such as JPEG artifacts, learning to upscale based on the downscale algorithms applied to the original (like bicubic downsampling), learning how to sharpen lens blur, and so on. This is what we need; it's just that we also need a smart AI that has access to many algorithms and knows when to use them, so that if your image is an edge case, it has an algorithm just for that and knows to apply it.
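
To make that concrete, here is a rough sketch (my own illustration, not how Topaz actually trains) of how super-resolution training pairs are usually built: every low-res input comes from one fixed degradation, so the network in effect learns to invert that exact pipeline, and images degraded some other way fall outside what it has seen.

```python
# Illustration of the training-pair pipeline described above. Each
# low-res input is produced by one fixed degradation (bicubic 4x down
# plus one JPEG quality level here), so the model is really learning to
# undo that exact pipeline, not "enlargement" in general.
import io
from pathlib import Path
from PIL import Image

SCALE = 4
JPEG_QUALITY = 75  # one assumed artifact level; real images vary widely

def make_pair(hr_path: Path):
    hr = Image.open(hr_path).convert("RGB")
    # Fixed downscale kernel: bicubic. A model trained only on this will
    # struggle with images shrunk by nearest-neighbour, box filters, etc.
    lr = hr.resize((hr.width // SCALE, hr.height // SCALE), Image.Resampling.BICUBIC)
    # Fixed compression artifacts: a single JPEG quality setting.
    buf = io.BytesIO()
    lr.save(buf, format="JPEG", quality=JPEG_QUALITY)
    buf.seek(0)
    return Image.open(buf).convert("RGB"), hr  # (network input, target)

if __name__ == "__main__":
    Path("lr").mkdir(exist_ok=True)
    Path("hr").mkdir(exist_ok=True)
    for path in Path("training_images").glob("*.png"):  # hypothetical folder
        lr, hr = make_pair(path)
        lr.save(Path("lr") / path.name)
        hr.save(Path("hr") / path.name)
```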


Perhaps I'm wrong, but I wouldn't think most people need to enlarge images that have been downscaled from an original. Short of trying to turn someone else's low-res proofs back into something like the originals, I'm not sure why anyone would need to do that. The whole idea of Gigapixel is to enlarge your old archived low-res images, or to blow up your already high-quality images to make very large prints.

I know that people here have tried downsizing as a first step, in an attempt to artificially improve the enlargement result, but that may be exactly the point I was making. They're adding in the sort of scaling/compression artifacts that Gigapixel has accidentally been trained to remove, by teaching it with images that are not the same as the unadulterated images we're actually trying to deal with.

Why limit it to one form of use? That's not good for business. A good use I find for Gigapixel is upscaling textures for CGI: I can go through my entire texture library and up-res any 512x512 texture to 1K or 2K (see the sketch below). There are many use cases; the sky is the limit. Ideally, a smart AI program would be able to detect the use case and apply different algorithms to get the result the user wants.
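
As a rough illustration of that batch workflow (Gigapixel has its own batch mode in the GUI, and I'm not aware of a public scripting API, so the sketch below just walks a texture folder and uses a plain bicubic resize as a stand-in for the AI upscale; the folder names are made up):

```python
# Batch-upscale sketch for a texture library: walk a folder of 512x512
# textures and write 2K versions. The bicubic resize is only a stand-in
# for whatever AI upscaler you actually run; paths are hypothetical.
from pathlib import Path
from PIL import Image

SRC = Path("textures_512")   # hypothetical source folder
DST = Path("textures_2048")  # hypothetical output folder
TARGET = 2048

DST.mkdir(exist_ok=True)

for tex in sorted(SRC.glob("*.png")):
    img = Image.open(tex).convert("RGBA")  # keep the alpha channel for CGI textures
    up = img.resize((TARGET, TARGET), Image.Resampling.BICUBIC)
    up.save(DST / tex.name)
    print(f"{tex.name}: {img.width}x{img.height} -> {TARGET}x{TARGET}")
```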


Agreed. In your case the CGI textures are not downsized, just small to start with. You most likely would need a different algorithm than the AI training "mistake" the paper was warning about.

Like you, most of what I'm using it for is taking small DDS texture files and enlarging them to at least 2048x2048. The DDS format has a unique type of compression artifact that Gigapixel doesn't anticipate or understand, so I usually have some cleaning up to do.

@andymagee-52287 Let me give you an example of mine. My son and his family live far away, so I mainly get to see them in Instagram posts. I do a screen capture of their posted pictures (about 600x600 px), which include my grandsons, and then upscale them in Gigapixel. This is much easier than constantly asking them to send me the original files. Just one use, but these are pictures that have been downsized and that I then have to upsize again.


Interestingly, during my initial testing I downloaded some low resolution stock images from Google images. They were small and fuzzy and had awful compression artifacts, yet when scaled up 4x with Gigapixel, some (not all) of the results were astoundingly good. Convincingly realistic. I surmised that the originals of those must have been included in the AI model training.

Is it normal for processing to take more than 5 minutes on a 1000x990 pixel JPEG picture?
If so, the software isn't interesting to me at all.

PS: After finishing, I compared the output with the file from this thread and must say the 'Photo 2.0' result is horrible.
The 'smart enhance (beta)' is even worse.
For this picture, Gigapixel wins.


Left is Gigapixel / Right is 'smart enhance'

On my system that would take somewhere between 5 and 9 seconds.

The image on the right has sharper, more resolved details. It's superior in that regard.