Pixelmator Pro ML Super Resolution vs Gigapixel

Here’s the article: https://www.pixelmator.com/blog/2019/12/17/all-about-the-new-ml-super-resolution-feature-in-pixelmator-pro/

It’s nice to see some competition in the AI/ML space, hopefully it will push Topaz to take advantage of Apple’s new chips with all of their products!

Does anyone here own both Gigapixel and Pixelmator Pro? I’d love to see a comparison!

When it comes to low-res photos, Gigapixel is a monster at what it does – perhaps by a factor of 20. Pixelmator ML isn’t even close, perhaps because there isn’t enough detail for the ML engine to work with. When enhancing HQ photos with tons of detail, however, both results look astonishing – though Pixelmator comes out on top when you zoom in on the much finer details.


To give you some perspective, a 4000x4000 HQ image was blown up by the ML engine to 32000x32000. To match that, I used Gigapixel at 4X and then adjusted the size to 32000x32000 (settings: compression, face, auto). I then zoomed into both images equally until pixelation started to occur. Gigapixel pixelated first, while the ML engine maintained quality even when zoomed in further.
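To make the arithmetic above concrete, here is a minimal sketch (the 4X-then-resize route is just how I matched Gigapixel’s output to the ML result; the helper name is my own):

```python
def scale_factor(src_edge: int, dst_edge: int) -> float:
    """Linear scale factor from a source edge length to a target edge length."""
    return dst_edge / src_edge

overall = scale_factor(4000, 32000)   # 8.0x linear scale
remaining = overall / 4               # 2.0x still needed after a fixed 4X pass
pixel_growth = overall ** 2           # pixel count grows 64x

print(f"{overall}x overall, {remaining}x after the 4X pass, {pixel_growth:.0f}x more pixels")
```

In other words, an 8x linear enlargement means 64 times as many pixels for the engines to invent, which is why differences between them only show up when you zoom far in.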


So is Pixelmator basically converting it into a very detailed vector?

It is still bitmapped – both images will pixelate eventually – but, as stated above, Gigapixel caved first. It appears these posts do support image uploads – they are just not allowed – and I do not have an image-hosting account to link to.

I was curious, so I looked at the website. Pixelmator Pro is Mac-only; Gigapixel works on both Mac and Windows.

UPDATE: I ran a few more images through Giga vs Pix and concluded that Pix is a very good competitor. As for which product is superior (apart from Pix doing tons of other things) and provides the best enhancement and image quality for low-res images, the answer is neither.

I am overwhelmingly convinced that getting great results is image-dependent – that is, every image is different, and the programs will interpret each one differently, sometimes into a beautiful enlarged masterpiece – and sometimes not. It is also worth noting that Pix can scale by resolution as well as by size.

Based on those results, it seems the best path to upscaling to crystal-clear, high-resolution images is storing high-resolution image data as algorithmic “paint”, then, when those features are detected in an image, painting the new image accordingly.

For example, gather hi-res data on eyes, skin, teeth, etc. When these items are detected in an image, apply that data, perhaps as an overlay mix of the original into the new image. This is similar to clone painting, where you copy a similar patch of skin and paste it over acne in a portrait photo. So the best way to get great hi-res skin is to use hi-res skin as a resource to paint the new clone. Variables such as lighting, color and environment (dust, particles, etc.) can then be added back in when present.
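The clone-painting idea above can be sketched as a simple overlay mix (a toy illustration only; `clone_paint`, the `mix` weight and the patch coordinates are all hypothetical, and a real system would first need a detector to locate the eye/skin region):

```python
import numpy as np

def clone_paint(target: np.ndarray, exemplar: np.ndarray,
                top: int, left: int, mix: float = 0.5) -> np.ndarray:
    """Paste a hi-res exemplar patch over a detected region, blended with the
    original so lighting and color from the source photo survive.
    `mix` is the overlay weight given to the exemplar (hypothetical parameter)."""
    out = target.astype(np.float64).copy()
    h, w = exemplar.shape[:2]
    region = out[top:top + h, left:left + w]
    # Overlay mix of original and exemplar, like clone-painting skin in a portrait.
    out[top:top + h, left:left + w] = (1 - mix) * region + mix * exemplar
    return out.clip(0, 255).astype(np.uint8)

# Toy example: a flat grey "face" region and a brighter, detailed exemplar patch.
face = np.full((8, 8, 3), 100, dtype=np.uint8)
patch = np.full((4, 4, 3), 200, dtype=np.uint8)
result = clone_paint(face, patch, top=2, left=2, mix=0.5)
```

With `mix=0.5` the painted region ends up halfway between the original and the exemplar, while everything outside the patch is untouched – which is the “overlay mix” behaviour described above.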

Otherwise, as you may have observed, eyes get washed out and dimmed when enlarged – you lose the light and brightness in them. Enhanced images should be brighter, more detailed and more realistic; that way we best avoid photos that appear hand-drawn, machine-made, or simply unreal.

You’re right.

But the best solution is to use separate models for faces, trees or cars (and, within the face, for eyes, mouth, hair, etc.). This is how the brain works in humans (and not only humans). As an artist I also deal with neural networks, as well as neurology and AI programming, which is very useful 🙂 for this job.

Unfortunately, existing commercial upscaling solutions do not implement this complex processing in home applications, because home computers are too weak to support all these resources. (Besides, you can’t require every photographer or graphic designer to install a cluster of computers in the basement just to get a better-quality image computed on a better model.)

For this reason, until there are home quantum computers, the implementation of any neural networks for commercial applications of this complexity will always be a poor compromise between quality and convenience.

There are, of course, scientific works and even experimental applications – e.g. for painting or noise reduction – that detect the type of object, but these are still crude and expensive. And clients look for cheap, quick solutions, because clients are lazy.

Besides, many people care more about fun than worthwhile software, which is why there are so many silly phone applications (e.g. Prisma-style filters, Instagram, Tinder – hundreds or even thousands of such toys) and so little valuable software for professionals.

Phone-app developers have noticed this phenomenon, which is why more than a dozen worthless new toys appear every day: they make money (subscriptions, advertising, or selling personal data and user activity).

Valuable software, however, remains scarce, because it takes more people, time and money to make. And from time immemorial, trash and kitsch have always sold better – that’s the nature of mankind.

That’s why I admire Topaz Labs: the company still makes such great programs and keeps improving them.

Regards from Poland,
Lech Balcerzak