I really wish you’d add a “scale to longest edge” option, like in Photo AI. It would give me control over output dimensions, especially for larger batches, and prevent unpredictable upscaling, which I’m struggling with now.
Currently, I have to pre-sort images by orientation to scale by width or height. Even then, some images exceed the 6x limit, degrading quality.
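To make the ask concrete, here’s roughly the calculation I have in mind. It’s only a sketch; the function name and the hard 6x cap are my assumptions, not anything from the app:

```python
# Illustrative sketch of "scale to longest edge". Nothing here is
# Gigapixel's actual code; the 6x cap mirrors the limit mentioned above.

def longest_edge_scale(width: int, height: int,
                       target_long_edge: int, max_factor: float = 6.0):
    """Return (new_width, new_height) so the longer side hits
    target_long_edge, capped at the upscale limit."""
    factor = min(target_long_edge / max(width, height), max_factor)
    return round(width * factor), round(height * factor)

# A landscape and a portrait image both land on the same long edge,
# with no pre-sorting by orientation:
print(longest_edge_scale(4000, 3000, 12000))  # (12000, 9000)
print(longest_edge_scale(3000, 4000, 12000))  # (9000, 12000)
```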
Gigapixel AI is primarily for upscaling, unlike Photo AI. This feature seems like a no-brainer; it would make a huge difference. Please consider adding it!
Hey Fotomaker - in some cases we may provide limited Pro access to Beta users in order to test new features. Other than seat management and offline auth, though, there isn’t much that needs testing in this first release except for the CLI.
There are some exciting new features coming to Gigapixel personal, so stay tuned.
Hello Mario - depending on your use case, you may not need a Pro license. Just a heads up: the Pro license terms for Gigapixel were changed in our EULA and on our website several months ago, so from your note it sounds like the change may already have been in place when you purchased the software. If you have any questions, feel free to contact us.
I have installed the new version (v7.3.0) but it doesn’t run. I removed and reinstalled it; nothing. I also ran a repair; still nothing. I’m on Windows 11 Pro 23H2 with an Intel(R) Core™ i7-8850H CPU @ 2.60 GHz, 32.0 GB of RAM, and an NVIDIA Quadro P1000. Why doesn’t it run? Can you help me? Thanks
I’m seeking clarification on the “Preview/Export” engine used by the Low Resolution v2 model and how it differs from the other models.
As previously reported, the v2 model introduces artifacts on certain images, making it unreliable for batch processing. Interestingly, manually configuring the “resize-normal-4.json” and “hqv2” models to utilize the “gmpv2 v13” tensorrt engine files (default for the v2 low-res model) consistently resolves these artifacts, even with identical denoise/deblur/decompress settings. (see screenshots)
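For anyone who wants to try the same workaround, the gist of the edit is sketched below. The “tensorrt” key is my guess at whatever field the model JSONs actually use, so inspect your own files (and back them up) first:

```python
# Sketch of the manual workaround: point one model's JSON at the
# TensorRT engine entries from the low-res v2 model's JSON.
# The "tensorrt" key is an assumption about the real schema.
import json
import shutil
from pathlib import Path

def copy_engine_entries(src_json: Path, dst_json: Path, key: str = "tensorrt") -> None:
    shutil.copy(dst_json, dst_json.with_suffix(".bak"))  # keep a backup
    src = json.loads(src_json.read_text())
    dst = json.loads(dst_json.read_text())
    if key in src:
        dst[key] = src[key]  # reuse the src model's engine files
        dst_json.write_text(json.dumps(dst, indent=2))
```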
The graphics card is an RTX 3090 Zotac Trinity OC. Nothing has been changed in the Nvidia control panel; everything is at default values. I get the same result when I force the program to load the gmp v2 v13 fp16 ox.tz model. When I delete the tensorrt lines from the configuration files, I get artifacts with Low Resolution v2 selected in Gigapixel AI, whereas with Standard v2 and HQ v2 selected in the application’s graphical interface (with the gmpv2 v13 fp16 ox.tz models loaded) I get the expected results. Maybe it would be a good idea to share the file with other forum users, to see if anyone else with an RTX 3090 (Ampere architecture) is experiencing a similar issue?
I know the documentation for the CLI mentions a JSON with all the possible values for the model. Is that JSON included in the installer, and if so, where can we find it to review?
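If it does ship in the installer, something like this would be enough to dump it for review once located (the path below is purely a placeholder; I don’t know where it actually lives):

```python
# Placeholder path: substitute wherever the installer actually puts the JSON.
import json
from pathlib import Path

models_json = Path("path/to/models.json")  # hypothetical location
data = json.loads(models_json.read_text())
print(json.dumps(data, indent=2, sort_keys=True))  # review every model and its valid values
```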
This is happening in both cases, even when I use the CPU instead of the GPU. I don’t have this problem with the v12 revision of the GMP model. I really don’t understand what could be causing it. I’m sending you the log files from the program instance where I loaded the “problematic” photo with the model’s original configuration files, using both GPU and CPU, and saved the upscaled image. Let me know if the log files help. Logs for support.zip (18.0 KB)
I’m working on a front-end batch processor and just want to make sure I have all the switches available as options in the front end. The specific one I was looking for was the line art model (I think that’s the name?). I’ll take a look in the folders when I’m back at my personal machine tonight to see if I can find it. Thanks for the info!
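In case it’s useful context, this is the general shape of the front end. The executable name and every flag below are placeholders rather than the real CLI switches, which is exactly why I want the full list:

```python
# Batch front-end sketch. "gigapixel" and all flags are placeholders
# for the real CLI binary and switches from the docs.
import subprocess
from pathlib import Path

CLI = "gigapixel"    # placeholder executable name
MODEL = "line-art"   # placeholder model id

def upscale_folder(src: Path, dst: Path) -> None:
    dst.mkdir(parents=True, exist_ok=True)
    for img in sorted(src.glob("*.png")):
        # One invocation per image; flag names are illustrative only.
        subprocess.run(
            [CLI, "--input", str(img), "--output", str(dst), "--model", MODEL],
            check=True,
        )
```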