Images from cheap phone cameras or older digital cameras have often been digitally interpolated (upscaled) by the camera's image processor. This is usually not noticeable when simply viewing the image, but in Gigapixel AI it produces blocky artifacts the size of the interpolation factor.
For instance, if the camera processor interpolated the image by a factor of 3 and you then scale it by 2x, the result has 6-pixel-wide blobs (especially jarring on hair), and detail suffers.
Using the Low Resolution or Compressed models does not fix this!
A workaround is to first scale the image down by the same factor it was interpolated by, and then feed that smaller image into Gigapixel AI.
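The downscaling step above can be done in any image tool; as a rough sketch, here is one way to do it with NumPy using a simple box filter (the function name and integer-factor assumption are mine, not part of any Topaz product):

```python
import numpy as np

def undo_camera_interpolation(img: np.ndarray, factor: int) -> np.ndarray:
    """Downscale an image by the (assumed integer) factor the camera
    interpolated it by, so Gigapixel AI sees only real pixels.

    img: array of shape (H, W) or (H, W, channels).
    Returns an array of shape (H//factor, W//factor, channels).
    """
    h, w = img.shape[:2]
    # Crop so the dimensions are exact multiples of the factor.
    h, w = h - h % factor, w - w % factor
    img = img[:h, :w]
    # Average each factor x factor block (a box filter); the averaged-away
    # pixels carry no new information, since they were interpolated.
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))
```

In practice a Lanczos resize in an image editor gives slightly crisper results than a box filter, but for undoing interpolation either works, because the extra pixels contain no real detail to preserve.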
Please add this functionality to the program as a setting that applies regular (down)scaling before the AI upscaling happens. This should be easy to implement and yields good, if not optimal, results, because the extra pixels in an interpolated image do not contain any new information that current algorithms can use.
PS: Sharpen AI and Denoise AI cannot be used as a substitute; they have the same problem.