To make it a fair comparison, the image should be upscaled 4x. The Recover model always runs at 4x internally, whereas the Redefine model runs at whatever scale factor you set. So in this comparison, you are comparing 4x Recover output against 1x Redefine output.
I had used Gigapixel previously and have been really impressed with the AI enhance features. Though, it seems the AI tries a little too hard. Here is a photo of the original and of the processed result. The detail recovery in the branches (the photo was of a moose in the trees) was incredible, though I actually scaled it back for the final image since it wasn’t the subject. However, it interpreted one area as a bird. Perhaps the AI was trained on Portlandia episodes?
I can understand how the AI would interpret that area though. The bud scales and the overall shape do resemble a bird! And it is quite an appealing bird at that.
If that were my pic I’d be tempted to use it as a desktop background for a while; it’s rather charming.