Let’s be honest: Wonder v2 isn’t performing as expected, even on smaller images. In my testing, the 1x mode is the most usable, since artifacts there are nearly invisible, but at 4x the quality drops noticeably. Wonder v1 actually produces better results at 4x, though it has artifact issues of its own.
What we really need most is a properly working Recover v3 and, perhaps even more importantly for those of us with capable hardware who are currently hitting issues, Wonder v2 delivering pixel-perfect results that match the quality of the web render.
The honeycomb artifacts with Recover v3 still haven’t been addressed in either Photo AI or Gigapixel; they have been present since the Gigapixel beta. It’s also worth noting that Photo AI 1.3 skipped the beta testing phase entirely. Given the significant under-the-hood changes (the neuro server architecture, etc.), a proper beta period with user testing would have been really beneficial, and its absence might explain the large number of reports about model loading failures and quality problems with both Recover v3 and Wonder v2.
Another concern: instead of releasing a patch to restore model functionality (such as the .dll fix some of you have mentioned), the team has put out neither a patched build nor the file itself. There has also been little official acknowledgment of the artifact issues, which are quite noticeable; many users, myself included, have submitted detailed examples.
I appreciate everyone sharing their findings here — hopefully this feedback helps move things in the right direction.
