Your Recovery time-to-completion estimate issues (adventures) sound like ones I reported, and suggested the product team address, early during pre-commercial release testing…
Ditto for the Original image no longer appearing in Comparison View (as desired) when 3 models are selected for a 4-way comparison.
Thanks for your reply. I’m relieved that it’s not just my problem (i.e., that I’m doing something wrong). Same with viewing/comparing the original as you type – I miss that a lot. It’s not a major problem for me; the main purpose of the Gigapixel function (for my non-professional needs) works quite well. But this weirdness shouldn’t last so long – it gives a poor impression of the software’s implementation. Apparently Topaz has a big pile of really difficult issues to fix, but I don’t understand why they keep releasing new version after new version instead of occasionally releasing one with the long-standing minor bugs fixed.
(As a former systems programmer, I have to admit that it still holds true: “Each error you remove creates two new ones.” But would it really be so difficult to remove the pseudo-info window telling me that “Recovery time…” is 15 s, then after a while 17 s, then 16 s again, then 19 s… without telling me anything real? Maybe it’s a forgotten tuning/debugging monitoring statement – given the units of a few seconds, perhaps meant only for extremely powerful parallel supercomputers with 4096 GPUs?)
Yes, looks really nice.
I’m quite puzzled now that Gigapixel seems to be better in quite a few cases than the pricier TPAI, and also gets faster development. The Text model, and also the Low-res and Recovery models, seem quite a bit above what TPAI users get.
Why it makes sense to have two directly competing apps, where the cheaper one does things better than the full-fledged expensive one, goes beyond my imagination…
Just FYI plugs - unfortunately, previews for Recovery indeed don’t match the output. We don’t explain this anywhere in the application; it’s a limitation of the new type of model.
In the future, there may be a way to get a new result though if you don’t like the output …
Hey Jo - Photo AI isn’t yet built to handle the Recovery model flows, but the Low Res v2 model, especially the new one, might make sense to add… Great point!
We are also working to improve Text Recovery quality in TPAI, among others - to match the quality of the text model in Gigapixel.
It certainly can - but the seed is currently locked, so you will be unable to generate a different output to get there. Since the image in the preview is smaller, it is actually a different generation at the moment. Please keep an eye on the Beta thread, as I know you will.
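A small aside on why a smaller preview is “a different generation” even with a locked seed. This is a hedged sketch, not Gigapixel’s actual pipeline: it just assumes a diffusion-style generator that draws its initial noise per pixel. With the same seed but a differently sized canvas, the noise lands on different pixels, so the generation genuinely diverges (the function name `initial_noise` is purely illustrative):

```python
import numpy as np

# Assumption (not from the thread): the generator starts from a
# per-pixel noise field drawn from a seeded RNG in row-major order.
def initial_noise(seed, shape):
    return np.random.default_rng(seed).standard_normal(shape)

full = initial_noise(seed=42, shape=(8, 8))     # "full-size" render
preview = initial_noise(seed=42, shape=(4, 4))  # smaller preview canvas

# Same seed, but the top-left 4x4 of the full-size noise does not
# match the preview noise: values are consumed row-major over a
# differently shaped array, so rows after the first diverge.
print(np.allclose(full[:4, :4], preview))  # False
```

Under that assumption, locking the seed guarantees reproducibility at a given size, but not that the preview and the final render start from the same noise.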
That’s another weird mystery. Why have a preview function, which is supposed to show in advance what the result (or part of it) will be, when it shows something else? I noticed that too. A working preview would be useful: I could set my view on some critical part of the photo and judge whether the time-consuming Recovery tool is worth running on that photo – especially since my laptop takes hours to come up with a result I can finally see for real.
Perhaps it’s related to the fact that the Recovery algorithm (quite understandably) sees the sample cutout somewhat differently than the image en bloc: because the cutout’s boundary is closer, the neighboring pixels of a given point run out sooner (they fall beyond the border), and extrapolation then returns a different result than it would with real pixels. Perhaps complex images suffer from this difference more than simple ones (e.g. a single-color rectangle)? Who knows… Just speculation; I don’t know the specific implementation in Gigapixel.
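The boundary effect speculated about above can be demonstrated with any neighborhood-based filter, regardless of what Gigapixel actually does internally. This minimal sketch (a plain 3×3 box blur standing in for the real model) compares filtering the full image and then cropping versus cropping first and then filtering: the interiors agree exactly, but pixels near the crop border differ, because the cropped version has to extrapolate where the full image still had real neighbors:

```python
import numpy as np

def box_blur(img):
    """3x3 box blur; borders use edge-replication extrapolation."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

rng = np.random.default_rng(0)
full = rng.random((64, 64))
crop = (slice(16, 32), slice(16, 32))

a = box_blur(full)[crop]   # filter whole image, then crop (the "en bloc" view)
b = box_blur(full[crop])   # crop first, then filter (the "cutout" view)

interior_diff = np.abs(a[1:-1, 1:-1] - b[1:-1, 1:-1]).max()
border_diff = np.abs(a - b).max()
print(interior_diff)  # 0.0 – interior pixels see only real neighbors
print(border_diff)    # > 0 – border pixels were extrapolated differently
```

A learned model with a much larger receptive field than 3×3 would show the same mismatch over a correspondingly wider band inside the cutout, which is consistent with complex images suffering more than a flat single-color region.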
I’m not sure Low resolution v1 should have been banished to the Legacy models section quite yet. From my testing, it still handles fine detail significantly better than Low resolution v2. Example:
Low resolution v1
Low resolution v2
Perhaps we could have an option to add individual Legacy models back into the main pool rather than having to reveal all of them or none of them.