At the moment I would say that this thread has been completely, and I think deliberately, ignored!
Amazing: if this is the result of LRC, it’s a good idea for Topaz to ask themselves some questions and get moving, as suggested in this thread!
I also think bug reporting and tracking via a forum is suboptimal. There should be a dedicated bug-tracking website (like Jira or DevOps) to which at least beta users have access. Maybe such a thing already exists; I've asked about it in another thread:
Sadly there is still insufficient quality control…
The CLI is completely broken in 2.0.7 once again
Error | [CLI] Engine canceled processing
It shouldn't be too difficult to write a simple test that starts tpai.exe and checks whether at least an output image was created (that's a five-minute task for a developer), and to run that test prior to every release…
Sadly this is not the first time the CLI has been completely broken; it also happened in 1.3.3, 1.4.0, 2.0.0, and so on… It's like flipping a coin: will it work this time or not?
I continue to believe that one release a week is something for beta testers, not for a finished product. I don't think more than three or four releases should be published in a year, barring exceptional events, and only after a serious cycle of non-regression testing and quality control of the new features.
I did not sign up to be a Beta Tester, yet I am being treated like one!
It’s very frustrating not knowing what release to get so I can have something reliable and not worry about weirdness rearing its ugly head.
Topaz, please produce a thoroughly tested stable release and then wait a long time before another. And of course I would also like you to adhere to the subject of this thread.
Further to my post above: for many years I have used JRiver Media Center. When you go to their 'Update Channels' to find updates, there are three selections:
"Stable (recommended)", "Latest (may be less stable)", and "Beta", which is only available to Beta Team members. I think their approach is good in that they don't push the latest and greatest until it's more proven.
That might work.
The thing is, the software might work well in one area (sharpen/upscale) but fail completely in another, as is the case right now with 2.0.7 and the broken CLI (already the fourth or fifth time the CLI has been completely broken in a new release). I have not yet gotten any response from them on whether they will fix it in 2.0.8.
Therefore the most important thing is that automated functional tests be developed, both initially and continually. Only then is it possible to determine whether a release is worthy of being recommended (i.e., all major features work).
It can't all be on our backs to try out every release and every piece of its functionality and then say, "that may be a good one, flag it recommended". One can do that with employed beta testers, but not with users who paid for the product and test voluntarily…
I have done this work for years: I think a serious company should make a plan for the next versions, decide which features to develop, plan the necessary tests, and set the release date. If the release date is not met, it is a failure for the company.
During the last phase of development and testing, beta testers can play an important role, but once the product has been released, beta testers must move on to the next release rather than continue testing the one just released.
The worst thing you can do is work day by day and then decide whether to release or not. Unfortunately, this seems to be Topaz's way of developing the product: we users never know what content will be in the next release (usually next week).
As for the tests, whether they are automated (obviously better) or manual is purely a question of company economics and of whether a given test can technically be automated.
It seems we already lost one of our comrades, who actively supported this thread:
Please simply stop introducing new beta functionality until you resolve the remaining issues with the basic functions of noise reduction, sharpening, resizing, and the current beta lighting and beta color functions. At this point I feel I have something that is half almost done and half "here's something else we could do". I have a good while before I have to decide whether to renew or not, but if I had to renew today I honestly could not do it. Your three base products were, for me, excellent. Keep tuning the base functions in Photo AI and introduce new beta functionality only after the current betas are clear of defects. I am trying to be helpful with these comments and I hope that comes across.
Just as an FYI: since Topaz Labs removed Sharpen V1 in favor of V2 in 2.1.0, and many users emphasized that they don't understand this move, I've asked if they could re-integrate V1 as a selectable model in future versions:
I agree that development is focusing on the wrong goals and going in the wrong direction. Developers working on new features should be retasked to fixing the existing ones. For one, I don't like the oven analogy, and secondly, there seem to be a lot of bugs without an owner, or they would probably be gone by now. The owners they do have are either way overworked or in way over their heads. Sometimes a second pair of eyes can really help.
My time for renewal is coming up, and I do not see myself doing it this time.
Looks like same old, same old; the first sentence I read in the new 2.1.1 release notes was about Object Removal.
Current pain points for me personally: Reported CLI problems not fixed: Problem with overwrite filter settings via the CLI
Had they implemented an "Always On" preference for sharpen, I could have used that as a workaround: Additional Autopilot preferences
But that request is also being ignored for many releases now (not even a response to many posts I wrote in that direction).
So for me CLI is not usable again, neither in 2.1.0, nor in 2.1.1…
I don't understand. We already have 83 votes saying we are not happy with the way things are going. No transparency about what will come in the next release, no transparency on open bugs (who is on them, when they will be fixed, how they are prioritized), seldom any response to direct questions or requests, and so on.
Some users have already left by not renewing. Do you want us 83 users to also go away?
Read the release notes for 2.1.1; Object Removal really is nowhere near a priority there.
Object removal is indeed an area where AI can do "wonders" compared to traditional approaches, so yes, it does belong in this app and is a worthwhile added feature. Read the feedback for the newer versions: most people are happy with it (apart from the quite bad first attempt in the first beta).
After all, the app is called Photo AI, not Upscale/Denoise/Sharpen.
This doesn't mean, of course, that problems with existing features shouldn't be solved, and yes, they are sometimes REALLY slow with that.
But even there, your main issue with the CLI is surely (unfortunately for you) one that affects only a small share of users and thus doesn't have the highest priority.
I guess they should first finally iron out the "blurry patches" and "grid lines" issues, as these are much more severe problems. And if you look, something is being done about those issues, with even special beta versions dedicated to them (unfortunately, so far without a real solution).
I agree on that part.
The thing that mainly bugs me is the not knowing (the transparency issue). If users report issues that were just introduced in a previous version, it would be nice to know whether they will be fixed in the next version or not (priority). A real bug tracker (Jira/DevOps) to which at least the beta testers also have access would remedy that situation.
Right now many users don't even know which version is a "good version" to use. We download a new version every week and have no idea what lurks beneath.
Yes, that is definitely handled badly here. Not only is the feedback quite often very sparse, but sometimes you'll even get two statements that differ considerably.
Take the speed issue in TVAI, with the Iris model being slow as a dead snail on Apple Silicon for several weeks now: the first statement was that the issue is known and being worked on; some time later I got another statement that sounded more like there are currently no plans whatsoever to solve the problem.
And, of course, some kind of bug tracker would be nice. Also, consistency in the release notes, so that issues listed for earlier versions don't suddenly "vanish" in later versions while also not appearing in the list of fixes (and still existing).
Some time ago a user shared a link where somebody mentioned that the AI engine on Apple Silicon is broken.
That was with DxO DeepPRIME.
I don't know if this is still current, but most people with problems seem to use Apple Silicon.
It worked well until Sonoma. And even with Sonoma it still worked without the artifacts if you chose to run the app under Rosetta (interestingly, without any noticeable speed loss).
Only since Topaz's fix for the visual artifacts has Iris been this ridiculously slow (even more so for the 2x model, which in some configurations is even slower than 4x, which should NEVER happen), while the other models run normally.
Oh, and not to forget: in versions before 3.4, Iris didn't have artifacts on Sonoma either.
So this is definitely a software problem that IS solvable. And even if they don't have time to fix it (there seems to be little to no macOS development at the moment), they could at least provide the older, fast models as a separate download for those who were not affected by the problem.
At least with Photo AI this looks different for us Mac folks: the app simply flies on an M1 Pro compared to a Windows PC with an i7 13770 / GTX 4600.
P.S.: And the only problem with Apple Silicon and TVAI is the speed of Iris. Other than that, it runs rock stable: none of the crashing / overheating / system hogging that I keep reading about from people running especially Intel/NVIDIA PCs (and I can confirm the heating and system hogging even on the not-so-extremely-high-end PC I have here).
I have a 7950X and a Quadro RTX 5000, and I don't have hangs, overheats, and so on.
But that's a workstation; both CPU and GPU use ECC RAM.
Nothing is overclocked.