Fixing these issues and further advancing the core features (upscale, DeNoise, Sharpen) is where the focus needs to be.
And not on new photo-editing features that are already covered by numerous other products (Adobe PS/LR, Capture One, Luminar, Lama Cleaner, and so on).
Building a jack-of-all-trades product is not a good idea; Upscale, DeNoise, and Sharpen will suffer greatly from that (they already do).
Many forum users have already stated that they will not renew their subscriptions the way things are going now. I would think that keeping the users you already have (who have experience with the product and give feedback) should be the highest priority.
IF ANYONE FEELS THE SAME WAY, PLEASE VOTE BY CLICKING ON THE TOP LEFT VOTE COUNTER.
P.S. If I were in charge of the product (which obviously I am not), I would immediately halt all new features and re-plan future iterations around fixing the existing problems and enhancing core functionality only, e.g. a few weeks focused just on blurry patches, a few weeks on batch processing, and so on. I would gather feedback from users/beta testers on each issue (in a separate thread) to get things right, and also develop extensive integration/functional tests for all core features so that they "stay OK" in every future release to come. Only after that is complete and working properly (for the majority of users) would I permit new features in the planning of future iterations.
This issue is very important to long-time users of the 3 original products and, later, Photo AI.
I feel we have become test users for the developers. Rather than using a stable product, we have to endure poor performance of the core functionality, with each iteration bringing recurring or new issues, as listed above.
I find that I now go back to Noise Reduction AI in Lightroom, which gives repeatable and reliable results, more often than I want to.
Please TOPAZ take this as a genuine concern of users. We are tired of what we are seeing.
It’s been interesting to watch TL over the years trying to find their place in the market. They’ve drifted back and forth between discrete products (plug-ins, and apps) that perform individual specialty functions better than anyone else, and larger integrated platforms that bring all their tools together so the user workflow is all doable within their single product. The market will keep pushing them toward the latter, but they will continue to find the competitive landscape punishing them for the effort because that is where the big players will always dominate. Sticking with the more focused model of producing leading edge single products is an expensive race to stay far enough ahead that people will pay for the specialty tool that’s not all that convenient to use.
p.s. The cost of trying to continually make software better enough to maintain sales revenue is what drives companies to adopt the subscription model.
I don’t want Photo AI to turn into an image editor—I already have apps for that (Affinity Photo and Adobe Photoshop). I’d much rather Topaz work on improving the upscaling functionality over adding more editing features.
I assume that most users of Topaz software already have access to image editing apps, even if it’s a default app like Apple Photos. So in my view there’s little need to reinvent the wheel.
I’ll add a few more features that I think should be added for a better user experience:
Gigapixel 4.4’s UI was my favorite, as it had the list view (although I think not as compact as the link) and you could add additional images while the app was processing. So I could add a few hundred images at once, start processing them, and add a few hundred more images during processing. Repeat several times to process thousands of images in one session without having to select all of them at the start.
A way to reorder images (and videos in VAI). That’s a basic feature, folks! Handbrake got it right with the file browser-esque queue for two reasons: 1. It lets you reorder videos, and 2. It is similar to the list view in a typical file browser, so people automatically know how to use it even if they’ve never opened Handbrake before. In contrast, Topaz’s interface has a learning curve which is in my view unnecessary.
A setting to automatically skip Recover Faces for faces above ~100 pixels high (unless manually overridden). In my experience, this feature generally makes faces better if they’re smaller than ~100 pixels and worse if they’re larger. My proposed setting is far from perfect, but better than what we have currently.
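The proposed rule is essentially one conditional. Here is a minimal sketch of the idea; the ~100 px threshold, the function name, and the override flag are all my hypothetical illustrations, not anything from Topaz's actual code or API:

```python
# Hypothetical cutoff based on the observation above: Recover Faces tends to
# help faces smaller than ~100 px and hurt larger ones.
FACE_HEIGHT_THRESHOLD_PX = 100

def should_recover_face(face_height_px: int, manual_override: bool = False) -> bool:
    """Apply Recover Faces only to small faces, unless the user overrides."""
    if manual_override:
        return True
    return face_height_px < FACE_HEIGHT_THRESHOLD_PX

print(should_recover_face(60))          # True  - small face, likely improved
print(should_recover_face(240))         # False - large face, likely degraded
print(should_recover_face(240, True))   # True  - user override wins
```

A single fixed threshold is crude, as the post concedes, but it captures the heuristic: skip the model where it is known to do more harm than good, while keeping a manual escape hatch.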
I really get the impression that the Topaz developers, in general, are not heavy users of the software. Even minor choices, such as the frames-per-second display in VAI (which I believe is now the default), indicate the misalignment. Seconds per frame is the more reasonable and natural display of processing speed.
Here’s my answer to the “if I were in charge” question. My priorities would be, in order:
1. Pause all new features.
2. Fix most of the bugs and serious issues (e.g. fix the blurry patches, add the abovementioned Recover Faces workaround).
3. Add features and improvements that are directly related to upscaling, noise reduction, and sharpening.
4. Add features (not in step 3) that help turn a photo into a closer depiction of reality.
After that, start to add new features, including some that were paused in step 1, based on their proximity to the “core features” of upscaling, noise removal, and sharpening. For each feature or improvement, I would ask the following question: would this feature improve image upscaling? If yes, it would be a high priority. If no, it should be pushed back.
For example, improving the upscaling models would be crucial.
What about noise reduction and sharpening? Well, overly noisy images cannot be upscaled well, so improving the denoise models also improves upscaling. Ditto with sharpening. A very blurry image that is upscaled simply results in another very blurry image. Basically, good upscaling requires some denoising and sharpening.
Adjusting color and lighting generally does not help improve the upscaling, so these features can be placed on the back burner. Instead, they could be replaced with some related features I would prioritize after the usual upscaling/denoise/sharpening but before the color/lighting adjustments.
AI-based conversion to HDR increases the “color resolution” of an image, which I would say is a kind of upscaling. I would use this feature to fix images with the sky blown out, etc.
Using AI to create a depth map increases the “depth resolution” of an image from 1 depth level (in a normal image) to several. (I think VAI’s frame interpolation feature is good for the same reason.)
Once the features in step 3 are introduced and fine-tuned, my secondary, broader goal of Photo AI would be to turn a source image to something as close to reality as possible. Everything in step 3 is in service of this goal, but it also encompasses several other features such as the following:
Image extension using generative AI.
Object removal is harder to justify (even under this broad goal), as it involves removing an object that exists in reality. One could argue that one wants to see the “reality” of what’s behind the object, or to remove moving people and cars near a monument at a tourist site to show the “reality” of the monument without human interference, but in my opinion, this feature is better suited to an image editing app.
I am voting with my wallet and not resubbing. Though I will surely be replaced by someone else who fell for a Facebook sales ad and influenced press-copy “reviews”. Maybe they will learn that it costs more to acquire new customers than to keep old ones.
You mean white balance for JPEGs, for people who don’t spend 300€ a year on image-editing software?
Yes, Adobe has this and it’s a killer feature, an everyday tool used on almost every image for removing unwanted objects. But Adobe needs to call its cloud to run it, because not everyone has a 10-teraflop GPU with 12 GB of VRAM.
It’s the new and only “feature” of Photo AI 2.0.3, see here:
I personally don’t spend 300€ a year on cloud software. I use products that I can actually buy without the constraint of a subscription (like Affinity Photo, Capture One and Luminar). But that’s a personal choice. If you need the best of the best and it’s your core profession, I suppose there’s nothing to be said against cloud and subscriptions; after all, it’s just another business expense.
One can use Lama Cleaner; it’s incredible and it’s absolutely free. It has dozens of models, and its core feature is object replacement. Of course, if one needs the best thing out there, it’s something like your sample image. I would think Adobe has entire development teams on just this matter, because they can. And I don’t see that here with Photo AI… Not by a long shot.
Why would anyone want to re-invent the wheel in Photo AI for this? (You won’t get anywhere near the products already out there.) Implementing it in Photo AI takes away focus from the important issues, the core features, and raises the complexity of the application.
Gigapixel AI, Sharpen AI and DeNoise AI were incredible programs. Great UI, multi editing, decent batch processing. All we needed was one application that combined these three existing AI features, so that one doesn’t need to use 3 programs in a row for each picture. And that’s what I hoped Photo AI to be.
But one year has already passed, and Photo AI is not anywhere near the 3 old programs, while development on the old programs has completely stopped. Every week, new bugs and recurring old ones surface, and there is no reliable version in sight. Major problems like the blurry-patches issue are continually swept under the carpet.
And then in every release I read about even newer features (keyboard, coloring, …)
How can you develop new features when there are still so many open issues?
New features just increase the program’s complexity even further.
I bought this product, like many of the people who voted in this post. I am not a beta tester and I am not paid by Topaz Labs. But the way things have gone over the last 12 months, I feel like a beta tester. And that shouldn’t be the case. 18 votes in just one day says something; I am not the only one thinking this way.
For the interest of the commenters on that thread, here’s why I believe that SPF is the more “natural” unit over FPS.
Topaz displays, based on the user’s choice, frames or timecodes.
With SPF it is easy to figure out how long a process will take: just multiply the SPF by the total number of frames (or by the number of seconds in the video times the FPS—of the video, not the processing FPS).
With FPS, the calculation is almost identical: divide the total number of frames by the FPS. But multiplication is a more fundamental operation than division, so it makes a bit more sense to use multiplication.
The more important benefit of SPF is that it makes calculations with multiple videos easier. Suppose that I have two videos:
Video A with 100 frames that is processed at 0.1 SPF or 10 FPS.
Video B with 100 frames that is processed at 0.2 SPF or 5 FPS.
What is the average speed of both videos and how long will it take to process them sequentially?
With SPF the calculation is simply the arithmetic mean of the two SPF values, (0.1 SPF + 0.2 SPF) / 2 = 0.15 SPF average, and the total time is (0.15 SPF) × (200 frames) = 30 seconds.
The same formula for FPS gives a (10 FPS + 5 FPS) / 2 = 7.5 FPS average, which is incorrect—it results in (200 frames) / (7.5 FPS) = 26.7 seconds. The correct formula uses the harmonic mean, which is the average that one should use for rates, like VAI processing speeds. Then one obtains the correct value of 2 / [1 / (10 FPS) + 1 / (5 FPS)] = 6.67 FPS and (200 frames) / (6.67 FPS) = 30 seconds.
A quicker way to find the time taken using FPS is by the following formula: (100 frames) / (10 FPS) + (100 frames) / (5 FPS) = 30 seconds. That’s the same as the SPF formula but with division signs instead of multiplication signs.
The same situation occurs if we change the length of Video B so that it has 300 frames.
With SPF, we have an average of [(0.1 SPF) × (100 frames) + (0.2 SPF) × (300 frames)] / (400 frames) = 0.175 SPF and a total time of (0.1 SPF) × (100 frames) + (0.2 SPF) × (300 frames) = 70 seconds.
With FPS, we have an average of (400 frames) / [(100 frames) / (10 FPS) + (300 frames) / (5 FPS)] = 5.71 FPS and a total time of (100 frames) / (10 FPS) + (300 frames) / (5 FPS) = 70 seconds.
The calculations for the total time are similar whether SPF or FPS is used, but calculating averages is more complicated for FPS due to the use of the harmonic mean vs. the arithmetic mean.
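The two worked examples above can be condensed into a few lines of Python. This is just an illustration of the SPF-vs-FPS averaging argument (the helper function and variable names are mine, not anything from the Topaz apps): the frame-weighted average SPF is a plain arithmetic mean, while the correct average FPS falls out as the frame-weighted harmonic mean, i.e. total frames divided by total seconds.

```python
def totals(videos):
    """videos: list of (frame_count, seconds_per_frame) pairs.

    Returns (total_seconds, average_spf, average_fps).
    """
    total_frames = sum(frames for frames, _ in videos)
    total_seconds = sum(frames * spf for frames, spf in videos)
    avg_spf = total_seconds / total_frames  # frame-weighted arithmetic mean of SPF
    avg_fps = total_frames / total_seconds  # frame-weighted harmonic mean of FPS
    return total_seconds, avg_spf, avg_fps

# Equal-length case: two 100-frame videos at 0.1 SPF (10 FPS) and 0.2 SPF (5 FPS).
print(totals([(100, 0.1), (100, 0.2)]))   # 30 s total, 0.15 SPF, ~6.67 FPS

# Unequal case: Video B extended to 300 frames.
print(totals([(100, 0.1), (300, 0.2)]))   # 70 s total, 0.175 SPF, ~5.71 FPS
```

Note that naively averaging 10 FPS and 5 FPS to 7.5 FPS would give the wrong totals, exactly as the post describes, whereas averaging the SPF values needs nothing more exotic than a weighted mean.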
The online version seems to be limited to a single model (if I click on Settings I can only select “lama”), so I recommend installing it locally.
I’ve had OK results with the tool. But don’t expect object removal at the level of the picture TPX posted. For something like that you need a cloud service like Adobe’s.
P.S. If you are already unsatisfied with the results of Lama Cleaner, a project years in the making that focuses solely on object removal, imagine what the implementation in Photo AI will be like. I have no doubt that if Topaz Labs were to build an extra product focused on object removal (with resources, …) they would do a good job. But I can’t imagine such functionality being developed alongside Photo AI. It will just take resources away from the things that matter, and in the end there will be a basic object removal, and people will post bugs and suggestions for that additional feature as well, resulting in even less time for the core features and their bug fixes.