My observations and recommendations

I apologize to the moderators for opening another topic after yesterday’s. I didn’t express myself well in the previous one and didn’t showcase good examples or my current workflow, so I deleted the post.

In essence, I believe all Topaz AI programs are excellent and well-designed for their specific purposes, and they strive to attract end users with the capabilities they offer.

However, I have some “concerns” and requests for the Topaz team:

  1. In a future Photo AI version, when the user chooses the upscale option for an image, please introduce a new denoise model trained specifically and exclusively for upscaling.

Currently, it seems to me that pairing the denoise model with the High Fidelity, Standard, and Low Resolution models can sometimes yield unpredictable results in terms of texture and detail loss, and it requires longer experimentation with settings, which gets tiring when you're working through a batch of photos.

I’d like this to work similarly to the “blind” denoise models I present in this post. It should be controllable in batch mode, with an option to preserve details and blend at a chosen percentage, and it should remove noise while retaining detail without over-sharpening. This would benefit textures and smaller images most, since they really lose detail when denoising is enabled with the current models. It would also help with high-noise images, which give unpredictable results when the AI denoise model is enabled automatically alongside upscaling.

  2. Alternatively, you could introduce more robust upscale models with a wider range of noise removal and detail preservation.
  3. In my opinion, it would be a big boost to further train the remaining Gigapixel models that carry over from version to version (except Standard and HQ).
  4. It would be fantastic to bring back the 1x upscale models because they can be really helpful. I understand to some extent why they’re not in PAI, but why aren’t they in Gigapixel 7.0? The difference in 1x upscale between PAI/GPx 7.0 and GPx 6.3.3 is significant, especially on smaller images where downscaling is noticeable.

Currently, until I find a better way, I’m using open-source blind-denoise models (a SCUNet GAN model and a Uformer model) in chaiNNer, where I roughly set the blend percentage, sometimes favoring SCUNet, which tends to remove more noise. For older and scanned images, I definitely give the SCUNet output a higher percentage. This is my pre-processing step before loading the images into Gigapixel AI. There I mostly use the HQ model, but sometimes the Standard model. I’ve noticed that the Low Resolution model creates small artifacts that become visible when the output image is zoomed and compared in a higher-quality viewer like Adobe Bridge or XnView. I will attach some examples.
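For anyone curious what the blending step above amounts to, here is a minimal sketch in plain NumPy: a weighted average of the two denoised outputs at a chosen percentage. The `blend` function, the array values, and the 70/30 split are illustrative assumptions of mine, not a Topaz or chaiNNer API.

```python
import numpy as np

def blend(denoised_a: np.ndarray, denoised_b: np.ndarray, a_pct: float) -> np.ndarray:
    """Weighted average of two denoised images; a_pct (0-100) favors model A."""
    w = a_pct / 100.0
    out = w * denoised_a.astype(np.float32) + (1.0 - w) * denoised_b.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy 2x2 grayscale "images" standing in for the two models' outputs.
a = np.full((2, 2), 100, dtype=np.uint8)   # e.g. SCUNet output
b = np.full((2, 2), 200, dtype=np.uint8)   # e.g. Uformer output

blended = blend(a, b, 70)   # favor the stronger-denoising model at 70%
print(blended[0, 0])        # 0.7*100 + 0.3*200 = 130
```

In a real workflow the same weighting is done per-pixel on full RGB images, so raising the SCUNet percentage for old scans is just moving `a_pct` up.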

Beyond Upscaling:

  1. Grain: This option, long present in Topaz Labs’ Video AI and Sharpen AI, would be a valuable addition for adding subtle texture and realism to upscaled images.
  2. Multi-Page Preview: This feature would allow for a side-by-side comparison of different AI models and settings, streamlining the selection process and improving efficiency.
  3. Basic Color Grading Presets: Presets for common color adjustments like warmth, vibrancy, and contrast would enhance user-friendliness and provide a starting point for further customization.
  4. Optional White Balance/Exposure Correction in Autopilot: While not fully trained, these corrections could be helpful in batch mode, with the option to easily disable them.
  5. Output Color Space Selection: As featured in previous versions of Gigapixel, Sharpen AI, and Denoise AI, this option would allow users to choose the color space for the output file, ensuring compatibility with their preferred workflows.
  6. DNG and Non-RAW Export: Expanding export options beyond RAW would cater to a wider range of users and software.
  7. Advanced Autopilot Sync in Batch Mode: Going beyond simple copying of settings, this feature would offer more granular control in batch processing. Users could select specific sections (e.g., sharpness, noise reduction) and have the Autopilot operate in a sub-mode for those sections, allowing for fine-tuned adjustments while maintaining the efficiency of batch processing. This aligns with the approach taken by many modern image editors. It’s crucial to retain manual control over face recovery, enabling users to precisely refine facial details as needed.

Don’t forget to vote!


I added a few things to the post, hope that’s okay.