Hi all,
I’ve been using the Topaz Photo AI cloud API (api.topazlabs.com/image/v1) to batch-process images via the async endpoint (/enhance-gen/async), and I’m finding the results are consistently lower quality than what I get from the same models in the desktop Mac app with equivalent settings.
I’d appreciate any guidance on whether there are additional parameters I’m missing, or whether the cloud API genuinely runs different model weights / processing pipelines than the desktop app.
## My API Setup

**Wonder 2 (upscale)**
| Parameter | Value |
|---|---|
| model | Wonder 2 |
| output_width | original width * 1.5 |
| output_height | original height * 1.5 |
| output_format | jpeg (for high-res sources) / png (for low-res sources that go through a second pass) |
**Recovery V2 (face recovery, applied as a second pass for low-resolution sources)**
| Parameter | Value |
|---|---|
| model | Recovery V2 |
| face_enhancement | true |
| face_enhancement_strength | 0.20 |
| face_enhancement_creativity | 0.0 |
| face_enhancement_include_hair | true |
| face_enhancement_include_neck | true |
| output_format | jpeg |
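For reference, here's a minimal sketch of how I build the two parameter sets. The field names are exactly the ones in the tables above; treating them as flat JSON/form fields is my assumption about the request encoding:

```python
# Sketch of the two parameter sets, assuming the request body uses the
# parameter names from the tables above as flat key/value fields (the
# exact request encoding is an assumption on my part).

def wonder2_params(src_width: int, src_height: int, low_res: bool) -> dict:
    """Wonder 2 upscale parameters for one source image (1.5x)."""
    return {
        "model": "Wonder 2",
        "output_width": round(src_width * 1.5),
        "output_height": round(src_height * 1.5),
        # PNG for low-res sources so the Recovery V2 second pass
        # doesn't inherit JPEG artifacts from the intermediate file.
        "output_format": "png" if low_res else "jpeg",
    }

def recovery_v2_params() -> dict:
    """Recovery V2 face-enhancement parameters (second pass only)."""
    return {
        "model": "Recovery V2",
        "face_enhancement": True,
        "face_enhancement_strength": 0.20,
        "face_enhancement_creativity": 0.0,
        "face_enhancement_include_hair": True,
        "face_enhancement_include_neck": True,
        "output_format": "jpeg",
    }
```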
## Workflow

1. Submit all images to Wonder 2 at 1.5x upscale via the async endpoint
2. Poll `/status/{process_id}` until complete
3. For low-resolution sources (under 800px wide), download the Wonder 2 result as PNG, then submit it to Recovery V2 for face enhancement
4. Download final results via `/download/{process_id}`
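Concretely, the poll-and-download part of the loop looks roughly like this (stdlib only; the endpoint paths are the ones above, but the auth header name and the `"status"` response field/value are assumptions based on what I see in my responses):

```python
# Sketch of steps 2 and 4: poll /status/{process_id}, then fetch the
# result from /download/{process_id}. The auth header name and the
# "status" field/value in the response are assumptions.

import json
import time
import urllib.request

BASE = "https://api.topazlabs.com/image/v1"
API_KEY = "YOUR_API_KEY"  # placeholder

def _get(url: str) -> bytes:
    req = urllib.request.Request(url, headers={"X-API-Key": API_KEY})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def status_url(process_id: str) -> str:
    return f"{BASE}/status/{process_id}"

def download_url(process_id: str) -> str:
    return f"{BASE}/download/{process_id}"

def wait_and_download(process_id: str, poll_interval=5, timeout=600) -> bytes:
    """Poll the status endpoint until complete, then return result bytes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        body = json.loads(_get(status_url(process_id)))
        if body.get("status") == "complete":  # assumed field/value
            return _get(download_url(process_id))
        time.sleep(poll_interval)
    raise TimeoutError(f"process {process_id} did not complete in {timeout}s")
```

For the low-res sources I just run this twice: once for the Wonder 2 pass (saving the PNG), then again after resubmitting that PNG to Recovery V2.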
## What I’m Seeing
- The desktop app produces noticeably sharper detail, better texture preservation, and more natural-looking face recovery than the API with the same model names and comparable settings
- Fine detail (hair, fabric texture, skin pores) comes through much better in the desktop app
- Face recovery from the desktop app looks more natural — the API results sometimes look slightly softer or more smoothed-over in comparison
- The difference is consistent across multiple images and source resolutions
## Questions
- Are the model weights used by the cloud API the same version as the current desktop app? Or does the API lag behind in model updates?
- Are there additional parameters available on the API that I’m not using (e.g., sharpening strength, noise reduction level, specific model version selection) that could close the quality gap?
- Does the desktop app apply any additional post-processing steps (e.g., auto-sharpening, auto-denoise) that aren’t exposed or aren’t enabled by default on the API?
- Is there a way to specify which model version to use (e.g., if the desktop app has updated to a newer Wonder 2 revision)?
- For the Recovery V2 face enhancement — are there parameters I’m missing that could improve the result? The desktop app seems to do a better job even at the same strength setting.
Any pointers from the team or other API users would be really helpful.
Thanks!