Hi,
I’m wondering whether the GPU model has any effect on final image quality, not just processing speed, especially when using the Redefine model in Gigapixel AI.
For example, if I run the exact same image with the same settings on an AMD RX 7900 XT and on an NVIDIA RTX 4070 Ti, will the final result look identical, or can there be visible differences in sharpness, detail, or AI interpretation due to differences in GPU architecture or supported features?
I’m not asking about render time, just final output quality.
Any insight or official confirmation would be appreciated!
Due to computation differences at the platform level, results from certain hardware may vary slightly. We work with our partners to minimize these differences where possible. Models in the cloud run on optimal hardware setups that closely match our research environment.
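As a side note for anyone curious why hardware can change the output at all: one common cause (an illustration, not a description of Topaz's internals) is that floating-point addition is not associative, so parallel reductions that accumulate values in a different order on different GPU architectures can produce slightly different numbers. A minimal sketch:

```python
# Illustration only: float addition is not associative, so the same
# values summed in a different order can give different results.
# Parallel GPU reductions may group operands differently per architecture.

vals = [1e16, 0.5, -1e16]

a = (vals[0] + vals[1]) + vals[2]  # 0.5 is lost: 1e16 + 0.5 rounds to 1e16
b = (vals[0] + vals[2]) + vals[1]  # large terms cancel first, 0.5 survives

print(a)  # 0.0
print(b)  # 0.5
```

Across millions of operations in a neural network, tiny discrepancies like this can accumulate into subtle pixel-level differences, which is consistent with the "may vary slightly" wording above.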