Discussion | Compatibility | Performance Compared: 7900XT vs 4070 Ti

Hi,
I’m wondering whether the GPU model has any effect on the final image quality, not just processing speed, especially when using the Redefine model in Gigapixel AI.

For example, if I run the exact same image with the same settings on an AMD 7900 XT and on an NVIDIA 4070 Ti, will the final result look identical, or can there be visible differences in sharpness, detail, or AI interpretation due to differences in GPU architecture or supported features?

I’m not asking about render time, just final output quality.
Any insight or official confirmation would be appreciated!

Thanks.

Normally the result SHOULD look the same.

But seeing the quite different artifacts we get on different platforms, I tend to question that a bit myself.

I don’t think anyone has done thorough research on this, though.

When I switched from a W6800 to an RTX 4090, the image did look better on the 4090.

Better in terms of sharpness and detail.

I think that’s because the 4090 was using a Tensor FP32 model (for upscaling) while the W6800 was using an FP16 model.

At the time, running the Tensor FP32 model was about the same speed as the FP16 model, but with higher precision.

So the NVIDIA card was running a higher-precision model, while the AMD GPU was running a lower-precision one.
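
As a rough illustration (not Gigapixel’s actual pipeline, just a minimal NumPy sketch): the same dot-product accumulation, computed once in FP32 and once in FP16, drifts apart slightly. In a deep network those small rounding differences accumulate over many layers, which is the mechanism behind precision-related output differences:

```python
import numpy as np

# Minimal sketch: compare the same accumulation in FP32 vs FP16.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)
w = rng.standard_normal(100_000).astype(np.float32)

acc32 = np.dot(x, w)                                         # FP32 math
acc16 = np.dot(x.astype(np.float16), w.astype(np.float16))   # FP16 math

print(f"FP32 result: {acc32:.6f}")
print(f"FP16 result: {float(acc16):.6f}")
print(f"difference:  {abs(acc32 - float(acc16)):.6f}")
```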

I don’t know how this has changed for the AMD 7000 and 9000 series.

By the way, I would get an R9700 XT GPU instead of a 7900 or 40XX GPU.

Hello!

Due to computation differences at the platform level, results from certain hardware may vary slightly. We work with our partners to minimize these differences where possible. Models in the cloud run on optimal hardware setups that closely match our research environment.
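
If you want to check this on your own files, a quick sketch (hypothetical filenames; assumes both exports used the same settings and so have the same dimensions, and were saved losslessly as PNG):

```python
import numpy as np
from PIL import Image

# Load the same image upscaled on each GPU; int16 avoids uint8 wraparound
# when subtracting.
a = np.asarray(Image.open("upscaled_7900xt.png")).astype(np.int16)
b = np.asarray(Image.open("upscaled_4070ti.png")).astype(np.int16)

diff = np.abs(a - b)
print("max per-channel difference:", diff.max())
print("mean difference:", diff.mean())
print("differing pixels:", np.count_nonzero(diff.any(axis=-1)))
```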

Enjoy :tada:


The image will be the same, but NVIDIA has better AI performance.