TVAI friendlier with RDNA 3?

Apologies if this has been addressed elsewhere in the forums and I just haven’t found it, but could someone help me understand what’s “going on behind the scenes” in the software that seems to have (previously) resulted in the RDNA 3 architecture (meaningfully) outperforming Ada Lovelace?

Specifically referencing this analysis by Puget Systems from earlier this year: https://www.pugetsystems.com/labs/articles/topaz-ai-suite-nvidia-geforce-rtx-40-series-performance/

I’m really just curious to understand, broadly, why TVAI performed better with RDNA 3 despite there having been various TensorRT optimizations noted in the release changelogs.

Yes, I’m aware of the benchmarking section, but given the multitude of variables that affect performance from one system to another, I find those results less insightful. I’d generally just like to understand whether TVAI is fundamentally friendlier to the RDNA 3 architecture despite having functionality that is exclusive to NVIDIA hardware.

The Puget Systems benchmark for TVAI is outdated; they were using TVAI v3.0.11.

TVAI has used a new processing pipeline since v3.1, and speed has improved significantly in newer versions.
