Best NVIDIA GPU specs for Video Enhance AI?

Short Version:

What NVIDIA GPU specs are most important for Video Enhance AI performance, i.e., the quickest run times? Consider two cases: (1) 480p-1080p and (2) 1080p-8K. (My guess is that different use cases are bottlenecked by different specs, but that’s just a wild guess…)
Ideally, rank the top three specs.
(For example: 1: Number of tensor cores; 2: Video RAM; 3: Memory interface (bandwidth).)

Long Version:
I’ve been looking for this information, and it does not appear to be available on the Topaz forums or on third-party review sites. I’m well aware of the suggested minimum requirements, but those recommendations don’t explain why specific cards are better (or offer any frame of reference for comparing cards theoretically).

NVIDIA will be releasing a new line of cards starting next month, Sept. 2020, and many users might be considering upgrading soon. Without knowing which specs affect performance, we have no way of making an informed decision about the best upgrade. For example, maybe the RTX 2080s will perform similarly to the “3080s” (or whatever they end up being called), in which case it would make more sense to find a good deal on a 2080 than to buy the newest model.

Secondly, maybe, all things being equal, VRAM ends up being the bottleneck if your workflow tends to include 4K and 8K upscaling. In that case, you might be better off prioritizing a model with more VRAM (or a wider memory bus); a rough back-of-envelope estimate is sketched just below. But I don’t really know enough to make an educated guess, at least not enough to warrant spending hundreds of dollars “just to find out”.
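(For a rough sense of why VRAM matters more at higher resolutions, here’s a back-of-envelope sketch. The assumption that a frame is held in GPU memory as a 3-channel FP32 tensor is mine; Topaz hasn’t documented the program’s actual memory layout, and the real footprint also includes model weights and intermediate activations.)

```python
# Back-of-envelope VRAM cost of holding one uncompressed frame on the GPU.
# Assumes 3 color channels stored as 32-bit floats (FP32); this is a guess
# at how an AI model might represent a frame, not Topaz's documented layout.

RESOLUTIONS = {
    "480p":  (854, 480),
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

BYTES_PER_PIXEL = 3 * 4  # 3 channels x 4 bytes each (FP32)

for name, (w, h) in RESOLUTIONS.items():
    mb = w * h * BYTES_PER_PIXEL / 1024**2
    print(f"{name:>5}: {mb:7.1f} MB per frame")
```

Under those assumptions a single 8K frame is already about 380 MB, and the intermediate activations inside the network can be several multiples of that, so it’s easy to see how an 8 GB card could hit its memory limit before it runs out of CUDA throughput.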

I got a response from support (Thanks, Ben!), and I thought I’d share here what he said would be making its way to the FAQ.

The speed and number of CUDA cores on the GPU are the most important specs. CUDA cores are sometimes called “stream processors,” though that term is more often used for AMD GPUs; the concepts are analogous, but the underlying processor architectures are entirely different.
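(If you want to check the clock speed and VRAM of the card you already have, something like the sketch below works on any machine with the NVIDIA driver installed. Note that nvidia-smi does not report the CUDA core count, so that number still has to come from NVIDIA’s spec sheet for your model.)

```python
# Query the installed GPU's name, maximum SM clock, and total VRAM.
# nvidia-smi does not expose the CUDA core count, so look that up
# on NVIDIA's spec sheet for your card.
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,clocks.max.sm,memory.total",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```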

(He also mentioned efforts to use more of the GPU processing power in future updates to the program. So, there may be even more untapped potential in some of the newer architecture cards.)

I currently have a GTX 1080 with 2560 CUDA cores. I’ve read that the current-gen (20xx) Turing-architecture GPUs made several improvements to the CUDA architecture itself, so comparing raw core counts between a 1080 and a 2080 may not be apples-to-apples (a quick illustration follows below). Hopefully that helps anybody who is considering an upgrade.
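(To make that concrete, here’s a naive, architecture-blind comparison using the common rule of thumb that each CUDA core retires one fused multiply-add, i.e. two floating-point operations, per clock. The core counts and boost clocks below are NVIDIA’s published reference specs.)

```python
# Theoretical FP32 throughput from CUDA core count and boost clock.
# Rule of thumb: TFLOPS ~= 2 ops (one FMA) x cores x clock (GHz) / 1000.

CARDS = {
    "GTX 1080 (Pascal)": (2560, 1.733),  # cores, boost clock in GHz
    "RTX 2080 (Turing)": (2944, 1.710),
}

for name, (cores, ghz) in CARDS.items():
    tflops = 2 * cores * ghz / 1000
    print(f"{name}: {tflops:.1f} theoretical FP32 TFLOPS")
```

By this naive count the 2080 comes out only about 13% ahead (roughly 10.1 vs 8.9 TFLOPS), yet Turing’s concurrent FP32/INT32 execution and its tensor cores can make the real gap in AI workloads considerably larger, which is exactly why raw core counts don’t compare cleanly across generations.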

See Tom’s Hardware for the latest rumors on what NVIDIA may reveal on 8/31.