I’d tackle the problem completely differently. If I had to produce 100 fps at a 2K or 4K upscale, I wouldn’t go with a single GPU but with a bunch of them across many machines, using whatever configuration has the best cost/performance ratio, even if that meant measly RTX 2080s.
This is also the de facto cloud (rental) setup, since it’d be on demand: you spin up the machines when you need some processing and tear them down when you’re done. That way you still get the “pay per use” cost model you’re looking for (renting).
Assuming a real-time use case where you want to stream the output live at 100 fps and can accept a second or so of delayed streaming start (still far lower than the typical buffering delay in broadcasting/streaming), I’d send chunks of the video to different machines and have them output PNGs or lightly quantized JPEGs (or, given the time to do it properly, split the output per GOP). Then I’d do a fan-in (the scatter-gather pattern), where the frames are encoded to the required bitrate (or simply spliced together, if using encoded GOPs) and streamed out to the distribution point (or to a single viewer, if this isn’t for live broadcast but just a personal rendition).
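The scatter-gather flow above can be sketched roughly like this. It’s a local simulation only: the chunk size, worker count, and the `upscale_chunk` stub are all illustrative assumptions, and a real deployment would replace the thread pool with remote workers that decode, upscale, and encode their chunk before the fan-in splices the results in order.

```python
# Sketch of the scatter-gather pattern: split a frame range into
# GOP-sized chunks, process each on a worker, then gather the results
# back in original order before splicing/streaming.
from concurrent.futures import ThreadPoolExecutor

CHUNK_FRAMES = 250  # e.g. one 10 s chunk at 25 fps (assumed chunk size)

def split_into_chunks(total_frames, chunk_frames=CHUNK_FRAMES):
    """Scatter: build (index, start, end) chunk descriptors."""
    return [(i, s, min(s + chunk_frames, total_frames))
            for i, s in enumerate(range(0, total_frames, chunk_frames))]

def upscale_chunk(chunk):
    """Stand-in for a remote worker: upscale frames [start, end)."""
    idx, start, end = chunk
    # A real worker would decode, upscale, and re-encode here.
    return idx, f"encoded-GOP[{start}:{end}]"

def scatter_gather(total_frames, workers=8):
    chunks = split_into_chunks(total_frames)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(upscale_chunk, chunks))
    # Gather: reassemble in original order, ready to splice and stream.
    return [payload for _, payload in sorted(results)]

stream = scatter_gather(total_frames=1000)
```

Since workers may finish out of order, the gather step sorts by chunk index; in a streaming setup you’d instead emit each chunk as soon as it and all its predecessors are ready.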
It’d be a whole lot cheaper, faster, and more efficient than trying to buy or rent an over-the-top monster of a machine or GPU, one that would still be bottlenecked and still wouldn’t meet OP’s stated performance objective…
For such a use case, cost per frame becomes incredibly important. That’s why most SaaS services that perform video processing charge per unit of processing time (e.g. per minute): resolution, frame rate, and type of processing yield a cost profile they know in advance, so they can offer a fixed price per minute for that kind of job.
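To make the per-minute pricing concrete, here’s a toy calculation. All the numbers (GPU throughput, hourly rental rate, margin) are made-up assumptions, not real Topaz or cloud figures; the point is only that a fixed price per minute of footage falls out once throughput for a given resolution/fps/model combination is known.

```python
# Illustrative per-minute pricing: known GPU throughput for a given
# resolution/fps/processing type makes the cost of one minute of
# footage predictable, so a provider can quote a flat rate for it.
def cost_per_output_minute(fps, gpu_fps, gpu_hourly_rate, margin=0.3):
    """Cost to process one minute of footage, with a profit margin."""
    frames = fps * 60                # frames in one minute of footage
    gpu_seconds = frames / gpu_fps   # wall-clock time on one GPU
    cost = gpu_seconds / 3600 * gpu_hourly_rate
    return round(cost * (1 + margin), 4)

# e.g. 100 fps footage, a GPU that upscales 12 fps at this resolution,
# rented at $0.50/hour (all assumed numbers):
price = cost_per_output_minute(fps=100, gpu_fps=12, gpu_hourly_rate=0.50)
```

Note the cost is the same whether one GPU grinds for 500 seconds or ten GPUs finish in 50: horizontal scaling buys latency, not savings, which is exactly why the distributed setup can hit the throughput target at roughly the same total spend.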
That was a bit of a tangent, but in short: if we’re talking about throughput orders of magnitude beyond what any single GPU and memory subsystem can deliver, as OP mentioned, then there’s no option but a distributed setup.
But a distributed setup is a bit of a gray area in terms of this specific software. I’m sure Topaz is looking for a way to monetize this use case and is playing with licensing options that would be neither too expensive nor leave “too much money on the table”. So even though it’s possible to do the above already (I’ve done it at a smaller scale in the lab, and use other software exactly that way in prod), Tony would likely wag a finger if he saw it deployed in the wild before his team has had time to catch up on the licensing front. It is the proper solution to the problem of scaling, though: horizontal, not vertical.