So if you put two GPUs in one machine, you can open two instances of TVAI and tell each one to use a different GPU. This could work well if you have a 16-core or larger CPU, since TVAI needs a lot of CPU power too.
There is the option to use both GPUs in one instance of TVAI, but it might underperform. I don’t know of anyone who has shared detailed speed results from such a setup, so I only have my own to go on. If I enable the iGPU on my CPU and tell TVAI to use all GPUs, I get significantly slower results. That could be because the iGPU is so much slower that it drags the dedicated one down.
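The two-instance approach can be sketched as a small launcher script. This is only a minimal sketch under assumptions: the clip names and `cmd_template` are hypothetical placeholders for whatever command actually drives TVAI, and each worker is pinned to one GPU with NVIDIA's standard `CUDA_VISIBLE_DEVICES` environment variable.

```python
import os
import subprocess

def assign_clips(clips, n_gpus):
    """Round-robin the clip list so each GPU instance gets its own queue."""
    queues = {gpu: [] for gpu in range(n_gpus)}
    for i, clip in enumerate(clips):
        queues[i % n_gpus].append(clip)
    return queues

def launch_worker(gpu, clip, cmd_template):
    """Start one process pinned to a single GPU via CUDA_VISIBLE_DEVICES.

    cmd_template is a placeholder list like ["my_upscale_cmd"]; substitute
    your real TVAI/ffmpeg invocation here.
    """
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    return subprocess.Popen(cmd_template + [clip], env=env)

if __name__ == "__main__":
    queues = assign_clips(["a.mp4", "b.mp4", "c.mp4"], 2)
    print(queues)  # {0: ['a.mp4', 'c.mp4'], 1: ['b.mp4']}
```

Each worker process then only sees "its" GPU as device 0, which is what keeps the two instances from stepping on each other.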
I use a dual GPU setup: 2x 3080ti.
With Gaia it gives double the speed. If I start two instances with one GPU each, it is actually slower overall.
When I look at the benchmarks here, 2x 3080 Ti beats a 4090, but only with Gaia.
Using two GPUs in one instance should be a lot faster for high-resolution videos, since you are throwing double the amount of VRAM at it (assuming the two GPUs are equal). For low-resolution videos, using a separate GPU for each instance would end up being faster. Just educated guesses on my part.
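To put rough numbers on the VRAM intuition above: a back-of-envelope sketch of raw frame-buffer size, assuming fp16 samples and three channels. This ignores model weights and activations, which dominate TVAI's real VRAM use, so the figures are illustrative only.

```python
def frame_mb(width, height, channels=3, bytes_per_sample=2):
    """Rough memory for one uncompressed frame buffer (fp16 samples assumed)."""
    return width * height * channels * bytes_per_sample / 2**20

# A 4K output buffer is ~4x the 1080p input buffer, so high-res jobs
# benefit far more from extra VRAM than low-res ones do.
print(round(frame_mb(1920, 1080), 1))  # 11.9 (MB per 1080p frame)
print(round(frame_mb(3840, 2160), 1))  # 47.5 (MB per 4K frame)
```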
I’ve found that 4K upscaling of SD material is much faster than of HD material with 3.3.10 and prior versions.
Thanks for responding. That’s really interesting to know. It says your CPU has 8 cores; I wonder how it would fare if you had one with 16. I have 12 cores, and it looks like TVAI makes heavy use of 8 and lighter use of 2 more (around ~75% instead of ~100%). Because of that, I suspect 20 cores would be ideal for two GPUs. (With a few ultra-fast storage drives, of course.)
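The 20-core guess is just the observed per-instance load doubled; the utilization figures are the poster's observations, not measurements of any kind:

```python
# Observed load for one TVAI instance: ~8 cores near 100% plus ~2 more near 75%
cores_per_instance = 8 * 1.00 + 2 * 0.75   # about 9.5 "full" cores
print(cores_per_instance * 2)              # 19.0 -> roughly 20 cores for two GPUs
```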
Neat Video is a great plug-in, yes, although it works mostly by sampling the noise pattern and removing it, rather than reconstructing detail after heavy compression artifacts the way Topaz does.
The Resolve Studio version has pretty good noise reduction as well, but it’s not as fast, versatile, or easy to use as Neat Video. With Resolve you have to tweak the settings for each shot and eyeball it; with Neat Video you can sample from an area and let the plug-in build a profile for you, with a bunch of other settings you can include to clean up various problems in the video. So while simple noise reduction is less of a problem, eventually you find the extra money for Neat Video is worth it. It’s also better optimized, so it’s faster to render, which can be a big time saver on longer noisy sequences.
Please show me the part where this is true.
Topaz Video AI v3.2.2
System Information
OS: Windows v11.2009
CPU: AMD Ryzen 9 7950X 16-Core Processor 63.14 GB
GPU: NVIDIA GeForce RTX 4090 23.59 GB
GPU: AMD Radeon(TM) Graphics 0.47446 GB
Processing Settings: device: 0 vram: 1 instances: 1
Input Resolution: 1920x1080
Benchmark Results
Artemis 1X: 41.76 fps 2X: 14.36 fps 4X: 3.61 fps
Proteus 1X: 36.04 fps 2X: 17.44 fps 4X: 3.56 fps
Gaia 1X: 15.78 fps 2X: 10.81 fps 4X: 4.59 fps
4X Slowmo Apollo: 25.73 fps Chronos: 33.05 fps Chronos Fast: 37.30 fps
Topaz Video AI v3.2.2
System Information
OS: Windows v11.2009
CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz 127.91 GB
GPU: NVIDIA GeForce RTX 3080 Ti 11.816 GB
GPU: NVIDIA GeForce RTX 3080 Ti 11.816 GB
Processing Settings: device: 2 vram: 1 instances: 1
Input Resolution: 1920x1080
Benchmark Results
Artemis 1X: 23.97 fps 2X: 9.79 fps 4X: 2.69 fps
Proteus 1X: 16.31 fps 2X: 8.51 fps 4X: 2.54 fps
Gaia 1X: 14.15 fps 2X: 9.23 fps 4X: 3.31 fps
4X Slowmo Apollo: 18.70 fps Chronos: 13.75 fps Chronos Fast: 18.75 fps
Are you seriously comparing an i7-10700K with a 7950X?
8 cores versus 16 cores of the latest generation?
What do you think 2x 3080 Ti would do in combination with a 16-core?
And as I explicitly said: it’s about Gaia! That is mainly GPU-heavy.
In addition, my system is undervolted, which I had also written in one of the 2 benchmarks here.
OK, so why did you bother ranting about the CPU differences then? Show me two 3080 Tis beating a 4090 on any CPU. Otherwise your claim is nonsense.
My current benchmark without overclocking; I don’t know anything about that. Look at the GAIA values!
Topaz Video AI v3.3.10
System Information
OS: Windows v11.22
CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz 127.91 GB
GPU: NVIDIA GeForce RTX 3080 Ti 11.816 GB
GPU: NVIDIA GeForce RTX 3080 Ti 11.816 GB
Processing Settings
device: 2 vram: 1 instances: 1
Input Resolution: 1920x1080
Benchmark Results
Artemis 1X: 20.20 fps 2X: 09.63 fps 4X: 02.64 fps
Proteus 1X: 17.01 fps 2X: 07.23 fps 4X: 02.45 fps
Gaia 1X: 16.24 fps 2X: 09.42 fps 4X: 02.70 fps
4X Slowmo Apollo: 17.01 fps APFast: 40.53 fps Chronos: 17.58 fps CHFast: 20.88 fps
David.123: “Show me 2 3080Tis beating a 4090 on any CPU”
Here is someone with a 4090 and an 8 core CPU:
Well then, that’s fast, especially for Gaia 1x!
Thanks for posting proof.
Hello Imo,
It would be very interesting to see what your results look like when running 2 instances of Topaz AI at 24 cores.
It seems that, especially at high resolutions, the number of cores has a strong positive influence, while pure GPU performance becomes less of a factor.
Just for feedback: I recently purchased a PC with the latest tech, so I’m sharing my benchmark here.
Topaz Video AI v3.3.10
System Information
OS: Windows v11.22
CPU: 13th Gen Intel(R) Core(TM) i9-13900KF 127.79 GB
GPU: NVIDIA GeForce RTX 4090 23.59 GB
Processing Settings
device: 0 vram: 1 instances: 0
Input Resolution: 1920x1080
Benchmark Results
Artemis 1X: 40.53 fps 2X: 16.74 fps 4X: 04.64 fps
Proteus 1X: 34.45 fps 2X: 15.61 fps 4X: 04.46 fps
Gaia 1X: 15.47 fps 2X: 10.60 fps 4X: 04.88 fps
4X Slowmo Apollo: 42.07 fps APFast: 74.15 fps Chronos: 33.75 fps CHFast: 34.97 fps
@gregory.maddra Would it be possible to introduce 1.5x upscaling on the upscaling filters themselves?
I have some stereoscopic 180-degree videos at 5760x2880 px, but upscaling 2x would result in >8192 px, which the NVIDIA encoder can’t handle, and upscaling to 11520x5760 only to resize to 8192x5760 before encoding would waste resources.
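For a concrete number, here is the largest uniform scale factor that keeps both dimensions of that source within an encoder limit, assuming the NVENC cap sits at 8192 px per dimension:

```python
def max_scale(width, height, limit=8192):
    """Largest uniform scale factor keeping both output dimensions within the limit."""
    return min(limit / width, limit / height)

# 5760x2880 stereoscopic source against an assumed 8192 px per-dimension cap
print(round(max_scale(5760, 2880), 3))  # 1.422
```

Under that assumption even a fixed 1.5x option would still overflow the width (5760 * 1.5 = 8640 px), so an arbitrary-scale or scale-to-fit option would be the more general fix.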
This topic was automatically closed after 69 minutes. New replies are no longer allowed.