In my standard tests in v3.3.10, Dione and Dione X2 both processed 576i -> 1080p at around 150+ fps (i7-13700K with RTX 4090, 64 GB RAM running at 5.4 GHz, and all files on high-end NVMe drives).
Now, in v3.4.0, exactly the same processing runs at only 57 fps.
I did not notice whether 3.4.0 downloaded replacement models or not (it only takes a couple of seconds on my system, and the download notice is not especially prominent).
VEAI max processes is set to 4 with 100% use permitted (where I’ve left it since commissioning the RTX4090 system).
Watching the GPU load, I think (but I'm not 100% sure) that 3.4.0 runs the GPU at about 75%, while under 3.3.10 it was running at 100%. The CPU is not stressed (only about 50%), nor is RAM (about 20% used).
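If anyone wants to confirm the GPU-load difference with numbers instead of eyeballing Task Manager, here is a minimal sketch. It assumes NVIDIA's standard `nvidia-smi --query-gpu=utilization.gpu --format=csv` output; the parsing helper and polling wrapper are just illustrative, not anything Topaz ships:

```python
import subprocess

def parse_gpu_util(csv_text: str) -> list[int]:
    """Parse nvidia-smi 'utilization.gpu' CSV output into percentages."""
    rows = csv_text.strip().splitlines()[1:]  # skip the CSV header line
    return [int(r.strip().rstrip(" %")) for r in rows]

def query_gpu_util() -> list[int]:
    """Query current GPU utilization; --query-gpu/--format=csv are standard nvidia-smi options."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_util(out)

# Example of the CSV shape nvidia-smi emits (one row per GPU):
sample = "utilization.gpu [%]\n75 %\n3 %\n"
print(parse_gpu_util(sample))  # -> [75, 3]
```

Logging this once a second during an export under 3.3.10 vs 3.4.0 would show whether the GPU really dropped from ~100% to ~75% utilization.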
So I just ran a benchmark with the following results. They appear to be roughly on par for my system config. But the benchmark does NOT contain a Dione model test; it would be useful to add one.
Topaz Video AI v3.4.0
OS: Windows v10.22
CPU: 13th Gen Intel(R) Core(TM) i7-13700K 63.771 GB
GPU: NVIDIA GeForce RTX 4090 23.59 GB
GPU: Intel(R) UHD Graphics 770 0.125 GB
device: 0 vram: 1 instances: 0
Input Resolution: 1920x1080
Artemis 1X: 40.08 fps 2X: 19.42 fps 4X: 05.26 fps
Iris 1X: 19.13 fps 2X: 09.91 fps 4X: 03.40 fps
Proteus 1X: 34.74 fps 2X: 17.93 fps 4X: 05.19 fps
Gaia 1X: 15.44 fps 2X: 10.52 fps 4X: 04.97 fps
4X Slowmo Apollo: 38.96 fps APFast: 82.22 fps Chronos: 30.48 fps CHFast: 32.56 fps
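For comparing benchmark runs across versions, a small sketch that parses the pasted output above into model/scale -> fps pairs (the line format is assumed from this paste; the helper is purely illustrative):

```python
import re

def parse_benchmark(text: str) -> dict[str, float]:
    """Turn Topaz Video AI benchmark lines like
    'Artemis 1X: 40.08 fps 2X: 19.42 fps' into {'Artemis 1X': 40.08, ...}."""
    results = {}
    for line in text.splitlines():
        m = re.match(r"(\w+)\s+(.*)", line)
        if not m:
            continue
        model, rest = m.groups()
        # Each entry on the line looks like '<scale>: <number> fps'
        for scale, fps in re.findall(r"(\w+):\s*([\d.]+)\s*fps", rest):
            results[f"{model} {scale}"] = float(fps)
    return results

old = parse_benchmark("Artemis 1X: 40.08 fps 2X: 19.42 fps 4X: 05.26 fps")
print(old["Artemis 2X"])  # -> 19.42
```

Running it over two pasted benchmarks and diffing the dicts makes a 3.3.10 vs 3.4.0 regression obvious per model.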
The deinterlacing option seems to use only the CPU. The number of processes does not matter; that setting is just for when you want to process multiple videos at the same time. Deinterlacing runs at 140 fps on my PC no matter which graphics processor I choose in the preferences!